| Column | Type | Min length | Max length |
| --- | --- | --- | --- |
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2308.15568
Over-Squashing in Graph Neural Networks: A Comprehensive survey
Graph Neural Networks (GNNs) revolutionize machine learning for graph-structured data, effectively capturing complex relationships. They disseminate information through interconnected nodes, but long-range interactions face challenges known as "over-squashing". This survey delves into the challenge of over-squashing in Graph Neural Networks (GNNs), where long-range information dissemination is hindered, impacting tasks reliant on intricate long-distance interactions. It comprehensively explores the causes, consequences, and mitigation strategies for over-squashing. Various methodologies are reviewed, including graph rewiring, novel normalization, spectral analysis, and curvature-based strategies, with a focus on their trade-offs and effectiveness. The survey also discusses the interplay between over-squashing and other GNN limitations, such as over-smoothing, and provides a taxonomy of models designed to address these issues in node and graph-level tasks. Benchmark datasets for performance evaluation are also detailed, making this survey a valuable resource for researchers and practitioners in the GNN field.
Singh Akansha
2023-08-29T18:46:15Z
http://arxiv.org/abs/2308.15568v6
# Over-Squashing in Graph Neural Networks: A Comprehensive Survey

###### Abstract

Graph Neural Networks (GNNs) revolutionize machine learning for graph-structured data, effectively capturing complex relationships. They disseminate information through interconnected nodes, but long-range interactions face challenges known as "over-squashing". This survey delves into the challenge of over-squashing in Graph Neural Networks (GNNs), where long-range information dissemination is hindered, impacting tasks reliant on intricate long-distance interactions. It comprehensively explores the causes, consequences, and mitigation strategies for over-squashing. Various methodologies are reviewed, including graph rewiring, novel normalization, spectral analysis, and curvature-based strategies, with a focus on their trade-offs and effectiveness. The survey also discusses the interplay between over-squashing and other GNN limitations, such as over-smoothing, and provides a taxonomy of models designed to address these issues in node and graph-level tasks. Benchmark datasets for performance evaluation are also detailed, making this survey a valuable resource for researchers and practitioners in the GNN field.

Graph Neural Networks (GNNs), Over-squashing, Over-smoothing, Graph-rewiring

## I Introduction

In recent years, the explosion of data in various domains has led to an increased interest in harnessing the power of graph structures for modeling complex relationships [1, 2, 3, 4, 5]. Graphs, which consist of nodes and edges representing entities and their connections, respectively, have emerged as a fundamental data representation in fields such as social networks [2, 6, 7], recommendation systems [8, 9, 10, 11], biology [12, 13], and more. As the diversity and complexity of graph-structured data grow, so does the demand for advanced tools to analyze and understand these intricate relationships. This surge in interest has sparked the development of a remarkable class of machine learning models known as Graph Neural Networks (GNNs) [14, 15, 16]. GNNs are a novel approach to learning representations from graph-structured data, enabling us to capture both local and global information of nodes in a unified manner [17, 18]. In essence, GNNs extend the neural network architecture to accommodate graph data, where nodes represent entities and edges denote relationships. This extension opens the door to a multitude of applications, ranging from node classification [19, 20, 21] and link prediction to graph-level tasks like community detection [6, 22] and molecular property prediction [23, 24]. GNNs leverage the underlying graph structure to enable information propagation and aggregation, enabling them to capture intricate patterns that traditional machine learning models struggle to discern. Notwithstanding their remarkable achievements, GNNs are not immune to certain inherent limitations, including over-smoothing [25, 26], vanishing gradients [27, 28], Out-of-Distribution (OOD) data challenges [29, 30], overfitting [31], and the relatively less explored phenomenon of over-squashing [32, 33, 34]. While exhaustive research has been dedicated to addressing the former issues, the latter, over-squashing, remains relatively less explored. Over-squashing is a phenomenon that manifests in tasks requiring the integration of information from distant nodes [32, 35], primarily through edges that serve as bottlenecks within graph data.
To put it succinctly, over-squashing denotes the distortion-prone nature of information transfer between nodes that are widely separated [34]. This distortion emerges due to the inherent tension between the limited feature representation capacity of graph embeddings and the exponential growth in the number of neighbors as graphs expand. This interplay often hampers the faithful transmission of distant information. This survey article aims to provide a comprehensive panorama of this specific limitation. We delve into the intricate nuances of over-squashing, shedding light on its conceptual framework and its implications. Additionally, we meticulously outline the repertoire of methods proposed thus far to grapple with this intricate issue. By presenting a systematic exploration of the landscape, we contribute to a deeper understanding of over-squashing's impact on GNNs and offer insights into the evolving strategies engineered to surmount this challenge. To summarize, this paper makes the following key contributions:

1. _Pioneering Survey_: This paper serves as the inaugural comprehensive survey on 'over-squashing,' a pivotal limitation in message-passing graph neural networks. It addresses a burgeoning area of interest among researchers.
2. _Systematic Categorization_: We provide a systematic categorization of existing methods, offering a detailed taxonomy that simplifies the understanding of various strategies to mitigate over-squashing.
3. _Benchmark Datasets_: We extensively discuss commonly used benchmark datasets employed for evaluating models in the context of over-squashing, both at the node and graph levels.
4. _Added Value_: Additionally, this survey explores the interplay of over-squashing with other fundamental GNN limitations, such as 'over-smoothing,' providing a more holistic perspective on the challenges faced in this domain.

These contributions collectively make this paper a valuable resource for researchers and practitioners delving into the intricate domain of over-squashing in Graph Neural Networks.

## II Background

### _Graph Neural Networks_

A GNN is a neural network architecture designed to operate on graph-structured data. The core idea of a GNN is to iteratively aggregate information from neighboring nodes and update the node features through multiple layers. Consider a graph denoted as \(G=(V,E)\), where \(V\) represents the set of nodes and \(E\) is the set of edges (or links), with \(E\subseteq V\times V\). In the context of Graph Neural Networks (GNNs), the primary objective is to learn effective representations for nodes, links, and even entire graphs. This is achieved through a fundamental process called message-passing, as defined by Gilmer et al. (2017) and elaborated by Zhang et al. (2022). In this process, GNNs iteratively refine node representations using the following equation. At layer \(l\):

\[h_{u}^{(l)}=COM\bigg\{h_{u}^{(l-1)},\,AGG\{h_{v}^{(l-1)}\text{ where }v\in N_{u}\}\bigg\}\]

Here \(h_{u}^{(l-1)}\) represents the representation of node \(u\) at the \((l-1)\)-th layer. It is typically initialized with the node's feature at the initial layer. \(N_{u}\) signifies the set of neighbors of node \(u\). \(AGG(\cdot)\) denotes the aggregation function that gathers information from neighboring nodes. \(COM(\cdot)\) is the combination function responsible for integrating aggregated information into the node's representation.
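To make the update rule above concrete, the following minimal sketch implements one such message-passing layer in plain PyTorch, with sum aggregation for \(AGG\) and a small MLP for \(COM\); the class and variable names are illustrative and are not taken from any of the surveyed papers.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One GNN layer: h_u^(l) = COM(h_u^(l-1), AGG({h_v^(l-1) : v in N_u}))."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.combine = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, h, edge_index):
        # h: (num_nodes, in_dim); edge_index: (2, num_edges), rows are (source, target)
        src, dst = edge_index
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, h[src])                    # AGG: sum over incoming neighbors
        return self.combine(torch.cat([h, agg], dim=-1))  # COM: combine self and neighborhood
```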
By iteratively applying this message-passing mechanism, GNNs refine node representations while considering the relationships with neighboring nodes. This step is crucial for capturing the structural and semantic information within the graph. Once the node representations are established, GNNs extend their influence to edges (links) and the entire graph. _Link Representations_: The representations of connected nodes are leveraged to derive meaningful representations for the links connecting them. _Graph-Level Representations_: The collective information from all nodes in the graph is distilled using a readout (or pooling) operation. This operation yields a representation of the entire graph, encapsulating its characteristics. Ultimately, the acquired representations of nodes, links, or entire graphs can be harnessed to address a variety of graph-based tasks. These tasks span different levels of complexity, ranging from node-specific tasks like node classification and link prediction to higher-order tasks involving the entire graph structure.

### _Over-squashing_

The phenomenon of over-squashing has been described as a challenge arising within Message Passing Neural Networks (MPNNs) when messages traverse through distant nodes. This issue stems from the exponential expansion of a node's receptive field, which results in numerous messages being compressed into fixed-size vectors. Topping et al. (2022) [35] have formally substantiated this occurrence through a sensitivity analysis of the Jacobian matrix of node features. They have partly attributed over-squashing to the presence of edges exhibiting high negative curvature. To elaborate, let's consider a receptive field \(B_{r}=\{j\in V:d_{G}(i,j)\leq r\}\) associated with an \(r\)-layer GNN, where \(d_{G}\) signifies the shortest-path distance and \(r\) is a natural number. The Jacobian \(\partial h_{i}^{(r)}/\partial x_{j}\) represents the sensitivity of a node embedding \(h_{i}^{(r)}\) to a specific input feature \(x_{j}\) in node \(j\). Over-squashing can be conceptualized as the inability of \(h_{i}^{(r)}\) to be influenced by \(x_{j}\) at a distance \(r\). Topping et al. [35] have mathematically established that

\[\left|\frac{\partial h_{i}^{(r+1)}}{\partial x_{j}}\right|\leq(\alpha\beta)^{r+1}A^{r+1}(i,j)\]

under certain conditions, where \(|\nabla\phi_{l}|\leq\alpha\) and \(|\nabla\psi_{l}|\leq\beta\) for \(0\leq l\leq r\), and \(\phi_{l},\psi_{l}\) are differentiable functions. This inequality highlights how the influence of input features diminishes exponentially with distance \(r\), particularly noticeable when \(|B_{r}|\) grows exponentially. For instance, in a binary tree where \(d_{G}(i,j)=r+1\), the term \(A^{r+1}(i,j)\) equals \(2^{(-1/3)}*2^{(-r)}\), leading to an exponential decay in node dependence on input features at distance \(r\). This phenomenon is what researchers refer to as the over-squashing of information [34, 35]. In [34], the authors tried to answer: (1) what is the impact of width in mitigating over-squashing? (2) can over-squashing be avoided by sufficiently deep models? (3) how does over-squashing relate to the graph spectrum and the underlying topology, beyond curvature bounds that only apply to 2-hop neighbors?
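Before turning to these questions, note that the sensitivity at the heart of this analysis can be probed empirically with automatic differentiation. The sketch below is illustrative: it assumes a generic \(r\)-layer `model` mapping node features to embeddings (for instance a stack of the message-passing layers above) and computes a cheap one-backward-pass proxy for \(\|\partial h_{i}^{(r)}/\partial x_{j}\|\).

```python
import torch

def jacobian_sensitivity(model, x, edge_index, i, j):
    """Proxy for ||dh_i/dx_j||: gradient of node i's summed embedding w.r.t. node j's input."""
    x = x.clone().requires_grad_(True)
    h = model(x, edge_index)                      # (num_nodes, dim) embeddings after r layers
    grad = torch.autograd.grad(h[i].sum(), x)[0]  # one backward pass covers all input nodes
    return grad[j].abs().sum().item()             # small values for distant j indicate over-squashing
```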
The last of these questions is important because recent works try to combat over-squashing via methods that depend on the graph spectrum [36, 37, 38].

## III Handling over-squashing in graph neural networks (GNNs)

In scenarios where tasks necessitate spanning multiple layers within a network, the depth of the network often mirrors the range of interactions between nodes. Nevertheless, a rising number of layers corresponds to an exponential increase in the number of nodes contributing to the receptive field of each individual node. This amplification leads to the phenomenon of over-squashing [32, 35]. Essentially, over-squashing manifests as a compression of information originating from a receptive field that encompasses numerous nodes. This compression results in fixed-length node vectors, impeding the accurate propagation of messages from distant nodes. This distortion takes shape due to graph bottlenecks that emerge as the number of \(k\)-hop neighbors undergoes exponential growth with each \(k\). In a bid to surmount these challenges, the literature has proposed strategies such as graph rewiring [32]. Consider a graph \(G\) with \(n\) nodes, adjacency matrix \(A\), and a mapping function \(R:\mathbb{R}^{n\times n}\rightarrow\mathbb{R}^{n\times n}\). When we talk about the graph \(G\) being "rewired" by \(R\), it signifies a transformation where the message exchanges among nodes occur on a graph denoted as \(R(G)\), rather than the original \(G\). In this context, \(R(G)\) is the graph characterized by its adjacency matrix \(R(A)\). The challenge of over-squashing within Graph Neural Networks (GNNs) has spurred the development of various methodologies, each aiming to alleviate this phenomenon. Broadly, these methods can be categorized into two types of graph rewiring methods, each offering unique insights into the resolution of the over-squashing predicament.

### _Spatial Graph Rewiring Methods:_

**Curvature-Based Rewiring and Comprehensive Topological Analysis:** Topping et al. [35] and Di Giovanni et al. [34] contributed insights into the origins of over-squashing, its topological implications, and the influence of GNN design choices.

**SDRF** Topping et al. [35] proposed a novel approach, Stochastic Discrete Ricci Flow (SDRF), for mitigating the pervasive issue of over-squashing in Graph Neural Networks (GNNs) through a curvature-based graph rewiring procedure. The crux of this innovative methodology lies in its meticulous treatment of graph edges based on their curvature properties. Specifically, edges exhibiting negative curvatures, indicative of potential sources of over-squashing, become the focal point of attention. By orchestrating the construction of supplementary connections tailored to support these edges, the proposed rewiring process adeptly combats the adverse effects of over-squashing.

**FA** Alon and Yahav [32] introduced a graph rewiring method that adds a fully-adjacent matrix in the last GNN layer to mitigate over-squashing. This approach is simple to implement and can be coupled to any existing GNN architecture. It involves incorporating a fully-adjacent (FA) layer in GNNs to alleviate the problem.

**BORF** Recently, Nguyen et al. introduced a novel rewiring technique known as Batch Ollivier-Ricci Flow (BORF), which harnesses the power of Ollivier-Ricci curvature to address the interrelated challenges of over-smoothing and over-squashing in Graph Neural Networks (GNNs), as detailed in [39].
BORF is a novel approach designed to address the issues of over-squashing and over-smoothing in Graph Neural Networks (GNNs). It operates in batches and begins by identifying two sets of edges in each batch: h edges with minimal curvature and k edges with maximal curvature. By focusing on edges with low and high curvature values, respectively, BORF aims to simultaneously mitigate over-smoothing and over-squashing. It then optimizes the graph's connectivity by adding connections to the minimally curved edges, ensuring efficient communication between distant nodes. This alleviates over-squashing. To minimize computational overhead, BORF reuses previously calculated optimal transport plans for edge addition. Additionally, BORF removes the maximally curved edges to prevent over-smoothing, as these can lead to excessive smoothing of node features. Furthermore, the algorithm's flexibility allows it to operate as a net edge addition, subtraction, or net-zero rewiring, providing adaptability to different data characteristics. BORF effectively balances these two key challenges, enhancing the performance of GNNs in graph-related tasks. **GTR** Within the realm of addressing over-squashing in Graph Neural Networks (GNNs), Black et al. [40] have conducted a comprehensive analysis by investigating the phenomenon through the lens of commute time between node pairs. They proposed Greedy Total Resistance (GTR) rewiring, method to minimize the total resistance. Effective resistance offers an alternative metric for evaluating bottlenecks within graph topology [41]. This measure quantifies the level of resistance between two nodes in proportion to their commute time. Commute time represents the expected number of steps required for a random walk to traverse back and forth between nodes within the graph. In essence, high resistance between two nodes indicates a greater difficulty for messages to traverse from node \(i\) to node \(j\). Black et al. [40] have established a sensitivity bound which links elevated effective resistance between pairs of nodes to a reduced sensitivity of the representations, \(h^{(r+1)_{i}}\) concerning input features \(x_{j}\). Furthermore, it's important to note that effective resistance exhibits an inverse relationship with the square of the Cheeger constant. In a parallel vein, Di Giovanni et al. [34] have undertaken similar methodologies, ultimately converging on a shared conclusion. Their findings underline the pivotal role of effective resistance in influencing the degree of over-squashing within GNNs. Furthermore, the work by Di Giovanni et al. [34] extends beyond a singular focus on effective resistance. They delve into the impact of GNN architecture's width and depth on the occurrence of over-squashing. This comprehensive analysis probes into how various dimensions of GNN design interplay with the manifestation of over-squashing, enriching our understanding of this intricate phenomenon. In their work, Di Giovanni et al. [42] build upon their previous findings and concentrate on two pivotal factors: the network's architecture, characterized by weight norms and depth, and the intrinsic graph structure, evaluated using commute times. In doing so, they establish upper limits on the ability of Message Passing Neural Networks (MPNNs) to efficiently integrate features. Significantly, they introduce the notion of "over-squashing," which is fundamentally linked to MPNNs' maximum node mixing capacity and operates inversely to it. **DRew** Gutteridge et al. 
[43] argue that while some rewiring approaches attempt to enhance connectivity for long-range tasks, they often sacrifice the inductive bias provided by graph distance by enabling instant communication between distant nodes at every layer. Hence, to tackle these issues, a layer-dependent rewiring technique is proposed in [43] which gradually densifies the graph. A delay mechanism that facilitates skip connections based on node distance and layer is also introduced in [43] so that the graph's inductive bias is preserved.

### _Spectral Graph Rewiring Methods:_

To explain graph rewiring in the context of the graph spectrum, we first relate the connectedness of a graph to the eigenvalues of the graph Laplacian. The connectedness of a graph \(G\) can be measured via a quantity known as the Cheeger constant, \(h_{Cheeg}\), defined as follows:

\[h_{Cheeg}=\min_{U\subset V}\frac{|\{(u,v)\in E:u\in U,\,v\in V\setminus U\}|}{\min(\mathrm{vol}(U),\,\mathrm{vol}(V\setminus U))}\]

Here, \(\mathrm{vol}(U)\) represents the volume of set \(U\) and is calculated as the sum of degrees of nodes \(u\in U\). The Cheeger constant, \(h_{Cheeg}\), essentially quantifies the energy required to divide graph \(G\) into two separate communities. A smaller \(h_{Cheeg}\) implies that \(G\) tends to have two communities with only a few connecting edges. In such cases, over-squashing is more likely to occur when information needs to traverse from one community to another. It's important to note that while computing \(h_{Cheeg}\) is generally a complex task, the Cheeger inequality provides a useful relationship: \(h_{Cheeg}\) is approximately proportional to the smallest positive eigenvalue of the graph Laplacian. In light of this relationship, some recent approaches have proposed selecting a rewiring strategy that depends on the spectrum of \(G\). The goal is to generate a new graph \(R(G)\) that satisfies \(h_{Cheeg}(R(G))>h_{Cheeg}(G)\). This strategy has been explored in the works [36, 37, 38]. The underlying assumption is that propagating messages over the rewired graph \(R(G)\) can mitigate over-squashing. However, it's important to note that this claim lacks formal analytical proof at this stage.

**Augmenting the Spectral Gap:** The prevailing strategy in mitigating over-squashing has largely revolved around increasing the spectral gap of the graph, specifically targeting the smallest positive eigenvalue of the Laplacian matrix. Intuitively, the spectral gap is linked to the presence of bottlenecks within graphs, as elucidated by the Cheeger inequality [55]. Consequently, augmenting the spectral gap serves to reduce these bottlenecks, fostering a smoother flow of information. Various strategies have emerged to increase the spectral gap, encompassing methods such as edge addition [38], edge flipping [47], edge reweighting [36], or the utilization of expanders to perform specific GNN layers [37]. These approaches seek to fine-tune the graph's structural characteristics to mitigate over-squashing while recognizing the pivotal role played by the spectral gap in this intricate balance.

**DiffWire** Arnaiz et al. [36] introduced a unified approach that bridges the concepts of commute time and graph spectral gap. This approach comprises two distinct layers within a Graph Neural Network (GNN). The first layer is a differentiable, parameter-free component designed to learn the commute time, while the second layer, known as the rewiring layer, optimizes the spectral gap based on the specific characteristics of the network and the task at hand.
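Since \(h_{Cheeg}\) itself is hard to compute, the quantity these spectral methods actually monitor is the spectral gap, i.e. the smallest positive eigenvalue of the (normalized) graph Laplacian. The short sketch below is illustrative only (the graph and the added edge are arbitrary toy choices, using NetworkX and NumPy) and compares the gap before and after a rewiring \(R(G)\):

```python
import numpy as np
import networkx as nx

def spectral_gap(G):
    """Smallest positive eigenvalue of the normalized Laplacian, a proxy for h_Cheeg."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigenvalues = np.sort(np.linalg.eigvalsh(L))
    return eigenvalues[1]                      # eigenvalues[0] is ~0 for a connected graph

G = nx.barbell_graph(10, 0)                    # two dense communities joined by a single bridge
R_G = G.copy()
R_G.add_edge(3, 16)                            # toy rewiring: one extra inter-community edge
print(spectral_gap(G), spectral_gap(R_G))      # the gap increases after rewiring
```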
DiffWire's integrated framework thus empowers the GNN to adaptively learn and apply rewiring strategies, effectively alleviating the challenges associated with over-squashing while considering the nuances of the graph structure and task requirements.

**EGP Model** Deac et al. [37] introduced the Expander Graph Propagation (EGP) model for graph classification tasks. Their approach leverages expander graphs to tackle bottlenecks in global information propagation within the graph. In graph classification, it's essential to compute node features that consider both local interactions within their neighborhood and the broader global context of the graph structure. Deac et al. achieved this by adding one EGP layer after each GNN layer, utilizing Cayley graphs to construct efficient expander graphs of a specified size. The EGP model is designed to enhance connectivity for long-range tasks, ensuring efficient communication between distant nodes.

**RLEF** Another line of work likewise rewires graphs to address bottlenecks in the global propagation of information within a graph, introducing two local graph rewiring algorithms: the Random Local Edge Flip (RLEF) and the Greedy Random Local Edge Flip (G-RLEF). These algorithms operate by adding and removing edges at specific locations while preserving the node degrees and overall connectivity of the graph. This framework provides a robust foundation for conducting a comprehensive analysis of the information decay that arises due to over-squashing in Graph Neural Networks (GNNs). The authors clarify how an information percolation bound serves as an effective means to encapsulate the core concept of over-squashing. The primary objective of employing these techniques is to enhance connectivity for long-range tasks. They achieve this by ensuring efficient and effective communication between nodes that are far apart from each other within the graph structure.

**Trade-off between over-smoothing and over-squashing** While these rewiring methods aim to enhance graph connectivity, they come with certain drawbacks, particularly when excessive modifications are made to the input graph. One prominent concern is the loss of valuable topological information inherent to the original graph. When we introduce extensive changes by adding or removing edges, it can diminish the relevance of the original graph's structural characteristics for the given task. Additionally, the act of adding edges has a smoothing effect on the graph. If we introduce an excessive number of edges to the input graph, a standard Graph Convolutional Network (GCN) may encounter a common issue known as over-smoothing, as highlighted by Li et al. in 2018. In simpler terms, when we opt for this straightforward rewiring approach, we find ourselves facing a trade-off between addressing over-squashing and dealing with the problem of over-smoothing.

## IV Unifying approaches for over-squashing and over-smoothing

Certain methodologies have emerged that tackle the intertwined challenges of over-smoothing and over-squashing in unison, establishing an interconnected relationship between these fundamental limitations within graph neural networks.

**SJLR** Giraldo et al. [33] established a profound connection between over-smoothing, over-squashing, and the spectral gap of the graph Laplacian in Graph Neural Networks (GNNs). Their work revealed how these challenges are intricately linked to the spectral gap of the normalized Laplacian matrix, unveiling a noteworthy trade-off illuminated by the Cheeger inequality.
In response to these challenges, Giraldo et al. introduced the Stochastic Jost and Liu Curvature Rewiring (SJLR) algorithm, a notable departure from previous curvature-based techniques [35, 38, 47]. SJLR stands out for its computational efficiency and its ability to preserve essential graph properties. One distinctive feature of the SJLR algorithm is its dynamic capability to add and remove edges during the training phase of Graph Neural Networks (GNNs) while maintaining the fundamental graph structure unaltered during the testing phase. This adaptability sets SJLR apart as a promising approach to address the intricate challenges posed by over-smoothing and over-squashing in GNNs. **MHKG** The study described in [50] takes on the persistent challenges that have plagued the performance of Graph Neural Networks (GNNs), notably over-smoothing, over-squashing, and limited expressive capabilities. Drawing inspiration from physics, the authors employ a novel approach, reversing the direction of the graph heat equation, which substantially sharpens node features. They introduce the Multi-Scaled Heat Kernel based GNN (MHKG), which amalgamates diverse filtering functions to counter these issues. Generalizing MHKG into G-MHKG, they provide an in-depth analysis of its components' roles in controlling over-smoothing, over-squashing, and expressive power. Notably, they uncover a trade-off between over-smoothing and over-squashing, wherein enhancing node feature sharpness may lead to heightened over-squashing, and vice versa. G-MHKG effectively handles these challenges in the graph spectral domain through controlled manipulation of time. **FoSR** Karhadkar et al. [38] proposed empirical solutions to mitigate both over-smoothing and over-squashing. While acknowledging the trade-off between these issues, their method primarily involves edge addition. The authors introduce a novel rewiring method called FoSR (First-order Spectral Rewiring) with the objective of optimizing the spectral gap of the graph input to the GNN. This algorithm meticulously computes the first-order change in the spectral gap resulting from the addition of each edge and subsequently selects the edge that maximizes this change. Within this framework, the authors propose a comprehensive approach, which not only introduces this innovative rewiring method but also incorporates a relational Graph Neural Network (GNN) to leverage these rewired edges effectively. This GNN operates on the transformed graph, where the relationships within the network indicate whether each edge was originally part of the input graph or added during the rewiring process. This integrated strategy ensures the preservation of the input graph's underlying topology while utilizing newly added edges to enhance its overall connectivity. **CurvDrop** Liu et al. [44] addressed both problems by focusing on edge removal based on curvature metrics. They devised a Curvature-based topology-aware Dropout-sampling technique, CurvDrop, which integrates Discrete Ricci Curvature in tGNNs for more expressive graph models. Drawing inspiration from the geometric analogy of Ricci curvature, Liu et al. established a compelling relationship between the Ricci curvature of an edge and the spectral gap. They harnessed this insight to address the challenges of over-smoothing and over-squashing by introducing a sampling layer driven by Ricci curvature. 
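All of these curvature-driven methods need a per-edge curvature score: strongly negative values flag bottleneck edges, strongly positive values flag densely clustered regions. As a lightweight illustration (an augmented Forman curvature, not the Ollivier-Ricci or Balanced Forman notions used by the cited works), such scores can be computed as follows:

```python
import networkx as nx

def forman_curvature(G, u, v):
    """Augmented Forman curvature of edge (u, v): 4 - deg(u) - deg(v) + 3 * #triangles on (u, v)."""
    triangles = len(set(G[u]) & set(G[v]))
    return 4 - G.degree(u) - G.degree(v) + 3 * triangles

G = nx.barbell_graph(6, 0)                                  # two cliques joined by one bridge edge
curvature = {e: forman_curvature(G, *e) for e in G.edges()}
bottleneck = min(curvature, key=curvature.get)              # the bridge is the most negatively curved
print(bottleneck, curvature[bottleneck])
```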
This sampling layer selectively drops a portion of edges with low Ricci curvature at each GNN layer, effectively mitigating the issues associated with over-smoothing and over-squashing. **CurvPool** CurvPool is a novel graph pooling technique designed by Sanders et al. in [52] to tackle over-smoothing and over-squashing issues in Graph Neural Networks (GNNs) during graph classification tasks. It relies on the Balanced Forman curvature (BFC) to identify critical structures in the graph that contribute to these problems. This method calculates curvature values for each edge and employs a criterion to group nodes into clusters, ensuring that nodes with similar curvature profiles are pooled together. The resulting node clusters are transformed into new nodes in the pooled graph, and node representations within each cluster are aggregated using operators like mean, sum, or maximum. To retain the original graph structure, CurvPool remaps old edges to the new node clusters. By leveraging graph curvature to guide the pooling process, CurvPool effectively balances over-smoothing and over-squashing, ultimately improving the performance of GNNs in graph classification tasks. It offers adaptability to various data characteristics while maintaining computational efficiency and effectiveness. **CBED** Inspired by the geometric analogy of Ricci curvature, a curvature-based edge dropping algorithm known as Curvature-Based Edge Dropping (CBED) is introduced in the work by Dai Shi et al. [48]. This innovative approach strategically removes edges with the highest positive curvature. By doing so, it aims to enhance the model's adaptability to graphs characterized by heterophily and, in the process, alleviate the issue of over-smoothing. **PowerEmbed** Huang et al. have introduced a pioneering normalization technique called PowerEmbed in their work [45] aimed at mitigating the challenges of over-smoothing and over-squashing in graph neural networks. PowerEmbed employs a layer-wise normalization approach that empowers message-passing neural networks (MPNNs) to effectively express the top-k eigenvectors of a graph while capturing crucial global spectral information. Remarkably, this technique exhibits adaptability by remaining agnostic to the specific topology of the graph, rendering it suitable for graphs characterized by both homophily and heterophily. Moreover, the authors seamlessly integrated PowerEmbed with an inception network. This synergistic combination is engineered to facilitate the learning of comprehensive representations, allowing for a seamless transition from local message-passing features to the incorporation of essential global spectral information. Notably, this strategic amalgamation is endowed with a provable capability to preemptively mitigate the challenges associated with over-smoothing and over-squashing. **DGN** Beaini et al. introduced Directional Graph Networks (DGN) to combat over-squashing in Graph Neural Networks (GNNs). Over-squashing hinders GNNs, causing issues like over-smoothing and reduced discriminative power. They contend that over-squashing occurs in GNNs due to their incapacity to capture directional information within graphs, which constrains their grasp of graph structures and feature transformations. To resolve this, Beaini et al. presented globally consistent anisotropic kernels for GNNs, enabling them to incorporate directional flows based on graph topology. 
Their approach employs vector fields within the graph, utilizing low-frequency eigenvectors to define directional flows at each node. Many GNNs are insensitive to the order of neighbor features, causing multiple layers to focus on simple changes rather than learning higher-level features, contributing to over-squashing. In summary, Beaini et al.'s DGN model, through globally consistent anisotropic kernels and directional information, effectively addresses over-squashing. This empowers GNNs to comprehend local graph structures, perform meaningful feature transformations, and mitigate over-squashing's adverse effects.

## V Graph Transformers and other GNN Architectures

Graph transformers have gained substantial attention as an alternative approach to combating over-smoothing and over-squashing in the context of graph and computer vision domains [56, 57, 58]. This approach leverages the inherent strengths of transformer architectures:

**Over-smoothing Resilience:** Ying et al. [59] observed that transformers are less susceptible to over-smoothing compared to traditional Graph Neural Networks (GNNs). Their ability to model graph data efficiently contributes to mitigating the over-smoothing problem.

**Over-squashing Resilience:** Kreuzer et al. [60] highlighted the resilience of transformers to over-squashing. Transformers establish direct paths connecting distant nodes, which alleviates the over-squashing challenge.

However, it's worth noting that transformers have limitations, including significant computational and memory requirements due to the need for every node to attend to all others. This can make them less suitable for large-scale graph applications and may result in improper training leading to a blend of local and non-local interactions.

**Graph ViT/MLP-Mixer:** Xiaoxin et al. [61] introduce a novel approach as an alternative to global attention mechanisms. This approach draws inspiration from the ViT and MLP-Mixer architectures initially introduced in computer vision. The resulting "Graph ViT/MLP-Mixer" GNNs excel in capturing long-range dependencies while effectively mitigating over-squashing issues. They offer improved computational efficiency, speed, and memory advantages compared to existing models.

Gabrielsson et al. [62] employ Transformer-inspired positional encoding techniques within a modified graph framework to effectively extend the receptive field of each node in Graph Neural Networks (GNNs) to encompass \(r\)-hop neighborhoods. The approach involves expanding the receptive fields by introducing modifications to the graph structure and incorporating positional encodings as both edge and node features. This innovative method differs from conventional graph transformers, which often replace the original graph topology with a complete graph and blend local and global information. Instead, Gabrielsson et al.'s approach facilitates the gradual expansion of the receptive field, allowing nodes to capture inductive biases by spanning from 1-hop to \(r\)-hop neighborhoods. This strategic extension of the receptive field is designed to mitigate the challenges associated with over-squashing in GNNs.

**PASTEL** In their recent paper [53], Qingyun et al. tackle the issue of over-squashing in Graph Neural Networks (GNNs) by highlighting its association with topology imbalance.
To combat this problem, they introduce PASTEL (Position-Aware STructurLE Learning). They redefine topology imbalance in terms of under-reaching and over-squashing and establish two quantitative metrics to evaluate these issues. PASTEL aims to enhance the intra-class connectivity of nodes in GNNs by optimizing information propagation paths. To achieve this, they employ an anchor-based position encoding mechanism to capture the relative positions of unlabeled nodes concerning labeled nodes. Additionally, a class-wise conflict measure, utilizing Group PageRank, quantifies the influence of labeled nodes from different classes, guiding edge weight adjustments to boost intra-class connectivity. PASTEL's contributions include a novel perspective on topology imbalance, improved modeling of node relationships through position encodings, and demonstrated effectiveness across diverse data annotation scenarios. **A-DGN** Anti-Symmetric Deep Graph Network (A-DGN) is introduced by Gravina et al. in [51] as an innovative framework tailored to address the challenge of long-term information propagation in Deep Graph Networks (DGNs). This approach is devised by leveraging principles from ordinary differential equations (ODEs) and their connection to deep neural architectures. Gravina et al establishes theoretical conditions under which a stable and non-dissipative ODE system can be realized on graph structures, utilizing anti-symmetric weight matrices. Within the A-DGN framework, the A-DGN layer is formulated through the forward Euler discretization of the obtained graph ODE. This process enforces specific properties on the ODE system, resulting in the preservation of long-term dependencies between nodes within the graph and alleviating the problem of over-squashing in GNNs. Additionally, it mitigates issues related to gradient explosion or vanishing during the training process. **RFGNN** In their paper [54], Rongqin et al. address the challenge of over-squashing in Graph Neural Networks (GNNs) by identifying its connection to message redundancy during the aggregation process. They observed that conventional GNNs often struggle to efficiently propagate long-length path information, which limits their capacity to learn graph similarities and support long-range interactions. To tackle this issue, they propose the Redundancy-Free Graph Neural Network (RFGNN). RFGNN utilizes a path-search-tree concept, constructed through breadth-first search, to eliminate redundancy in message propagation. This ensures efficient information transmission without over-squashing. They also introduce the notion of extended paths (epaths) to capture complex graph structures and implement truncated ePaths trees (TPTs) for message-passing. RFGNN's de-redundancy technique balances epath influence, effectively mitigating over-squashing and enhancing GNNs' ability to capture structural information in original graphs. In summary, RFGNN improves structural information propagation by efficiently aggregating information through path-search trees, avoiding redundancy. The reduction of redundancy plays a crucial role in addressing the over-squashing challenge. 
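Of the architectural remedies above, the anti-symmetric update underlying A-DGN is simple enough to sketch. The toy layer below follows the general form described in [51] (one forward-Euler step with an anti-symmetric weight matrix); the dimensions, the aggregation choice, and all names are assumptions made for illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class AntiSymmetricLayer(nn.Module):
    """One forward-Euler step with an anti-symmetric weight matrix (A-DGN-style update)."""

    def __init__(self, dim, epsilon=0.1, gamma=0.1):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        self.V = nn.Linear(dim, dim)                 # simple neighborhood aggregation weights (assumed)
        nn.init.xavier_uniform_(self.W)
        self.epsilon, self.gamma = epsilon, gamma

    def forward(self, h, adj):
        # Anti-symmetric weight (minus a small diagonal shift) keeps the underlying ODE stable.
        A = self.W - self.W.t() - self.gamma * torch.eye(h.size(1), device=h.device)
        neigh = self.V(adj @ h)                      # aggregate neighbor states via the adjacency matrix
        return h + self.epsilon * torch.tanh(h @ A.t() + neigh)
```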
**GESN** The Graph Echo State Network (GESN), introduced by Tortorella and Micheli in their work [46], represents a reservoir computing model designed to address specific challenges in node classification tasks, particularly within heterophilic graphs, where nodes from the same class or category are typically positioned farther apart from each other, resulting in a lower density of intra-class edges. One distinctive feature of GESN is its training-free nature: as a reservoir computing model, it relies on an untrained reservoir, which acts as a fixed, random structure to process input data. This training-free characteristic makes GESN an efficient and effective solution for node classification tasks, offering a promising approach to mitigate the issues of long-range message passing and over-squashing in heterophilic graphs. Each of these approaches offers a unique perspective and set of techniques to address the challenges of over-squashing in graph-based machine learning models.

## VI Datasets

The common datasets employed for node and graph classification tasks in the models listed in Table I are presented in Table II, along with detailed dataset statistics. It's important to note that this list is not exhaustive, as there are numerous other datasets, including synthetic and large-scale real-world ones, utilized for various research purposes. Table II displays the statistics of the datasets used in this study, where \(H(G)\) represents the graph's homophily, as defined in [63], calculated as

\[H(G)=\frac{1}{|V|}\sum_{v\in V}\frac{\#v\text{'s neighbors with the same label as }v}{\#v\text{'s neighbors}}\]

For node classification tasks, we employ a diverse set of 12 datasets, encompassing graphs of varying sizes and characteristics. Cora, CiteSeer, PubMed are paper citation networks. Node features are represented as bag-of-words from paper content, and the task is to classify the research topics. These datasets are characterized by high homophily. Film is constructed based on the co-occurrences of actors on the same Wikipedia page, categorized into five groups. It serves as a node classification task with a low homophily nature. TwitchDE constitutes a social network comprising German gamer accounts from Twitch, categorized as suitable for work or adult profiles. The task involves classifying the profiles. Tolokers is a collaboration network originating from the crowdsourcing platform Toloka. The objective here is to determine whether a user is active or not. Due to class imbalance, the evaluation metric is the area under the ROC curve. Cornell, Texas, Wisconsin are additional node classification tasks from the WebKB collection; nodes are web pages from the computer science departments of the respective universities, edges are hyperlinks between them, and the task is to classify the pages into categories such as student, faculty, course, and project. Chameleon, Squirrel, Actor are further datasets used for node classification. Chameleon and Squirrel are Wikipedia page-page networks on their respective topics, where nodes are articles and edges denote mutual links between them. The Actor dataset models co-occurrences of actors on the same Wikipedia page, as in Film. Each of these datasets presents unique characteristics and classification tasks.
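The homophily measure \(H(G)\) reported in Table II follows directly from the definition above; a small illustrative sketch (using NetworkX's karate-club graph and its two communities as stand-in labels):

```python
import networkx as nx

def node_homophily(G, labels):
    """H(G): average fraction of each node's neighbors that share its label."""
    total = 0.0
    for v in G.nodes():
        neighbors = list(G.neighbors(v))
        if neighbors:
            total += sum(labels[u] == labels[v] for u in neighbors) / len(neighbors)
    return total / G.number_of_nodes()

G = nx.karate_club_graph()
labels = {v: G.nodes[v]["club"] for v in G}      # two factions serve as class labels
print(round(node_homophily(G, labels), 3))       # close to 1.0: a highly homophilic graph
```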
For graph classification tasks, we utilize the following datasets: NCI-1 and NCI-109 involve classifying molecules as cancerous or non-cancerous. Node input features are one-hot encodings of atom types, and edges represent chemical bonds. Reddit-B, Reddit-5K, Reddit-12K capture interactions between users in Reddit discussion threads. The task is to determine the type of subreddit a discussion belongs to. Collab comprises ego-networks from three distinct scientific collaboration fields. Unlike the previous datasets, the Reddit tasks and Collab do not have node input features. Enzymes is a bioinformatics dataset for graph classification. It involves classifying enzymes based on their structures and functions. The BZR dataset is a small molecule dataset used for graph classification tasks. It is commonly employed for evaluating graph-based machine learning algorithms. MUTAG is another bioinformatics dataset for graph classification, primarily used for evaluating chemical informatics algorithms. The task is to predict mutagenicity. PTC is a bioinformatics dataset for graph classification, focusing on carcinogenicity prediction. The graphs represent chemical compounds. COX2 is a small molecule dataset, often used to assess graph-based machine learning models in chemistry-related tasks. The classification task is centered around predicting the inhibition of the COX-2 enzyme. Proteins is a bioinformatics dataset used for graph classification. The task is to classify proteins based on their functions. In all these tasks, we intentionally avoid introducing structural input features such as node degrees or positional encodings. A summary of relevant dataset statistics is provided in Table II for reference.

## VII Conclusion

This survey has delved into the depths of over-squashing, unearthing its origins in information compression across distant nodes. We've journeyed through a diverse array of strategies aimed at mitigating its impact, from innovative graph rewiring methods and curvature-based approaches to spectral techniques and the promise of graph transformers. As we tread this path, a nuanced interplay between over-smoothing and over-squashing has come into focus, demanding a balanced resolution. This exploration stands as a testament to the ongoing dialogue among researchers, driven by the pursuit of more refined and capable Graph Neural Networks. In closing, the quest to unravel over-squashing continues to be a beacon guiding our pursuit of more effective models, driven by the dynamic nature of graph data.

## Acknowledgment

I extend my heartfelt appreciation to Dr. Karmvir Singh Phogat for providing invaluable insights and essential feedback on the research problem explored in this article. His thoughtful comments significantly enriched the quality and lucidity of this study.
2305.04499
Building Footprint Extraction with Graph Convolutional Network
Building footprint information is an essential ingredient for 3-D reconstruction of urban models. The automatic generation of building footprints from satellite images presents a considerable challenge due to the complexity of building shapes. Recent developments in deep convolutional neural networks (DCNNs) have enabled accurate pixel-level labeling tasks. One central issue remains, which is the precise delineation of boundaries. Deep architectures generally fail to produce fine-grained segmentation with accurate boundaries due to progressive downsampling. In this work, we have proposed an end-to-end framework to overcome this issue, which uses the graph convolutional network (GCN) for the building footprint extraction task. Our proposed framework outperforms state-of-the-art methods.
Yilei Shi, Qinyu Li, Xiaoxiang Zhu
2023-05-08T06:50:05Z
http://arxiv.org/abs/2305.04499v1
# Building footprint extraction with graph convolutional network ###### Abstract This is the pre-acceptance version, to read the final version please go to IEEE XPlore. Building footprint information is an essential ingredient for 3-D reconstruction of urban models. The automatic generation of building footprints from satellite images presents a considerable challenge due to the complexity of building shapes. Recent developments in deep convolutional neural networks (DCNNs) have enabled accurate pixel-level labeling tasks. One central issue remains, which is the precise delineation of boundaries. Deep architectures generally fail to produce fine-grained segmentation with accurate boundaries due to progressive downsampling. In this work, we have proposed a end-to-end framework to overcome this issue, which uses the graph convolutional network (GCN) for building footprint extraction task. Our proposed framework outperforms state-of-the-art methods. Yilei Shi 1, Qinyu Li 2, Xiaoxiang Zhu 2,3 1 1 Chair of Remote Sensing Technology (LMF), Technical University of Munich, Munich, Germany 2 Signal Processing in Earth Observation (SiPEO), Technical University of Munich, Munich, Germany 3 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Wessling, Germany Building footprint, Deep convolutional neural networks, Graph convolutional network ## 1 Introduction Building footprint generation is of great importance to urban planning and monitoring, land use analysis, and disaster management. High-resolution satellite imagery, which can provide more abundant detailed ground information, has become a major data source for building footprint generation. Due to the variety and complexity of buildings, building footprint requires significant time and high costs to generate manually. As a result, the automatic generation of a building footprint not only minimizes the human role in producing large-scale maps but also greatly reduces time and costs. Over the past few years, the most popular and efficient classification approach has been deep learning (DL) [1], which has the computational capability for big data. DL methods combine feature extraction and classification and are based on the use of multiple processing layers to learn good feature representation automatically from the input data. Therefore, DL usually possesses better generalization capability, compared to other classification-based methods. In terms of particular DL architectures, several impressive convolutional neural network (CNN) structures, such as ResNet [2] and U-Net [3], have already been widely explored for RS tasks. Many deep learing methods have been developed for building footprint generation. In [4], authors propose a multistage ConvNet with an upsampling operation of bilinear interpolation. The trained model achieves a superior performance on very-high-resolution aerial imagery. Recently, an end-to-end trainable active contour model (ACM) was developed for building instance extraction [5], which learns ACM parameterizations using a DCNN. In [6], authors exploit the improved conditional Wasserstein generative adversarial network to generate the building footprint automatically. Recent work [7] shows that most of the tasks, such as building segmentation, building height estimation, and building contour extraction, are still difficult for modern convolutional networks. In this work, we show a significant performance improvement of building footprint extraction by using our proposed novel framework. 
## 2 Methodology

### Review of semantic segmentation

Semantic segmentation with a fully convolutional network (FCN) was first introduced in [8], which replaces the last few fully connected layers by convolutional layers to make efficient end-to-end learning and inference that can take arbitrary input size. In [9], SegNet was proposed, which used an alternative decoder variant. The decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform nonlinear upsampling. This makes SegNet more memory efficient than FCN. Another variant of the encoder-decoder architecture is U-Net [3]. Through its skip connections, the architecture allows the decoder at each stage to recover relevant features that are lost when pooled in the encoder. One issue in FCN approaches is that by propagating through several alternated convolutional and pooling layers, the resolution of the output feature maps is downsampled. In order to overcome the poor localization property, [10] offered an alternative to raise the output resolution, which used a probabilistic graphical model, the CRF, to refine the object boundary. CRFasRNN [11] extended this to an end-to-end trainable network by introducing a fully connected CRF. In this work, we extended DCNNs to topologies that differ from the low-dimensional grid structure. The grid-like data can be viewed as a special type of graph data, where each node has a fixed number of ordered neighbors.

### Proposed method

An undirected and connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of a set of nodes \(\mathcal{V}\) and edges \(\mathcal{E}\). The unnormalized graph Laplacian matrix \(\mathbf{L}\) is defined as

\[\mathbf{L}=\mathbf{D}-\mathbf{A}, \tag{1}\]

where \(\mathbf{A}\) is the adjacency matrix representing the topology of \(\mathcal{G}\) and \(\mathbf{D}\) is the degree matrix with \(D_{ii}=\sum_{j}A_{ij}\). As the graph Laplacian matrix \(\mathbf{L}\) is a symmetric positive semi-definite matrix, its eigenvalue decomposition can be expressed as

\[\mathbf{L}=\mathbf{\Phi}\mathbf{\Lambda}\mathbf{\Phi}^{T}, \tag{2}\]

where \(\mathbf{\Phi}=(\phi_{1},\phi_{2},...,\phi_{n})\) are the orthonormal eigenvectors, known as the graph Fourier modes, and \(\mathbf{\Lambda}=\mathrm{diag}\left(\lambda_{1},\lambda_{2},...,\lambda_{n}\right)\) is the diagonal matrix of corresponding non-negative eigenvalues. Assuming a signal \(\mathbf{f}\) on the graph nodes \(\mathcal{V}\), its graph Fourier transform is then defined as \(\mathbf{\hat{f}}=\mathbf{\Phi}^{T}\mathbf{f}\). If \(\mathbf{g}\) is a filter, the convolution of \(\mathbf{f}\) and \(\mathbf{g}\) can be written as

\[\mathbf{f}*\mathbf{g}=\mathbf{\Phi}\left(\left(\mathbf{\Phi}^{T}\mathbf{g}\right)\circ\left(\mathbf{\Phi}^{T}\mathbf{f}\right)\right)=\mathbf{\Phi}\mathbf{\hat{g}}\mathbf{\Phi}^{T}\mathbf{f}, \tag{3}\]

where \(\mathbf{\hat{g}}\) is the spectral representation of the filter. Rather than computing the Fourier transform \(\mathbf{\hat{g}}\), the filter coefficients can be parameterized as a polynomial of the eigenvalues, \(\mathbf{\hat{g}}=\sum\limits_{k=0}^{r}\alpha_{k}\mathbf{\Lambda}^{k}\), as in [12]. With the polynomial parametrization of the filter, the spectral filter is exactly localized in space, and its learning complexity is the same as for classical DCNNs. However, even with such a parameterization of the filters, the spectral GCN still suffers from high computational complexity.
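For reference, the spectral convolution of Eq. (3) can be written out naively with a full eigendecomposition, which makes the cost explicit (cubic in the number of nodes for the decomposition, plus dense multiplications). This is an illustrative sketch only; the Chebyshev and first-order simplifications discussed next avoid it entirely.

```python
import numpy as np

def spectral_graph_conv(A, f, g_hat):
    """Eq. (3): f * g = Phi diag(g_hat(lambda)) Phi^T f, via a full eigendecomposition of L."""
    D = np.diag(A.sum(axis=1))
    L = D - A                                 # unnormalized graph Laplacian, Eq. (1)
    lam, Phi = np.linalg.eigh(L)              # Eq. (2): L = Phi Lambda Phi^T
    f_hat = Phi.T @ f                         # graph Fourier transform of the signal
    return Phi @ (g_hat(lam) * f_hat)         # filter in the spectral domain, transform back

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
f = np.array([1.0, 0.0, 0.0])
print(spectral_graph_conv(A, f, lambda lam: np.exp(-lam)))    # low-pass (heat-kernel) filter
```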
Instead of explicitly operating in the frequency domain with a spectral multiplier, it is possible to represent the filters via a polynomial expansion \(\mathbf{\hat{g}}=g(\mathbf{\Lambda})\) with the Chebyshev basis,

\[g(\mathbf{\Lambda})=\sum\limits_{k=0}^{r}\alpha_{k}T_{k}(\mathbf{\tilde{\Lambda}}), \tag{4}\]

where \(T_{k}(\mathbf{\tilde{\Lambda}})\) is the \(k\)-th Chebyshev polynomial. The convolution can be formulated as

\[\mathbf{f}*\mathbf{g}=\sum\limits_{k=0}^{r}\alpha_{k}T_{k}(\mathbf{\tilde{L}})\mathbf{f}, \tag{5}\]

where \(\mathbf{\tilde{L}}=2/\lambda_{max}\cdot\mathbf{L}-\mathbf{I}\) and \(\lambda_{max}\) is the maximal eigenvalue. In [13], the authors further simplify the Chebyshev framework, setting \(r=1\) and assuming \(\lambda_{max}\approx 2\), allowing them to redefine a single convolutional layer.

**The propagation model** The proposed propagation model can be written as

\[\mathbf{H}_{i}^{l}=\sigma_{r}\left(\mathbf{\tilde{D}}^{-1/2}\mathbf{\tilde{A}}\mathbf{\tilde{D}}^{-1/2}\mathbf{W}\mathbf{H}_{i}^{l-1}\right) \tag{6}\]

where \(\mathbf{H}^{l}\) is the matrix of activations in the \(l^{th}\) layer, \(\mathbf{\tilde{A}}=\mathbf{A}+\mathbf{I}\) is the adjacency matrix of the undirected graph \(\mathcal{G}\) with added self-connections, \(\mathbf{I}\) is the identity matrix, \(\tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij}\), and \(\mathbf{W}\) is the trainable weight matrix. \(\sigma_{r}(\cdot)\) denotes a nonlinear activation function. This simplified form improves computational performance on larger graphs and predictive performance on small training sets. The class probabilities are obtained from the final activations via a softmax,

\[p=\mathrm{softmax}(\mathbf{H}_{i}^{l}) \tag{7}\]

Figure 1: The framework of the graph convolutional network.

## 3 Experiments

### Datasets

In this work, we use Planetscope satellite images [14] with RGB bands at a 3 m spatial resolution. The imagery is acquired by Doves, which form a satellite constellation that provides a complete image of the earth once per day. The study sites cover four cities: (1) Munich, Germany; (2) Rome, Italy; (3) Paris, France; (4) Zurich, Switzerland. The corresponding building footprint layer was downloaded from OpenStreetMap (OSM). The imagery is processed using a 64 \(\times\) 64 sliding window with a stride of 19 pixels to produce 48,000 sample patches. The training data has 80% of the patches, and the testing data has 20%. The training and testing data are spatially separated.

### Experimental Setup

For all networks, stochastic gradient descent (SGD) with a learning rate of \(10^{-4}\) was adopted as the optimizer and negative log-likelihood loss (NLLLoss) was taken as the loss function. The implementation is based on PyTorch and runs on a single NVIDIA Tesla P100 16 GB GPU. Semantic segmentation methods based on FCN-32s, FCN-16s, FCN-8s, ResNet-DUC, E-Net, SegNet, U-Net, CWGAN-GP, FC-DenseNet, and GCN were taken as the algorithms of comparison.

### Results and Analysis

In this work, we evaluated the inference performances using metrics for a quantitative comparison: overall accuracy (OA), F1 scores, and Intersection over Union (IoU) scores. We evaluate the performance of different deep convolutional neural networks and compare them to our proposed method. The quantitative results are listed in Table 1, and results of the sample for visual comparison are in Fig. 2. FCN-32s and FCN-16s exhibit poor performance, since the feature maps of later layers have only high-level semantics with poor localization.
ResNet-DUC can achieve better results than the previous two because of hybrid dilated convolution and dense upsampling convolution. It is limited due to the lack of skip connections. Max-pooling indices are reused in SegNet during the decoding process, which can reduce the number of parameters, enabling end-to-end training. However, since it only uses max-pooling indices in the decoder, some local details cannot be recovered, e.g. small buildings will be neglected. FCN-8s and U-Net outperform previous networks due to the concatenation of low-level features. Compared to other CNN models, CWGAN-GP shows promising results for building footprint generation. The skip connections in the generator combine both the lower and higher layers to generate the final output, retaining more details and better preserving the boundary of the building area. Moreover, the min-max game between the generator and discriminator of the GAN can motivate both to improve their functionalities. FC-DenseNet has better performance than previous networks, since the DenseNet block concatenates feature maps learned by different layers, which can increase variation in the input of subsequent layers and improve efficiency. GCN outperforms all other semantic segmentation neural networks in numerical accuracy and visual results. On one hand, it can aggregate the information from neighbor nodes (short range), which allows the model to learn about local structures. On the other hand, the DCNN can extract more comprehensive and representative features, which enhance the feature fusion by embedding more spatial information into high-level features.

\begin{table} \begin{tabular}{c c c c} \hline \hline Methods & **OA** & **F1** & **IoU** \\ \hline FCN-32s & 0.7318 & 0.2697 & 0.1559 \\ FCN-16s & 0.7698 & 0.3993 & 0.2494 \\ ResNet-DUC & 0.7945 & 0.4542 & 0.2930 \\ E-Net & 0.8243 & 0.5427 & 0.3724 \\ SegNet & 0.8261 & 0.5558 & 0.3848 \\ U-Net & 0.8412 & 0.6043 & 0.4329 \\ FCN-8s & 0.8472 & 0.6222 & 0.4513 \\ CWGAN-GP & 0.8483 & 0.6268 & 0.4562 \\ FC-DenseNet & 0.8551 & 0.6328 & 0.4628 \\ CRFasRNN & 0.8592 & 0.6415 & 0.4757 \\ GCN & **0.8640** & **0.6677** & **0.5012** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different deep convolutional neural networks on the test datasets

## 4 Conclusion

In this work, we develop a novel framework for semantic segmentation, which combines the DCNN and the GCN. Our proposed framework outperforms the state-of-the-art approaches for building footprint extraction. Furthermore, the proposed framework will be applied for the semantic segmentation of 3-D point clouds, which could be considered as a general graph.

## 5 Acknowledgment

This work is supported by the Bavaria California Technology Center (Project: Large-Scale Problems in Earth Observations), the European Research Council (No. ERC-2016-StG-714087, So2Sat), and the Helmholtz Association Young Investigators Group "SiPEO" (Vrhng-1018, www.sipeo.bgu.tum.de). The authors thank the Gauss Centre for Supercomputing (GCS) and the Leibniz Supercomputing Centre (LRZ). The authors thank Planet for providing the datasets.
2310.18206
FLSH -- Friendly Library for the Simulation of Humans
Computer models of humans are ubiquitous throughout computer animation and computer vision. However, these models rarely represent the dynamics of human motion, as this requires adding a complex layer that solves body motion in response to external interactions and according to the laws of physics. FLSH is a library that facilitates this task for researchers and developers who are not interested in the nuisances of physics simulation, but want to easily integrate dynamic humans in their applications. FLSH provides easy access to three flavors of body physics, with different features and computational complexity: skeletal dynamics, full soft-tissue dynamics, and reduced-order modeling of soft-tissue dynamics. In all three cases, the simulation models are built on top of the pseudo-standard SMPL parametric body model.
Pablo Ramón, Cristian Romero, Javier Tapia, Miguel A. Otaduy
2023-10-27T15:29:38Z
http://arxiv.org/abs/2310.18206v1
# FLSH - Friendly Library for the Simulation of Humans ###### Abstract Computer models of humans are ubiquitous throughout computer animation and computer vision. However, these models rarely represent the dynamics of human motion, as this requires adding a complex layer that solves body motion in response to external interactions and according to the laws of physics. FLSH is a library that facilitates this task for researchers and developers who are not interested in the nuisances of physics simulation, but want to easily integrate dynamic humans in their applications. FLSH provides easy access to three flavors of body physics, with different features and computational complexity: skeletal dynamics, full soft-tissue dynamics, and reduced-order modeling of soft-tissue dynamics. In all three cases, the simulation models are built on top of the pseudo-standard SMPL parametric body model. 3d avatar model, physics simulation ## 1 Introduction Parametric models of humans [2] have greatly simplified the modeling, animation, reconstruction and tracking of humans across computer vision and computer graphics applications. However, to date these parametric models are intrinsically geometric, and their shape, pose and motion must be either designed or inferred through optimization methods. Parametric human models lack the ability to move according to the laws of physics. We believe that endowing parametric human models with physics-based dynamics can open up novel application possibilities, across animation, virtual reality, or computer vision. The images in Fig.1 highlight the relevance of body physics in two examples: (left) color-coding of the amount of soft-tissue dynamics in 5 different motions; (right) extreme differences in avatar interaction without or with contact mechanics. Deformation, dynamics and contact are obtained in a natural way by solving the laws of physics on top of parametric human representations. To simplify the adoption of this technology across human modeling, animation, reconstruction and tracking, we provide FLSH, a simple software library with dynamic simulation capabilities for the popular SMPL parametric human model [2]. In FLSH, the core library runs in C++, but it offers both C++ and Python APIs to allow for flexible integration. In FLSH, the simulation of human bodies is presented in three different flavors: 1. Articulated skeleton: This is an articulated skeleton dynamics model with geometric skinning. Bodies react to contact, and they can be controlled according to target trajectories or forces. This is the simplest and fastest representation, when tissue deformation due to contact or dynamics is not necessary. Fig. 1: Two examples that evidence the relevance of body physics. On the left, skin dynamics (color-coded) induced by skeletal motion. On the right, body-body contact ensures collision-free avatar pose and motion. 2. FEM soft skeleton: This is a full finite-element model (FEM) of soft-tissue dynamics on top of the articulated skeleton. Bodies contain both the bone structure and deformable soft-tissue, simulated with state-of-the-art finite-element non-linear elasticity, for high-accuracy applications. 3. RQM soft skeleton: This is a reduced-order model (ROM) of the FEM soft skeleton. Bodies solve a real-time approximation of soft-tissue deformation, including contact, following a research solution [6]. This representation offers an optimal trade-off between accuracy and performance. The rest of the document is structured as follows. 
First, we describe the different simulation models supported in the library, covering a range of features and allowing maximal performance for each feature set. Second, we outline the high-level API of the simulation library. Third, we show and discuss some simulation results. ## 2 Simulation Methods All three simulation models in FLSH are solved using a common methodology. This simplifies the transition between simulation models based on user needs. The simulation methodology is also easy to extend. In the rest of this section, we present the overall algorithm, the main characteristics of all three simulation models, and some performance comparisons. Fig. 2 shows a comparison of avatar deformation under the three simulation models. ### Overall algorithm All simulation models are defined on top of the parametric body model SMPL [2]. SMPL is characterized by a pose \(\theta\) and shape \(\beta\). Given a template body \(\tilde{x}_{0}\) in reference pose, it is mapped to a person-specific body \(\tilde{x}=f_{shape}\left(\tilde{x}_{0},\beta\right)\) based on the shape parameters. Then, a skinning transformation \(x=f_{skinning}\left(\tilde{x},\theta\right)\) based on \(\theta\) defines the posed body shape \(x\). SMPL's skinning transformation is linear blend skinning augmented with pose and shape blend-shapes. Even though the original SMPL is defined only for the body surface, FLSH is built on top of the extension to a volumetric body by [5]. Let us consider a set of generic dynamic degrees of freedom (DoFs) \(q\), which depend on the specific simulation model of choice. The DoFs \(q\) alter the standard SMPL definition of the deformed body \(x\), as discussed below for each simulation model. FLSH simulates dynamics using backward Euler numerical integration. This can be formulated as the following optimization problem [1]: \[q=\arg\min\frac{1}{2\,h^{2}}\left(q-q^{*}\right)^{T}\,M\left(q-q^{*}\right)+V(q)\] In this optimization, \(h\) is the time step, \(q^{*}\) the tentative state resulting from explicit integration, \(M\) is the mass matrix, and \(V\) collects all potential energy terms. In essence, the API of FLSH provides the evaluation of the objective, gradient and Hessian of the optimization. This allows extensibility, as users can program their own DoFs and energy terms, which are added to those of FLSH. Furthermore, FLSH offers separate access to the inertial and potential-energy terms in the optimization. In this way, simply by dropping the inertial term, FLSH can also simulate static deformations. The library is complemented with a solver example. FLSH offers some default contact handling capabilities, as well as tools for generalizing contact handling. First, it allows the internal definition of some simple parametric colliders (spheres, cylinders). Second, it provides access to the Jacobian \(\frac{\partial x}{\partial q}\). In this way, users can externally define arbitrary energy terms \(V_{user}(x)\), and obtain gradients \(\frac{\partial V_{user}}{\partial q}=\frac{\partial V_{user}}{\partial x}\frac{\partial x}{\partial q}\) and approximate Hessians \(\frac{\partial^{2}V_{user}}{\partial q^{2}}=\frac{\partial x^{T}}{\partial q}\frac{\partial^{2}V_{user}}{\partial x^{2}}\frac{\partial x}{\partial q}\) with respect to the DoFs. ### Articulated skeleton For each body bone in the SMPL skeleton, we define a rigid transformation \(\phi_{l}\). Then, the DoFs of the skeletal dynamics model correspond to the collection of rigid bone transformations, \(q=\{\phi_{l}\}\). 
We can obtain the SMPL pose \(\theta\) by extracting the skeleton root transformation and the joint angles from \(q\), and the deformed body \(x\) is directly obtained through the skinning transformation defined above. In the articulated skeleton model, the mass matrix \(M\) is derived from rigid-body inertia terms associated with each bone. The potential energy terms included in the model are: joint constraints to maintain the skeletal structure, and tracking constraints between the skeleton and target configurations (e.g. coming from mocap) implemented as joint rotation springs. ### FEM soft skeleton In the FEM soft skeleton model, we define a skin displacement field \(u(\bar{x})\). We apply the skin deformation in unposed state, hence transforming the reference body shape as \(\bar{x}+u\). By adding SMPL's skinning transformation, the deformed body can be described as \(x=f_{skinning}\left(\bar{x},\theta\right)\). We use an FEM discretization with nodal displacements \(\{u_{j}\}\). Grouping all nodal displacements together with the rigid bone transformations yields the degrees of freedom \(q=\left(\{\phi_{i}\},\{u_{j}\}\right)\). In FLSH, all body shapes use the same discretization, independent of the shape coefficients \(\beta\). This allows easy transition between body shapes if needed by the user's application. We model skin elasticity using the approach in [5], which implies: 1. The definition of deformation gradient \(F=I+\frac{\partial u}{\partial x}\), which accounts for contact and inertial deformations, but respects SMPL's data-driven detail under pose changes. 2. A strain energy density \(\Psi(F)\) corresponding to an orthotropic Saint Venant-Kirchhoff hyperelastic model with Fung-type saturation. The soft-tissue parameters are estimated from the DYNA dataset [3] following the multi-person optimization approach of [4]. In the FEM soft skeleton model, the energies of the articulated skeleton are extended in the following way. The mass matrix \(M\) is augmented with terms corresponding to the skin displacements, as well as cross-terms with the bone transformations. The new terms are derived from the kinetic energy of world-space skin velocities \(\dot{x}\). The potential energy \(V\) now also includes the integral of the strain energy density \(\Psi\) over the body's volume. ### ROM soft skeleton The reduced model retains much of the structure of the FEM soft skeleton, but it represents the skin displacement field in a linear reduced basis, \(u(\bar{x})=U(\bar{x})\)\(z\), with \(z\) the vector of reduced DoFs and \(U\) the matrix of basis coefficients. We use as reduced model the bounded generalized biharmonic coordinates of [7], following the model architecture of [6]. This allows seamless integration in the basis of handle points in the skin together with the skeletal bones. The DoFs of the ROM soft skeleton model are \(q=\left(\{\phi_{i}\},z\right)\). The deformed body \(x\) is defined with the same expression as in the FEM soft skeleton, with the only difference that skin displacements are represented in the reduced basis. Figure 2: Comparison of deformations produced by the simulation modes in FLSH. From left to right: articulated skeleton, FEM soft skeleton, and ROM soft skeleton. The colormaps indicate the amount of soft tissue deformation. As shown in the images, the ROM soft skeleton provides a good approximation to the FEM soft skeleton, and therefore offers a good tradeoff between speed and accuracy. 
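To make the solver methodology of Section 2.1 concrete, the following is a small, self-contained NumPy sketch of one backward-Euler step solved with Newton's method, mirroring the split into inertial and potential-energy terms that the library exposes. This is not FLSH code: the toy potential (gravity plus a stiff quadratic floor penalty standing in for a contact energy) and all numerical values are assumptions chosen only for illustration.

```python
import numpy as np


def backward_euler_step(q_prev, v_prev, M, grad_V, hess_V, h=1e-2, iters=10):
    """One implicit-Euler step via Newton's method on the incremental potential."""
    q_star = q_prev + h * v_prev              # tentative state from explicit integration
    q = q_prev.copy()
    for _ in range(iters):
        grad = M @ (q - q_star) / h**2 + grad_V(q)   # inertial + potential parts
        hess = M / h**2 + hess_V(q)
        dq = np.linalg.solve(hess, grad)
        q -= dq
        if np.linalg.norm(dq) < 1e-10:
            break
    v = (q - q_prev) / h                      # implicit-Euler velocity update
    return q, v


# Toy potential: gravity plus a stiff quadratic floor penalty at y = 0
# (a stand-in for a contact energy term). q stacks y-coordinates only.
def grad_V(q):
    return np.full_like(q, 9.81) + np.where(q < 0.0, 1e4 * q, 0.0)


def hess_V(q):
    return np.diag(np.where(q < 0.0, 1e4, 0.0))


q, v, M = np.array([0.05, 0.10]), np.zeros(2), np.eye(2)
for _ in range(100):
    q, v = backward_euler_step(q, v, M, grad_V, hess_V)
print(q)  # both particles settle slightly below y = 0, where the penalty balances gravity
```

The same structure applies regardless of whether \(q\) collects bone transformations, FEM nodal displacements, or reduced coordinates; only the callbacks that evaluate \(M\), \(\nabla V\), and \(\nabla^{2}V\) change.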
Using a reduced basis instead of a full FEM discretization dramatically reduces the solver cost, thanks to the reduced size of the problem gradient and Hessian. We also add cubature, following again the data-oblivious approach of [6], to approximate the skin inertia and elastic energy. ### Performance The performance of the various simulation models is highly dependent on the number of degrees of freedom. The SMPL model, which serves as the basis for all models, consists of 24 bones, 13,776 triangles and 6,890 vertices. Pose \(\theta\) is parameterized with 144 coefficients and shape \(\beta\) with 10 coefficients. Our tetrahedral volume discretization of the body for the FEM soft skeleton consists of around 43,500 tetrahedra and 10,400 nodes. Furthermore, the ROM soft skeleton model contains 87 point handles, and we use 500 cubature samples. In our performance tests, we have measured both the runtime cost and the generation of the avatars. Note that the soft models require a more costly generation process, which includes tetrahedral meshing and, in the case of the ROM soft skeleton, cubature sampling. The current version of FLSH implements avatar generation every time a new avatar is created, but future versions will support caching of avatar model data. Table 1 shows the performance measurements on a PC with a 3.4 GHz Intel Core i7-4770 CPU with 32 GB of memory. ## 3 Library API The FLSH library contains two main objects that are accessible through its API: the SoftAvatar class, which stores the different data structures to represent the avatar body, and the Simulable class, which provides access to simulation state and optimization terms. ### Initialization and settings The creation of an avatar must follow these steps: 1. Call the constructor of the SoftAvatar class. The constructor receives as arguments an SMPL avatar object and a vector of shape coefficients. 2. Enable settings that define the type of simulation model. There are 4 different settings: enableSoftTissue() If disabled, the simulation model is the default articulated skeleton. enableReducedModel() It allows selecting FEM or ROM soft skeleton. enableInertialCubature() enableElasticCubature() 3. Call SoftAvatar::initVolumetricData(). The creation of the simulation object must follow these steps: 1. Call the constructor of a Simulable class. The constructor receives the following arguments: a SoftAvatar object, the SMPL object, and possibly a sequence of mocap frames to be tracked. 2. Call Simulable::initialize(). If the user wants to include multiple avatars in the scene, then this is as simple as creating multiple SoftAvatar objects and Simulable interfaces. Each Simulable will read/write data according to its own state size, and this state can be merged into larger vectors and/or matrices corresponding to the full scene. \begin{table} \begin{tabular}{|c|c|c|} \hline & **Avatar generation** & **Simulation** \\ \hline Articulated skeleton & \textless{}1 s & 45 fps \\ \hline FEM soft skeleton & 15 s & 0.2 fps \\ \hline ROM soft skeleton & 25 s & 20 fps \\ \hline \end{tabular} \end{table} Table 1: Benchmarking. ### Runtime API High-level per-step calls: preUpdate: updates the active frame of mocap data, if mocap tracking is used. postUpdate: updates internal data. State management: getState: it gets the current state of the DoFs. updateState: it sets a new state computed by the external solver. pushState: adds the current state to a stack. This can be used e.g. in line-search solvers. popState: restores the previous state from the stack. 
getStateJacobian: it gets the Jacobian of the skeleton pose, for external skeleton-based energies. getNodesStateJacobian: it gets the Jacobian of mesh nodes, for external mesh-based energies. Optimization evaluation: getDynamicEnergyScalar: get the inertial part of the optimization function. getPotentialEnergyScalar: get the potential-energy part of the optimization function. getDynamicGradientVector: get the inertial part of the optimization gradient. getPotentialGradientVector: get the potential-energy part of the gradient optimization gradient. fixVectorKinematic: it receives a full gradient and applies Dirichlet boundary conditions. getDynamicHessianTripletVector: get the inertial part of the optimization Hessian. getPotentialHessianTripletVector: get the potential-energy part of the optimization Hessian. fixMatrixKinematic: it receives a full Hessian and applies Dirichlet boundary conditions. ## 4 Simulation Results Next, we discuss some simulation examples that highlight the features of the avatar models in FLSH, as well as differences between the various simulation models. Fig. 2 compares body deformations with all three simulation models while tracking a mocap sequence. The images also highlight the differences in soft-tissue deformation. As expected, the articulated skeleton's deformation is due simply to SMPL's skinning transformation, but there is no additional soft-tissue deformation. In the soft skeleton models, on the other hand, there is soft-tissue deformation induced by the body's own inertia. The ROM soft skeleton provides a good approximation to the FEM soft skeleton, and therefore offers a good tradeoff between speed and accuracy. As mentioned earlier, in the soft skeleton models in FLSH, soft-tissue properties are adapted based on body shape. Fig. 3 shows an example of this shape-based soft-tissue parameterization, for 5 different body shapes. As expected, the differences in soft-tissue parameters produce noticeable differences in the soft-tissue deformation of the various body shapes. Finally, we highlight the benefit of physics-based simulation to resolve contact-based interactions. Fig. 4 shows an example of articulated skeleton where the lack of physics-based simulation leads to disturbing avatar interpenetration. As shown in the images, the FLSH avatars are endowed with a hierarchy of analytical capsules for fast collision detection. Fig. 5 compares the accuracy of contact resolution with the different simulation model in FLSH. From the images, we draw three main conclusions: (1) Lack of simulation produces a very unrealistic result. (2) In the articulated skeleton model, the full reaction to contact is absorbed by the skeleton, while in the soft skeleton models part is absorbed by the skeleton and part by the soft tissue, as evidenced by the displacement of the chin. (3) The ROM soft skeleton approximates well contact for large objects like this sphere, but the FEM soft skeleton becomes necessary to resolve contact with small objects and/or sharp features. ## 5 Discussion FLSH is conceived as a living project to offer physics-based avatar simulation to the research community. Its first version includes the three simulation models described in this paper, and we hope that adoption is facilitated by the versatile cross-platform API and the examples provided. We understand that in many computational problems involving humans, the shape, pose or motion of the human are variables that are optimized. 
To provide stronger support to such applications, we would like to augment FLSH with differentiability capabilities in the future.
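For orientation, here is a hypothetical end-to-end usage sketch assembled from the call names listed in Section 3. The Python module name, argument names, return conventions, and the triplet-assembly helper are assumptions made for illustration only; the solver example bundled with the library remains the authoritative reference.

```python
# Hypothetical usage sketch assembled from the call names in Section 3.
# The module name "flsh", argument names, return types, and the triplet
# assembly helper are assumptions for illustration only.
import numpy as np
import flsh  # assumed name of the Python bindings

smpl = flsh.SMPL()                              # assumed SMPL wrapper object
avatar = flsh.SoftAvatar(smpl, np.zeros(10))    # SMPL object + 10 shape coefficients
avatar.enableSoftTissue()                       # soft skeleton instead of rigid-only
avatar.enableReducedModel()                     # pick the ROM variant for interactive rates
avatar.initVolumetricData()

sim = flsh.Simulable(avatar, smpl, None)        # third argument: optional mocap frames
sim.initialize()


def to_dense(triplets, n):
    """Assumed helper: assemble (row, col, value) triplets into a dense matrix."""
    H = np.zeros((n, n))
    for r, c, v in triplets:
        H[r, c] += v
    return H


for frame in range(600):
    sim.preUpdate()
    q = sim.getState()
    for _ in range(5):  # user-side Newton iterations on the Section 2.1 objective
        g = sim.getDynamicGradientVector() + sim.getPotentialGradientVector()
        g = sim.fixVectorKinematic(g)
        H = to_dense(list(sim.getDynamicHessianTripletVector())
                     + list(sim.getPotentialHessianTripletVector()), len(q))
        H = sim.fixMatrixKinematic(H)
        q = q - np.linalg.solve(H, g)
        sim.updateState(q)
    sim.postUpdate()
```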
2302.01546
Group Fairness in Non-monotone Submodular Maximization
Maximizing a submodular function has a wide range of applications in machine learning and data mining. One such application is data summarization whose goal is to select a small set of representative and diverse data items from a large dataset. However, data items might have sensitive attributes such as race or gender, in this setting, it is important to design \emph{fairness-aware} algorithms to mitigate potential algorithmic bias that may cause over- or under- representation of particular groups. Motivated by that, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a set of items that maximizes a non-monotone submodular function, while ensuring that the number of selected items from each group is proportionate to its size, to the extent specified by the decision maker. We develop the first constant-factor approximation algorithms for this problem. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items.
Jing Yuan, Shaojie Tang
2023-02-03T04:51:54Z
http://arxiv.org/abs/2302.01546v2
# Group Fairness in Non-monotone Submodular Maximization ###### Abstract Maximizing a submodular function has a wide range of applications in machine learning and data mining. One such application is data summarization whose goal is to select a small set of representative and diverse data items from a large dataset. However, data items might have sensitive attributes such as race or gender, in this setting, it is important to design _fairness-aware_ algorithms to mitigate potential algorithmic bias that may cause over- or under- representation of particular groups. Motivated by that, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a set of items that maximizes a non-monotone submodular function, while ensuring that the number of selected items from each group is proportionate to its size, to the extent specified by the decision maker. We develop the first constant-factor approximation algorithms for this problem. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items. ## 1 Introduction Submodular function refers to a broad class of functions which satisfy the natural diminishing returns property: adding an additional item to a larger existing subset is less beneficial. A wide range of machine learning and AI problems, including exemplar-based clustering [7], feature selection [6], active learning [12], influence maximization in social networks [20], recommender system [9], and diverse data summarization [19], can be formulated as a submodular maximization problem. This problem, whose goal is to select a set of items to maximize a submodular function, and its variants [14, 18] have been extensively studied in the literature subject to various constraints, including cardinality, matroid, or knapsack-type restrictions. We notice that in practise, items or individuals are often associated with different groups based on various attributes, such as gender, race, age, religion, or other factors. Existing algorithms might exhibit bias if left unchecked, for example, some of the groups might be over- or under-represented in the final selected subset. Therefore, it becomes increasingly important to design _fairness-aware_ algorithms to mitigate such issues. Towards this end, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a _balanced_ set of items that maximizes a non-monotone submodular function, such that the ratio of selected items from each group to its size is within a desired range, as determined by the decision maker. Non-monotone submodular maximization has multiple compelling applications, such as feature selection [6], profit maximization [21], maximum cut [13] and data summarization [16]. Formally, we consider a set \(V\) of items (e.g., datapoints) which are partitioned into \(m\) groups: \(V_{1},V_{2},\cdots,V_{m}\) such that items from the same group share same attributes (e.g., gender). We say that a set \(S\subseteq V\) of items is \((\alpha,\beta)\)-fair if for all groups \(i\in[m]\), it holds that \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\leq\lfloor\beta|V_{i}|\rfloor\). Using our model, it allows for the decision maker to specify the desired level of fairness by setting appropriate values of \(\alpha\) and \(\beta\). 
Specifically, setting \(\alpha=\beta\) leads to the highest level of fairness in that the number of selected items is _strictly_ proportional to its group size; if we set \(\alpha=0\) and \(\beta=1\), there are no fairness constraints. Our goal is to find such a \((\alpha,\beta)\)-fair subset of items that maximizes a submodular objective function. Our definition of fairness, which balances solutions with respect to sensitive attributes, has gained widespread acceptance in the academic community, as demonstrated by its frequent use in previous studies [4, 10, 5]. There are several other notations of fairness that can be captured by our formulation such as the 80%-rule [1], statistical parity [8] and proportional representation [17]. ### Our Contributions * Our study breaks new ground by examining the classic (non-monotone) submodular maximization problem under \((\alpha,\beta)\)-fairness constraints. Our model offers flexibility in capturing varying degrees of fairness as desired by the decision maker, by adjusting the values of \(\alpha\) and \(\beta\). * We develop the first constant-factor approximation algorithm for this problem. We observe that the parameter \(\alpha\) is closely linked to the complexity of solving the \((\alpha,\beta)\)-fair non-monotone submodular maximization problem. In particular, when \(\alpha\leq 1/2\), we design a \(\frac{\gamma}{2}\)-approximation algorithm and when \(\alpha>1/2\), we develop a \(\frac{\gamma}{3}\)-approximation algorithm, where \(\gamma\) is the approximation ratio of the current best algorithm for matroid-constrained submodular maximization. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items. We provide approximation algorithms that have a constant-factor approximation ratio for this extended model. ### Additional Related Works In recent years, there has been a growing awareness of the importance of fair and unbiased decision-making systems. This has led to an increased interest in the development of fair algorithms in a wide range of applications, including influence maximization [25], classification [26], voting [4], bandit learning [15], and data summarization [3]. Depending on the specific context and the type of bias that one is trying to mitigate, existing studies adopt different metrics of fairness. This can lead to different optimization problems and different fair algorithms that are tailored to the specific requirements of the application. Our notation of fairness is general enough to capture many existing notations such as the 80%-rule [1], statistical parity [8] and proportional representation [17]. Unlike most of existing studies on fair submodular maximization [4] whose objective is to maximize a monotone submodular function, [10] develop fair algorithms in the context of streaming non-monotone submodular maximization. Their proposed notation of fairness is more general than ours, leading to a more challenging optimization problem which does not admit any constant-factor approximation algorithms. [24, 23] aim to develop randomized algorithms that satisfy average fairness constraints. Very recently, [22] extend the studies of fair algorithms to a more complicated adaptive setting and they propose a new metric of fairness called group equality. ## 2 Preliminaries and Problem Statement We consider a set \(V\) of \(n\) items. There is a non-negative submodular utility function \(f:2^{V}\rightarrow\mathbb{R}_{+}\). 
Denote by \(f(e\mid S)\) the marginal utility of \(e\in V\) on top of \(S\subseteq V\), i.e., \(f(e\mid S)=f(\{e\}\cup S)-f(S)\). We say a function \(f:2^{V}\rightarrow\mathbb{R}_{+}\) is submodular if for any two sets \(X,Y\subseteq V\) such that \(X\subseteq Y\) and any item \(e\in V\setminus Y\), \[f(e\mid Y)\leq f(e\mid X).\] Assume \(V\) is partitioned into \(m\) disjoint groups: \(V_{1},V_{2},\cdots,V_{m}\). We assume that there is a given lower and upper bound on the fraction of items of each group that must be contained in a feasible solution. These two bounds, namely \(\alpha\) and \(\beta\), represent group fairness constraints. The problem of \((\alpha,\beta)\)-fair submodular maximization problem (labelled as **P.0**) can be written as follows. \begin{tabular}{|l|} \hline **P.0**\(\max f(S)\) **subject to:** \\ \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\leq\lfloor\beta|V_{i}|\rfloor, \forall i\in[m]\). \\ \hline \end{tabular} One can adjust the degree of group fairness in a feasible solution through choosing appropriate values of \(\alpha\) and \(\beta\). I.e., strict group fairness is achieved at \(\alpha=\beta\) in which case every feasible solution must contain the same \(\alpha\) fraction of items from each group; if we set \(\alpha=0\) and \(\beta=1\), then there is no group fairness constraints. We next present the hardness result of this problem. Lemma 1: _Problem **P.0** is NP-hard._ Proof: We prove this through reduction to the classic _cardinality constrained submodular maximization problem_ which we define below. Definition 1: The input of cardinality constrained submodular maximization problem is a group of items \(U\), a submodular function \(h:2^{U}\rightarrow\mathbb{R}_{+}\), and a cardinality constraint \(b\); we aim to select a group of items \(S\subseteq U\) such that \(h(S)\) is maximized and \(|S|\leq b\). We next show a reduction from cardinality constrained submodular maximization problem to **P.0**. Consider any given instance of cardinality constrained submodular maximization problem, we construct a corresponding instance of **P.0** as follows: Let \(V=U\), \(f=h\), assume there is only one group, i.e., \(V=V_{1}\), and let \(\alpha=0\), \(\beta=b/|U|\). It is easy to verify that these two instances are equivalent. This finishes the proof of the reduction. \(\Box\) ## 3 Non-monotone Submodular Maximization with Group Fairness Warm-up: Monotone Utility FunctionIf \(f\) is monotone and submodular, we can easily confirm that **P.0** can be simplified to **P.1** by removing the lower bound constraints. This is because in this case, increasing the size of a solution by adding more items will not decrease its utility. As a result, the lower bound constraints in **P.0**, which state that \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\) for all \(i\in[m]\), can always be met by adding sufficient items to the solution. \begin{tabular}{|l|} \hline **P.1**\(\max f(S)\) **subject to:** \\ \(|S\cap V_{i}|\leq\lfloor\beta|V_{i}|\rfloor,\forall i\in[m]\). \\ \hline \end{tabular} Since \(f\) is a monotone submodular function, **P.1** is a well-known problem of maximizing a monotone submodular function subject to matroid constraints1. This problem has a \((1-1/e)\)-approximation algorithm. Footnote 1: A matroid is a pair \(\mathcal{M}=(V,\mathcal{I})\) where \(\mathcal{I}\subseteq 2^{V}\) and \(1\). \(\forall Y\in\mathcal{I},X\subseteq Y\to X\in\mathcal{I}\). \(2\). \(\forall X,Y\in\mathcal{I};|X|<|Y|\rightarrow\exists e\in Y\setminus X;X\cup\{e \}\in\mathcal{I}\). 
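To make the warm-up problem **P.1** concrete, the sketch below greedily adds items while respecting the per-group upper bounds \(\lfloor\beta|V_{i}|\rfloor\). The coverage objective is only an example of a monotone submodular function; note that this plain discrete greedy guarantees a \(1/2\) factor under a matroid constraint, while the \((1-1/e)\) bound quoted above requires the continuous greedy algorithm.

```python
import math


def greedy_upper_bounded(items, group_of, f, beta, group_sizes):
    """Greedy for P.1: repeatedly add the best item whose group is below floor(beta*|V_i|)."""
    caps = {i: math.floor(beta * n) for i, n in group_sizes.items()}
    taken = {i: 0 for i in group_sizes}
    S = set()
    while True:
        best, best_gain = None, 0.0
        for e in items:
            if e in S or taken[group_of[e]] >= caps[group_of[e]]:
                continue
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            return S
        S.add(best)
        taken[group_of[best]] += 1


# Example: a coverage function (monotone submodular) over two groups of items.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}, "d": {5}}
group_of = {"a": 0, "b": 0, "c": 1, "d": 1}


def f(S):
    covered = set()
    for e in S:
        covered |= coverage[e]
    return len(covered)


print(greedy_upper_bounded(coverage, group_of, f, beta=0.5,
                           group_sizes={0: 2, 1: 2}))   # e.g. {'c', 'a'}
```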
We then proceed to develop approximation algorithms for non-monotone functions. We will examine two scenarios, specifically when \(\alpha\leq 1/2\) and when \(\alpha>1/2\). ### The case when \(\alpha\leq 1/2\) In the scenario where \(\alpha\leq 1/2\), we use the solution of **P.1** as a building block to construct our algorithm. First, it is easy to verify that **P.1** is a relaxed version of **P.0** with lower bound constraints \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\) in **P.0** being removed. Because \(f\) is a submodular function, **P.1** is a classic problem of maximizing a (non-monotone) submodular function subject to matroid constraints. There exist effective solutions for **P.1**. Now we are ready to present the design of our algorithm as below. 1. Apply the state-of-the-art algorithm \(\mathcal{A}\) for matroid constrained submodular maximization to solve **P.1** and obtain a solution \(A^{P.1}\). 2. Note that \(A^{P.1}\) is not necessarily a feasible solution to **P.0** because it might violate the lower bound constraints \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\) for some groups. To make it feasible, we add additional items to \(A^{P.1}\). Specifically, for each group \(i\in[m]\) such that \(|A^{P.1}\cap V_{i}|<\lfloor\alpha|V_{i}|\rfloor\), our algorithm selects a backup set \(B_{i}\) of size \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|\), by randomly sampling \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|\) items from \(V_{i}\setminus A^{P.1}\). Define \(B_{i}=\emptyset\) if \(|A^{P.1}\cap V_{i}|\geq\lfloor\alpha|V_{i}|\rfloor\). 3. At the end, add \(\cup_{i\in[m]}B_{i}\) to \(A^{P.1}\) to build the final solution \(A^{approx}\), i.e., \(A^{approx}=A^{P.1}\cup(\cup_{i\in[m]}B_{i})\). ``` 1:Apply \(\mathcal{A}\) to solve **P.1** and obtain a solution \(A^{P.1}\) 2:for every group \(i\in[m]\)do 3:if\(|A^{P.1}\cap V_{i}|<\lfloor\alpha|V_{i}|\rfloor\)then 4: select a random backup set \(B_{i}\) of size \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|\) from \(V_{i}\setminus A^{P.1}\) 5:else 6:\(B_{i}\leftarrow\emptyset\) 7:\(A^{approx}\gets A^{P.1}\cup(\cup_{i\in[m]}B_{i})\) 8:return\(A^{approx}\) ``` **Algorithm 1** Approximation Algorithm for **P.0** when \(\alpha\leq 1/2\) The pseudocode of this approximation algorithm is given as Algorithm 1. Observe that \(A^{P.1}\) is a feasible solution to **P.1**, hence \(A^{P.1}\) satisfies upper bound constraints of **P.1** and hence **P.0**, i.e., \(|S\cap V_{i}|\leq\lfloor\beta|V_{i}|\rfloor,\forall i\in[m]\). According to the construction of \(B_{i}\), it is easy to verify that adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.1}\) does not violate the upper bound constraints because \(\cup_{i\in[m]}B_{i}\) are only supplemented to those groups which do not satisfy the lower bound constraints of **P.0**, i.e., \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\). Moreover, adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.1}\) makes it satisfy lower bound constraints of **P.0**. Hence, \(A^{approx}\) is a feasible solution to **P.0**. Lemma 2: \(A^{approx}\) _is a feasible solution to **P.0**._ #### 3.2.2 Performance Analysis We next analyze the performance of Algorithm 1. We first introduce a useful lemma from [2]. 
Lemma 3: _If \(f\) is submodular and \(S\) is a random subset of \(V\), such that each item in \(V\) is contained in \(S\) with probability at most \(p\), then \(\mathbb{E}_{S}[f(S)]\geq(1-p)f(\emptyset)\)._ The next lemma states that if \(A^{P.1}\) is a \(\gamma\)-approximate solution of **P.1**, then \(f(A^{P.1})\) is at least \(\gamma\) fraction of the optimal solution of **P.0**. Lemma 4: _Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Let \(OPT\) denote the optimal solution of **P.0**, we have \(f(A^{P.1})\geq\gamma f(OPT)\)._ _Proof:_ Because \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint, we have \(f(A^{P.1})\geq\gamma f(O^{P.1})\) where \(O^{P.1}\) denotes the optimal solution of **P.1**. Moreover, because **P.1** is a relaxed version of **P.0**, we have \(f(O^{P.1})\geq f(OPT)\). Hence, \(f(A^{P.1})\geq\gamma f(OPT)\). \(\Box\) We next show that augmenting \(A^{P.1}\) with items from the random set \(\cup_{i\in[m]}B_{i}\) reduces its utility by a factor of at most \(1/2\) in expectation. Here the expectation is taken over the distribution of \(\cup_{i\in[m]}B_{i}\). Lemma 5: _Suppose \(\alpha\leq 1/2\), we have \(\mathbb{E}_{A^{approx}}[f(A^{approx})]\geq\frac{1}{2}f(A^{P.1})\) where \(A^{approx}=A^{P.1}\cup(\cup_{i\in[m]}B_{i})\)._ _Proof:_ Recall that \(B_{i}=\emptyset\) for all \(i\in[m]\) such that \(|A^{P.1}\cap V_{i}|\geq\lfloor\alpha|V_{i}|\rfloor\), hence, adding those \(B_{i}\) to \(A^{P.1}\) does not affect its utility. In the rest of the proof we focus on those \(B_{i}\) with \[|A^{P.1}\cap V_{i}|<\lfloor\alpha|V_{i}|\rfloor. \tag{1}\] Recall that for every \(i\in[m]\) such that \(|A^{P.1}\cap V_{i}|<\lfloor\alpha|V_{i}|\rfloor\), \(B_{i}\) is a random set of size \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|\) that is sampled from \(V_{i}\setminus A^{P.1}\). It follows that each item in \(V_{i}\setminus A^{P.1}\) is contained in \(B_{i}\) with probability at most \[\frac{\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|}{|V_{i}\setminus A^{P.1 }|}. \tag{2}\] We next give an upper bound of (2). First, \[\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|\leq\lfloor\alpha|V_{i}| \rfloor\leq|V_{i}|/2, \tag{3}\] where the second inequality is by the assumption that \(\alpha\leq 1/2\). Moreover, \[|V_{i}\setminus A^{P.1}|=|V_{i}|-|A^{P.1}\cap V_{i}|=(\lfloor \alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|)+(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor) \tag{4}\] \[\geq(\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|)+|V_{i}|/2, \tag{5}\] where the inequality is by the assumption that \(\alpha\leq 1/2\). Hence, \[(2)\leq\frac{\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|}{( \lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|)+|V_{i}|/2}\leq\frac{|V_{i}|/2 }{|V_{i}|/2+|V_{i}|/2}=1/2, \tag{6}\] where the first inequality is by (5); the second inequality is by (3) and the assumption that \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.1}\cap V_{i}|>0\) (listed in (1)). That is, the probability that each item in \(V_{i}\setminus A^{P.1}\) is contained in \(B_{i}\) is at most \(1/2\). It follows that the probability that each item in \(V\setminus A^{P.1}\) is contained in \(\cup_{i\in[m]}B_{i}\) is at most \(1/2\). Moreover, Lemma 3 states that if \(f\) is submodular and \(S\) is a random subset of \(V\), such that each item in \(V\) appears in \(S\) with probability at most \(p\), then \(\mathbb{E}_{A}[f(A)]\geq(1-p)f(\emptyset)\). 
With the above discussion and the observation that \(f(A^{P.1}\cup\cdot)\) is submodular, it holds that \(\mathbb{E}_{A^{approx}}[f(A^{approx})]=\mathbb{E}_{\cup_{i\in[m]}B_{i}}[f(A^{ P.1}\cup(\cup_{i\in[m]}B_{i}))]\geq(1-\frac{1}{2})f(A^{P.1}\cup\emptyset)= \frac{1}{2}f(A^{P.1})\). \(\Box\) Our main theorem as below follows from Lemma 4 and Lemma 5. Theorem 3.1: _Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and \(\alpha\leq 1/2\), we have \(\mathbb{E}_{A^{approx}}[f(A^{approx})]\geq\frac{\gamma}{2}f(OPT)\)._ One option of \(\mathcal{A}\) is the continuous double greedy algorithm proposed in [11] which gives a \(1/e-o(1)\)-approximation solution, that is, \(\gamma\geq 1/e-o(1)\). This, together with Theorem 3.1, implies that \(\mathbb{E}_{A^{approx}}[f(A^{approx})]\geq\frac{1/e-o(1)}{2}f(OPT)\). ### The case when \(\boldsymbol{\alpha>1/2}\) We next consider the case when \(\alpha>1/2\). We first introduce a new utility function \(g:2^{V}\rightarrow\mathbb{R}_{+}\) as below: \[g(\cdot)=f(V\setminus\cdot). \tag{7}\] We first present a well-known result, which states that submodular functions maintain their submodularity property when taking their complement. Lemma 6: _If \(f\) is submodular, then \(g\) must be submodular._ With utility function \(g\), we present a new optimization problem **P.2** as below: \begin{tabular}{|l|} \hline **P.2**\(\max g(S)\) \\ **subject to:** \\ \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V _{i}|\rfloor,\forall i\in[m]\). \\ \hline \end{tabular} **P.2** is a flipped version of the original problem **P.0** in the sense that if there is a \(\gamma\)-approximate solution \(A^{P.2}\) to **P.2**, it can be easily verified that \(V\setminus A^{P.2}\) is a \(\gamma\)-approximate solution to **P.0**. As a result, we will focus on solving **P.2** for the rest of this section. To solve **P.2**, we introduce another problem (labeled as **P.3**) as follows: \begin{tabular}{|l|} \hline **P.3**\(\max g(S)\) \\ **subject to:** \\ \(|S\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor,\forall i\in[m]\). \\ \hline \end{tabular} **P.3** is relaxed version of **P.2** with lower bound constraints \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}|\) in **P.2** being removed. Because \(g\) is a submodular function, **P.3** is a classic problem of maximizing a submodular function subject to matroid constraints. Now we are ready to present the design of our algorithm. 1. Apply the state-of-the-art algorithm \(\mathcal{A}\) for matroid constrained submodular maximization to solve **P.3** and obtain a solution \(A^{P.3}\). 2. Note that \(A^{P.3}\) is not necessarily a feasible solution to **P.2** because it might violate the lower bound constraints \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}|\) for some groups. We add additional items to \(A^{P.3}\) to make it feasible. Specifically, for each group \(i\in[m]\) such that \(|A^{P.3}\cap V_{i}|<|V_{i}|-\lfloor\beta|V_{i}|\rfloor\), our algorithm selects a backup set \(B_{i}\) of size \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|\), by randomly sampling \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|\) items from \(V_{i}\setminus A^{P.3}\). Define \(B_{i}=\emptyset\) if \(|A^{P.1}\cap V_{i}|\geq|V_{i}|-\lfloor\beta|V_{i}|\rfloor\). 3. Add \(\cup_{i\in[m]}B_{i}\) to \(A^{P.3}\) to build \(A^{approx}\), i.e., \(A^{approx}=A^{P.3}\cup(\cup_{i\in[m]}B_{i})\). 
Return \(V\setminus A^{approx}\) as the final solution. ``` 1:Apply \(\mathcal{A}\) to solve **P.3** and obtain a solution \(A^{P.3}\) 2:for every group \(i\in[m]\)do 3:if\(|A^{P.3}\cap V_{i}|<|V_{i}|-\lfloor\beta|V_{i}|\rfloor\)then 4: select a random backup set \(B_{i}\) of size \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|\) from \(V_{i}\setminus A^{P.3}\) 5:else 6:\(B_{i}\leftarrow\emptyset\) 7:\(A^{approx}\gets A^{P.3}\cup(\cup_{i\in[m]}B_{i})\) 8:return\(V\setminus A^{approx}\) ``` **Algorithm 2** Approximation Algorithm for **P.0** when \(\alpha>1/2\) The pseudocode of this approximation algorithm is given as Algorithm 2. Observe that \(A^{P.3}\) satisfies upper bound constraints of **P.3** and hence **P.2** because \(A^{P.3}\) is a feasible solution to **P.3**. According to the construction of \(B_{i}\), adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.1}\) does not violate the upper bound constraints because \(\cup_{i\in[m]}B_{i}\) are added to meet the lower bound constraints of **P.2** if necessary. Moreover, adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.3}\) makes it satisfy lower bound constraints of **P.2**. Hence, \(A^{approx}\) is a feasible solution to **P.2**. Lemma 7: \(A^{approx}\) _is a feasible solution to **P.2**._ #### 4.2.2 Performance Analysis We first introduce a technical lemma which states that if \(A^{P.3}\) is a \(\gamma\)-approximate solution of **P.3**, then \(f(A^{P.3})\) is at least \(\gamma\) fraction of the optimal solution of **P.2**. This lemma follows from the observation that **P.3** is a relaxation of **P.2**. Lemma 8: _Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Let \(O^{P.2}\) denote the optimal solution of **P.2**, it holds that \(g(A^{P.3})\geq\gamma g(O^{P.2})\)._ We next show that augmenting \(A^{P.3}\) with items from \(\cup_{i\in[m]}B_{i}\) reduces its utility by a factor of at most \(2/3\) in expectation. Lemma 9: _Suppose \(\alpha>1/2\), \(\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{1}{3}g(A^{P.3})\) where \(A^{approx}=A^{P.3}\cup(\cup_{i\in[m]}B_{i})\)._ _Proof:_ Recall that \(B_{i}=\emptyset\) for all \(i\in[m]\) such that \(|A^{P.3}\cap V_{i}|\geq|V_{i}|-\lfloor\beta|V_{i}|\rfloor\), hence, adding those \(B_{i}\) to \(A^{P.3}\) does not affect its utility. Therefore, we focus on those groups \(i\in[m]\) with \(|A^{P.3}\cap V_{i}|<|V_{i}|-\lfloor\beta|V_{i}|\rfloor\) in the rest of the proof. Let \(M=\{i\mid|A^{P.3}\cap V_{i}|<|V_{i}|-\lfloor\beta|V_{i}|\rfloor\}\) denote the set containing the indexes of all such groups and we assume \(M\neq\emptyset\) to avoid trivial cases. We next show that it is safe to assume \(\min_{i\in M}|V_{i}|>1\) without loss of generality, i.e., the smallest group in \(M\) contains at least two items. To prove this, we consider two cases, depending on the value of \(\beta\). If \(\beta=1\), then \(|A^{P.3}\cap V_{i}|<|V_{i}|-\lfloor\beta|V_{i}|\rfloor\) does not hold for any group \(i\) such that \(|V_{i}|=1\), that is, \(\min_{i\in M}|V_{i}|>1\). If \(\beta<1\), then according to the group fairness constraints listed in **P.0**, we are not allowed to select any items from those groups with \(|V_{i}|=1\). Hence, removing all groups with size one from consideration does not affect the quality of the optimal solution. With the assumption that \(\min_{i\in M}|V_{i}|>1\), we are now in position to prove this lemma. 
Recall that for every \(i\in M\), \(B_{i}\) is a random set of size \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|\) that is sampled from \(V_{i}\setminus A^{P.3}\). It follows that each item in \(V_{i}\setminus A^{P.3}\) appears in \(B_{i}\) with probability at most \[\frac{|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|}{|V_{i}\setminus A ^{P.3}|}. \tag{8}\] We next give an upper bound of (8). Because we assume \(\alpha>1/2\), we have \(\beta\geq\alpha>1/2\). This, together with the assumption that \(\min_{i\in M}|V_{i}|>1\), implies that for all \(i\in M\), \[|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|\leq|V_{i}|-\lfloor\beta |V_{i}|\rfloor\leq 2|V_{i}|/3. \tag{9}\] Moreover, \[|V_{i}\setminus A^{P.3}|=|V_{i}|-|A^{P.3}\cap V_{i}| \tag{10}\] \[=(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|)+(|V_{i} |-(|V_{i}|-\lfloor\beta|V_{i}|\rfloor))\] (11) \[=(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|)+\lfloor \beta|V_{i}|\rfloor\] (12) \[\geq(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|)+|V_ {i}|/3, \tag{13}\] where the inequality is by the observation that \(\beta>1/2\). It follows that \[(8)\leq\frac{|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|}{(|V_{i}|- \lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|)+|V_{i}|/3}\leq\frac{2|V_{i}|/3 }{2|V_{i}|/3+|V_{i}|/3}=2/3, \tag{14}\] where the first inequality is by (13) and the second inequality is by (9) and the assumption that \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor-|A^{P.3}\cap V_{i}|>0\). That is, each item in \(V_{i}\setminus A^{P.3}\) appears in \(B_{i}\) with probability at most \(2/3\). Lemma 3 and the observation that \(g(A^{P.3}\cup\cdot)\) is submodular imply that \(\mathbb{E}_{A^{approx}}[g(A^{approx})]=\mathbb{E}_{\cup_{i\in[m]}B_{i}}[g(A^{ P.3}\cup(\cup_{i\in[m]}B_{i}))]\geq(1-\frac{2}{3})g(A^{P.3}\cup\emptyset)= \frac{1}{3}g(A^{P.3})\). \(\Box\) Lemma 8 and Lemma 9 together imply that \[\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{1}{3}g(A^{P.3})\geq\frac{ \gamma}{3}g(O^{P.2}).\] By the definition of function \(g\), we have \[\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]=\mathbb{E}_{A^{approx}}[g(A ^{approx})]\geq\frac{\gamma}{3}g(O^{P.2})=\frac{\gamma}{3}f(OPT)\] where the last equality is by the observation that **P.2** and **P.0** share the same value of the optimal solution. Hence, the following main theorem holds. Theorem 3.1: _Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and \(\alpha>1/2\), we have \(\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]\geq\frac{\gamma}{3}f(OPT)\)._ If we adopt the continuous double greedy algorithm [11] as \(\mathcal{A}\) to compute \(A^{P.3}\), it gives a \(1/e-o(1)\)-approximation solution, that is, \(\gamma\geq 1/e-o(1)\). This, together with Theorem 3.1, implies that \(\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]\geq\frac{1/e-o(1)}{3}f(OPT)\). ## 4 Extension: Incorporating Global Cardinality Constraint In this section, we extend **P.0** to incorporate a global cardinality constraint. A formal definition of this problem is listed in **P.A**. Our objective is to find a best \(S\) subject to a group fairness constraint \((\alpha,\beta)\) and an additional cardinality constraint \(c\). \begin{tabular}{|l|} \hline **P.A**\(\max f(S)\) \\ **subject to:** \\ \(|\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\leq\lfloor\beta|V_{i}|\rfloor,\forall i \in[m]\). \\ \(|S|\leq c\). 
\\ \hline \end{tabular} ### The case when \(\alpha\leq 1/2\) We first consider the case when \(\alpha\leq 1/2\). We introduce a new optimization problem **P.B** as follows: \begin{tabular}{|l|} \hline **P.B**\(\max f(S)\) \\ **subject to:** \\ \(|S\cap V_{i}|\leq\lfloor\beta|V_{i}|\rfloor,\forall i\in[m]\). \\ \(\sum_{i\in[m]}\max\{\lfloor\alpha|V_{i}|\rfloor,|S\cap V_{i}|\}\leq c\). \\ \hline \end{tabular} It is easy to verify that **P.B** is a relaxation of **P.A** in the sense that every feasible solution to **P.A** is also a feasible solution to **P.B**. Hence, we have the following lemma. Lemma 10: _Let \(OPT\) denote the optimal solution of **P.A** and \(O^{P.B}\) denote the optimal solution of **P.B**, we have \(f(O^{P.B})\geq f(OPT)\)._ It has been shown that the constraints in **P.B** gives rise to a matroid [10]. This, together with the assumption that \(f\) is a submodular function, implies that **P.B** is a classic problem of maximizing a submodular function subject to matroid constraints. Now we are ready to present the design of our algorithm. 1. Apply the state-of-the-art algorithm \(\mathcal{A}\) for matroid constrained submodular maximization to solve **P.B** and obtain a solution \(A^{P.B}\). 2. Note that \(A^{P.B}\) is not necessarily a feasible solution to **P.A** because it might violate the lower bound constraints \(\lfloor\alpha|V_{i}|\rfloor\leq|S\cap V_{i}|\) for some groups. To make it feasible, we add additional items to \(A^{P.B}\). Specifically, for each group \(i\in[m]\) such that \(|A^{P.B}\cap V_{i}|<\lfloor\alpha|V_{i}|\rfloor\), our algorithm selects a backup set \(B_{i}\) of size \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.B}\cap V_{i}|\), by randomly sampling \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.B}\cap V_{i}|\) items from \(V_{i}\setminus A^{P.B}\). Define \(B_{i}=\emptyset\) if \(|A^{P.1}\cap V_{i}|\geq\lfloor\alpha|V_{i}|\rfloor\). 3. At the end, add \(\cup_{i\in[m]}B_{i}\) to \(A^{P.B}\) to build the final solution \(A^{approx}\), i.e., \(A^{approx}=A^{P.B}\cup(\cup_{i\in[m]}B_{i})\). ``` 1:Apply \(\mathcal{A}\) to solve **P.B** and obtain a solution \(A^{P.B}\) 2:for every group \(i\in[m]\)do 3:if\(|A^{P.B}\cap V_{i}|<\lfloor\alpha|V_{i}|\rfloor\)then 4: select a random backup set \(B_{i}\) of size \(\lfloor\alpha|V_{i}|\rfloor-|A^{P.B}\cap V_{i}|\) from \(V_{i}\setminus A^{P.B}\) 5:else 6:\(B_{i}\leftarrow\emptyset\) 7:\(A^{approx}\gets A^{P.B}\cup(\cup_{i\in[m]}B_{i})\) 8:return\(A^{approx}\) ``` **Algorithm 3** Approximation Algorithm for **P.A** when \(\alpha\leq 1/2\) The pseudocode of this approximation algorithm is given as Algorithm 3. Observe that \(A^{P.B}\) satisfies the group-wise upper bound constraints of **P.A** because \(A^{P.B}\) meets the first set of constraints in **P.B**. According to the construction of \(B_{i}\), adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.B}\) does not violate the group-wise upper bound constraints of **P.A** because \(\cup_{i\in[m]}B_{i}\) are added to meet the lower bound constraints of **P.A** if necessary. Moreover, adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.B}\) does not violate the global cardinality constraint of **P.A** because \(A^{P.B}\) meets the second set of constraints in **P.B**. At last, it is easy to verify that adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.B}\) makes it satisfy the lower bound constraints of **P.A**. Hence, \(A^{approx}\) is a feasible solution to **P.A**. Lemma 11: \(A^{approx}\) _is a feasible solution to **P.A**._ Following the same proof of Theorem 1, we have the following theorem. 
Theorem 4.1: _Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and \(\alpha\leq 1/2\), we have \(\mathbb{E}_{A^{approx}}[f(A^{approx})]\geq\frac{\gamma}{2}f(OPT)\)._ ### The case when \(\alpha>1/2\) We next consider the case when \(\alpha>1/2\). Recall that \(g(\cdot)=f(V\setminus\cdot)\). We first present a flipped formulation of **P.A** as below: \begin{tabular}{|l|} \hline **P.C**\(\max g(S)\) \\ **subject to:** \\ \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor,\forall i\in[m]\). \\ \(|S|\geq n-c\). \\ \hline \end{tabular} Suppose there is a \(\gamma\)-approximate solution \(A^{P.C}\) to **P.C**, it is easy to verify that \(V\setminus A^{P.C}\) is a \(\gamma\)-approximate solution to **P.A**. We focus on solving **P.C** in the rest of this section. We first introduce a new optimization problem (labeled as **P.D**) as follows: \begin{tabular}{|l|} \hline **P.D**\(\max g(S)\) \\ **subject to:** \\ \(|S\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor,\forall i\in[m]\). \\ \hline \end{tabular} **P.D** is a relaxed version of **P.C** with both the group-wise lower bound constraints \(|V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}|\) and the global lower bound constraint \(|S|\geq n-c\) in **P.C** being removed. Hence, we have the following lemma. Lemma 12: _Let \(O^{P.C}\) denote the optimal solution of **P.C** and \(O^{P.D}\) denote the optimal solution of **P.D**, we have \(g(O^{P.D})\geq g(O^{P.C})\)._ Recall that if \(f\) is submodular, \(g\) must be submodular (by Lemma 6). Hence, **P.D** is a classic problem of maximizing a submodular function subject to matroid constraints. We next present the design of our algorithm. 1. Apply the state-of-the-art algorithm \(\mathcal{A}\) for matroid constrained submodular maximization to solve **P.D** and obtain a solution \(A^{P.D}\). 2. Note that \(A^{P.D}\) is not necessarily a feasible solution to **P.C** because it might violate the group-wise or the global lower bound constraints of **P.C**. We add additional items to \(A^{P.D}\) to make it feasible. Specifically, for each group \(i\in[m]\), our algorithm selects a backup set \(B_{i}\) of size \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|\), by randomly sampling \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|\) items from \(V_{i}\setminus A^{P.D}\). Define \(B_{i}=\emptyset\) if \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|=0\). 3. Add \(\cup_{i\in[m]}B_{i}\) to \(A^{P.D}\) to build \(A^{approx}\), i.e., \(A^{approx}=A^{P.D}\cup(\cup_{i\in[m]}B_{i})\). Return \(V\setminus A^{approx}\) as the final solution. 
```
1: Apply \(\mathcal{A}\) to solve **P.D** and obtain a solution \(A^{P.D}\)
2: for every group \(i\in[m]\) do
3:   if \(|A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\) then
4:     select a random backup set \(B_{i}\) of size \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|\) from \(V_{i}\setminus A^{P.D}\)
5:   else
6:     \(B_{i}\leftarrow\emptyset\)
7: \(A^{approx}\gets A^{P.D}\cup(\cup_{i\in[m]}B_{i})\)
8: return \(V\setminus A^{approx}\)
```
**Algorithm 4** Approximation Algorithm for **P.A** when \(\alpha>1/2\)

The pseudocode of this approximation algorithm is given as Algorithm 4. Observe that adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.D}\) ensures that each group contributes exactly \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\) items to the solution. Because \(n-c\leq\sum_{i\in[m]}(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor)\) (otherwise **P.C** does not have a feasible solution), \(A^{P.D}\cup(\cup_{i\in[m]}B_{i})\) must satisfy all constraints in **P.C**. Hence, we have the following lemma.

Lemma 13: _\(A^{approx}\) is a feasible solution to **P.C**._

We next analyze the performance of \(A^{approx}\). The following lemma states that adding \(\cup_{i\in[m]}B_{i}\) to \(A^{P.D}\) reduces its utility by a factor of at most \(2/3\) in expectation.

Lemma 14: _Suppose \(\alpha>1/2\), we have \(\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{1}{3}g(A^{P.D})\)._

_Proof:_ Observe that \(B_{i}=\emptyset\) for all \(i\in[m]\) such that \(|A^{P.D}\cap V_{i}|=|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\); hence, adding those \(B_{i}\) to \(A^{P.D}\) does not affect its utility. Therefore, we focus on those groups \(i\in[m]\) with \(|A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\) in the rest of the proof. Let \(Z=\{i\in[m]\mid|A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\}\) denote the set containing the indexes of all such groups. We assume \(Z\neq\emptyset\) to avoid trivial cases.

We next show that it is safe to assume \(\min_{i\in Z}|V_{i}|>1\) without loss of generality, i.e., that the smallest group in \(Z\) contains at least two items. To prove this, we consider two cases, depending on the value of \(\alpha\). If \(\alpha=1\), then \(|A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\) does not hold for any group \(i\) such that \(|V_{i}|=1\). Hence, \(\min_{i\in Z}|V_{i}|>1\). If \(\alpha<1\), then according to the group fairness constraints listed in **P.A**, we are not allowed to select any items from those groups with \(|V_{i}|=1\). Hence, removing all groups with size one from consideration does not affect the quality of the optimal solution.

With the assumption that \(\min_{i\in Z}|V_{i}|>1\), we are now ready to prove this lemma. Recall that for every \(i\in Z\), \(B_{i}\) is a random set of size \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|\) that is sampled from \(V_{i}\setminus A^{P.D}\). It follows that each item in \(V_{i}\setminus A^{P.D}\) appears in \(B_{i}\) with probability at most

\[\frac{|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|}{|V_{i}\setminus A^{P.D}|}. \tag{15}\]

We next give an upper bound of (15). Because we assume \(\alpha>1/2\) and \(\min_{i\in Z}|V_{i}|>1\), it holds that for all \(i\in Z\),

\[|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\leq 2|V_{i}|/3. \tag{16}\]
Moreover,

\[|V_{i}\setminus A^{P.D}|=|V_{i}|-|A^{P.D}\cap V_{i}| \tag{17}\]
\[=(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+(|V_{i}|-(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor)) \tag{18}\]
\[=(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+\lfloor\alpha|V_{i}|\rfloor \tag{19}\]
\[\geq(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+|V_{i}|/3, \tag{20}\]

where the inequality is by the assumptions that \(\alpha>1/2\) and \(|V_{i}|>1\). It follows that

\[(15)\leq\frac{|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|}{(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+|V_{i}|/3}\leq\frac{2|V_{i}|/3}{2|V_{i}|/3+|V_{i}|/3}=2/3, \tag{21}\]

where the first inequality is by (20) and the second inequality is by (16) and the assumption that \(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|>0\). That is, each item in \(V_{i}\setminus A^{P.D}\) appears in \(B_{i}\) with probability at most \(2/3\). Lemma 3 and the observation that \(g(A^{P.D}\cup\cdot)\) is submodular imply that \(\mathbb{E}_{A^{approx}}[g(A^{approx})]=\mathbb{E}_{\cup_{i\in[m]}B_{i}}[g(A^{P.D}\cup(\cup_{i\in[m]}B_{i}))]\geq(1-\frac{2}{3})g(A^{P.D}\cup\emptyset)=\frac{1}{3}g(A^{P.D})\). \(\square\)

Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint, we have

\[\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{1}{3}g(A^{P.D})\geq\frac{\gamma}{3}g(O^{P.D}),\]

where the first inequality is by Lemma 14 and the second inequality holds because \(A^{P.D}\) is a \(\gamma\)-approximate solution to **P.D**. This, together with \(g(O^{P.D})\geq g(O^{P.C})\) (as proved in Lemma 12), implies that \(\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{\gamma}{3}g(O^{P.C})\). By the definition of function \(g\), we have

\[\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]=\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{\gamma}{3}g(O^{P.C})=\frac{\gamma}{3}f(OPT),\]

where the last equality is by the observation that **P.A** and **P.C** share the same value of the optimal solution. Hence, the following main theorem holds.

Theorem 4.2: _Suppose \(\mathcal{A}\) is a \(\gamma\)-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and \(\alpha>1/2\), we have \(\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]\geq\frac{\gamma}{3}f(OPT)\)._

## 5 Conclusion

This paper presents a comprehensive investigation of the non-monotone submodular maximization problem under group fairness constraints. Our main contribution is the development of several constant-factor approximation algorithms for this problem. In the future, we plan to expand our research to explore alternative fairness metrics.
2309.03023
Universal Preprocessing Operators for Embedding Knowledge Graphs with Literals
Knowledge graph embeddings are dense numerical representations of entities in a knowledge graph (KG). While the majority of approaches concentrate only on relational information, i.e., relations between entities, fewer approaches exist which also take information about literal values (e.g., textual descriptions or numerical information) into account. Those which exist are typically tailored towards a particular modality of literal and a particular embedding method. In this paper, we propose a set of universal preprocessing operators which can be used to transform KGs with literals for numerical, temporal, textual, and image information, so that the transformed KGs can be embedded with any method. The results on the kgbench dataset with three different embedding methods show promising results.
Patryk Preisner, Heiko Paulheim
2023-09-06T14:08:46Z
http://arxiv.org/abs/2309.03023v1
# Universal Preprocessing Operators for Embedding Knowledge Graphs with Literals

###### Abstract

Knowledge graph embeddings are dense numerical representations of entities in a knowledge graph (KG). While the majority of approaches concentrate only on relational information, i.e., relations between entities, fewer approaches exist which also take information about literal values (e.g., textual descriptions or numerical information) into account. Those which exist are typically tailored towards a particular modality of literal and a particular embedding method. In this paper, we propose a set of universal preprocessing operators which can be used to transform KGs with literals for numerical, temporal, textual, and image information, so that the transformed KGs can be embedded with any method. The results on the kgbench dataset with three different embedding methods show promising results.

Knowledge Graph, Embedding, Representation, Literal Information

+ Footnote †: Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

## 1 Introduction

Knowledge graphs have become a common means to represent information across various domains [1, 2]. They consist of entities and their relations, but many also contain literal information, like textual descriptions of entities, numerical values, or even images. For example, the following is an excerpt of the representation of the entity _Mannheim_ in DBpedia [3]:

```
dbr:Mannheim dbo:country dbr:Germany .
dbr:University_of_Mannheim dbp:city dbr:Mannheim .
dbr:Mannheim dbo:populationMetro "2362046"^^xsd:nonNegativeInteger .
dbr:Mannheim dbo:abstract "Mannheim [...] is the second-largest city in the German state of Baden-Wurttemberg..."@en .
dbr:Mannheim foaf:depiction <http://commons.wikimedia.org/wiki/Special:FilePath/NUB_Mannheim_2014-03-13.jpg> .
```

Most embedding approaches only consider relations between entities when computing numeric representations for entities. In the above example, when learning a representation for the entity _Mannheim_, they would use only the first two statements, but neglect the latter three, containing textual, numerical, and image information. However, those also contain relevant information about the entity, which could lead to a better latent representation if they were used by the embedding approach.

While a few embedding approaches have been proposed which take into account literal information, they have a few shortcomings: most of them (1) target only one modality (e.g., text, numbers, _or_ images), and (2) are adaptations of a particular embedding method and hence cannot be used in conjunction with arbitrary embedding methods.

In this paper, we propose a set of knowledge graph preprocessing operators for textual, numeric, and image literals which can be used to create a KG with only relations from one containing literal information. The resulting knowledge graph can then be processed by any arbitrary embedding method.

The rest of this paper is structured as follows. Section 2 positions our approach in the light of existing research. Section 3 introduces our approach, followed by a set of experiments described in section 4. We conclude with a summary and an outlook on future work.

## 2 Related Work

Many standard benchmarks for knowledge graph embeddings, especially in the link prediction field, do not come with literals. Hence, the topic has not drawn as much attention as knowledge graph embeddings for purely relational KGs for quite some time.
A survey from 2021 [4] lists a number of approaches, most of which are extensions of existing knowledge graph embedding models, typically classic models like TransE. Those approaches usually change the loss function of the underlying model and hence are bound to that model alone. An exception is LiteralE [5], which has been applied to different embedding algorithms like TransE, ComplEx, or DistMult. Moreover, most approaches focus only on one modality of literals. A more recent survey from 2023 [6] confirms that picture.

In contrast, the work presented in this paper proposes to preprocess a KG with literals in a way that the information in the literals is represented in a KG with only relational information. We investigate a number of preprocessing techniques for various modalities, which can be applied together with arbitrary embedding models.

The pyRDF2vec [7] implementation of RDF2vec [8] has functionality to extract literals directly as features. This creates a heterogeneous representation of an entity (consisting of an embedding plus an additional vector of literal values), which is similar to the _Data Properties_ strategy described in [9]. In contrast, the approach in this work targets a uniform embedding representation.

An alternative is to alter the knowledge graph upfront, aiming at transforming the information encoded in literals into relational statements. Such approaches would not be bound to a particular embedding method, and, if developed for literals of different modalities, could also be combined to exploit several modalities at once. However, approaches based on preprocessing are still rare. One exception is [10], who propose the use of binning of numerical values. We reuse some of their approaches in our work in this paper. Another paper [11] also proposes three strategies for preprocessing literals, one of which is used as a baseline in this paper.

## 3 Approach

Our approach relies on graph preprocessing. Instead of changing the embedding approach per se, we augment the graph with additional nodes and edges encoding some of the information encoded in the literals. Fig. 1 shows the overall framework. Specifically, the embedding step is decoupled from the augmentation step. The last two steps (classifier fitting and evaluation) are concerned with evaluation. For the experiments in this paper, we consider node classification problems, but other downstream tasks (such as link prediction, node regression, or node clustering) would also be possible.

### Baselines

For all approaches, we employ three simple baselines. The first, tagged EXCLUDE, simply excludes all literals. Since most embedding approaches ignore literals, this should not have an impact. The second, tagged TRANSFORM, creates an entity for each combination of a literal value and a property. In the example above, dbr:Mannheim dbo:populationMetro "2362046"^^xsd:nonNegativeInteger . would be transformed to1 dbr:Mannheim dbo:populationMetro new:populationMetro2362046 . This strategy is identical to the method called _Literal2Entity_ in [11].

Footnote 1: Note that all of the approaches technically turn an owl:DatatypeProperty into an owl:ObjectProperty. If this is not wanted, e.g., since the ontology should be further reused, this can trivially be changed, e.g., by moving the property into a different namespace.
The third and final baseline, tagged ONENTITY, creates one single entity for each relation. The idea is to capture any information that is indicated only by the presence or absence of a datatype property (such as dbo:populationMetro), regardless of the actual literal value, similarly to the relation strategy in [9]. This strategy would transform the above triple to dbr:Mannheim dbo:populationMetro new:populationMetroAnyValue .

Figure 1: Overall Framework

### Handling Numeric Literals

Creating a single entity for each literal value may not be a good strategy for capturing the semantics of that value. Besides scalability issues, two very similar literal values are indistinguishable from two very dissimilar ones. To counter those issues, we employ a number of additional techniques for representing numeric literals, based on binning. The most basic one, tagged nbins, is similar to the one proposed in [10]. We create \(n\) bins from the set of literal values for each predicate. Furthermore, the entities representing the bins are connected to each other. Fig. 2 shows the idea of this approach.

Figure 2: Illustration of the nbins Approach

While nbins requires setting a fixed value for \(n\), p%bins lets the user set a percentage of unique values. For example, for a datatype property with 1,000 occurrences and 200 unique values, 10%bins would create 20 bins (10% of 200). Moreover, we also adapt the idea of _overlapping bins_ and _hierarchical binning_ from [10], which allows literal values to be contained in more than one bin, and therefore extends the expressivity of the entities representing bins. Since outliers can distort the bins created, we also combine the binning with a preceding outlier detection step. Specifically, we use the local outlier factor (LOF) method [12] to first discard outliers, then perform the binning.

Finally, we adopt an idea from [13], which is based on the observation that the same property may be used for multiple types of objects, hence resulting in different blended value distributions. For example, the property height may be used for people and buildings, but binning should be conducted on values from both classes separately, since the bin high would have a different span for people and buildings. Since many knowledge graphs do not come with an extensive type system, we alter the original approach in [13] to use either sets of relations for identifying similar and dissimilar entity types (in the example above, people and buildings would come with different sets of relations), or sets of relations and entities. The two approaches are coined kl-rel and kl-relent. Both approaches build a lattice of entities with the datatype at hand, compute the KL-divergence of the set of relations (or the set of relations and the connected entities, respectively), and split the population of values until it falls below a certain threshold (in our experiments, we use 300 values as a threshold). Then, the binning is performed individually for each subpopulation. All the approaches create one entity per bin and relation. Hence, the population statement in our example would be transformed to a statement like

```
dbr:Mannheim dbo:populationMetro new:populationMetroBin02 .
```

### Handling Temporal Literals

For temporal literals, i.e., literals typed with xsd:date, we follow a different strategy. The first strategy for handling dates, coined DATBIN, turns the date into a UNIX timestamp and applies the nBINS strategy above.
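To illustrate how such a binning operator can be realized, the sketch below derives equal-width bins for the numeric values of one property and rewrites each literal statement into a bin-entity statement; a date is handled by first converting it to a UNIX timestamp, as in DATBIN. This is a minimal sketch, not the authors' implementation: the function names and the equal-width scheme are our own simplifying choices, and only the `new:<property>Bin<k>` naming mirrors the examples in this paper.

```python
from datetime import datetime, timezone

def nbins_statements(triples, n):
    """Rewrite (subject, property, numeric_value) triples into
    (subject, property, "new:<property>Bin<k>") statements using n
    equal-width bins per property (a simplified variant of nbins)."""
    values_per_prop = {}
    for _, prop, value in triples:
        values_per_prop.setdefault(prop, []).append(value)

    rewritten = []
    for subj, prop, value in triples:
        lo, hi = min(values_per_prop[prop]), max(values_per_prop[prop])
        width = (hi - lo) / n or 1.0                 # guard against a single value
        k = min(int((value - lo) / width), n - 1)    # bin index in [0, n-1]
        local_name = prop.split(":")[-1]             # e.g. "populationMetro"
        rewritten.append((subj, prop, f"new:{local_name}Bin{k:02d}"))
    return rewritten

def to_unix_timestamp(date_literal):
    """DATBIN preprocessing: turn an xsd:date literal into a number
    that can then be binned like any other numeric literal."""
    dt = datetime.fromisoformat(date_literal).replace(tzinfo=timezone.utc)
    return dt.timestamp()

# Example with the running Mannheim triple and two hypothetical entities.
triples = [
    ("dbr:Mannheim", "dbo:populationMetro", 2362046),
    ("dbr:CityA", "dbo:populationMetro", 150000),     # hypothetical
    ("dbr:CityB", "dbo:populationMetro", 5400000),    # hypothetical
]
print(nbins_statements(triples, n=4))
```

The hierarchical, overlapping, and KL-based variants described above only change how the bin boundaries are chosen; the rewriting of the literal statements stays the same.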
In the above example, the statement

```
dbr:Mannheim dbo:foundingDate "1607-01-24"^^xsd:date .
```

would be replaced by a statement like

```
dbr:Mannheim dbo:foundingDate new:foundingDateBin14 .
```

This strategy, however, does not capture the entire information in a date. For example, the similarity of two people with the same birthday (in different years) might not be captured with such an approach. Therefore, to handle temporal literals, we propose a second strategy coined DATFEAT, which extracts five new features from a date literal. In the above example, this would yield the statements

```
dbr:Mannheim dbo:foundingDate new:wednesday, new:day24, new:month1, new:quarter1, new:year1607 .
```

As shown in Fig. 3, the new entities for days, months, and quarters can again be connected in order to also capture interrelations between them.

Figure 3: Date Nodes Encoding Quarter of Date

### Handling Text Literals

Many knowledge graphs contain rich textual information, but this cannot be represented as easily as the information in numbers and dates. In order to represent textual information, we use _topic modeling_, which assigns each text literal a certain number of topics [14]. Each of those topics is then represented as a node in the graph. Specifically, we run all values of a text literal (e.g., \(\texttt{dbo:abstract}\)) through a Latent Dirichlet Allocation (LDA) algorithm, and connect each entity to all topics exceeding a certain threshold (in our experiments in this paper, we use a threshold of 10%). With this strategy, coined TXTLDA, the statement dbr:Mannheim dbo:abstract "Mannheim [...] officially the University City of Mannheim (German: Universitatsstadt Mannheim), is the second-largest city in the German state of Baden-Wurttemberg..."@en. could be replaced, e.g., by dbr:Mannheim dbo:abstract new:abstractTopic04, new:abstractTopic17.

### Handling Image Literals

For images, we use a similar technique. We reuse a large-scale neural image classification model, which predicts tags for images (e.g., whether the image is showing a person or an animal). Those tags are then represented as nodes, which are used to describe the image contents. In our experiments, we use the pre-trained VGG16 model [15], which computes probabilities for 1,000 classes of images. For each image, we use the most likely class predicted by VGG16. In our example above, the triple dbr:Mannheim foaf:depiction <http://commons.wikimedia.org/wiki/Special:FilePath/NUB_Mannheim_2014-03-13.jpg>. could be replaced by dbr:Mannheim foaf:depiction new:VGG_building.

Table 1 depicts the size changes of a knowledge graph for the individual strategies. It can be observed that the number of statements equals the number of original literal statements, and the number of entities is also changing only moderately.

## 4 Experiments

We test all of the approaches above on the node classification benchmark kgbench [16], which contains four heterogeneous datasets, as shown in Table 2. As embedding methods, we use TransE [17] and DistMult [18] using the pyKeen library [19], and RDF2vec [20] using the pyRDF2vec library [7]. As classifiers, we use kNN and SVM using the scikit-learn library [21]. Using the Adam optimizer, the two pyKeen embedders DistMult and TransE were trained in 100 epochs for TransE and 150 epochs for DistMult, using the LCWA train loop. We use a batch size of 75,000 for DistMult and 2,000 for TransE.
For all additional parameters, the default parameters provided by pyKeen were used. Hereby, pyKeen selects as default parameters the parameters used in the original paper that introduced the selected embedder [19]. RDF2vec was trained using a maximum walk depth and 500 walks per node, and 50 training epochs for word2vec. For all additional parameters, the default parameters of pyRDF2vec are used. For the classifiers, we use a grid search for parameter optimization. For kNN, the parameters in the search space are \(k=\{2,4,7,9,15\}\), for SVM, the parameters in the search space are \(C=\{0.01,0.1,1,10,100\}\). For all other parameters, we use the default values defined by scikit-learn [21].2

Footnote 2: The code for all experiments is available online at [https://gitlab.com/patryk.preisner/mkga/](https://gitlab.com/patryk.preisner/mkga/)

\begin{table} \begin{tabular}{l|c|c} Strategy & \(\delta E\) & \(\delta S\) \\ \hline EXCLUDE & – & – \\ TRANSFORM & V * R & S \\ ONENTITY & R & S \\ nBINS & n*R & S \\ DATBIN & n*R & S \\ DATFEAT & DW+DD+DM+DQ+DY & 5*S \\ LDA & T & T*S \\ VGG16 & 1,000 & S \\ \hline \end{tabular} \end{table} Table 1: Maximum size changes to the knowledge graph in number of entities (\(\delta E\)) and statements (\(\delta S\)). Variables used: number of distinct literal values (\(V\)), number of relations (\(R\)), number of literal assignment statements (\(S\)), number of distinct weekdays (\(DW\)), days (\(DD\)), months (\(DM\)), quarters (\(DQ\)), and years (\(DY\)), and topics in LDA (\(T\)).

\begin{table} \begin{tabular}{l|r|r|r|r} Dataset & amplus & dmgfull & dmg77k & mdgenre \\ \hline Classes & 8 & 14 & 5 & 12 \\ Relations & 33 & 62 & 60 & 154 \\ Nodes & 1,153,679 & 842,550 & 341,270 & 349,344 \\ Triples & 2,521,046 & 1,850,451 & 777,124 & 1,252,247 \\ \hline objects thereof... & & & & \\...IRIs & 1,464,871 & 593,291 & 288,379 & 1,001,791 \\...blank nodes & 256,515 & – & – & – \\...literals & 799,660 & 1,257,160 & 488,745 & 250,456 \\ \hline thereof... & & & & \\...numbers & 160,959 & 88,168 & 10,706 & 14,352 \\...dates & 202,304 & – & – & 113,463 \\...text & 377,542 & 834,244 & 329,987 & 54,838 \\...images & 58,855 & 58,846 & 46,108 & 67,804 \\...others & – & 275,902 & 101,944 & – \\ \end{tabular} \end{table} Table 2: The kgbench dataset

Table 3 shows the experiment results. For each literal type, we show the ones which got the best results overall, in addition to the three baselines.3 These are KL-REL with LOF for numeric literals, DATBIN for dates (however, only amplus and mdgenre contain dates), LDA for text, and VGG16 for images. Moreover, we report results of a combined approach using the combination of the five aforementioned strategies.

Footnote 3: A full table with the results for all configurations can be found at [https://gitlab.com/patryk.preisner/mkga/](https://gitlab.com/patryk.preisner/mkga/).

From the table, we can observe that in three out of four cases, the best baseline can be outperformed by a few percentage points (0.779 vs. 0.708 on amplus, 0.676 vs. 0.606 on dmg77k, 0.726 vs. 0.662 on dmgfull), whereas for mdgenre, none of the approaches yields an advantage over the best baseline excluding literals (RDF2vec+SVM). Moreover, we can observe that there is no clear correlation between the amount of literals of a particular modality (see Table 2) and the improvement achieved by including the corresponding literals.
While this might seem counterintuitive, the sheer amount of literals does not reflect the utility of the information contained therein.4

Footnote 4: As a thought experiment, imagine a numerical ID for each entity, which would greatly increase the number of numerical literals, but the literals would not contain any useful information.

The baselines TRANSFORM and ONENTITY are often strong competitors as well, indicating that in many of the cases, the presence of a literal is a strong signal, regardless of the actual literal value.

## 5 Conclusion and Future Work

We have shown that graph preprocessing is a promising strategy for representing literal information in knowledge graph embeddings, which can be combined with arbitrary embedding methods. The set of preprocessing operators is not fixed, but can be extended. For example, for text or image representation, while we used basic models to demonstrate the effectiveness of our approach, newer representation models can also be easily plugged in. A staged approach would also be feasible, e.g., representing texts first by means of a BERT encoder and then binning the resulting dimensional values. Most of the approaches used do not only create entities (e.g., for numerical bins, topics, or image labels), but also come with some score for those. For example, LDA assigns probabilities to topics, given a text. In the experiments in this paper, we used a simple thresholding mechanism to include or exclude the corresponding edges, but it would also be possible to pass the scores to the embedding model as edge weights. [22]
2310.19118
The fractional Laplacian: a primer
In this note we give a glimpse of the fractional Laplacian. In particular, we bring several definitions of this non-local operator and series of proofs of its properties. It is structured in a way as to show that several of those properties are natural extensions of their local counterparts, with some key differences.
Rafayel Teymurazyan
2023-10-29T19:10:30Z
http://arxiv.org/abs/2310.19118v1
# The fractional Laplacian: a primer ###### Abstract In this note we give a glimpse of the fractional Laplacian. In particular, we bring several definitions of this non-local operator and a series of proofs of its properties. The note is structured so as to show that several of those properties are natural extensions of their local counterparts, with some key differences. **Keywords:** Fractional Laplacian; comparison principle; Harnack inequality; Liouville theorem; approximation. **MSC 2020:** 35R11, 26A33, 47G30. ## 1. Introduction During the last decades the study of non-local equations was boosted by a large range of applications in financial mathematics (as a pricing model for American options, [27]), optimal design problems, [29], competitive stochastic games, [11, 12], population dynamics, combustion processes, catalysis processes, bio-technologies, chemical engineering, and other areas. This note is a primer to a classical example of a non-local operator - the fractional Laplacian. Over the years several advanced and comprehensive notes and books have been written on the subject, such as [1, 6, 8, 14, 20] among others. This primer is intended for those students and young researchers who are already acquainted with the classical Laplace operator and want to get a brief sense of the fractional Laplacian. The latter, while being a lot like the classical Laplacian, is at the same time also quite different. As we will see, the fractional Laplacian can be defined in various ways, and, as in the case of the classical Laplacian, it satisfies a mean value property, a maximum principle, a Harnack inequality, a Liouville theorem, and so forth. Moreover, there are a Poisson formula and a Green function available, and fractional harmonic functions, like classical harmonic functions, are \(C^{\infty}\). Of course, the non-local nature of the fractional Laplacian dictates certain modifications in those results. However, once the reader gets a glimpse of the story "behind the scene", these modifications seem quite natural. Obviously, there are also striking differences between these operators. In this note we emphasize those too. The Laplacian, \[\Delta u:=\operatorname{div}(\nabla u),\] is a classical example of a local operator. It arises naturally when, for example, looking at a Brownian motion originating in a bounded domain (with a smooth boundary): the expected value of a function, evaluated where the motion hits the boundary for the first time, solves a Dirichlet problem for the Laplace operator. In other words, the unique solution of the problem \[\begin{cases}\Delta u=0&\text{in}\ \ \Omega,\\ u=f&\text{on}\ \ \partial\Omega,\end{cases}\] where \(\Omega\) is a bounded domain with a smooth boundary, and \(f\in C(\partial\Omega)\), is given by \[u(x)=\mathbb{E}\left(f(X_{\tau})\right).\] Here \(x\in\Omega\) is the point where the Brownian motion originated, \(X_{\tau}\) is where it hits the boundary for the first time (\(\tau\) is the stopping time), and \(\mathbb{E}\) is the expected value of the process. If the process tends to move in certain directions more than in others, then we deal with equations with coefficients. These kinds of models arise in electromagnetism, fluid dynamics, thermodynamics, etc. (see, for example, [24]). Thus, continuous processes lead to a local problem. Jump processes, on the other hand, lead to non-local problems, [3, 11, 12, 24]. If in the example above instead of a continuous process one deals with a jump process, then we end up solving the non-local Dirichlet problem.
More precisely, for a purely jump Levy process, originated in a bounded domain, the expected value of the function at the first exit point solves the non-local Dirichlet problem, i.e., the unique solution of the problem \[\begin{cases}(-\Delta)^{s}u=0&\text{in}\ \ \Omega,\\ u=f&\text{in}\ \ \mathbb{R}^{n}\setminus\Omega,\end{cases}\] is given by \[u(x)=\mathbb{E}\left(f(X_{\tau})\right).\] As before, \(x\in\Omega\) is the point where the jump process originated, \(f\in C(\mathbb{R}^{n}\setminus\Omega)\) and \(X_{\tau}\in\mathbb{R}^{n}\setminus\Omega\) is the first exit point. The operator \((-\Delta)^{s}\) is the fractional Laplacian and for \(s\in(0,1)\) is defined by \[(-\Delta)^{s}u(x): = c_{n,s}\,\text{P.V.}\int_{\mathbb{R}^{n}}\frac{u(x)-u(y)}{|x-y| ^{n+2s}}\,dy\] \[= c_{n,s}\lim_{\varepsilon\to 0+}\int_{\mathbb{R}^{n}\setminus B _{\varepsilon}(x)}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,\] where \(c_{n,s}\) is a normalization constant depending only on \(n\) and \(s\). Here P.V. indicates that the integral should be understood in the "principle value sense" (defined by the last equality). Observe that unlike the problem driven from a continuous process, the boundary in the model obtained for a jump process is substituted by the whole complement of the set \(\Omega\). The idea behind this is that when jumping out of \(\Omega\) the process can end up at any point in \(\Omega^{c}\). That is, the complement of the domain in the non-local setting plays the role of the boundary in the local setting. This fact has its reflection on the modifications of some basic properties, as we will see later. In the local setting, to check whether a partial differential equation holds at a particular point, one needs to know only the values of the function in an arbitrarily small neighborhood of that point, whereas in the non-local setting it is the opposite: in order to check whether a non-local equation holds at a point, one needs information about the values of the function far away from that point. Therefore, when considering long-range integration, non-local models become more accurate. In other words, unlike local versions of problems, which can feel changes only on the boundary of the substance, non-local models become sensitive to changes that occur faraway. The following simple example shows the effect of non-locality. If \(0\leq u\leq 1\) is such that \(u\in C_{0}^{\infty}(B_{2})\) and \(u\equiv 1\) in \(B_{1}\), then for any \(x\in\mathbb{R}^{n}\setminus B_{4}\) one has \(\Delta u(x)=0\), while \[-(-\Delta)^{s}u(x) =c_{n,s}\operatorname{P.V.}\int_{\mathbb{R}^{n}}\frac{u(y)-u(x)} {|x-y|^{n+2s}}\,dy=\int_{B_{2}}\frac{u(y)}{|x-y|^{n+2s}}\,dy\] \[\geq\int_{B_{1}}\frac{dy}{(|x|+1)^{n+2s}}\geq C|x|^{-n-2s},\] for a constant \(C>0\). In fact, \(|(-\Delta)^{s}u(x)|\leq C|x|^{-n-2s}\) (see [1, Appendix B], for example). As we will see below, the non-local nature of the fractional Laplacian endows somewhat surprising behavior for solutions of equations driven by it, a remarkable example of which is the fact that any (smooth) function is fractional harmonic up to a small error, Theorem 12.1. This note is organized as follows. After introducing some notations, in Section 2 we bring several definitions of the fractional Laplacian. Yet another definition of this operator is given in Section 3, where also its fundamental solution and several properties are discussed. Some elementary properties are presented in Section 4. In Section 5, the mean value property is proved. 
Section 6 is dedicated to the maximum principle. Section 7 is devoted to the Harnack inequality. Liouville theorem in the fractional setting is proved in Section 8, followed by Schauder type estimates in Section 9. Section 10 concerns Green's function for the ball. In Section 11 it is shown that fractional harmonic functions are locally \(C^{\infty}\). Finally, in Section 12, we see that all functions are fractional harmonic up to a small error. ## Notations \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain; \(D_{i}u:=D_{x_{i}}u:=\frac{\partial u}{\partial x_{i}}\); \(Du:=(D_{1}u,D_{2}u,\ldots,D_{n}u)\); \(D_{\nu}u:=\frac{\partial u}{\partial\nu}=Du\cdot\nu\); \(x_{+}:=\max\{x,0\}\). For a multi-index \(\gamma=(\gamma_{1},\gamma_{2},\ldots,\gamma_{n})\), we use \(|\gamma|:=\gamma_{1}+\gamma_{2}+\ldots+\gamma_{n}\). For \(\alpha\in(0,1]\) and \(k\in\mathbb{N}\), the Holder semi-norm is defined as follows: \[[u]_{C^{0,\alpha}(\Omega)}:=\sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|^{\alpha}},\] \[[u]_{C^{k,\alpha}(\Omega)}:=\max_{|\gamma|=k}[D^{\gamma}u]_{C^{0,\alpha}(\Omega )},\] where \(D^{\gamma}u:=\partial_{x_{1}}^{\gamma_{1}}\dots\partial_{x_{n}}^{\gamma_{n}}u\). The Holder space \(C^{k,\alpha}(\Omega)\) consists of all functions \(u\in C^{k}(\Omega)\) for which \[\|u\|_{C^{k,\alpha}(\Omega)}:=\sum_{|\gamma|\leq k}\|D^{\gamma}u\|_{C(\Omega)} +\sum_{|\gamma|=k}[D^{\gamma}u]_{C^{0,\alpha}(\Omega)}<\infty.\] \(C^{\alpha}:=C^{0,\alpha}\), if \(\alpha\in(0,1]\) and \(C^{\alpha}:=C^{1,\alpha-1}\), if \(\alpha\in(1,2]\), and similarly, \(C^{k+\alpha}:=C^{k,\alpha}\), if \(\alpha\in(0,1]\) and \(C^{k+\alpha}:=C^{k+1,\alpha-1}\), if \(\alpha\in(k,k+1]\). We use \(\mathcal{S}\) for the Schwartz space of rapidly decreasing \(C^{\infty}\) functions in \(\mathbb{R}^{n}\). More precisely, \[\mathcal{S}:=\left\{u\in C^{\infty}(\mathbb{R}^{n});\ \sup_{x\in\mathbb{R}^{n}}|x ^{\beta}D^{\alpha}u(x)|<\infty,\forall\alpha,\beta\in\mathbb{N}_{0}^{n} \right\}.\] For \(s\in(0,1)\), set \[L^{1}_{s}(\mathbb{R}^{n}):=\left\{u\in L^{1}_{\rm loc}(\mathbb{R}^{n});\ \int_{ \mathbb{R}^{n}}\frac{|u(y)|}{1+|y|^{n+2s}}\,dy<+\infty\right\}. \tag{1.1}\] Also \(B_{r}(x_{0})\) is the ball of radius \(r\) centered at \(x_{0}\), and \(B_{r}:=B_{r}(0)\). ## 2. Several definitions of the fractional Laplacian In this section we bring five definitions of the fractional Laplacian (one more definition is given in the next section). These definitions are all equivalent once \(u\in\mathcal{S}\), [21]. There are several other definitions of the fractional Laplacian. We refer the interested reader to, for example, [20, 21, 28]. ### As a singular integral For \(s\in(0,1)\) and \(u\in\mathcal{S}\), the _fractional Laplacian_ of \(u\) is defined as \[(-\Delta)^{s}u(x):=c_{n,s}\,\mathrm{P.V.}\int_{\mathbb{R}^{n}}\frac{u(x)-u(y)} {|x-y|^{n+2s}}\,dy. \tag{2.1}\] Here \[c_{n,s}:=\int_{\mathbb{R}^{n}}\frac{1-\cos\zeta_{1}}{|\zeta|^{n+2s}}\,d\zeta \tag{2.2}\] is a normalization constant depending only on \(n\) and \(s\) (see the proof of Proposition 2.1 below). The integral in (2.1) is absolutely convergent when \(0<s<1/2\). 
Indeed, \[\int_{\mathbb{R}^{n}}\frac{|u(x)-u(y)|}{|x-y|^{n+2s}}\,dy \leq C\int_{B_{r}}\frac{|x-y|}{|x-y|^{n+2s}}\,dy+\|u\|_{L^{\infty}( \mathbb{R}^{n})}\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{dy}{|x-y|^{n+2s}}\] \[\leq C\left[\int_{B_{r}}\frac{dy}{|x-y|^{n+2s-1}}+\int_{\mathbb{R }^{n}\setminus B_{r}}\frac{dy}{|x-y|^{n+2s}}\right]\] \[=C\left[\int_{0}^{r}\frac{dt}{|t|^{2s}}+\int_{r}^{+\infty}\frac{ dt}{|t|^{2s+1}}\right]<\infty,\] where the constant \(C>0\) depends only on \(\|u\|_{L^{\infty}(\mathbb{R}^{n})}\), \(\|Du\|_{L^{\infty}(\mathbb{R}^{n})}\) and \(n\). As for \(1/2\leq s<1\), the integral in (2.1) is understood in the "Principle Value" sense, i.e., \[(-\Delta)^{s}u(x)=c_{n,s}\lim_{\varepsilon\to 0+}\int_{\mathbb{R}^{n} \setminus B_{\varepsilon}(x)}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy.\] For \(s\in(0,1)\), the constant \(c_{n,s}\) in (2.1) does not play any essential role on the properties of the fractional Laplacian. Its role is important only in the limits as \(s\to 0^{+}\) and \(s\to 1^{-}\) (for the asymptotic of this constant, as \(s\to 0^{+}\) and \(s\to 1^{-}\) see [14, Section 4]). Observe that although in (2.1), the fractional Laplacian was defined for \(u\in\mathcal{S}\), however, the integral is well defined for less regular functions. In fact, the assumption on \(u\) at infinity can be weakened by assuming \(u\in L^{1}_{s}(\mathbb{R}^{n})\), where \(L^{1}_{s}(\mathbb{R}^{n})\) is defined by (1.1). This can be checked using approximation with Schwartz functions (for details we refer the reader to [27, Proposition 2.4]). Furthermore, the \(C^{\infty}\) regularity requirement on \(u\) can be relaxed as well by asking just \(u\in C^{2s+\varepsilon}\) in a neighborhood of \(x\in\mathbb{R}^{n}\), for \(\varepsilon>0\) small. Indeed, for \(s\in(0,\frac{1}{2})\) and \(2s+\varepsilon\leq 1\) and \(r>0\) small we have \[\int_{B_{r}(x)}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dz\leq[u]_{C^{2s+\varepsilon}( B_{r}(x))}\int_{B_{r}(x)}\frac{|x-y|^{2s+\varepsilon}}{|x-y|^{n+2s}}\,dy<\infty,\] hence the fractional Laplacian is well defined at \(x\) by (2.1). For \(s\in[\frac{1}{2},1)\) still \(u\in C^{2s+\varepsilon}=C^{1,2s+\varepsilon-1}\) would suffice. ### Removing singularity Note that, in general, the right hand side of (2.1) is not well defined, as the integral may have singularity near \(x\). Also, one would like to get rid of the P.V. in the definition. As the kernel in (2.1) is symmetric, a simple change of variable, \(z:=y-x\), yields \[(-\Delta)^{s}u(x) =c_{n,s}\,\text{P.V.}\int_{\mathbb{R}^{n}}\frac{u(x)-u(y)}{|x-y|^ {n+2s}}\,dy \tag{2.3}\] \[=c_{n,s}\,\text{P.V.}\int_{\mathbb{R}^{n}}\frac{u(x)-u(x+z)}{|z|^ {n+2s}}\,dz\] \[=c_{n,s}\,\text{P.V.}\int_{\mathbb{R}^{n}}\frac{u(x)-u(x-z)}{|z|^ {n+2s}}\,dz.\] This leads to the following definition of the fractional Laplacian. **Definition 2.1**.: _For \(s\in(0,1)\) and \(u\in\mathcal{S}\), the fractional Laplacian of \(u\) is defined by_ \[(-\Delta)^{s}u(x):=\frac{c_{n,s}}{2}\int_{\mathbb{R}^{n}}\frac{2u(x)-u(x+y)-u(x- y)}{|y|^{n+2s}}\,dy, \tag{2.4}\] _where \(c_{n,s}\) is the constant defined by (2.2)._ Indeed, from (2.3) one gets \[(-\Delta)^{s}u(x)=\frac{c_{n,s}}{2}\operatorname{P.V.}\int_{\mathbb{R}^{n}} \frac{2u(x)-u(x+y)-u(x-y)}{|y|^{n+2s}}\,dy.\] This representation of the fractional Laplacian removes the singularity at the origin, as the second order Taylor expansion gives \[\frac{2u(x)-u(x+y)-u(x-y)}{|y|^{n+2s}}\leq\frac{\|D^{2}u\|_{L^{\infty}(\mathbb{ R}^{n})}}{|y|^{n+2s-2}},\] which is integrable near zero. 
Thus, we can remove P.V. in the previous equality and get (2.4). **Remark 2.1**.: _As a consequence, \((-\Delta)^{s}u\) is in fact well defined by (2.4) for any \(u\in C^{2}(\mathbb{R}^{n})\cap L^{\infty}(\mathbb{R}^{n})\). In that sense any constant, although not being in \(\mathcal{S}\) (unless identically zero), is fractional harmonic._ ### As a distribution In \(L^{1}_{s}(\mathbb{R}^{n})\) the fractional Laplacian can be defined as a distribution by \[\langle(-\Delta)^{s}u,\varphi\rangle:=\int_{\mathbb{R}^{n}}u(x)(-\Delta)^{s}\varphi(x)\,dx,\ \ \forall\varphi\in C^{\infty}_{0}(\mathbb{R}^{n}). \tag{2.5}\] In other words, for the definition of the fractional Laplacian to make sense, it is enough to assume that \(u\) is locally integrable and has a suitable growth control at infinity. ### As a generator of a Levy process The fractional Laplacian can also be defined as the generator of a \(2s\)-stable Levy process, [2]. More precisely, if \(X_{t}\) is the isotropic \(2s\)-stable Levy process starting at \(0\), then for a smooth function \(u\) \[(-\Delta)^{s}u(x)=\lim_{t\to 0^{+}}\frac{1}{t}\mathbb{E}\left[u(x)-u(x+X_{t})\right].\] ### As a Fourier transform The fractional Laplacian is a pseudo-differential operator, as the following proposition suggests. It is here that the choice of the constant \(c_{n,s}\) becomes evident. **Proposition 2.1**.: _If \(s\in(0,1)\) and \(u\in\mathcal{S}\), then_ \[(-\Delta)^{s}u(x)=\mathcal{F}^{-1}\left((2\pi|\xi|)^{2s}\,\hat{u}(\xi)\right),\ \ \forall x\in\mathbb{R}^{n}, \tag{2.6}\] _where \(\mathcal{F}u=\hat{u}\) is the Fourier transform of \(u\), i.e._ \[\mathcal{F}u(\xi):=\hat{u}(\xi):=\int_{\mathbb{R}^{n}}u(x)e^{-2\pi i\xi\cdot x}\,dx.\] Proof.: This follows by applying the Fourier transform in (2.4) and using the Fubini theorem. Indeed, as observed above, (2.4) removes the singularity at the origin, and hence, the integrand is in \(L^{1}\). Using the Fubini theorem, we then exchange the integral in \(y\) with the Fourier transform in \(x\). Thus, if \(\xi\) is the frequency variable, from (2.4) one has \[\mathcal{F}\left((-\Delta)^{s}u(x)\right)(\xi) =\frac{c_{n,s}}{2}\int_{\mathbb{R}^{n}}\frac{\mathcal{F}\left(2u(x)-u(x+y)-u(x-y)\right)}{|y|^{n+2s}}\,dy\] \[=\frac{c_{n,s}}{2}\,\int_{\mathbb{R}^{n}}\hat{u}(\xi)\frac{2-e^{2\pi i\xi\cdot y}-e^{-2\pi i\xi\cdot y}}{|y|^{n+2s}}\,dy\] \[=c_{n,s}\,\hat{u}(\xi)\int_{\mathbb{R}^{n}}\frac{1-\cos(2\pi\xi\cdot y)}{|y|^{n+2s}}\,dy.\] Therefore, to see (2.6), it remains to check \[c_{n,s}\,\int_{\mathbb{R}^{n}}\frac{1-\cos(2\pi\xi\cdot y)}{|y|^{n+2s}}\,dy=\left(2\pi|\xi|\right)^{2s}. \tag{2.7}\] Set \[I(\xi):=\int_{\mathbb{R}^{n}}\frac{1-\cos(2\pi\xi\cdot y)}{|y|^{n+2s}}\,dy.\] Changing the variable \(z:=|\xi|y\) (and still labeling the new variable with \(y\)), we obtain \[I(\xi)=|\xi|^{2s}\int_{\mathbb{R}^{n}}\frac{1-\cos\left(2\pi\frac{\xi}{|\xi|}\cdot y\right)}{|y|^{n+2s}}\,dy.\] If \(R\) is some rotation that takes \(e_{1}=(1,0,0,\ldots,0)\) to \(\xi/|\xi|\), i.e., \(Re_{1}=\xi/|\xi|\), then \[I(\xi) =|\xi|^{2s}\int_{\mathbb{R}^{n}}\frac{1-\cos\left(2\pi Re_{1}\cdot y\right)}{|y|^{n+2s}}\,dy\] \[=|\xi|^{2s}\int_{\mathbb{R}^{n}}\frac{1-\cos\left(2\pi R^{T}y\cdot e_{1}\right)}{|y|^{n+2s}}\,dy\quad(z:=R^{T}y)\] \[=|\xi|^{2s}\int_{\mathbb{R}^{n}}\frac{1-\cos\left(2\pi z_{1}\right)}{|z|^{n+2s}}\,dz\quad(\zeta:=2\pi z)\] \[=\left(2\pi|\xi|\right)^{2s}\int_{\mathbb{R}^{n}}\frac{1-\cos\zeta_{1}}{|\zeta|^{n+2s}}\,d\zeta,\] which confirms (2.7), since \(c_{n,s}\) is defined by (2.2). Here \(z_{1}\) and \(\zeta_{1}\) are the first coordinates of the vectors \(z\) and \(\zeta\), respectively.
To be correct, one needs to make sure that the constant \(c_{n,s}\) is a finite number. This is indeed the case, as inside the ball \(B_{1}\), using the Taylor expansion of the cosine function, one estimates \[\int_{B_{1}}\frac{|1-\cos\zeta_{1}|}{|\zeta|^{n+2s}}\,d\zeta\leq\int_{B_{1}}\frac{|\zeta_{1}|^{2}}{|\zeta|^{n+2s}}\,d\zeta\leq\int_{B_{1}}\frac{1}{|\zeta|^{n+2s-2}}\,d\zeta<\infty,\] and outside of \(B_{1}\) we have \[\int_{\mathbb{R}^{n}\setminus B_{1}}\frac{|1-\cos\zeta_{1}|}{|\zeta|^{n+2s}}\,d\zeta\leq\int_{\mathbb{R}^{n}\setminus B_{1}}\frac{2}{|\zeta|^{n+2s}}\,d\zeta<\infty.\] **Remark 2.2**.: _The constant \(c_{n,s}\) defined by (2.2) can be written in terms of the Gamma function in the following way_ \[c_{n,s}=\frac{s4^{s}\Gamma\left(\frac{n+2s}{2}\right)}{\pi^{\frac{n}{2}}\Gamma(1-s)},\] _where_ \[\Gamma(r):=\int_{0}^{\infty}t^{r-1}e^{-t}\,dt,\,\,\,r>0.\] _We refer the reader to [20, Propositions 5.6 and 5.1] and [8, Lemma 2.3], where the calculations are carried out._ This last definition of the fractional Laplacian can be used to prove the following integration by parts formula and to construct a non-trivial example of a fractional harmonic function. Namely, if \(u\), \(v\in\mathcal{S}\), then \[\int_{\mathbb{R}^{n}}(-\Delta)^{s}u(x)v(x)\,dx=\int_{\mathbb{R}^{n}}u(x)(-\Delta)^{s}v(x)\,dx. \tag{2.8}\] When \(s=1\), (2.8) is just integration by parts. For \(s\in(0,1)\) it follows from (2.6), [20, Lemma 5.4]. As commented above, Remark 2.1, constant functions are fractional \(s\)-harmonic. Below we bring another example of an \(s\)-harmonic function. **Theorem 2.1**.: _The function \(u(x):=x_{+}^{s}\) is \(s\)-harmonic in the upper half space. More precisely,_ \[(-\Delta)^{s}u(x)=\begin{cases}0,&x>0,\\ -C|x|^{-s},&x<0,\end{cases}\] _where \(C>0\) is a constant depending only on \(s\)._ Proof.: There are several proofs of this fact, [8, Section A.1 and Theorem 2.4.1]. It can be shown by direct calculations making use of the definition of the fractional Laplacian via the Fourier transform, (2.6). For the probabilistic intuition behind this, we refer the reader to [8, Section 2.4]. ## 3. An extension argument and beyond Another definition of the fractional Laplacian can be given using the celebrated Caffarelli-Silvestre extension problem, [10] (for the argument in probabilistic terms see [23]). The construction of the extension hints at a good candidate for the fundamental solution of the fractional Laplacian. More precisely, for a function \(u:\mathbb{R}^{n}\to\mathbb{R}\), consider its extension \(v:\mathbb{R}^{n}\times[0,+\infty)\to\mathbb{R}\) to the upper half space, satisfying \[\Delta_{x}v+\frac{1-2s}{y}v_{y}+v_{yy}=0, \tag{3.1}\] \[v(x,0)=u(x), \tag{3.2}\] where \(v_{y}=\frac{\partial v}{\partial y}.\) Note that (3.1) can be written as \[\operatorname{div}\big{(}y^{1-2s}\nabla v\big{)}=0, \tag{3.3}\] which is the Euler-Lagrange equation of the functional \[\int_{y>0}y^{1-2s}|\nabla v|^{2}\,dx\,dy.\] To understand the intuition behind (3.1), suppose for a moment that \(\tau:=1-2s\) is a non-negative integer and \(v(x,y):\mathbb{R}^{n}\times\mathbb{R}^{1+\tau}\to\mathbb{R}\) is radially symmetric in \(y\), i.e., \(v(x,y)=v(x,y^{\prime})\) for \(|y|=|y^{\prime}|=r\).
Observe that the Laplacian of \(v\) in terms of the variables \(x\) and \(r\) looks like the left hand side of (3.1), \[\Delta v=\Delta_{x}v+\frac{\tau}{r}v_{r}+v_{rr}.\] Thus, the function \(v\) can be seen as the harmonic extension of \(u\) from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{n+1+\tau}\). The latter, of course, has no meaning when \(\tau\) is not an integer, but as it turns out, solutions of (3.1) still carry many properties of harmonic functions when \(\tau\) is not an integer. The fundamental solution of the Laplacian in \(n+1+\tau\) dimension is, [16, p. 22], for \(n-1+\tau>1\), \[\phi(x,y):=\frac{b_{n,s}}{|(x,y)|^{n-1+\tau}}=\frac{b_{n,s}}{(|x|^{2}+|y|^{2}) ^{\frac{n-1+\tau}{2}}},\] where the constant \(b_{n,s}\) is defined by \[b_{n,s}:=\frac{\Gamma\left(\frac{n}{2}-s\right)}{4^{s}\pi^{\frac{n}{2}}\Gamma( s)}.\] The function \[\phi(x,0):=\phi(x):=\begin{cases}\frac{b_{n,s}}{|x|^{n-2s}},&\text{ if }n\geq 2,\\ \\ -\frac{1}{\pi}\log|x|,&\text{ if }n=1,\end{cases} \tag{3.4}\] where \(x\in\mathbb{R}^{n}\setminus\{0\}\) plays the role of the fundamental solution for the fractional Laplacian, i.e., it solves (in the distributional sense, (2.5)) the equation \((-\Delta)^{s}\phi=\delta_{0}\), where \(\delta_{0}\) is the Dirac delta evaluated at zero, [6, Theorem 2.3]. Observe also that as \(v\) solves the problem (3.1)-(3.2), it can be written (see [16, p. 37]) explicitly in terms of the Poisson kernel for the half-space: \[v(x,y)=\int_{\mathbb{R}^{n}}P(x-\xi,y)u(\xi)\,d\xi, \tag{3.5}\] where \[P(x,y):=B_{n,s}\frac{y^{2s}}{(|x|^{2}+|y|^{2})^{\frac{n+2s}{2}}}. \tag{3.6}\] The kernel \(P\) is indeed the Poisson kernel, since it solves (3.1) for \(y>0\) and noting that \(P(x,y)=y^{-n}P(x/y,1)\), converges, as \(y\to 0\), to a multiple of the Dirac delta. The constant \(B_{n,s}\) is chosen such that \[\int_{\mathbb{R}^{n}}P(x-\xi,y)\,d\xi=1. \tag{3.7}\] Finally, we bring another definition of the fractional Laplacian in terms of the extension function \(v\). **Proposition 3.1**.: \((-\Delta)^{s}u=-c_{n,s}\lim_{y\to 0+}y^{1-2s}v_{y}\)_._ Proof.: Recalling (3.5), (3.7), (3.2), (3.6) and (2.1), we compute \[\lim_{y\to 0^{+}}y^{1-2s}v_{y} =\lim_{y\to 0^{+}}\frac{v(x,y)-v(x,0)}{y^{2s}}\] \[=\lim_{y\to 0^{+}}\frac{1}{y^{2s}}\int_{\mathbb{R}^{n}}P(x-\xi,y) \left(u(\xi)-u(x)\right)\,d\xi\] \[=\lim_{y\to 0^{+}}\int_{\mathbb{R}^{n}}\frac{u(\xi)-u(x)}{(|x- \xi|^{2}+|y|^{2})^{\frac{n+2s}{2}}}\,d\xi\] \[=P.V.\int_{\mathbb{R}^{n}}\frac{u(\xi)-u(x)}{|x-\xi|^{n+2s}}\,d\xi\] \[=-c_{n,s}^{-1}(-\Delta)^{s}u(x).\] Furthermore, a reflection argument makes sure that (3.3) makes sense in a ball of radius \(r\) centered at \(\{y=0\}\) in dimension \(n+1\). **Lemma 3.1**.: _If \(v:\mathbb{R}^{n}\times[0,+\infty)\to\mathbb{R}\) solves (3.1) such that for \(|x|\leq r\),_ \[\lim_{y\to 0}y^{1-2s}v_{y}(x,y)=0, \tag{3.8}\] _then_ \[\tilde{v}(x,y):=\begin{cases}v(x,y),&y\geq 0,\\ v(x,-y),&y<0\end{cases} \tag{3.9}\] _is a weak solution of_ \[\operatorname{div}\left(|y|^{1-2s}\nabla\tilde{v}\right)=0\] _in the \((n+1)\) dimensional ball of radius \(r\)._ Proof.: We need to verify that \[\int_{B_{r}^{n+1}}|y|^{1-2s}\nabla\tilde{v}\cdot\nabla\varphi\,dx\,dy=0,\] for any test function \(\varphi\in C^{\infty}_{0}(B^{n+1}_{r})\), where \(B^{n+1}_{r}:=\left\{(x,y);\,|x|^{2}+|y|^{2}<r^{2}\right\}\). 
Separating a strip of width \(\varepsilon>0\) around \(y=0\) in \(B^{n+1}_{r}\), we write \[\int_{B^{n+1}_{r}}|y|^{1-2s}\nabla\tilde{v}\cdot\nabla\varphi\,dx\,dy=\int_{B^{n+1}_{r}\setminus|y|<\varepsilon}+\int_{B^{n+1}_{r}\cap|y|<\varepsilon}\] \[=\int_{B^{n+1}_{r}\setminus|y|<\varepsilon}\operatorname{div}\left(|y|^{1-2s}\varphi\nabla\tilde{v}\right)\,dx\,dy+\int_{B^{n+1}_{r}\cap|y|<\varepsilon}|y|^{1-2s}\nabla\tilde{v}\cdot\nabla\varphi\,dx\,dy\] \[=\int_{B^{n+1}_{r}\cap|y|=\varepsilon}\varphi|y|^{1-2s}\tilde{v}_{y}(x,\varepsilon)\,dx+\int_{B^{n+1}_{r}\cap|y|<\varepsilon}|y|^{1-2s}\nabla\tilde{v}\cdot\nabla\varphi\,dx\,dy.\] The first integral in the right hand side of the above equality goes to zero, as \(\varepsilon\to 0\). So does the second integral, as \(|y|^{1-2s}|\nabla v|^{2}\) is locally integrable. **Remark 3.1**.: _In fact (see Theorem 11.1 below) (3.8) implies that \(v\) is \(C^{\infty}\) near \(x\), and the limit in (3.8) is uniform. However, in general, we understand it in the weak sense._ Proposition 3.1 and (3.3) show the importance of the extension argument. As it turns out, the study of a non-local operator (the fractional Laplacian) can be reduced to the study of a local operator in a higher dimensional space (as, for example, in [9, 29]). This comes with the price of the weighted term \(|y|^{1-2s}\) in the equation, but that weight belongs to the second Muckenhoupt class \(A_{2}\), meaning \[\int_{B}|y|^{1-2s}\int_{B}|y|^{2s-1}<\infty,\] where \(B\) is any ball in \(\mathbb{R}^{n+1}\). Note also that this weight does not depend on the tangential variable, allowing one to consider translations in \(x\). These lead to Sobolev embeddings, the Poincare inequality, estimates of the Green function, etc., [6, 17, 18, 25]. The extension argument reveals that a stochastic process with jumps in \(\mathbb{R}^{n}\) can be seen as the "trace" of a classical stochastic process in \(\mathbb{R}^{n}\times[0,\infty)\) (a random walk with jumps in \(\mathbb{R}^{n}\) can be interpreted as a classical random walk in \(\mathbb{R}^{n+1}\)). In other words, every time the classical stochastic process in \(\mathbb{R}^{n}\times[0,\infty)\) hits \(\mathbb{R}^{n}\times\{0\}\), it induces a jump process in \(\mathbb{R}^{n}\). ## 4. Elementary properties It is obvious that the fractional Laplacian is a linear operator, i.e., \[(-\Delta)^{s}(u+v)=(-\Delta)^{s}u+(-\Delta)^{s}v\] and \[(-\Delta)^{s}(cu)=c(-\Delta)^{s}u,\quad c\in\mathbb{R}.\] It is noteworthy that, like the classical Laplacian, the fractional Laplacian is translation and rotation invariant, [20, Lemma 2.7]. We bring here other elementary properties, such as homogeneity, asymptotics of the fractional Laplacian and the semi-group property. In fact, using (2.1), one easily checks that for \(\lambda>0\) \[(-\Delta)^{s}\left(u(\lambda\,\cdot)\right)(x)=\lambda^{2s}\left((-\Delta)^{s}u\right)(\lambda x).\] The latter means that the fractional Laplacian is a homogeneous operator of order \(2s\). **Lemma 4.1**.: _If \(u\in\mathcal{S}\), then_ \[\lim_{s\to 0^{+}}(-\Delta)^{s}u=u\quad\text{and}\quad\lim_{s\to 1^{-}}(-\Delta)^{s}u=-\Delta u.\] Proof.: This follows from (2.6). Indeed, the case of \(s=0\) is obvious.
Also, \[-\Delta u(x) =-\Delta\left(\mathcal{F}^{-1}(\hat{u})\right)(x)=-\Delta\left( \int_{\mathbb{R}^{n}}\hat{u}(\xi)e^{2\pi i\xi\cdot x}\,d\xi\right)\] \[=\int_{\mathbb{R}^{n}}\left(2\pi|\xi|\right)^{2}\hat{u}(\xi)e^{2 \pi i\xi\cdot x}\,d\xi=\mathcal{F}^{-1}\left((2\pi|\xi|)^{2}\hat{u}(\xi)\right).\] **Remark 4.1**.: _Lemma 4.1 can also be deduced using Definition 2.1 and the assymptotics of constant \(c_{n,s}\),_ \[\lim_{s\to 0^{+}}\frac{c_{n,s}}{s(1-s)}=\frac{4n}{\omega_{n-1}}\quad\text{and} \quad\lim_{s\to 1^{-}}\frac{c_{n,s}}{s(1-s)}=\frac{2}{\omega_{n-1}},\] _where \(\omega_{n-1}\) is the \((n-1)\)-dimensional measure of the unit sphere. We refer the reader to [14, Section 4], where the calculations are carried out (see also [28, Theorems 3 and 4] for the proof using definition (2.1))._ The fractional Laplacian also enjoys the semi-group property, as states the following proposition. **Proposition 4.1**.: _If \(u\in\mathcal{S}\), \(s,t\in(0,1)\) and \(s+t\leq 1\), then_ \[(-\Delta)^{s+t}u=(-\Delta)^{s}(-\Delta)^{t}u=(-\Delta)^{t}(-\Delta)^{s}u.\] Proof.: This directly follows from (2.6). Indeed, \[\mathcal{F}\left((-\Delta)^{s+t}u\right) =(2\pi|\xi|)^{2(s+t)}\hat{u}=(2\pi|\xi|)^{2s}(2\pi|\xi|^{2t})\hat {u}\] \[=\mathcal{F}\left((-\Delta)^{s}(-\Delta)^{t}u\right)\] \[=\mathcal{F}\left((-\Delta)^{t}(-\Delta)^{s}u\right).\] It now suffices to apply the inverse Fourier transform. ## 5. The \(s\)-mean value property Classical harmonic functions enjoy the mean value property: a value of a harmonic function at a point is equal to its average over spheres (or balls) centered at that point. The converse to the mean value property also is true: if at a given point \(x\) a function is equal to its average over spheres centered at \(x\), then it must be harmonic in a neighborhood of \(x\). Similar principle is true for \(s\)-harmonic functions. The non-local nature of the fractional Laplacian, however, requires refinement of the argument. Once again we see that the spheres are replaced by the "non-local boundary". Namely, the value of an \(s\)-harmonic function at a point, is equal to its "average" defined by a convolution of the function with the \(s\)-mean kernel. More precisely, for \(r>0\) set \[A_{r}(y):=\begin{cases}a_{n,s}\frac{r^{2s}}{\left(|y|^{2}-r^{2}\right)^{s}|y|^{n} },&y\in\mathbb{R}^{n}\setminus\overline{B}_{r},\\ 0,&y\in\overline{B}_{r},\end{cases}\] where the constant \(a_{n,s}\) is chosen such that \[\int_{\mathbb{R}^{n}\setminus B_{r}}A_{r}(y)\,dy=1. \tag{5.1}\] In fact, [20, Section 15] we have \[a_{n,s}:=\frac{\sin(\pi s)\Gamma\left(\frac{n}{2}\right)}{\pi^{\frac{n}{2}+1}}.\] The following mean value property holds, [6, Theorem 2.2], [20, Proposition 15.7]. **Theorem 5.1**.: _Let \(u\in L^{1}_{s}(\mathbb{R}^{n})\) be \(C^{2s+\varepsilon}\) in a neighborhood of \(x\in\mathbb{R}^{n}\). 
If for any small \(r>0\) one has_ \[u(x)=\int_{\mathbb{R}^{n}\setminus B_{r}}A_{r}(y)u(x-y)\,dy, \tag{5.2}\] _then \(u\) is \(s\)-harmonic at \(x\)._ Proof.: From (5.1) and (5.2) we get \[0=u(x)-\int_{\mathbb{R}^{n}\setminus B_{r}}A_{r}(y)u(x-y)\,dy=a_{n,s}r^{2s} \int_{\mathbb{R}^{n}\setminus B_{r}}\frac{u(x)-u(x-y)}{\left(|y|^{2}-r^{2} \right)^{s}|y|^{n}}\,dy,\] therefore, \[\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{u(x)-u(x-y)}{\left(|y|^{2}-r^{2} \right)^{s}|y|^{n}}\,dy=0.\] On the other hand, (2.3), one has \[(-\Delta)^{s}u(x)=\lim_{r\to 0}\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{u(x)-u (x-y)}{|y|^{n+2s}}\,dy,\] hence, it is enough to show that \[\lim_{r\to 0}\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{u(x)-u(x-y)}{|y|^{n+2s }}\,dy=\lim_{r\to 0}\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{u(x)-u(x-y)}{ \left(|y|^{2}-r^{2}\right)^{s}|y|^{n}}\,dy. \tag{5.3}\] To see this, take \(R>\sqrt{2}r\) and split the integral, \[\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{u(x)-u(x-y)}{\left(|y|^{2}-r^{2} \right)^{s}|y|^{n}}\,dy=\int_{\mathbb{R}^{n}\setminus B_{R}}+\int_{B_{R} \setminus B_{r}}:=I_{r}+J_{r}. \tag{5.4}\] For \(I_{r}\) we have \[\lim_{r\to 0}I_{r}=\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{u(x)-u(x-y)}{|y|^{ n+2s}}\,dy. \tag{5.5}\] This is because when \(y\in\mathbb{R}^{n}\setminus B_{R}\), one has \[\frac{|y|^{2}}{|y|^{2}-r^{2}}<2,\] therefore, as \(u\in L^{1}_{s}(\mathbb{R}^{n})\), \[\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{|u(x)-u(x-y)|}{\left(|y|^{2}-r^{2} \right)^{s}|y|^{n}}\,dy\leq 2^{s}\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{|u(x)-u (x-y)|}{|y|^{n+2s}}\,dy<\infty,\] and we can use the dominated convergence theorem to pass to the limit, as \(r\to 0\) and obtain (5.5). To pass to the limit in \(J_{r}\), we notice that for \(y\in B_{R}\setminus B_{r}\) and \(s<1/2\) one has \[|u(x)-u(x-y)|\leq c|y|^{2s+\varepsilon},\] since \(u\in C^{2s+\varepsilon}\) in a neighborhood of \(x\), where \(c>0\) is a universal constant. For \(s\geq 1/2\) the \(C^{2s+\varepsilon}=C^{1,2s+\varepsilon-1}\) regularity of \(u\) in the same neighborhood provides \[\begin{split}|u(x)-u(x-y)-y\cdot Du(x)|&=\left| \int_{0}^{1}y\left(Du(x-ty)-Du(x)\right)\,dt\right|\\ &\leq|y|\int_{0}^{1}|Du(x-ty)-Du(x)|\,dt\\ &\leq c|y|^{2s+\varepsilon}.\end{split} \tag{5.6}\] Observe that \[\int_{B_{R}\setminus B_{r}}\frac{y\cdot Du(x)}{\left(|y|^{2}-r^{2}\right)^{s} |y|^{n}}\,dy=\int_{B_{R}\setminus B_{r}}\frac{y\cdot Du(x)}{|y|^{n+2s}}\,dy=0,\] since we are integrating even functions over a symmetrical domain. 
Therefore, setting \[\begin{split} H_{r}&:=J_{r}-\int_{B_{R}\setminus B _{r}}\frac{u(x)-u(x-y)}{|y|^{n+2s}}\,dy\\ &=\int_{B_{R}\setminus B_{r}}(u(x)-u(x-y)-y\cdot Du(x))\left[ \frac{1}{\left(|y|^{2}-r^{2}\right)^{s}|y|^{n}}-\frac{1}{|y|^{n+2s}}\right]\, dy.\end{split}\] Using (5.6), passing to polar coordinates and changing variables by \(\rho=rt\), we estimate \[\begin{split}|H_{r}|&\leq c\int_{B_{R}\setminus B _{r}}|y|^{2s+\varepsilon}\left[\frac{1}{\left(|y|^{2}-r^{2}\right)^{s}|y|^{n}} -\frac{1}{|y|^{n+2s}}\right]\,dy\\ &\leq c\int_{r}^{R}\rho^{\varepsilon-1}\left[\frac{\rho^{2s}}{( \rho^{2}-r^{2})^{s}}-1\right]\,d\rho\\ &\leq cr^{\varepsilon}\int_{1}^{\frac{R}{r}}t^{\varepsilon-1} \left[\frac{t^{s}}{(t-1)^{s}}-1\right]\,dt.\end{split}\] It remains to check that \[\int_{1}^{\frac{R}{r}}t^{\varepsilon-1}\left[\frac{t^{s}}{(t-1)^{s}}-1\right]\,dt <\infty, \tag{5.7}\] since combining it with the previous inequality we obtain \[\lim_{r\to 0}H_{r}=0,\] or equivalently, \[\lim_{r\to 0}J_{r}=\lim_{r\to 0}\int_{B_{R}\setminus B_{r}}\frac{u(x)-u(x-y)}{|y|^{n+2s} }\,dy,\] which together with (5.4) and (5.5) gives (5.3). To check (5.7), we split the integral and notice that \[\int_{1}^{\sqrt{2}}t^{\varepsilon-1}\left[\frac{t^{s}}{(t-1)^{s}}-1\right]\, dt\leq c\int_{1}^{\sqrt{2}}\left[\frac{1}{(t-1)^{s}}-\frac{1}{t^{s}}\right]\,dt<\infty\] and \[\lim_{r\to 0}\int_{\sqrt{2}}^{\frac{R}{r}}t^{\varepsilon-1}\left[ \frac{t^{s}}{(t-1)^{s}}-1\right]\,dt \leq\int_{\sqrt{2}}^{\infty}t^{\varepsilon-1}\left[\left(1-\frac {1}{t}\right)^{-s}-1\right]\,dt\] \[\leq c\int_{\sqrt{2}}^{\infty}t^{\varepsilon-2}\,dt<\infty.\] As a consequence of Theorem 5.1, one obtains a representation formula via Poisson kernel for the solution of the non-local Dirichlet problem on balls (just like in the local framework). For its proof we refer the reader to [6, Theorem 2.10] (see also [25, p. 17], [22, p. 122 and 112] and [20, Theorem 15.2]). The fractional Poisson kernel is defined by \[P_{r}(x,y):=C_{n,s}\left(\frac{r^{2}-|x|^{2}}{|y|^{2}-r^{2}}\right)^{s}\frac{ 1}{|y-x|^{n}}, \tag{5.8}\] where \(r>0\) and \[C_{n,s}:=\frac{\sin(\pi s)\Gamma(\frac{n}{2})}{\pi^{\frac{n}{2}+1}}.\] The choice of the constant \(C_{n,s}\) guarantees that \(\int_{\mathbb{R}^{n}\setminus B_{r}}P_{r}(x,y)\,dy=1\). **Theorem 5.2**.: _For \(g\in L^{1}_{s}(\mathbb{R}^{n})\cap C(\mathbb{R}^{n})\) the unique solution of_ \[\begin{cases}(-\Delta)^{s}u=0&\text{in}\ \ B_{r},\\ u=g&\text{in}\ \ \mathbb{R}^{n}\setminus B_{r}\end{cases}\] _is given by_ \[u(x)=\int_{\mathbb{R}^{n}\setminus B_{r}}g(y)P_{r}(x,y)\,dy,\quad x\in B_{r}.\] ## 6. The maximum principle As it is well known, classical harmonic function in a bounded domain \(\Omega\subset\mathbb{R}^{n}\) takes its extremal values on the boundary \(\partial\Omega\). In other words, a non-negative harmonic function in \(\Omega\) cannot vanish inside \(\Omega\) (unless its identically zero). The literal analog of this for the fractional Laplacian fails: there exists a bounded fractional harmonic function \(u\) that vanishes inside \(\Omega\). One can construct such a function by defining \(u\) outside \(\Omega\) in a way that makes it feel the effect of far away data, [8, Theorem 2.3.1]. However, as remarked earlier, if we think of \(\mathbb{R}^{n}\setminus\Omega\) as the "non-local boundary", several properties that the classical Laplacian enjoys remain true for the fractional Laplacian, including the maximum principle. 
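Before stating the maximum principle, we note that the normalization \(\int_{\mathbb{R}^{n}\setminus B_{r}}P_{r}(x,y)\,dy=1\) claimed above for the fractional Poisson kernel (5.8) is easy to verify numerically. The snippet below is a minimal sketch, outside the main argument, in dimension \(n=1\) for sample values of \(r\), \(s\) and \(x\); it assumes that standard adaptive quadrature copes with the integrable singularity of \((|y|^{2}-r^{2})^{-s}\) at \(|y|=r\).

```python
# Numerical sanity check: the 1-D fractional Poisson kernel
#   P_r(x, y) = C_{1,s} ((r^2 - x^2)/(y^2 - r^2))^s / |y - x|,
#   C_{1,s}   = sin(pi*s) * Gamma(1/2) / pi^(3/2),
# should integrate to 1 over |y| > r whenever |x| < r.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def poisson_kernel(x, y, r, s):
    C = np.sin(np.pi * s) * gamma(0.5) / np.pi**1.5
    return C * ((r**2 - x**2) / (y**2 - r**2))**s / abs(y - x)

r, s, x = 1.0, 0.6, 0.3  # sample parameters with |x| < r
pieces = [(r, r + 1.0), (r + 1.0, np.inf), (-np.inf, -r - 1.0), (-r - 1.0, -r)]
total = sum(quad(lambda y: poisson_kernel(x, y, r, s), a, b, limit=200)[0]
            for a, b in pieces)
print(total)  # expected to be close to 1
```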
**Theorem 6.1**.: _If \((-\Delta)^{s}u\geq 0\) in \(\Omega\) and \(u\geq 0\) in \(\mathbb{R}^{n}\setminus\Omega\), then \(u\geq 0\) in \(\Omega\). Moreover, \(u>0\), unless \(u\equiv 0\)._ Proof.: We divide the proof into two steps. _Step 1._ First we show that \(u\geq 0\) in \(\Omega\). If not, then there exists a point in \(\Omega\), where \(u\) is strictly negative. Let \(x_{0}\in\Omega\) be a point where \(u\) takes its minimum. We have \(u(x_{0})<0\). Note that in fact \(x_{0}\) is a global minimum, since \(u\geq 0\) outside of \(\Omega\), and hence \[2u(x_{0})-u(x_{0}+y)-u(x_{0}-y)\leq 0,\ \ \forall y\in\mathbb{R}^{n}. \tag{6.1}\] On the other hand, for \(R>0\) large enough, if \(y\in\mathbb{R}^{n}\setminus B_{R}\), then both \(x_{0}+y\) and \(x_{0}-y\) stay outside of \(\Omega\), and hence, \[u(x_{0}+y)\geq 0,\ \ \text{and}\ \ u(x_{0}-y)\geq 0. \tag{6.2}\] Consequently, using (2.4), (6.1) and (6.2), we obtain \[0 \leq\int_{\mathbb{R}^{n}}\frac{2u(x_{0})-u(x_{0}+y)-u(x_{0}-y)}{ |y|^{n+2s}}\,dy\] \[\leq\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{2u(x_{0})-u(x_{0}+y )-u(x_{0}-y)}{|y|^{n+2s}}\,dy\] \[\leq\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{2u(x_{0})}{|y|^{n+ 2s}}\,dy<0,\] a contradiction. _Step 2._ We now show that the inequality is strict in \(\Omega\), unless \(u\equiv 0\). If that is not the case, then there is \(z\in\Omega\) such that \(u(z)=0\). Step 1 provides \(u(z+y)\geq 0\) and \(u(z-y)\geq 0\) for any \(y\in\mathbb{R}^{n}\). Since \((-\Delta)^{s}u(z)\geq 0\), (2.4) implies \[0 \leq\int_{\mathbb{R}^{n}}\frac{2u(z)-u(z+y)-u(z-y)}{|y|^{n+2s}}\,dy\] \[=-\int_{\mathbb{R}^{n}}\frac{u(z+y)+u(z-y)}{|y|^{n+2s}}\,dy\leq 0,\] which is possible only when \(u\equiv 0\) As a direct consequence, we get the comparison principle for the fractional Laplacian and uniqueness of the solution of the Dirichlet problem. **Corollary 6.1**.: _If \((-\Delta)^{s}u\geq 0\), \((-\Delta)^{s}v\geq 0\) in \(\Omega\) and \(u\geq v\) outside of \(\Omega\), then \(u\geq v\) in the whole \(\mathbb{R}^{n}\)._ **Corollary 6.2**.: _If \(f\in C(\Omega)\) and \(\varphi\in C(\mathbb{R}^{n}\setminus\Omega)\), then there is a unique \(u\in L^{s}_{1}(\mathbb{R}^{n})\cap C^{2s+\varepsilon}_{\rm loc}\) such that \((-\Delta)^{s}u=f\) in \(\Omega\) and \(u=\varphi\) in \(\mathbb{R}^{n}\setminus\Omega\)._ We refer the interested reader to [26, Proposition 4.1] for a maximum principle for a general class of non-local operators. ## 7. The Harnack inequality The Harnack principle for the classical Laplacian states that if a function \(u\) is harmonic and non-negative in \(B_{r}(x_{0})\), then in a smaller ball its values are all comparable, i.e., there exists a constant \(C>0\) independent of \(u\), \(x_{0}\) and \(r>0\) such that \[\sup_{B_{\frac{r}{2}}(x_{0})}u\leq C\inf_{B_{\frac{r}{2}}(x_{0})}u.\] Since the maximum principle for the fractional Laplacian needed a refinement, it is not surprising that the Harnack inequality also needs a refinement, as the classical Harnack inequality fails in the non-local framework, [1, page 9], [3, Lemma 2.1]. It can be seen by constructing a counterexample using approximation of \(w(x):=|x|^{2}\) by fractional harmonic functions (for the proof of this remarkable property see Theorem 12.1 below). 
Namely, for \(\varepsilon\in(0,\frac{1}{8})\) let \(v_{\varepsilon}\) be fractional harmonic approximation of \(w\) in \(B_{1}\), i.e., \((-\Delta)^{s}v_{\varepsilon}=0\) in \(B_{1}\) and \[\|w-v_{\varepsilon}\|_{C^{2}(B_{1})}<\varepsilon.\] Then in \(B_{1}\setminus B_{1/4}\) one has \[v_{\varepsilon}(x)\geq w(x)-\|w-v_{\varepsilon}\|_{L^{\infty}(B_{1})}\geq \frac{1}{16}-\varepsilon>\varepsilon,\] but \[v_{\varepsilon}(0)\leq w(0)+\|w-v_{\varepsilon}\|_{L^{\infty}(B_{1})}<\varepsilon.\] Thus, \(v_{\varepsilon}(0)<v_{\varepsilon}(x)\) in \(B_{1}\setminus B_{1/4}\). Hence, \[\inf_{B_{1}}v_{\varepsilon}=\inf_{\overline{B}_{1/4}}v_{\varepsilon}.\] Set now \[u_{\varepsilon}(x):=v_{\varepsilon}(x)-v_{\varepsilon}(y),\,\,\,x\in B_{1},\] where \(y\in\overline{B}_{1/4}\) is a point, where \(v_{\varepsilon}\) reaches its infimum. By definition \(u_{\varepsilon}\) is \(s\)-harmonic and non-negative in \(B_{1}\). Moreover, \(u_{\varepsilon}>0\) in \(B_{1}\setminus B_{1/4}\) and still \[\inf_{B_{1/2}}u_{\varepsilon}=u_{\varepsilon}(y)=0.\] Therefore, the classical Harnack inequality fails in the non-local setting. However, a suitably refined Harnack inequality holds. **Theorem 7.1**.: _If \(u\in L^{\infty}(\mathbb{R}^{n})\cap C^{2}(B_{r}(x_{0}))\) is \(s\)-harmonic in \(B_{r}(x_{0})\) and \(u\geq 0\) in \(\mathbb{R}^{n}\), then there exists a constant \(C>0\) independent of \(u\), \(x_{0}\) and \(r>0\) such that_ \[\sup_{B_{\frac{r}{2}}(x_{0})}u\leq C\inf_{B_{\frac{r}{2}}(x_{0})}u.\] Proof.: This can be proved as in the classical case, using Theorem 5.2, [3, Lemma 2.1]. Another proof can be found in [8, Proposition 2.3.4]. The proof we bring here is much more compact and makes use of the extension argument, [10, Theorem 5.1]. Let \(v:\mathbb{R}^{n}\times[0,+\infty)\) be the solution of the extension problem (3.1)-(3.2). Since \(u\geq 0\) in \(\mathbb{R}^{n}\), recalling (3.5), also \(v\geq 0\). If \(\tilde{v}\) is the reflection of \(v(x,y)\) through the hyperplane \(\{y=0\}\), (3.9), then as \(u\) is fractional harmonic in \(B_{r}(x_{0})\), Lemma 3.1 yields \[\operatorname{div}\left(|y|^{1-2s}\nabla\tilde{v}\right)=0\] in the \((n+1)\) dimensional ball of radius \(r\) centered at \((x_{0},0)\). We can apply the Harnack inequality for \(\tilde{v}\), [18, Lemma 2.3.5], which gives the Harnack inequality for \(u\). ## 8. Liouville theorem For the classical Laplacian the Liouville theorem states that entire harmonic functions that are bounded from below (or above) are constants. The same conclusion is true for entire fractional harmonic functions, [5]. In fact, entire \(s\)-harmonic functions are affine, and constant when \(0<s\leq 1/2\), [13, Theorem 1.3], [19, Theorem 1.1]. Here we bring a proof of a Liouville type theorem under weaker condition, obtained in [13, Theorem 1.2]. **Theorem 8.1**.: _If \(u\in L^{1}_{s}(\mathbb{R}^{n})\) is \(s\)-harmonic and_ \[\liminf_{|x|\to\infty}\frac{u(x)}{|x|^{\gamma}}\geq 0 \tag{8.1}\] _for some \(\gamma\in[0,1]\), \(\gamma<2s\), then \(u\) is a constant in \(\mathbb{R}^{n}\)._ Proof.: This follows from the fact that for \(|x|<r\), \[u(x)=\int_{|y|>r}P_{r}(x,y)u(y)\,dy, \tag{8.2}\] where \(P_{r}\) is the fractional Poisson kernel for the ball \(B_{r}\), defined by (5.8), Theorem 5.2. Notice that it is enough to show that for all unit vectors \(\nu\) one has \[D_{\nu}u\geq 0. \tag{8.3}\] Indeed, since \(\nu\) is arbitrary, then \(Du=0\), hence \(u\) is a constant in \(\mathbb{R}^{n}\). 
To see (8.3), using (8.2) we calculate \[D_{i}u(x)=-\int_{|y|>r}P_{r}(x,y)\left[\frac{2sx_{i}}{r^{2}-|x|^{2}}+\frac{n(x_ {i}-y_{i})}{|y-x|^{2}}\right]u(y)\,dy,\] therefore, \[D_{\nu}u(x)=-\int_{|y|>r}P_{r}(x,y)\left[\frac{2sx\cdot\nu}{r^{2}-|x|^{2}}+\frac{ n(x-y)\cdot\nu}{|y-x|^{2}}\right]u(y)\,dy. \tag{8.4}\] On the other hand, for any \(\varepsilon>0\) fixed and \(|y|\) sufficiently large, (8.1) implies \[u(y)\geq-\varepsilon|y|^{\gamma}. \tag{8.5}\] For each fixed \(x\), one can choose \(r>0\) large enough to guarantee \[\left|\frac{2sx\cdot\nu}{r^{2}-|x|^{2}}\right|\leq\frac{1}{r}, \tag{8.6}\] and for \(|y|>r\) also \[\left|\frac{n(x-y)\cdot\nu}{|y-x|^{2}}\right|\leq\frac{n}{|y-x|}\leq\frac{2n} {r}. \tag{8.7}\] Rewriting (8.4) as \[D_{\nu}u(x)= -\int_{|y|>r}P_{r}(x,y)\left[\frac{2sx\cdot\nu}{r^{2}-|x|^{2}}+ \frac{n(x-y)\cdot\nu}{|y-x|^{2}}\right][u(y)+\varepsilon|y|^{\gamma}]\ dy\] \[+\int_{|y|>r}P_{r}(x,y)\left[\frac{2sx\cdot\nu}{r^{2}-|x|^{2}}+ \frac{n(x-y)\cdot\nu}{|y-x|^{2}}\right]\varepsilon|y|^{\gamma}\,dy:=I+J,\] and using (8.5), (8.6), (8.7) and (8.2), we have \[\begin{split} I&\geq-\frac{2n+1}{r}\int_{|y|>r}P_{ r}(x,y)\left[u(y)+\varepsilon|y|^{\gamma}\right]\,dy\\ &=-\frac{2n+1}{r}u(x)-\frac{2n+1}{r}\varepsilon\int_{|y|>r}P_{r} (x,y)|y|^{\gamma}\,dy.\end{split} \tag{8.8}\] Clearly the first term in the right hand side of (8.8) goes to zero, as \(r\to\infty\). We now aim to estimate the second term. Using definition of the Poisson kernel, (5.8), triangle inequality and changing variables (first \(|y|:=\tau\) then \(\tau:=rt\)), for a constant \(C>0\) we obtain \[\frac{2n+1}{r}\varepsilon\int_{|y|>r}P_{r}(x,y)|y|^{\gamma}\,dy\] \[=\frac{C\varepsilon}{r}\left(r^{2}-|x|^{2}\right)^{s}\int_{|y|>r} \frac{|y|^{\gamma}}{\left(|y|^{2}-r^{2}\right)^{s}|y-x|^{n}}\,dy\] \[\leq\frac{C\varepsilon}{r}\left(r^{2}-|x|^{2}\right)^{s}\int_{|y| >r}\frac{|y|^{\gamma}}{\left(|y|^{2}-r^{2}\right)^{s}\left(|y|-|x|\right)^{n}} \,dy\] \[=\frac{C\varepsilon}{r}\left(r^{2}-|x|^{2}\right)^{s}\int_{r}^{ \infty}\frac{\tau^{\gamma+n-1}}{\left(\tau^{2}-r^{2}\right)^{s}\left(\tau-|x |\right)^{n}}\,d\tau\] \[=\frac{C\varepsilon}{r^{2s-\gamma+1}}\left(r^{2}-|x|^{2}\right)^ {s}\int_{1}^{\infty}\frac{t^{\gamma+n-1}}{\left(t^{2}-1\right)^{s}\left(t- \frac{|x|}{r}\right)^{n}}\,dt\] \[\leq\frac{C\varepsilon}{r^{1-\gamma}}\int_{1}^{\infty}\frac{t^{ \gamma+n-1}}{\left(t^{2}-1\right)^{s}\left(t-\frac{|x|}{r}\right)^{n}}\,dt \leq C\varepsilon.\] The last inequality is a consequence of the fact that the previous integral is convergent (since \(\gamma\) is assumed to be less than \(2s\)), and \(\gamma\leq 1\). Also, \[|J|\leq\frac{2n+1}{r}\varepsilon\int_{|y|>r}P_{r}(x,y)|y|^{\gamma}\,dy\leq C\varepsilon.\] Hence, for \(r\) large enough, \[D_{\nu}u(x)\geq-C\varepsilon.\] Letting \(\varepsilon\to 0\) in the last inequality, we deduce (8.3). **Remark 8.1**.: _Theorem 8.1 has interesting consequences. If \((-\Delta)^{s}u=P\) in \(\mathbb{R}^{n}\) in the distributional sense, (2.5), where \(s\in(0,1)\) and \(P\) is a polynomial, then \(u\) is affine and \(P=0\), [19, Theorem 1.2]. Furthermore, if \(p\geq 1\) and \(u\in L^{p}(\mathbb{R}^{n})\) is fractional harmonic in the sense of distributions, then \(u\equiv 0\), [19, Corollary 1.3]._ ## 9. Regularity estimates The following regularity estimates are from [27]. **Lemma 9.1**.: _If \(u\in C^{\alpha}(\mathbb{R}^{n})\) for \(\alpha\in(2s,1]\), then \((-\Delta)^{s}u\in C^{\alpha-2s}(\mathbb{R}^{n})\). 
Moreover,_ \[\left[(-\Delta)^{s}u\right]_{C^{\alpha-2s}}\leq C[u]_{C^{\alpha}},\] _where \(C>0\) is a constant depending only on \(\alpha\), \(s\) and \(n\)._ Proof.: We use (2.3) to compute \[|(-\Delta)^{s}u(x)-(-\Delta)^{s}u(y)|=c_{n,s}\left|\int_{\mathbb{R}^{n}}\frac{u(x)-u(x+z)-u(y)+u(y+z)}{|z|^{n+2s}}\,dz\right|\] \[\leq I_{1}+I_{2},\] where \(I_{1}\) is the integral over a ball of radius \(r\), and \(I_{2}\) is the integral over \(\mathbb{R}^{n}\setminus B_{r}\). Since \(|u(x)-u(x+z)|\leq[u]_{C^{\alpha}}|z|^{\alpha}\) and \(|u(y)-u(y+z)|\leq[u]_{C^{\alpha}}|z|^{\alpha}\), we estimate \[I_{1}\leq c_{n,s}\left|\int_{B_{r}}\frac{2[u]_{C^{\alpha}}|z|^{\alpha}}{|z|^{n+2s}}\,dz\right|\leq C[u]_{C^{\alpha}}r^{\alpha-2s}.\] To estimate \(I_{2}\), we make use of \(|u(x+z)-u(y+z)|\leq[u]_{C^{\alpha}}|x-y|^{\alpha}\) to write \[I_{2}\leq c_{n,s}\left|\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{2[u]_{C^{\alpha}}|x-y|^{\alpha}}{|z|^{n+2s}}\,dz\right|\leq C[u]_{C^{\alpha}}r^{-2s}|x-y|^{\alpha}.\] Thus, taking \(r=|x-y|\), we obtain \[|(-\Delta)^{s}u(x)-(-\Delta)^{s}u(y)|\leq C[u]_{C^{\alpha}}|x-y|^{\alpha-2s}.\] **Corollary 9.1**.: _If \(u\in C^{1,\alpha}(\mathbb{R}^{n})\) for \(\alpha\in(2s,1]\), then \((-\Delta)^{s}u\in C^{1,\alpha-2s}(\mathbb{R}^{n})\). Moreover,_ \[[(-\Delta)^{s}u]_{C^{1,\alpha-2s}}\leq C[u]_{C^{1,\alpha}},\] _where \(C>0\) is a constant depending only on \(\alpha\), \(s\) and \(n\)._ Proof.: This follows from Lemma 9.1 combined with the fact that the fractional Laplacian commutes with differentiation. **Lemma 9.2**.: _If \(u\in C^{1,\alpha}(\mathbb{R}^{n})\) for \(\alpha\in(0,2s)\), then \((-\Delta)^{s}u\in C^{\alpha-2s+1}(\mathbb{R}^{n})\). Moreover,_ \[[(-\Delta)^{s}u]_{C^{\alpha-2s+1}}\leq C[u]_{C^{1,\alpha}},\] _where \(C>0\) is a constant depending only on \(\alpha\), \(s\) and \(n\)._ Proof.: If \(s<1/2\), we argue as in the proof of Lemma 9.1 to get \[|(-\Delta)^{s}u(x)-(-\Delta)^{s}u(y)|\leq I_{1}+I_{2},\] with the same \(I_{1}\) and \(I_{2}\) as in the proof of Lemma 9.1. As \(u\in C^{1,\alpha}(\mathbb{R}^{n})\), we can estimate \[|u(x)-u(x+z)-u(y)+u(y+z)|\leq|(Du(x)-Du(y))\cdot z|+[u]_{C^{1,\alpha}}|z|^{1+\alpha}\] \[\leq\left(|x-y|^{\alpha}|z|+|z|^{1+\alpha}\right)[u]_{C^{1,\alpha}},\] therefore \[I_{1}\leq C\left(|x-y|^{\alpha}r^{1-2s}+r^{1+\alpha-2s}\right)[u]_{C^{1,\alpha}},\] and as before, taking \(r=|x-y|\) gives the desired result. For the case of \(s\geq 1/2\), using Proposition 4.1, we can decompose \[(-\Delta)^{s}u=(-\Delta)^{s-1/2}(-\Delta)^{1/2}u\] and observe that \[(-\Delta)^{1/2}u=\sum_{i=1}^{n}R_{i}D_{i}u,\] where \(R_{i}\) is the \(i\)-th Riesz transform (see, for example, [20, Section 6]). An iteration of the last two lemmas leads to the following result. **Lemma 9.3**.: _If \(u\in C^{k,\alpha}\) and \(k+\alpha-2s\) is not an integer, then \((-\Delta)^{s}u\in C^{\beta,\gamma}\), where \(\beta\) is the integer part of \(k+\alpha-2s\) and \(\gamma=k+\alpha-2s-\beta\)._ **Remark 9.1**.: _Schauder type estimates hold for the fractional Laplacian. In fact, if \((-\Delta)^{s}u\in C^{\alpha}(B_{1})\cap C(\overline{B}_{1})\), then \(u\in C^{\alpha+2s}(B_{1/2})\), [7, Theorem 1.2]. Actually, a more general estimate holds,_ \[\|u\|_{C^{\alpha+2s}(B_{1/2})}\leq C\left[\|(-\Delta)^{s}u\|_{C^{\alpha}(B_{1})}+\|u\|_{L^{\infty}(B_{1})}+\int_{\mathbb{R}^{n}\setminus B_{1}}\frac{u(y)}{|y|^{n+2s}}\,dy\right],\] _for any \(\alpha\geq 0\) such that \(\alpha+2s\) is not an integer, as long as the terms in the right hand side are well defined._ ## 10. Green's function for the ball
As in the case of the classical Laplacian, the notions of fundamental solution and Poisson kernel allow one to define the Green function. For \(x\), \(z\in B_{r}\) and \(x\neq z\), the Green function is defined in the following way, \[G(x,z):=\phi(x-z)-\int_{\mathbb{R}^{n}\setminus B_{r}}\phi(z-y)P_{r}(y,x)\,dy,\] where \(\phi\) is the fundamental solution defined by (3.4), and \(P_{r}\) is the Poisson kernel defined by (5.8). It can be displayed in a more explicit way, [6, Theorem 3.1], \[G(x,z)=\kappa_{n,s}|z-x|^{2s-n}\int_{0}^{R(x,z)}\frac{t^{s-1}}{(t+1)^{\frac{n}{2}}}\,dt, \tag{10.1}\] where \[R(x,z):=\frac{(r^{2}-|x|^{2})(r^{2}-|z|^{2})}{r^{2}|x-z|^{2}},\ \ \ \text{if}\ \ \ n\geq 2,\] and \[G(x,z)=\frac{1}{\pi}\log\left(\frac{r^{2}-xz+\sqrt{(r^{2}-x^{2})(r^{2}-z^{2})}}{r|z-x|}\right),\ \ \ \text{if}\ \ \ n=1, \tag{10.2}\] with \[\kappa_{n,s}:=\frac{\Gamma\left(\frac{n}{2}\right)}{4^{s}\pi^{\frac{n}{2}}\Gamma^{2}(s)}.\] The proof can be found in the celebrated work of Riesz, [25] (see also [4, 20] and [6, Theorem 3.2]). **Theorem 10.1**.: _If \(f\in C^{2s+\varepsilon}(B_{r})\cap C(\overline{B}_{r})\), then the unique solution of_ \[\begin{cases}(-\Delta)^{s}u=f&\text{in}\ \ \ B_{r},\\ u=0&\text{in}\ \ \mathbb{R}^{n}\setminus B_{r}\end{cases}\] _is given explicitly in terms of the Green function by_ \[u(x)=\int_{B_{r}}G(x,y)f(y)\,dy,\quad x\in B_{r}.\] As a consequence of Theorems 5.2 and 10.1 we obtain an explicit representation of the solution of the Dirichlet problem in the ball of radius \(r>0\). **Theorem 10.2**.: _If \(f\in C^{2s+\varepsilon}(B_{r})\cap C(\overline{B}_{r})\) and \(g\in L^{1}_{s}(\mathbb{R}^{n})\cap C(\mathbb{R}^{n})\), then the unique solution of the problem_ \[\begin{cases}(-\Delta)^{s}u=f&\text{in}\ \ \ B_{r},\\ u=g&\text{in}\ \ \ \mathbb{R}^{n}\setminus B_{r}\end{cases}\] _is given by_ \[u(x)=\int_{\mathbb{R}^{n}\setminus B_{r}}g(y)P_{r}(x,y)\,dy+\int_{B_{r}}G(x,y)f(y)\,dy,\ x\in B_{r},\] _where \(P_{r}\) and \(G\) are defined by (5.8) and (10.1)-(10.2), respectively._ ## 11. Fractional harmonic functions are \(C^{\infty}\) As it is well known, classical harmonic functions are \(C^{\infty}\). Fractional harmonic functions enjoy the same regularity. This may seem like an obvious observation, as in several definitions above the fractional Laplacian was defined for \(C^{\infty}\) functions, but in fact there is no loss of generality. Namely, even if one starts with the "weakest" regularity assumptions, fractional harmonic functions turn out to be \(C^{\infty}\), [7, Theorem 2.10], [28, Corollary 1]. **Theorem 11.1**.: _If \(u\in L^{\infty}(\mathbb{R}^{n})\cap C(\mathbb{R}^{n}\setminus B_{r})\) is such that \((-\Delta)^{s}u=0\) in \(B_{r}\), \(r>0\), then for any multi-index \(\alpha\in\mathbb{N}^{n}_{0}\)_ \[\|D^{\alpha}u\|_{L^{\infty}(B_{r/2})}\leq Cr^{-|\alpha|}\|u\|_{L^{\infty}(\mathbb{R}^{n}\setminus B_{r})},\] _where \(C>0\) is a constant depending only on \(n\), \(s\) and \(\alpha\)._ Proof.: This follows from the smoothness of the Poisson kernel (5.8). Observe that without loss of generality we may assume \(r=1\).
Indeed, if \[\|D^{\alpha}u\|_{L^{\infty}(B_{1/2})}\leq C\|u\|_{L^{\infty}(\mathbb{R}^{n} \setminus B_{1})}, \tag{11.1}\] then by rescaling \(y:=rx\), \(v(y):=u(x)\), \(x\in B_{1}\), one has \(D^{\alpha}u(x)=r^{|\alpha|}|D^{\alpha}v(y)|\), which yields, \[r^{|\alpha|}|D^{\alpha}v(y)|=|D^{\alpha}u(x)|\leq C\|u\|_{L^{\infty}(\mathbb{ R}^{n}\setminus B_{1})}=C\|v\|_{L^{\infty}(\mathbb{R}^{n}\setminus B_{r})},\] and the result follows. To prove (11.1), note that from (5.8) and Theorem 5.2, we have \[u(x)=\int_{\mathbb{R}^{n}\setminus B_{1}}u(y)P_{1}(x,y)\,dy=C_{n,s}\int_{ \mathbb{R}^{n}\setminus B_{1}}u(y)\left(\frac{1-|x|^{2}}{|y|^{2}-1}\right)^{s }\frac{dy}{|y-x|^{n}}.\] Hence \[D_{i}u(x)= 2s\int_{\mathbb{R}^{n}\setminus B_{1}}\frac{u(y)}{(|y|^{2}-1)^{s}} \frac{x_{i}(1-|x|^{2})^{s-1}}{|x-y|^{n}}\,dy\] \[-\int_{\mathbb{R}^{n}\setminus B_{1}}\frac{u(y)}{(|y|^{2}-1)^{s}} \frac{n(1-|x|^{2})^{s}(x_{i}-y_{i})}{|x-y|^{n+2}}\,dy,\] therefore, \[|Du(x)|\leq C\int_{\mathbb{R}^{n}\setminus B_{1}}\frac{|u(y)|}{(|y|^{2}-1)^{s }}\left[\frac{|x|(1-|x|^{2})^{s-1}}{|x-y|^{n}}+\frac{(1-|x|^{2})^{s}}{|x-y|^{n+ 1}}\right]\,dy, \tag{11.2}\] where \(C>0\) is a constant depending on \(s\) and \(n\). On the other hand, if \(|x|\leq\frac{1}{2}\), then \[\frac{3}{4}\leq 1-|x|^{2}\leq 1\ \ \text{ and }\ \ |x-y|\geq\frac{|y|}{2},\] which combined with (11.2) and passing to polar coordinates, yields \[|Du(x)| \leq C\|u\|_{L^{\infty}(\mathbb{R}^{n}\setminus B_{1})}\int_{ \mathbb{R}^{n}\setminus B_{1}}\left[\frac{1}{(|y|-1)^{s}|y|^{n}}+\frac{1}{(|y |-1)^{s}|y|^{n+1}}\right]\,dy\] \[\leq C\|u\|_{L^{\infty}(\mathbb{R}^{n}\setminus B_{1})}\int_{1}^ {\infty}\left[\frac{1}{(\rho-1)^{s}\rho}+\frac{1}{(\rho-1)^{s}\rho^{2}}\right] \,d\rho\] \[\leq C\|u\|_{L^{\infty}(\mathbb{R}^{n}\setminus B_{1})}.\] Thus, \[|Du(x)|\leq C\|u\|_{L^{\infty}(\mathbb{R}^{n}\setminus B_{1})},\ \ \forall x\in B_{1/2}.\] Reiterating the computation, we get (11.1) for any multi-index \(\alpha\). ## 12. Density of fractional harmonic functions As we have seen above, fractional harmonic functions share lots of properties with classical harmonic functions. Obviously, there are also several significant differences that come from the non-local nature of the fractional Laplacian. In particular, as we will see below, any given smooth function can be locally approximated by fractional harmonic functions. This striking property, obtained in [15], shows how faraway oscillations of a fractional harmonic function affect on its local behavior. In other words, fractional harmonic functions are dense in the set of locally smooth functions. There is no local counterpart of this property. Indeed, classical harmonic functions cannot have a strict local maximum, hence, functions with strict local maximum cannot be approximated by harmonic functions. It is noteworthy, that although this is purely non-local phenomenon, but a similar result does not hold for any non-local operator. **Theorem 12.1**.: _If \(f\in C^{k}(\overline{B}_{1})\), for \(k\in\mathbb{N}\), then for any \(\varepsilon>0\), there exists \(R>0\) and \(u\in H^{s}(\mathbb{R}^{n})\cap C^{s}(\mathbb{R}^{n})\) such that \(u\) is fractional \(s\)-harmonic in \(B_{1}\), vanishes outside of \(B_{R}\)_ \[\|f-u\|_{C^{k}(\overline{B}_{1})}<\varepsilon.\] Proof.: We sketch the proof in the one-dimensional case, as in [8, Section 2.5]. For the general proof we refer the reader to [15, Theorem 1.1]. Notice that it is enough to prove the result for monomials. 
Indeed, by the Stone-Weierstrass Theorem, for any \(\varepsilon>0\) and a given \(f\in C([0,1])\), there exists a polynomial \(P\) such that \[\left\|f-P\right\|_{C^{k}(\overline{B}_{1})}<\varepsilon.\] Combined with the linearity of the fractional Laplacian, this implies that it is enough to prove the theorem for monomials, i.e., it is enough to show that \(P(x)=x^{m}\), \(m\geq 1\), can be approximated by an \(s\)-harmonic function \(u_{m}\). In turn, to prove the latter, it is enough to show that for any \(m\in\mathbb{N}\), there exist \(R>r>0\), \(x\in\mathbb{R}\) and \(u\) such that \[\begin{cases}(-\Delta)^{s}u=0&\text{in }\ (x-r,x+r),\\ u=0&\text{in }\ \mathbb{R}\setminus(x-R,x+R),\end{cases} \tag{12.1}\] and \[D^{i}u(x)=0,\ i\in\{0,1,\ldots,m-1\},\ D^{m}u(x)=1. \tag{12.2}\] Indeed, it implies that, up to a translation, \(u(x)=x^{m}+O(x^{m+1})\) near the origin, hence, its blow-up \[u_{\lambda}(x):=\frac{u(\lambda x)}{\lambda^{m}}=x^{m}+\lambda O(x^{m+1}),\] being an \(s\)-harmonic function, for \(\lambda\) small is arbitrarily close to \(x^{m}\), which, as stated earlier, provides the desired result. Thus, it remains to make sure there exists a function \(u\) satisfying (12.1) and (12.2). To that aim, let \(\mathbb{L}\) be the set of all pairs \((u,x)\) satisfying (12.1). Define the vector space \[V:=\left\{(u(x),Du(x),\ldots,D^{m}u(x))\,,\ \text{for }\ (u,x)\in\mathbb{L}\right\}.\] It can be verified directly that \(V\) is a linear space. Moreover, \[V=\mathbb{R}^{m+1}. \tag{12.3}\] Assume for a moment that (12.3) holds. Since \((0,\ldots,0,1)\in\mathbb{R}^{m+1}=V\), there exists a pair \((u,x)\) satisfying both (12.1) and (12.2). Thus, we are left to prove (12.3). We argue by contradiction and assume that (12.3) fails. Since \(V\) is a linear space, it has to be a proper subspace of \(\mathbb{R}^{m+1}\) and so it lies in a hyperplane. Consequently, there exists \(c=(c_{0},c_{1},\ldots,c_{m})\in\mathbb{R}^{m+1}\setminus\{0\}\) such that \[V\subseteq\left\{\mu\in\mathbb{R}^{m+1};\ c\cdot\mu=0\right\}.\] This means that the vector \(c\) is orthogonal to any vector in \(V\), i.e., \[\sum_{i\leq m}c_{i}D^{i}u(x)=0. \tag{12.4}\] If \(u(x)=x_{+}^{s}\), then \(D^{i}u(x)=s(s-1)\ldots(s-i+1)x^{s-i}\), and multiplying by \(x^{m-s}\), \(x\neq 0\), from (12.4) we get \[\sum_{i\leq m}c_{i}s(s-1)\ldots(s-i+1)x^{m-i}=0,\] i.e., \(c_{i}=0\) for each \(i\), or equivalently \(c=0\), which is a contradiction. This completes the proof. Strictly speaking, the function \(x_{+}^{s}\), being \(s\)-harmonic (Theorem 2.1), does not satisfy (12.1), because it does not have compact support. So to deduce the contradiction, one should assume that \(u\) is a fractional harmonic function with compact support, which behaves like \(x^{s}\) near the origin, and apply (12.4) for \(x>0\) small. **Acknowledgments.** The author was partially supported by the King Abdullah University of Science and Technology (KAUST) and by the Centre for Mathematics of the University of Coimbra (UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES).
2307.02849
NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic
Reasoning has been a central topic in artificial intelligence from the beginning. The recent progress made on distributed representation and neural networks continues to improve the state-of-the-art performance of natural language inference. However, it remains an open question whether the models perform real reasoning to reach their conclusions or rely on spurious correlations. Adversarial attacks have proven to be an important tool to help evaluate the Achilles' heel of the victim models. In this study, we explore the fundamental problem of developing attack models based on logic formalism. We propose NatLogAttack to perform systematic attacks centring around natural logic, a classical logic formalism that is traceable back to Aristotle's syllogism and has been closely developed for natural language inference. The proposed framework renders both label-preserving and label-flipping attacks. We show that compared to the existing attack models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. The victim models are found to be more vulnerable under the label-flipping setting. NatLogAttack provides a tool to probe the existing and future NLI models' capacity from a key viewpoint and we hope more logic-based attacks will be further explored for understanding the desired property of reasoning.
Zi'ou Zheng, Xiaodan Zhu
2023-07-06T08:32:14Z
http://arxiv.org/abs/2307.02849v1
# NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic ###### Abstract Reasoning has been a central topic in artificial intelligence from the beginning. The recent progress made on distributed representation and neural networks continues to improve the state-of-the-art performance of natural language inference. However, it remains an open question whether the models perform real reasoning to reach their conclusions or rely on spurious correlations. Adversarial attacks have proven to be an important tool to help evaluate the Achilles' heel of the victim models. In this study, we explore the fundamental problem of developing attack models based on logic formalism. We propose NatLogAttack to perform systematic attacks centring around _natural logic_, a classical logic formalism that is traceable back to Aristotle's syllogism and has been closely developed for natural language inference. The proposed framework renders both label-preserving and label-flipping attacks. We show that compared to the existing attack models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. The victim models are found to be more vulnerable under the label-flipping setting. NatLogAttack provides a tool to probe the existing and future NLI models' capacity from a key viewpoint and we hope more logic-based attacks will be further explored for understanding the desired property of reasoning. 1 Footnote 1: The code of NatLogAttack is available at [https://github.com/orianna-zzo/NatLogAttack](https://github.com/orianna-zzo/NatLogAttack). ## 1 Introduction While deep neural networks have achieved the state-of-the-art performance on a wide range of tasks, the models are often vulnerable and easily deceived by imposing perturbations to the original input Goodfellow et al. (2014); Kurakin et al. (2018), which seriously hurts the accountability of the systems. In depth, this pertains to model robustness, capacity, and the development of models with more advanced intelligence. Natural language inference (NLI), also known as textual entailment Dagan et al. (2005); Iftene and Balahur-Dobrescu (2007); MacCartney (2009); Bowman et al. (2015), is a fundamental problem that models the inferential relationships between a premise and hypothesis sentence. The models built on _distributed_ representation have significantly improved the performance on different benchmarks Bowman et al. (2015); Chen et al. (2017); Williams et al. (2018); Chen et al. (2018); Devlin et al. (2019); Liu et al. (2019); Zhang et al. (2020); Pilault et al. (2021). However, it is still highly desirable to conduct research to probe if the models possess the desired reasoning ability rather than rely on spurious correlation to reach their conclusions Glockner et al. (2018); Poliak et al. (2018); Belinkov et al. (2019); McCoy et al. (2019); Richardson et al. (2020). Adversarial attacks have proven to be an important tool to reveal the Achilles' heel of victim models. Specifically for natural language inference, the logic relations are easily broken if an attack model does not properly generate the adversarial examples following the logic relations and related semantics. Therefore, unlike other textual attack tasks such as those relying on semantic similarity and relatedness, it is more challenging to create effective attacks here. 
In this study, we explore the basic problem of developing adversarial attacks based on logic formalism, with the aim to probe victim models for the desired reasoning capability. Specifically, we propose NatLogAttack, in which the adversarial attacks are generated based on _natural logic_ Lakoff (1970); Van Benthem (1995); MacCartney (2009); Icard (2012); Angeli et al. (2016); Hu and Moss (2018); Chen et al. (2021), a classical logic formalism with a long history that has been closely developed with natural language inference. From a general perspective, natural language inference provides an appropriate setup for probing the development of _distributed representation_ and the models based on that. A robust solution for the task requires manipulation of discrete operations and adversarial attacks can help understand whether and how the required symbols and inference steps emerge from the data and the learned distributed representation. Our work has also been inspired by recent research on exploring the complementary strengths of neural networks and symbolic models (Garcez et al., 2015; Yang et al., 2017; Rocktaschel and Riedel, 2017; Evans and Grefenstette, 2018; Weber et al., 2019; De Raedt et al., 2019; Mao et al., 2019; Feng et al., 2020, 2022). Our research contributes to the development of logic-based adversarial attacks for natural language understanding. Specifically, we propose a novel attack framework, NatLogAttack, based on natural logic for natural language inference. Our experiments with both human and automatic evaluation show that the proposed model outperforms the state-of-the-art attack methods. Compared to the existing attack models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. In addition to the commonly used attack setting where the labels of generated examples remain the same as the original pairs, we also propose to construct label-flipping attacks. The victim models are found to be more vulnerable in this setup and NatLogAttack succeeds in deceiving them with much smaller numbers of queries. NatLogAttack provides a systematic approach to probing the existing and future NLI models' capacity from a basic viewpoint that has a traceable history, by combining it with the recent development of attacking models. The proposed framework is constrained by the natural logic formalism and we hope more logic-based attacks will be further explored for understanding the desired property of natural language reasoning. ## 2 Related Work Adversarial Attacks in NLP.White-box attacks leverage the architecture and parameters of victim models to craft adversarial examples (Liang et al., 2018; Wallace et al., 2019; Ebrahimi et al., 2018). Black-box models, however, have no such knowledge. Pioneering blind models (Jia and Liang, 2017), for example, create adversarial examples by adding distracting sentences to the input. More recently, score-based (e.g., Zhang et al. (2019); Jin et al. (2020)) and decision-based attack models (Zhao et al., 2018) also query the prediction scores or the final decisions of victim models. In terms of perturbation granularities, character-level attacks modify characters (Ebrahimi et al., 2018) while word-level models rely on word substitutions that can be performed based on word embeddings (Sato et al., 2018), language models (Zhang et al., 2019), or even external knowledge bases (Zang et al., 2020). 
Sentence-level attack models add perturbation to an entire sentence by performing paraphrasing (Iyyer et al., 2018) or attaching distracting sentences (Jia and Liang, 2017). Kang et al. (2018) generated natural language inference examples based on entailment label composition functions with the help of lexical knowledge. Minervini and Riedel (2018) utilized a set of first-order-logic constraints to measure the degree of rule violation for natural language inference. The efforts utilized the generated examples for data augmentation. The focus is not on adversarial attack and the adversarial examples' quality, e.g., the attack validity, is not evaluated. Natural Logic.Natural logic has a long history and has been closely developed with natural language inference (Lakoff, 1970; Van Benthem, 1995; MacCartney, 2009; Icard, 2012; Angeli et al., 2016; Hu and Moss, 2018; Chen et al., 2021). Recently, some efforts have started to consider monotonicity in attacks, including creating test sets to understand NLI models' behaviour (Richardson et al., 2020; Yanaka et al., 2019, 2019, 2020; Geiger et al., 2020). The existing work, however, has not performed systematic attacks based on natural logic. The core idea of monotonicity (e.g., downward monotone) and projection has not been systematically considered. The models have not been combined with the state-of-the-art adversarial attack framework and search strategies for the general purpose of adversarial attacks. For example, Richardson et al. (2020) and Yanaka et al. (2020) generate adversarial examples from a small vocabulary and pre-designed sentence structures. The effort of Yanaka et al. (2019) is limited by only considering one-edit distance between a premise and hypothesis. We aim to explore principled approaches to constructing perturbations based on natural logic, and the control of the quality of attack generation can leverage the continuing advancement of language models. The proposed attack settings, along with the breakdown of attack categories, help reveal the properties of victim models in both label-preserving and label-flipping attacks. ## 3 NatLogAttack: A Natural-logic-based Attack Framework This section introduces NatLogAttack, a systematic adversarial attack framework centring around natural logic. The overview of NatLogAttack's generation and attack process is depicted in Figure 1. Below we will introduce the background, attack principles, setups, and each component of the framework. ### Background The study of natural logic can be traced back to Aristotle's syllogisms. Rather than performing deduction over an abstract logical form, natural logic models inference in natural language by operating on the structure or surface form of language (Lakoff, 1970; van Benthem, 1988; Valencia, 1991; Van Benthem, 1995; Nairn et al., 2006; MacCartney and Manning, 2009; Icard, 2012; Angeli and Manning, 2014; Hu and Moss, 2018; Chen and Gao, 2021; Chen et al., 2021). It allows for a wide range of intuitive inferences in a conceptually clean way that we use daily and provides a good framework for attacking inference models--we doubt that a victim model vulnerable to such natural attacks indeed performs reliable reasoning. 
Our work uses the natural logic variant proposed by MacCartney and Manning (2009) and MacCartney (2009), which extends the prior formalism to model the entailment relations between two spans of text with seven relations \(\mathfrak{B}=\{\,\equiv,\sqsubseteq,\sqsupseteq,\wedge,\,|\,,\smile,\#\,\}\), representing _equivalence_, _forward entailment_, _reverse entailment_, _negation_, _alternation_, _cover_, and _independence_, respectively. Through projection based on _monotonicity_ in context, local lexical-level entailment relations between a premise and hypothesis can be aggregated to determine the entailment relations at the sentence-pair level. For completeness of this paper, we highlight the key building blocks in Appendix A. ### NatLogAttack Setups and Principles Formally, given a premise sentence \(P\), its \(n\)-word hypothesis \(H=(h_{1},h_{2},\cdots,h_{n})\), and the ground-truth natural language inference label \(y_{g}=\mathbb{L}(P,H)\), NatLogAttack generates a hypothesis \(H^{*}\) that satisfies a desired target label \(y^{*}_{g}=\mathbb{L}(P,H^{*})\). The attacking pair \(\langle P,H^{*}\rangle\) is generated only if the original pair \(\langle P,H\rangle\) is correctly classified by a victim model \(\mathbb{F}\). Accordingly, we denote \(y=\mathbb{F}(P,H)\) as the natural language inference label predicted by the victim model \(\mathbb{F}\) for the original pair and denote \(y^{*}=\mathbb{F}(P,H^{*})\) as the predicted label for the attacking pair. We propose to perform the attacks in two setups: the _label-preserving_ and _label-flipping_ attacks. The attack principles and setups are summarized in Table 1. A _label-preserving_ attack generates adversarial examples with \(y^{*}_{g}=y_{g}\), aiming to test the robustness of victim models on different inputs that have the same label--it attacks victim models under perturbations that do not change the inferential labels of the original premise-hypothesis pair. The _label-flipping attacks_, on the other hand, aim at attacking victim models with perturbations that are key to differentiating two different logical relations where \(y^{*}_{g}\neq y_{g}\). Note that natural logic can be naturally used to generate label-flipping attacks, and our work here is among the first to explore this type of attack for natural language understanding, although label-flipping attacks have been explored in image attacks (Tramer et al., 2020).
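To make the mapping in Table 1 concrete, the snippet below is an illustrative sketch (not the authors' released code) that writes down the seven relations of \(\mathfrak{B}\) and the target relation sets \(\mathfrak{R}_{g}^{*}\) for the two rows that are spelled out in Section 3.3.1, namely the entailment-preserving and contradiction-preserving cases; the remaining rows are omitted here rather than guessed.

```python
# Illustrative only: the seven natural-logic relations and the target relation sets
# between H and H* for the two Table 1 rows discussed in the text.
EQ, FE, RE, NEG, ALT, COV, IND = "≡", "⊑", "⊒", "^", "|", "⌣", "#"
RELATIONS = (EQ, FE, RE, NEG, ALT, COV, IND)

TARGET_RELATIONS = {
    # entailment -> entailment: P entails H and H entails H*, hence P entails H*
    ("entailment", "entailment"): {EQ, FE},
    # contradiction -> contradiction: H* entails H, so P still contradicts H*
    ("contradiction", "contradiction"): {EQ, RE},
}

def target_relation_set(y_g: str, y_target: str) -> set:
    """Admissible natural-logic relations between H and the generated H*."""
    return TARGET_RELATIONS[(y_g, y_target)]

print(target_relation_set("entailment", "entailment"))  # {'≡', '⊑'}
```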
\begin{table} \begin{tabular}{c c c c} \hline \hline **Setups** & **Label \(y_{g}\to y^{*}_{g}\)** & **Strategy** & **NatLogic Relations** \\ \hline \multirow{2}{*}{Label-preserving} & E \(\rightarrow\) E & \(H\)\(\vdash\)\(H^{*}\) & \(H\)\(\vdash\)\(H^{*}\) & \(H\)\(\vdash\)\(H\)\(\vdash\)\(H^{*}\) \\ & C \(\rightarrow\) C & \(H^{*}\)\(\vdash\)\(H\) & \(H\)\(\vdash\)\(H\)\(\vdash\)\(H\)\(\vdash\)\(H\)\(\vdash\)\(H^{*}\) \\ & N \(\rightarrow\) N & \(H^{*}\)\(\vdash\)\(H\) & \(H\)\(\vdash\)\(H^{*}\) \\ \hline \multirow{2}{*}{Label-flipping} & E \(\rightarrow\) C & \(H\)\(\vdash\)\(\neg\)\(H^{*}\) & \(H\)\(\vdash\)\(H^{*}\) or \(H\)\(\vdash\)\(H^{*}\) \\ & E \(\rightarrow\) N & \(H\)\(\vdash\)\(H^{*}\) and \(H\)\(\vdash\)\(H^{*}\) & \(H\)\(\vdash\)\(H^{*}\) \\ \cline{1-1} & C \(\rightarrow\) E & \(\neg\)\(H^{*}\)\(\vdash\)\(H\) & \(H\)\(=\)\(H^{*}\)\(\vdash\)\(\neg\)\(H^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Generation principles of NatLogAttack and natural logic relations between the original hypothesis \(H\) and the generated hypothesis \(H^{*}\), where E, C and N stand for _entailment_, _contradiction_ and _neutral_. Figure 1: Overview of NatLogAttack generation and attacking process. The third column of the table (_strategy_) lists the logic conditions between the generated hypothesis \(H^{*}\) and the original hypothesis \(H\) that satisfy the desired properties of preserving or flipping labels to obtain the target label \(y_{g}^{*}\). Consider the second row of the label-preserving setup (_i.e._, \(C\to C\)), in which NatLogAttack generates a hypothesis \(H^{*}\) with \(y_{g}^{*}=y_{g}=\textit{contradiction}\). This is achieved by ensuring the natural language inference label between \(H^{*}\) and \(H\) to obey _entailment_: \(H^{*}\vDash H\). 2 This guarantees the sentence pair \(\langle P,H^{*}\rangle\) to have a _contradiction_ relation. In the natural logic formalism [15], this is implemented with \(H\equiv H^{*}\) or \(H\sqsupset H^{*}\). Consider another example. In the last row of the _label-flipping_ setup, NatLogAttack generates a new hypothesis \(H^{*}\) with \(y_{g}^{*}\) = _entailment_ from a _contradiction_ pair, implemented by following the natural logic relations \(H\equiv\neg H^{*}\) or \(H\sqsupset\neg H^{*}\). Footnote 2: We use the _entailment_ notation that is same as in [15]. **Constraint 3.1**: _We constrain NatLogAttack from generating neutral attack examples (\(y_{g}^{*}\)= neutral) using the premise-hypothesis pairs with \(y_{g}\)=contradiction, because two contradictory sentences may refer to irrelevant events from which a neutral pair cannot be reliably generated. 3_ Footnote 3: For example, The SNLI [1] and MNLI datasets [16] were annotated under a guideline with a specific assumption of treating potentially irrelevant events as _contraction_. **Constraint 3.2**: NatLogAttack _is also constrained from generating contradiction and entailment attacks (\(y_{g}^{*}\)= contradiction or \(y_{g}^{*}\)= entailment) from neutral pairs (\(y_{g}\)=neutral), as there are many ways two sentences being neutral, including reverse entailment and diverse semantic relations. 
The contradiction and entailment pairs cannot be reliably generated._ ### Generation and Quality Control #### 3.3.1 Preparing Natural Logic Relations As shown in the bottom-left part of Figure 1, given a premise-hypothesis pair \(\langle P,H\rangle\), the ground-truth label \(y_{g}\), and the target label \(y_{g}^{*}\), NatLogAttack retrieves natural logic relations from the last column of Table 1. Consider _label-preserving_ attacks and take \(y_{g}^{*}=y_{g}=\textit{entailment}\) as an example. From the last column in the first row of the _label-preserving_ setup, NatLogAttack finds and pushes the relations \(\equiv\) and \(\sqsubset\) into the _natural-logic relations set_, \(\mathfrak{R}_{g}^{*}=\{\equiv,\sqsubset\}\), where \(\mathfrak{R}_{g}^{*}\) includes the natural-logic relations between \(H\) and \(H^{*}\) and will be used to generate the latter. Note that \(r_{g}^{*}\in\mathfrak{R}_{g}^{*}\) is one of relations in \(\mathfrak{R}_{g}^{*}\). We first copy \(H\) to \(H^{(1)}\), denoted as \(H^{(1)}\ \leftarrow\ H\) for the convenience of notation, because the generation-and-attack process may be performed multiple rounds if one round of attacks fail. Then we use the notation \(H^{(1)}\) and \(H^{(2)}\) to refer to the original and a generated hypothesis sentence in each round. Note that in the above example, as will be discussed below, within each round of generation, NatLogAttack will provide a set of attacks to perform multiple (iterative) attacks. #### 3.3.2 Candidate Generation ``` Input: Sentence \(H^{(1)}\) with tokens \((h_{1}^{(1)},\cdots,h_{n}^{(1)})\), target natural-logic relation set \(\mathfrak{R}_{g}^{*}\) Output: Candidate sentence set \(\mathcal{H}\) 1Init\(\mathcal{H}=\varnothing\) 2\(\mathfrak{L}=\mathrm{natlog}(H^{(1)})\) 3foreach\(h_{1}^{(1)}\in H^{(1)}\) and \(r_{g}^{*}\in\mathfrak{R}_{g}^{*}\)do 4\(\mathfrak{R}_{\textit{local}}^{*}=\mathfrak{L}_{\mathfrak{B}}[idx^{\textit{ \scriptsize\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text mation using the Stanford _natlog_ parser4_(line 2)_. Specifically for \(h_{i}^{(1)}\), suppose the parser outputs an ordered relation list: \(\mathfrak{L}_{i}=\langle\equiv,\sqsupseteq,\sqsubseteq,\wedge,\mid,\smile,\#\rangle\), this returned list actually encodes the contextualized projection information, which we leverage to substitute \(h_{i}^{(1)}\) with \(h_{i}^{\prime}\) to generate \(H_{i}^{(2)}\) that satisfies relation \(r_{g}^{*}\). Footnote 4: [https://stanfordnlp.github.io/CoreNLP/natlog.html](https://stanfordnlp.github.io/CoreNLP/natlog.html). In natural logic, when determining the sentence-level logical relation between a premise and hypothesis sentence, _projection_ is used to map local lexicon-level logical relation to sentence-level relations by considering the context and monotonicity. However, in adversarial attacks, NatLogAttack needs to take the following reverse action: \[\mathfrak{R}_{local}=\mathfrak{L}_{\mathcal{B}}[idx^{\mathfrak{L}_{i}}(r_{g}^{* })] \tag{1}\] where \(r_{g}^{*}\) is the target sentence-level natural logic relation (in our above example, suppose \(r_{g}^{*}\)='\(\sqsubseteq\)'). Then \(idx^{\mathfrak{L}_{i}}(.)\) returns the index of that relation in \(\mathfrak{L}_{i}\). For '\(\sqsubseteq\)', the index is 3. Then the index is used to find the lexicon-level (local) relation from the predefined ordered list \(\mathfrak{L}_{\mathcal{B}}=\langle\,\sqsubseteq,\sqcap,\wedge,\mid,\smile,\#\,\rangle\). 
In the above example we will get \(\mathfrak{L}_{\mathcal{B}}[3]\)='\(\sqsubseteq\)'. Again, Equation 1 presents a reverse process of the regular _projection_ process in natural logic. In other words, the ordered relation list provided by the _natlog_ parser for each word token, when used together with the predefined (ordered) relation list \(\mathfrak{L}_{\mathcal{B}}\), specifies a mapping between global (sentence-level) natural-logic relations and local (lexicon-level) relations. Note also that the output \(\mathfrak{R}_{local}\) is a set, because \(\mathfrak{L}_{i}\) is an ordered list that may contain the same relation multiple times. Basic Word Perturbation.For a word token \(h_{i}\), we replace it with word \(h_{i}^{\prime}\) to ensure the local relation \(\langle h_{i},h_{i}^{\prime}\rangle\) to be \(r_{local}\in\mathfrak{R}_{local}\). NatLogAttack extracts natural-logic relation knowledge from knowledge bases to obtain word candidates for the desired relation types. The word perturbation of NatLogAttack focused on five relations in Table 8. **Constraint 3.3**: _Since cover (\(\smile\)) is very rare and independence (\(\#\)) is ambiguous, NatLogAttack is constrained to only focus on utilizing the remaining five relations: \(\{\equiv,\sqsubseteq,\sqsupseteq,\wedge,\mid\}\)._ We attack the victim models using the most basic semantic relations explicitly expressed in knowledge bases and knowledge implicitly embedded in large pretrained language models. Specifically, we use WordNet [10] to extract the desired lexical relations. For a word token \(h_{i}\), we search candidate words \(h_{i}^{\prime}\) that has one of the following relations with \(h_{i}\): \(\{\equiv,\sqsubseteq,\sqsupseteq,\wedge,\mid\}\). Synonyms are used as \(h_{i}^{\prime}\) to substitute \(h_{i}\) for constructing \(H^{(2)}\) with an _equivalence_ relation to \(H^{(1)}\) (_line 6_), hypernyms are used for _forward entailment (line 10)_, and hyponyms for _reverse entailment (line 14)_. Due to the transitiveness of _forward entailment_ (\(\sqsubseteq\)) and _reverse entailment_ (\(\sqsupseteq\)), we centre around \(h_{i}\) to find its hypernyms and hyponyms but restrict the distances within a threshold to avoid generating sentences that are semantically unnatural, contain overgeneralized concepts, or are semantically implausible. Later, we will further use a language model to control the quality. For _alternation_, the perturbation candidates \(h_{i}^{\prime}\) are words that share the common hypernym with \(h_{i}\) (_line 18_). Following MacCartney (2009), we do not use antonyms of content words for the _negation_ relation but instead use them to construct _alternation_ hypotheses (_line 19_). For the _negation_ (_line 23_), a list of negation words and phrases is used to construct new hypotheses. Note that while our experiments show the NatLogAttack has been very effective and outperforms other attack models, some of the components can be further augmented as future work. Enhancing Alternation.As discussed above, attacks may run multi-rounds if the prior round fails. For _alternation_ substitution, NatLogAttack does not replace the word token that has been substituted before, since the _alternation_ of _alternation_ does not guarantee to be the _alternation_ relation. In addition to constructing _alternation_ hypotheses using WordNet, we further leverage DistilBert [11] to obtain the alternation candidates using the function _AltLM_ (_line 20_). 
Specifically, we mask the target word (which is a verb, noun, adjective or adverb) and prompt the language model to provide candidates. The provided candidates and replaced words are required to have the same POS tags. Insertion and Deletion.In addition to substitution, NatLogAttack also follows natural logic and \begin{table} \begin{tabular}{c c c} \hline \hline **Monotonicity** & **Upward** & **Downward** \\ \hline \multirow{3}{*}{**Syntax**} & \(adj+n\sqsubseteq n\) & \(adj+n\sqsupset n\) \\ & \(v+adv\sqsubseteq v\) & \(v+adv\sqsubseteq v\) \\ & \(s+PP\sqsubseteq s\) & \(s+PP\sqsubseteq s\) \\ \hline \hline \end{tabular} \end{table} Table 2: Insertion and deletion operations applied in the upward and downward context. \(s\) is short for _sentence_. monotonicity to construct examples using the insertion and deletion operations. As shown in Table 2, adjectives, adverbs and prepositional phrases are leveraged in the upward and downward context of monotonicity to enhance the attacks for entailment ('\(\sqsubset\)') and reverse entailment ('\(\sqsupset\)'). We include the details in Appendix B, which is built on Stanford _CoreNLP_ parser and pretrained language models. Note that the syntactic rules do not guarantee to generate sentences with the desired NLI labels (e.g., see Partee (1995) for the discussion on the semantic composition of _adjective + noun_) and the process is only for generating candidates. We will use the pretrained language model to further identify good adversarial examples at a later stage. Both the insertion and deletion operations are used with monotonicity and projection context to generate different relations. #### 3.3.3 Attack Quality Control \(\mathtt{NatLogAttack}\) uses DistilBert Sanh et al. (2019) to calculate the pseudo-perplexity scores Salazar et al. (2020) for all generated hypotheses \(\mathcal{H}=\{H_{1}^{(2)},H_{2}^{(2)},\cdots,H_{m}^{(2)}\}\), and keeps only a maximum of 100 candidates with the lowest perplexity values. In our development, we found that the quality control stage is important for ensuring the quality of attack examples, particularly for reducing word perturbation mistakes resulting from incorrect interpretation of the words being substituted, which often results in unnatural hypothesis sentences, as well as reducing other sources of low-quality attacks including over-generalization of concepts and implausible semantics caused by insertion and deletion. The output of this stage is an ordered list of candidate attacks \(\mathcal{H}_{sqc}=\langle H_{r_{1}}^{(2)},H_{r_{2}}^{(2)},\cdots,H_{r_{k}}^{(2 )}\rangle\). ### Iterative and Multi-rounds Attacking As discussed above, \(\mathtt{NatLogAttack}\) performs iterative attacking within each round of generation and then multi-round attacks if the current round fails. Within each round, the original premise \(P\) and each hypothesis in the ranked hypotheses list \(\mathcal{H}_{sqc}\) form an attack list \(\langle\langle P,H_{r_{1}}^{(2)}\rangle,\cdots,\langle P,H_{r_{k}}^{(2)}\rangle\rangle\). As shown in Figure 1, when an attack succeeds, we output the corresponding hypothesis as \(H^{*}\), which is sent for evaluation. If an attack fails, the next pair in the ranked attack list will be tried until the list is exhausted. Then \(\mathtt{NatLogAttack}\) organizes the next round of attacks. In total \(\mathtt{NatLogAttack}\) generates a maximum of 500 attacks for each \(\langle P,H\rangle\) pair. 
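Before turning to how the next round of attacks is organized, the pseudo-perplexity ranking used in the quality-control stage (Section 3.3.3) can be made concrete. The sketch below is an illustrative implementation in the spirit of Salazar et al. (2020) with a DistilBERT masked language model; it is not the authors' released code, and the specific checkpoint and scoring details are assumptions.

```python
# Illustrative pseudo-perplexity scorer (assumed checkpoint: distilbert-base-uncased);
# lower scores indicate more fluent candidate hypotheses.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
mlm.eval()

def pseudo_perplexity(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    nll, count = 0.0, 0
    for i in range(1, ids.size(0) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id      # mask one position at a time
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        nll -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
        count += 1
    return float(torch.exp(torch.tensor(nll / count)))

# rank generated hypotheses by fluency and keep at most 100 of them
candidates = ["Two dogs run in a field.", "Two canines sprint across a meadow."]
ranked = sorted(candidates, key=pseudo_perplexity)[:100]
```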
When generating the next round attacks, we identify the adversarial pair for which the victim model has the lowest confidence (indexed as \(j_{lc}\)) over the ground-truth class \(y_{g}^{*}\): \[j_{lc} =\operatorname*{arg\,min}_{j\in\{r_{1},\ldots,r_{k}\}}\{s_{r_{1 }},\ldots,s_{r_{k}}\} \tag{2}\] \[s_{r_{j}} =o(y_{g}^{*}|(P,H_{r_{j}}^{(2)})) \tag{3}\] where \(o(*)\) returns the corresponding softmax probabilities of the output layer. We then copy \(H_{j_{lc}}^{(2)}\) to \(H^{(1)}\), denoted as \(H^{(1)}\gets H_{j_{lc}}^{(2)}\). The attack continues until the victim model is deceived to make a wrong prediction \(y^{*}\) that is different from the ground truth \(y_{g}^{*}\) or the maximum number of attacks is reached. ## 4 Experiments and Results ### Experimental Setup DatasetOur study uses SNLI Bowman et al. (2015), MNLI Williams et al. (2018), MED Yanaka et al. (2019), HELP Yanaka et al. (2019), and SICK Marelli et al. (2014); Hu et al. (2020) datasets. The MED upward and downward subsets are denoted as \(\text{MED}_{\text{up}}\) and \(\text{MED}_{\text{down}}\), respectively. Details of the datasets and the setup for training can be found in Appendix C. Attack and Victim ModelsWe compared the proposed model to five representative attack models including the recent state-of-the-art models: \(\mathtt{Clare}\)Li et al. (2021), BertAttack Li et al. (2020), PWWS Ren et al. (2019), TextFooler Jin et al. (2020) and PSO Zang et al. (2020). Specifically, we used the implementation made publicly available in TextAttack.5 For victim models, we used uncased BERT Devlin et al. (2019) and RoBERTa base models Liu et al. (2019). The accuracy of victim models is included in Table 3, which is comparable to the state-of-the-art performance. Footnote 5: [https://github.com/QData/TextAttack](https://github.com/QData/TextAttack) Evaluation MetricsThree metrics are used to evaluate the models from different perspectives. The sign \(\uparrow\) (\(\downarrow\)) indicates that the higher (lower) the values are, the better the performance is. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Models & SNLI & MED & \(\text{MED}_{\text{up}}\) & \(\text{MED}_{\text{down}}\) & MNLI & SICK \\ \hline BERT & 89.99 & 77.68 & 74.42 & 81.72 & 84.32 & 87.06 \\ RoBERTa & 91.53 & 73.37 & 80.97 & 70.72 & 87.11 & 87.79 \\ \hline \hline \end{tabular} \end{table} Table 3: Victim models’ accuracy on different datasets. * **Human Validated Attack Success Rate (HVASR \(\uparrow\)).** Most existing attacking methods are evaluated with attack success rates that are not validated by human subjects, assuming that the attacking methods could generate adversarial examples of the desired labels. This assumption works for many NLP tasks such as sentiment analysis and text classification. However, this is not the case in NLI, since the logical relationships can be easily broken during the generation process. As observed in our experiments, although the state-of-art attacking models (BertAttack and Clare) attain high attack success rates on various NLP tasks, human-validated evaluation demonstrates that they are much less effective in attacking natural language reasoning. To reliably evaluate the attack performance, we use _Human Validated Attack Success Rate_ (HVASR). Specifically, we used Amazon Mechanical Turk6 to validate if the generated attack examples belong to the desired relations. Each example was annotated by at least three workers and the label is determined by the majority voting. 
HVASR is the percentage of _successful-and-valid_ adversarial examples that successfully deceived the victim models to make the wrong prediction and at the same time the majority of the annotators think their NLI labels are the desired target labels \(y_{g}^{*}\). While HVASR is our major evaluation metric, we also use query numbers and perplexity to provide additional perspectives for observations. Footnote 6: [https://www.mturk.com/](https://www.mturk.com/) * **Query number (QN \(\downarrow\))** refers to the average number of times that a successful attack needs to query the victim model (Zang et al., 2020; Li et al., 2020). QN can reflect the efficiency (but not effectiveness) of an attack model. * **Perplexity (PPL \(\downarrow\))** reflects the fluency and quality of generated examples. Same as in (Zang et al., 2020; Li et al., 2021), it is computed with GPT-2 (Radford et al., 2019) during evaluation. ### Results and Analysis Results on Label Preserving AttacksTable 4 shows the performance of different models on _label-preserving attacks_. We can see that NatLogAttack consistently achieves the best performance on HVASR. The detailed results on MED also show that NatLogAttack has a better ability to construct adversarial examples in both upward and downward monotone. NatLogAttack also shows superior performance on average QN and PPL in nearly all setups. We can see that NatLogAttack has a large HVASR and small QN value in \(\text{MED}_{\text{up}}\), suggesting that NatLogAttack can easily generate attacks in the upward monotone. However, in \(\text{MED}_{\text{down}}\), NatLogAttack needs more efforts (QN). Our further analysis reveals that this is because in the downward monotone, the attack model relies more on the insertion operation than deletion, and the former is more likely to result in unsuccessful attempts. Figure 2 further compares the query numbers (QNs) of different attack models on BERT and RoBERTa in terms of the medians (instead of means) and density of QN. We can see that the majority of query numbers of NatLogAttack are rather small and medians are less than 12 for on both SNLI and MED, showing that NatLogAttack could attack successfully with very limited attempts in most cases. 
For each attack model, the density of QN on \begin{table} \begin{tabular}{l|l|c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Victim} & \multirow{2}{*}{Attack} & \multicolumn{2}{c|}{SNLI} & \multicolumn{2}{c|}{MED} & \multicolumn{2}{c|}{MED\({}_{\text{up}}\)} & \multicolumn{2}{c|}{MED\({}_{\text{down}}\)} & \multicolumn{2}{c|}{MNLI} & \multicolumn{2}{c}{SICK} \\ \cline{3-14} & & \multirow{2}{*}{Model} & \multicolumn{1}{c|}{HVASR} & \multicolumn{1}{c|}{ON} & \multicolumn{1}{c|}{PPL} & HVASR & \multicolumn{1}{c|}{QN} & PPL & HVASR & \multicolumn{1}{c|}{QN} & PPL & HVASR & \multicolumn{1}{c|}{QN} & PPL & HVASR & \multicolumn{1}{c|}{QN} & PPL & HVASR & \multicolumn{1}{c|}{QN} & PPL \\ \hline \multirow{8}{*}{**Deep**} & PWWS & 29.9 & 175.8 & 15.96 & 45.9 & 115.3 & 18.18 & 43.1 & 119.1 & 17.98 & 48.3 & 111.6 & 18.38 & 27.8 & 184.2 & 13.87 & 31.0 & 147.1 & 17.75 \\ & TextFooler & 34.5 & 58.4 & 15.88 & 47.3 & 51.2 & 17.96 & 47.8 & 51.2 & 17.77 & 46.9 & **51.2** & 18.15 & 37.3 & 74.7 & 13.62 & 30.7 & 50.0 & 17.62 \\ & PSO & 20.5 & 91.8 & 16.06 & 38.8 & 81.9 & 18.9 & 37.7 & 83.9 & 18.4 & 39.7 & 79.7 & 18.25 & 32.0 & 103.4 & 13.81 & 22.3 & 115.86 & 17.77 \\ & BertAttack & 31.6 & 76.4 & 17.07 & 39.9 & 62.3 & 18.66 & 31.1 & 63.2 & 18.7 & 47.4 & 61.5 & 19.02 & 32.7 & 86.5 & 14.77 & 32.2 & 91.7 & 18.18 \\ & Clare & 19.9 & 33.8 & 16.7 & 36.7 & 19.7 & 19.8 & 13.1 & 29.9 & 20.55 & 18.30 & 42.8 & 19.8 & 32.5 & 22.99 & 18.67 & 23.1 & 246.9 & 18.60 \\ & NatLogAttack & **35.7** & **42.8** & **14.78** & **56.9** & **42.7** & **17.43** & **57.9** & **31.71** & **72.4** & **56.0** & **55.4** & **17.62** & **39.7** & **50.1** & **13.47** & **43.6** & **40.3** & **16.73** \\ \hline \multirow{8}{*}{**Deep**} & PWWS & 35.5 & 177.1 & 16.05 & 39.8 & 118.5 & 18.15 & 41.3 & 12.1 & 18.30 & 38.7 & 115.8 & 18.00 & 28.7 & 189.6 & 13.83 & 35.2 & 143.4 & 17.91 \\ & TextFooler & 30.0 & 59.7 & 15.93 & 42.6 & 50.2 & 18.06 & 38.7 & 49.5 & 17.98 & 45.6 & 50.82 & 18.13 & 34.0 & 78.2 & 13.61 & 33.8 & 49.6 & 17.69 \\ \cline{1-1} & PSO & 19.2 & 92.9 & 16.17 & 34.3 & 81.8 & 18.14 & 27.1 & 83.2 & 18.03 & 39.3 & 80.19 & 18.26 & 28.3 & 99.4 & 13.85 & 24.9 & 115.0 & 17.75 \\ \cline{1-1} & BertAttack & 34.9 & 78.3 & 16.89 & 47.3 & 61.1 & 18.77 & 47.2 & 59.7 & 18.66 & 47.4 & 62.4 & 18.89 & 39.2 & 91.2 & 14.65 & 35.6 & 95.8 & 18.21 \\ \cline{1-1} & Clare & 14.7 & 326.6 & 16.65 & 27.4 & 199.8 & 18.54 & 17.9 & 203.7 & 18.20 & 35.2 & 195.9 & 18.88 & 22.6 & 296.7 & 16.44 & 27.5 & 244.0 & 18.16 \\ \cline{1-1} & NatLogAttack & **36.5** & **45.0** & **14.69** & **55.5** & **33.9** & **17.37** & **59.7** & **27.5** & **17.34** & **52.3** & **40.2** & **17.40** & **39.7** & **46.1** & **13.53** & **49.3** & **42.9** & **16.61** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of different attack models in label-preserving attacks. The bold font marks the best performance under each evaluation setup. The improvements of NatLogAtt over the second-best results (marked with underscores) are statistically significant (\(p<0.05\)) under one-tailed paired t-test. BERT and RoBERTa is close to each other and the medians are indiscernible and are represented by the same red dot in the figure. Results on Label Flipping AttacksTable 5 shows the performance of NatLogAttack on the _label-flipping attacks_. Note that there has been little prior work providing systematic label-flipping attacks for NLP tasks. This new angle of evaluation is more easily implemented with logic-based attacks and provides additional insights. 
Specifically, the table shows that the numbers of queries that NatLogAttack sent to the victim models are much smaller than those in the _label-preserving_ setting presented in Table 4, suggesting that the victim models are more vulnerable in the _label-flipping_ setting. For example, we can see that most of the query numbers are within 1-5 in Table 5. The pretrained victim models are capable of memorizing the superficial features related to the original label and have difficulty in capturing the logical relationship between sentences when we alter it while keeping the majority of words untouched. In both the _label-preserving_ and _label-flipping_ setups, the HVASR may still be further improved. Although the proposed model substantially outperforms the off-the-shelf state-of-the-art attack models and caution has been exercised in all attack generation steps, there is room for more research on improving logic-based attacks in future work.

Examples and Analysis. Table 6 provides the generated attack examples in the _label-preserving_ setup (\(E\to E\)), in which we can see the quality of attacks generated by NatLogAttack is clearly higher. The baseline attacking models generate adversarial examples by replacing words based on word embeddings or language models, which can easily break the logic relationships. Some examples in Table 6 show that the baselines often rely on semantic _relatedness_ to construct adversarial examples, which is not detailed enough for NLI and hence breaks the logic relations (e.g., the last BertAttack example). Also, the last example of Clare shows that the model deletes words without considering the context (downward) monotonicity, resulting in an invalid attack. Note that the baseline models modify both premises and hypotheses, while NatLogAttack focuses only on modifying hypotheses; it is straightforward to copy or adapt the operations of NatLogAttack to modify premises, but in many applications it is more natural to modify the hypotheses and keep the premises (evidence) untouched. Table 7 shows more adversarial examples generated by NatLogAttack in the _label-flipping_ setup. For all the six examples, the prediction of the victim model RoBERTa remains unchanged (_i.e._, _entailment_, _entailment_ and _contradiction_ for the first, middle, and last two examples, respectively), while the ground-truth labels are now _contradiction_, _neutral_, and _entailment_, respectively. The victim model had difficulty in telling the difference, which offers an angle for challenging the models' ability to understand and reason.

\begin{table} \begin{tabular}{c|c|ccc|ccc|ccc|ccc} \hline \hline \multirow{2}{*}{Vict.} & \multirow{2}{*}{Lab. Flip.} & \multicolumn{3}{c|}{SNLI} & \multicolumn{3}{c|}{MED} & \multicolumn{3}{c|}{MNLI} & \multicolumn{3}{c}{SICK} \\ \cline{3-14} & & HVASR & QN & PPL & HVASR & QN & PPL & HVASR & QN & PPL & HVASR & QN & PPL \\ \hline \multirow{3}{*}{BERT} & E\(\to\)C & 37.9 & 1.0 & 14.8 & 48.7 & 1.0 & 16.9 & 33.2 & 1.4 & 13.5 & 31.8 & 10.4 & 16.2 \\ & E\(\to\)N & 57.5 & 2.9 & 14.9 & 50.9 & 2.8 & 17.7 & 50.3 & 4.7 & 13.7 & 55.8 & 6.5 & 16.1 \\ & C\(\to\)E & 33.4 & 1.0 & 14.4 & - & - & - & 34.2 & 1.1 & 13.0 & 37.1 & 1.0 & 16.0 \\ \hline \multirow{3}{*}{RoBERTa} & E\(\to\)C & 43.5 & 1.4 & 14.6 & 49.8 & 2.9 & 16.7 & 36.8 & 5.0 & 13.5 & 32.1 & 13.9 & 16.4 \\ & E\(\to\)N & 56.8 & 2.6 & 14.8 & 52.1 & 3.0 & 17.6 & 50.7 & 4.8 & 13.8 & 57.4 & 4.4 & 16.1 \\ & C\(\to\)E & 36.4 & 1.8 & 14.5 & - & - & - & 35.1 & 1.2 & 13.0 & 37.7 & 1.0 & 16.0 \\ \hline \hline \end{tabular} \end{table} Table 5: The evaluation for label-flipping attacks.

Figure 2: Query numbers (QNs) of attack models. Red dots are the medians of QNs of different attack models. The blue and orange shapes show the densities of query numbers for BERT and RoBERTa, respectively.

## 5 Conclusion

Towards developing logic-based attack models, we introduce a framework, NatLogAttack, which centres around the classical natural logic formalism. The experiments with human and automatic evaluation show that the proposed framework outperforms the existing attack methods. Compared to these models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. In addition to the widely used label-preserving attacks, NatLogAttack also provides label-flipping attacks. The victim models are found to be more vulnerable in this setup and NatLogAttack succeeds in deceiving them with much smaller numbers of queries. NatLogAttack provides an approach to probing the existing and future NLI models' capacity from a key viewpoint, and we hope more logic-based attacks will be further explored for understanding the desired properties of reasoning.

### Limitations

Our research focuses on the adversarial attack itself and provides a framework that can potentially be used in different adversarial training strategies. We limit ourselves to attacks in this work, but it would be interesting to investigate logic-based attacks in adversarial training; we leave that as future work. The proposed attack approach is also limited by the limitations of natural logic, although the latter is a classical logic formalism. For example, our proposed framework has less deductive power than first-order logic. It cannot construct attacks building on inference rules like _modus ponens_, _modus tollens_, and _disjunction elimination_. As discussed in the paper, some components of the generation and quality control process can be further enhanced.

## Acknowledgements

The research is supported by the NSERC Discovery Grants and the Discovery Accelerator Supplements. We thank Bairu Hou for his contributions to an early version of the proposed model.
2306.12973
From ontology design to user-centred interfaces for music heritage
In this article we investigate the bridge between ontology design and UI/UX design methodologies to assist designers in prototyping web applications for information seeking purposes. We briefly review the state of the art in ontology design and UI/UX methodologies, then we illustrate our approach applied to a case study in the music heritage domain.
Giulia Renda, Marco Grasso, Marilena Daquino
2023-06-22T15:35:33Z
http://arxiv.org/abs/2306.12973v1
# From ontology design to user-centred interfaces for music heritage ###### Abstract In this article we investigate the bridge between ontology design and UI/UX design methodologies to assist designers in prototyping web applications for information seeking purposes. We briefly review the state of the art in ontology design and UI/UX methodologies, then we illustrate our approach applied to a case study in the music heritage domain. Music heritage; ontology design; ux design; generous interfaces 2019 ac [11]. On the other hand, preventing the user from seeing the "whole picture" could be disorienting. Studies in Information Science suggest that "third generation" information systems should first filter out data of interest and then apply data analysis and knowledge discovery tools on the target [9]. To meet this call, scholars advocate for more generous interfaces [22], leading to an approach based on "overview first, zoom and filter, then details on demand" [19]. To reconcile these two different perspectives, the user interface must be able to support multiple tasks and user journeys [1]. In this respect, stories are powerful tools for designing experiences, as they present facts connected by causal relationships, and help to formulate users' motivational aspects or to describe unforeseen situations [7]. Design Thinking (DT) [18][2][6][13] is a user-centred approach to problem solving, based on a hypothesis-driven, abductive and dialectical approach to map requirements to design ideas. Previous studies have shown that DT effectively improves the quality of the ideas generated and reduces the risk of failure [12]. It consists of six phases: empathise, define, ideate, prototype, test, and implement. In the data collection phase, various methods are used, including personas, stories, stakeholder and user journey maps [4]. Several studies [17][20][15] have attempted to incorporate TD into specific aspects of ontologies creation. Results mostly provide formal definitions of HCI and DT methods, but do not inform us on how to leverage real-world domain ontologies in the DT process. In this work, we aim at filling this gap, suggesting the application of methods and analyses widely recognised as tools of the DT methodology directly to the domain ontologies during the ontology design phase. ## 3 Methodology We introduce a modular workflow harmonising eXtreme Design and Design Thinking, that reuses content/user requirements in UI/UX design. We identify nine stages, grouped in three main activities (Figure 1), namely: **- Ontology design.** (1) The ontology design team outlines personas and groups them, (2) writes one or more stories for each persona, and (3) extracts competency questions. **- User interfaces design.** (4) The web development team (us) defines the most important CQs, which serve as drivers for all others, and (5) group remaining questions into meaningful clusters. (6) We analyse drivers and clusters and select appropriate visualisation types for the reference data. **- User experience design.** (7) The web development team outlines interaction patterns (via competitive analysis or focus groups), (8) selects appropriate solutions to deploy, and (9) performs user testing validation. ## 4 Case Study: Designing Polifonia Interfaces The Polifonia ontology network describes music sources, performances, instruments, and music features. The Polifonia ecosystem currently includes 9 datasets, for which 19 personas, 28 stories, and 240 competency questions have been identified2. 
Following our workflow, we have identified four web applications to be developed, namely: musoW3, a filter-based catalogue of music data on the web, targeted to music professionals; MELODY4, a web editor of data visualisations and stories, targeted to music domain experts; Corpus5, to perform linguistic analysis over a vast corpus of music-related text sources; and the Polifonia Web portal (in progress), to present data to lay users according to several strategies. An example of the workflow applied to Polifonia for generating musoW is the following. Footnote 2: [https://github.com/polifonia-project/stories](https://github.com/polifonia-project/stories) Footnote 3: [https://projects.dharc.unibo.it/musow/](https://projects.dharc.unibo.it/musow/) Footnote 4: [https://projects.dharc.unibo.it/melody/](https://projects.dharc.unibo.it/melody/) Footnote 5: [https://polifonia.disiunibo.it/corpus/](https://polifonia.disiunibo.it/corpus/)
2. **User Story.** Laurent publishes a weekly newsletter in which he summarises his findings in the music industry. To gather information, he created a text document with a list of music resources that he checks regularly. Unfortunately, limited searches can be done on the document. Therefore, he would like to have access to an online catalogue that allows more sophisticated filtering options. 3. **Competency Questions.** The following questions were extracted from the story: 1. CQ1: Can I search for musical content by applying filters (genre, period...)? 2. CQ2: What types of resources can I find? 3. CQ3: Is the music resource X complete or incomplete? 4. CQ4: Is a dataset attached to resource X? 5. CQ5: Can I add resources as a user? 6. CQ6: How can I share what I find on the site? Some preliminary requirements can already be detected from the questions, namely: * Data requirements: genre and time (CQ1), type (CQ2) and availability (CQ4). * Functional requirements: filters (CQ1), completeness (CQ3), crowdsourcing (CQ5), share (CQ6). 4. **Driver questions.** We identify CQ1 as the driving question, as it defines the problem space (_can I search for_), identifies the reference entity (_musical content_), and suggests how to visualise the data (_filters_). 5. **Clusters of questions.** Other CQs can be split in two groups: CQ2-4 address context information of the main entity (_type, complete/incomplete, dataset_); CQ5-6 address actions to be performed on the data (_add, share_). 6.
**Select Data views.** The driver question matches a specific type of data visualisation, i.e. a filter-based exploration of resources ordered by relevance, hence no further analysis is needed. 7. **Interaction patterns**. In order to develop generous interfaces, we review web applications that present similar tasks, and then we map UI patterns to CQs: 1. CQ1: group resources under categories and show the counting for each category, to give an overview. 2. CQ2-CQ4: show lists of resources for each category on demand. 3. CQ5, CQ6: provide specialised operations when browsing the record of a resource. 8. **Deploy solutions**. We check whether there are other personas with similar information requirements (step 1). Since no other personas have similar requests and the call for action is rather specific, we continue with the development of a bespoke solution, i.e. musoW. 9. **User test.** musoW was validated in focus groups with stakeholders, competitors, and project partners. Laurent is the only persona that required a dedicated, specialised application for browsing music resources on the web. Other personas are either scholars with very specific research questions (for which we developed MELODY and the Corpus) or lay users, who do not have a specific task guiding their exploration (for which we develop a web portal). Iteratively applying the workflow to each persona may be time-consuming and does not ensure results are representative of the whole picture (rather, the result is simply going to be the sum of all requirements). Therefore, when analysing the remaining 18 personas, step 6 (Select Data views) is extended with a distant reading approach, performing an exploratory analysis of CQs. We manually annotated CQs in an online table6 (see an example in Figure 2) with scope, classes, ontology patterns, and expected type of result (e.g. list, map, single result, explanation). Footnote 6: [https://docs.google.com/spreadsheets/d16hr2fTfc4VUOHob0ALTu1r195xxfqvmWv4y2vxOZFcM](https://docs.google.com/spreadsheets/d16hr2fTfc4VUOHob0ALTu1r195xxfqvmWv4y2vxOZFcM) We analysed results to grasp an overview of priorities, data patterns, and user journeys. The preliminary analysis of CQs is available online as a Jupyter notebook7. In detail, we identified three categories of data and estimated their coverage: Footnote 7: [https://colab.research.google.com/drive/171_Yio2xOJDW6OTLxUAE1PiQZ5xgmg?usp=sharing](https://colab.research.google.com/drive/171_Yio2xOJDW6OTLxUAE1PiQZ5xgmg?usp=sharing) * bibliographic data (on music works, historical events, composers) covered by 70% of CQs; * structured music data (melody, harmony, rhythm), covered by 34% of CQs; * linguistic, full-text data (emotions, song lyrics), covered by 30% of CQs. Among these, 77% rely also on bibliographic data, and 19% also on music data. Only 5% of CQs require all three types of data. Figure 2: Manual annotation of Competency Questions We assume we can identify priorities as the most representative data requirements (bibliographic) and estimate the complexity of services to be implemented as those that satisfy niche areas (musicologists and linguists). Secondly, we analysed entities and ontology patterns. To this end, we identified an input, intermediate, and output entities for each CQ and we used a Sankey diagram to visualise journeys (Figure 3). For instance Carolina-CQ1 (Figure 2) "Where was a musical work performed" has "Music Work" as input, "Musical performance" as intermediate and "Place" as output. 
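The exploratory analysis described here (coverage counts and the input-intermediate-output flows of Figure 3) can be reproduced from the annotation table with a few lines of code. The sketch below is only illustrative and is not the notebook released by the authors: the column names (input, intermediate, output), the CSV export, and the use of pandas and plotly are our assumptions about how such a Sankey diagram could be built.

```python
import pandas as pd
import plotly.graph_objects as go

# Annotated competency questions: one row per CQ (column names are assumed).
cqs = pd.read_csv("annotated_cqs.csv")  # columns: cq_id, input, intermediate, output

# Collect every entity once and index it, so flows can reference node ids.
nodes = pd.unique(cqs[["input", "intermediate", "output"]].values.ravel())
index = {name: i for i, name in enumerate(nodes)}

# Count flows for both hops: input -> intermediate and intermediate -> output.
hop1 = cqs.groupby(["input", "intermediate"]).size().reset_index(name="n")
hop2 = cqs.groupby(["intermediate", "output"]).size().reset_index(name="n")

sources = [index[s] for s in hop1["input"]] + [index[s] for s in hop2["intermediate"]]
targets = [index[t] for t in hop1["intermediate"]] + [index[t] for t in hop2["output"]]
values = list(hop1["n"]) + list(hop2["n"])

fig = go.Figure(go.Sankey(
    node=dict(label=list(nodes)),
    link=dict(source=sources, target=targets, value=values),
))
fig.show()
```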
Music works, agents, and sources are the main access points to the knowledge graph, intermediate aspects address more or less technical aspects, such as annotations (music and linguistic data), agents, and events, and final outputs are rather diverse. Again, we assume that most recurring patterns are priorities for developing UI components and journeys. Step 7 (Interaction patterns) is performed via a competitive analysis, i.e. a user study wherein participants were requested to validate two similar web applications leveraging different UX strategies more/less similar to generous interfaces8. Results of the survey drove the final definition of UI components and their composition in the web page. Finally, step 9 (User validation) is performed via another user study, this time devoted to co-design aspects9. In particular, users are asked to answer questions on how they would like or expect a website for music data recommendation to look like and behave. Users are asked to imagine themselves in a scenario they are comfortable with (e.g. "you are at home and you want to discover new music") and to describe their research process and expectations. It's worth noting that they do not see a website to evaluate. Results of the survey are matched against decisions already taken in step 7, which provide us, in a reverse-engineering fashion, with an evaluation of the expected user satisfaction. We believe it is fair to assume that the validation of the results produced by using our methodology can be inherited by the methodology itself. Footnote 8: [https://docs.google.com/spreadsheets/d/101Bvik9aAutD9vDk2Gais-5VYarSbRO2SCu8Lz9MJJ0](https://docs.google.com/spreadsheets/d/101Bvik9aAutD9vDk2Gais-5VYarSbRO2SCu8Lz9MJJ0) Footnote 9: [https://docs.google.com/spreadsheets/d/1ky83VvA1IReMCySI_Iagedw5QYH_s5KXNdPict1yWFe320](https://docs.google.com/spreadsheets/d/1ky83VvA1IReMCySI_Iagedw5QYH_s5KXNdPict1yWFe320) ## 5 Conclusion We defined a workflow that leverages ontology requirements in UI/UX design to (1) develop ideas and prototypes that match data requirements, (2) have the UI/UX design iteratively informing and revising ontology requirements. The usage of exploratory analysis on Competency Questions gave us an overview of data requirements and a clear definition of priorities. Data patterns allow us to estimate types of content interaction and their relevance. Two user studies (focused respectively on competitive analysis and co-design techniques) help us to calibrate services and expectations of a wide range of users, including experts and lay people. Our preliminary results lead us to justify two strong assumptions, namely: (1) similar CQs can be grouped by type of interaction pattern; (2) entities that are relevant to a large number of CQs are also likely to be relevant to a wide range of users. As a consequence, we can apply our workflow to a much smaller number of groupings of CQs (instead of over each persona), therefore preventing time-consuming activities. In conclusion, binding ontology design requirements to UI/UX design choices revealed being a good solution to tackle common issues in projects dedicated to the dissemination of cultural heritage data. Results on the case study were successful, and preliminarily validated the goodness of our approach, which was applied when designing four applications having different goals (a catalogue, an authoring platform, a linguistic corpus interface, and a web portal for engaging lay users). 
In future work we plan to test our methodology in projects with a different scope, in order to validate its reusability in contexts different from the one in which it was developed. Figure 3: Data patterns and user journeys ## 6 Acknowledgments This work is supported by a project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101004746 (Polifonia: a digital harmoniser for musical heritage knowledge, H2020-SC6-TRANSFORMATIONS).
2302.00880
Empirical Analysis of the AdaBoost's Error Bound
Understanding the accuracy limits of machine learning algorithms is essential for data scientists to properly measure performance so they can continually improve their models' predictive capabilities. This study empirically verified the error bound of the AdaBoost algorithm for both synthetic and real-world data. The results show that the error bound holds up in practice, demonstrating its efficiency and importance to a variety of applications. The corresponding source code is available at https://github.com/armanbolatov/adaboost_error_bound.
Arman Bolatov, Kaisar Dauletbek
2023-02-02T05:03:21Z
http://arxiv.org/abs/2302.00880v1
# Empirical Analysis of the AdaBoost's Error Bound

###### Abstract Understanding the accuracy limits of machine learning algorithms is essential for data scientists to properly measure performance so they can continually improve their models' predictive capabilities. This study empirically verified the error bound of the AdaBoost algorithm for both synthetic and real-world data. The results show that the error bound holds up in practice, demonstrating its efficiency and importance to a variety of applications. The corresponding source code is available at github.com/armanbolatov/adaboost_error_bound.

Machine Learning, AdaBoost

**The data set** used for training also has a significant impact on the performance of AdaBoost, as boosting is particularly effective on datasets with a large number of features (Opitz and Maclin, 1999).

### Geometric Margin Over a Dataset

**The \(L_{1}\)-geometric margin \(\rho_{f}\)** of a linear function \(f=\sum_{t=1}^{T}\alpha_{t}h_{t}\) over a dataset \(S=(x_{1},\ldots,x_{m})\) is defined as \[\rho_{f}=\min_{i\in[m]}\frac{|\alpha\cdot\mathbf{h}(x_{i})|}{\|\alpha\|_{1}}=\min_{i\in[m]}\frac{\left|\sum_{t=1}^{T}\alpha_{t}h_{t}(x_{i})\right|}{\sum_{t=1}^{T}|\alpha_{t}|}.\] The margin plays an important role in error-bound analysis, as it indicates the "separability" of classes. That is, the larger the margin, the more separable the clusters in the dataset are for a function \(f\), and the easier the classification task will be.
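Since the empirical margin is central to evaluating the bound below, it may help to see how \(\rho_{f}\) can be computed for a fitted ensemble. The following sketch is ours, not the paper's released code; it assumes scikit-learn's AdaBoostClassifier with binary labels in {0, 1} and maps each base learner's output to {-1, +1} as in the definition above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def l1_geometric_margin(model: AdaBoostClassifier, X: np.ndarray) -> float:
    """rho_f = min_i |sum_t alpha_t h_t(x_i)| / sum_t |alpha_t|."""
    # estimators_ may be shorter than estimator_weights_ if boosting stops early.
    alphas = model.estimator_weights_[: len(model.estimators_)]
    # Map each base learner's predicted class {0, 1} to h_t(x) in {-1, +1}.
    H = np.array([2 * est.predict(X) - 1 for est in model.estimators_])
    f = alphas @ H  # weighted vote f(x_i) for every sample
    return float(np.min(np.abs(f)) / np.sum(np.abs(alphas)))

# Illustrative usage on a synthetic dataset similar to the setup described later.
X, y = make_classification(n_samples=1000, n_features=49, class_sep=0.5,
                           flip_y=0.05, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print("L1-geometric margin:", l1_geometric_margin(clf, X))
```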
### Ensemble VC-Dimension Margin Bound

In (Mohri et al., 2018), there is the following error bound: **Theorem.** Let \(\mathcal{H}\) be a family of functions taking values in \(\{+1,-1\}\) with VC-dimension \(d\). Select a sample set \(S\) with size \(m\) and fix the \(L_{1}\)-geometric margin \(\rho\). Then, for any \(\delta>0\), with probability at least \(1-\delta\), the following holds for all \(h\in\operatorname{conv}(\mathcal{H})\) \[R(h)\leq\hat{R}_{S,\rho}(h)+\frac{2}{\rho}\sqrt{\frac{2d\log\frac{em}{d}}{m}}+\sqrt{\frac{\log\frac{1}{\delta}}{2m}}, \tag{1}\] where \(e\) is the Euler's constant, \(\operatorname{conv}(\mathcal{H})\) is the convex hull of \(\mathcal{H}\), \(R(h)\) is the true error, and \(\hat{R}_{S,\rho}\) is the training error (misclassification rate).

## 2 Methodology

The error of AdaBoost will be analyzed through experimental data, which will come from randomly generated datasets and the "Heart Disease Health Indicators" dataset with varied properties such as size and dimensionality. For synthetic data, sklearn.make_classification with parameters class_sep=0.5 and flip_y=0.05 will be used to generate two Gaussian clusters for binary classification. Each dataset will be split into equal-in-size training and testing sets via sklearn.train_test_split. The train set will be used to fit the AdaBoost classifier, and the misclassification rates for both sets will be recorded. We will conduct three experiments, investigating the influence of the sample size of the train set \(m\), the VC-dimension of the base learner \(d\), and the number of AdaBoost's iterations \(T\) on the difference of the training and testing errors, which will be denoted as \(\Delta R(h):=R(h)-\hat{R}_{S,\rho}(h)\). Then, we will evaluate the theoretical error bound from equation (1) and look at the relationship between \(\Delta R\) and \(m\), \(d\), \(T\). In the following experiments, we will fix the parameter \(\delta\) to be equal to 0.05.

## 3 Experimental Results

### Effect of the Number of Iterations

First, we will test the influence of the number of base learners \(T\) on the error. We ran two experiments with different parameters for \(d\) and \(m\), evaluated the train/test errors of the classifier, and averaged them over 100 iterations. The results are shown in Figure 1. As can be seen in the graph, the test error looks like the train error but shifted up by a constant amount. Hence the difference between errors is also approximately constant, meaning that \(\Delta R\) is not affected by \(T\).

### Effect of the Sample Size

The equation (1) can be rewritten as \[\Delta R(h)\leq\frac{2}{\rho}\sqrt{\frac{2d\log\frac{em}{d}}{m}}+\sqrt{\frac{\log\frac{1}{\delta}}{2m}}. \tag{2}\] Denote the right hand side as \(\epsilon_{\mathrm{boost}}(\rho,d,m,\delta)\). The inequality above suggests that \(\Delta R(h)=O\left(\sqrt{\frac{\log m}{m}}\right)\) and the difference of errors will slowly decrease without exceeding the theoretical bound. We will verify this hypothesis by the following steps: 1. Choose \(d\) to be equal to 25, 50, 75, and 100. 2. Generate train and test sets with dimension \(d-1\), the \(L_{1}\)-margin \(\rho\), and varying sample size \(m\) from 10 to 10000 with step 10. 3. Calculate the theoretical error bound \(\epsilon_{\mathrm{boost}}(\rho,d,m,\delta)\). 4. Find the difference of error on train and test sets \(\Delta R(h)\). 5. Scatter plot \(\Delta R\) versus \(\epsilon_{\mathrm{boost}}\).

Figure 1: Results of the experiment 3.1 for \(d=50,m=1000\) (left) and \(d=100,m=500\) (right). The \(x\)-axis represents the number of iterations \(T\), while the blue and green lines on the \(y\)-axis represent the errors on the training and testing sets, respectively.
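The quantity \(\epsilon_{\mathrm{boost}}(\rho,d,m,\delta)\) on the right-hand side of equation (2) is straightforward to evaluate numerically. The helper below is a small sketch of ours, not code taken from the paper's repository; it uses the \(em/d\) form of the logarithm, with \(e\) being Euler's constant as stated in the theorem, and the example parameter values are illustrative.

```python
import numpy as np

def epsilon_boost(rho: float, d: int, m: int, delta: float = 0.05) -> float:
    """Right-hand side of equation (2): the margin-based generalization bound."""
    complexity_term = (2.0 / rho) * np.sqrt(2.0 * d * np.log(np.e * m / d) / m)
    confidence_term = np.sqrt(np.log(1.0 / delta) / (2.0 * m))
    return complexity_term + confidence_term

# Example: bound on Delta R for rho = 0.1, d = 50, m = 1000, delta = 0.05.
print(epsilon_boost(rho=0.1, d=50, m=1000))
```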
Figure 1: Results of the experiment 3.1 for \(d=50,m=1000\) (left) and \(d=100,m=500\) (right). The \(x\)-axis represents the number of iterations \(T\), while the blue and green lines on the \(y\)-axis represent the errors on the training and testing sets, respectively. The results can be seen in Figure 2. For clarity, we provided a polynomial fit of order 10. As expected, \(\Delta R\) doesn't exceed the error bound and stays around 0 as we increase \(m\). ### Effect of the Base Learners' VC-dimension Analogously, we can derive \(\Delta R(h)=O\left(\sqrt{Cd-d\log d}\right)\) (C is large enough number) from the equation (2). It suggests that the difference between errors will increase quickly up to a certain point, then decrease slowly after that, also without exceeding the theoretical bound. We will verify that by the similar steps: 1. Choose \(m\) to be equal to 500, 1000, 1500, and 2000. 2. Generate train and test sets with sample size \(m\), the \(L_{1}\)-margin \(\rho\) and varying dimension \(d\) from 5 to 1000. 3. Calculate the theoretical error bound \(\epsilon_{\mathrm{boost}}(\rho,d,m,\delta)\). 4. Find the difference of error on train and test sets \(\Delta R(h)\). 5. Scatter plot \(\Delta R\) versus \(\epsilon_{\mathrm{boost}}\). The results are shown in Figure 3. Indeed, for \(m=1500\) and 2000, the difference in errors stays below the theoretical bound. However, for \(m=500\) and \(1000\), some values of \(\Delta R\) exceed the bound. ### Evaluation of the Confidence Parameter Denote \((1-\delta)\cdot 100\%\) as the confidence parameter. Recall that we set \(\delta=0.05\). It means that with a 95% chance, the equation (1) will hold. Let the experimental confidence be the proportion of parameters \((m,d)\) when the equation is held from the list of all selected parameters. These experimental confidences are provided in Table 1. It is apparent that in approximately seven out of eight instances, the experimental confidence remains close to 99%, substantially higher than 95%. This may be due to the fact that the training data was generated from a normal distribution, thus rendering it very suitable. However, equation (1) does not specify what the initial distribution entailed. \begin{table} \begin{tabular}{|l c|c c|} \hline Exp. 3.2 & Confidence & Exp. 3.3 & Confidence \\ \hline \(d=25\) & 100\% & \(m=500\) & 82.5\% \\ \(d=50\) & 99.9\% & \(m=1000\) & 99.3\% \\ \(d=75\) & 99.7\% & \(m=1500\) & 100\% \\ \(d=100\) & 98.9\% & \(m=2000\) & 100\% \\ \hline \end{tabular} \end{table} Table 1: Experimental confidences for experiments 3.2 and 3.3 for all parameters \(m\) and \(d\), respectively. Figure 3: Results of the experiment 3.3 for different sample size \(m\): the blue graph is \(m=500\), the green graph is \(m=1000\), the red graph is \(m=1500\), and the yellow graph is \(m=2000\). The \(x\)-axis is the VC-dimension \(d\), and the \(y\)-axis is the difference of the error on training and testing sets \(\Delta R\). The solid line represents a polynomial fit for the training data. The dashed line represents the theoretical error bound. Figure 2: Results of the experiment 3.2 for different VC-dimensions \(d\): the blue graph is \(d=25\), the green graph is \(d=50\), the red graph is \(d=75\), and the yellow graph is \(d=100\). The \(x\)-axis is the sample size \(m\), the \(y\)-axis is the difference of the error on training and testing sets \(\Delta R\). The solid line represents a polynomial fit for the training data. The dashed line represents the theoretical error bound. 
### Experiments on Real Data We chose the "Heart Disease Health Indicators" dataset since it provides us with enough features and datapoints to run the proposed experiments and is designed for the binary classification tasks. The total number of datapoints in the dataset is 253680, and the total number of features is 22. When running the experiments, we split the dataset into equal train and test splits, each with a total of 126840 datapoints to simplify the experimental procedures when calculating the theoretical error bound. In order to analyze the effect of the sample size on the error of the AdaBoost algorithm, we set the VC-dimension of the base classifiers at 21, i.e. using all of the available features and varying \(m\) (the training sample size) from 50 to 126840 with a step size of 50. Similarly, when assessing the effect of the base classifier's VC-dimension, we vary the dimensionality of the inputs from 2 to 22 while fixing \(m\) at 126840. It is important to note that we do not vary \(d\) in a random manner but feed in the features sorted by their importance. Their importance is calculated via the feature_importances_ method. The results are summarized in Figure 4. As we can see, the empirical error behaves as expected and does not exceed the theoretical bound for both cases anywhere on the graph. ## 4 Conclusion In this work, we have provided an empirical verification for the error bound of the AdaBoost algorithm. As the results show, we see that the bound holds for both the synthetic and real data, which was the initial purpose of this report. ## 5 Author Contributions Theoretical analysis, A. B.; methodology A. B. and K. D.; synthetic data experiments, A. B.; real data experiments, K. D.; visualization, A. B.; editing, A. B. and K. D.; supervision, Zhenisbek Assylbekov.
2310.11544
Co-Evolution of Stars and Gas: Using Analysis of Synthetic Observations to Investigate the Star-Gas Correlation in STARFORGE
We explore the relationship between stellar surface density and gas surface density (the star-gas or S-G correlation) in a 20,000 M$_{\odot}$ simulation from the STAR FORmation in Gaseous Environments (STARFORGE) Project. We create synthetic observations based on the Spitzer and Herschel telescopes by modeling active galactic nuclei contamination, smoothing based on angular resolution, cropping the field-of-view, and removing close neighbors and low-mass sources. We extract star-gas properties such as the dense gas mass fraction, the Class II:I ratio, and the S-G correlation ($\Sigma_{\rm YSO}/\Sigma_{\rm gas}$) from the simulation and compare them to observations of giant molecular clouds, young clusters, and star-forming regions, as well as to analytical models. We find that the simulation reproduces trends in the counts of young stellar objects and the median slope of the S-G correlation. This implies that the S-G correlation is not simply the result of observational biases but is in fact a real effect. However, other statistics, such as the Class II:I ratio and dense gas mass fraction, do not always match observed equivalents in nearby clouds. This motivates further observations covering the full simulation age range and more realistic modeling of cloud formation.
Samuel Millstone, Robert Gutermuth, Stella S. R. Offner, Riwaj Pokhrel, Michael Y. Grudić
2023-10-17T19:28:16Z
http://arxiv.org/abs/2310.11544v1
Co-Evolution of Stars and Gas: Using Analysis of Synthetic Observations to Investigate the Star-Gas Correlation in STARFORGE ###### Abstract We explore the relationship between stellar surface density and gas surface density (the star-gas or S-G correlation) in a 20,000 M\({}_{\odot}\) simulation from the STAR FORmation in Gaseous Environments (starforge) Project. We create synthetic observations based on the _Spitzer_ and _Herschel_ telescopes by modeling active galactic nuclei contamination, smoothing based on angular resolution, cropping the field-of-view, and removing close neighbors and low-mass sources. We extract star-gas properties such as the dense gas mass fraction, the Class II:I ratio, and the S-G correlation (\(\Sigma_{\rm YSO}/\Sigma_{\rm gas}\)) from the simulation and compare them to observations of giant molecular clouds, young clusters, and star-forming regions, as well as to analytical models. We find that the simulation reproduces trends in the counts of young stellar objects and the median slope of the S-G correlation. This implies that the S-G correlation is not simply the result of observational biases but is in fact a real effect. However, other statistics, such as the Class II:I ratio and dense gas mass fraction, do not always match observed equivalents in nearby clouds. This motivates further observations covering the full simulation age range and more realistic modeling of cloud formation. Star formation (1569) -- Star forming regions (1565) -- Magnetohydrodynamical simulations (1966) -- Molecular clouds (1072) -- Young stellar objects (1834) -- Scaling relations (2031) -- Early stellar evolution (434) -- Giant molecular clouds (653) ## 1 Introduction The majority of stars form in associations or groups within giant molecular clouds (GMCs, Lada et al., 1991; Krumholz et al., 2019; Cheng et al., 2022), which can vary greatly in size, from \(\sim\)10 to thousands of stars (Porras et al., 2004). Feedback from embedded clusters often quickly disperses the natal clump or even the entire GMC (Lada, 2005; Krause et al., 2020). Therefore, the relationship between gas and young stellar object (YSO) density provides important clues about the star formation process and cloud evolution. Schmidt (1959) was one of the first to present an analytical model of the relationship between star formation rate (SFR), and thus stellar mass, and gas density. That work suggested that SFR and gas density follow a power law relationship. This correlation was examined over the next several decades by a number of authors (e.g. Sanduleak, 1969; Hartwick, 1971). However, it was not until improved observational capabilities and analysis techniques in the 1980s and 1990s (e.g. Kennicutt, 1989, 1998) that strong evidence was found for its viability. This work motivated an analogous relation known as the Kennicutt-Schmidt (KS) law that applies to line-of-sight surface densities of gas and the star formation rate per area: \[\Sigma_{\rm SFR}\propto\Sigma_{\rm gas}^{N}. \tag{1}\] Henceforth, we refer to this relation as the Star-Gas or S-G correlation. This relationship has since been well-characterized as a power-law with an index of N\(\sim\)1.4 as applied to galaxy-scale star formation (see Kennicutt and Evans (2012) for a detailed review). At smaller scales within individual galaxies, there is also evidence for the presence of an S-G correlation. For example, Bigiel et al. 
(2008) used HI, CO, 24 \(\mu\)m, and UV data to examine the S-G correlation at 750 pc resolution in 18 nearby spiral and dwarf galaxies. Many regions showed a strong power-law relation, although the power-law index varied from 1.1 to 2.7 based on position. They also observed that the star formation efficiency (SFE) decreased with galactic radius, which they argued implies a connection between environment and the S-G correlation. However, the methods used to measure the SFR on \(\gtrsim\) kpc scales, such as H\(\alpha\), far-UV, and 24 \(\mu\)m emission, become less effective at smaller spacial scales. The results of Liu et al. (2011), as well as modeling by Calzetti et al. (2012) show that this kind of analysis breaks down with shrinking sample area because star formation is not well-sampled statistically. Gutermuth et al. (2011) (G11 hereafter) demonstrated that the SFR calculated from far-IR luminosity (\(L_{\rm FIR}\), e.g., Heiderman et al., 2010) underestimates the SFR calculated from counts of YSOs in nearby young clusters by up to an order of magnitude. This is because measurements based on far-IR luminosity assume a well-sampled stellar initial mass function (IMF) and reliable sampling of the GMC mass function to fully sample the lifetime of high-mass stars. However, in order to satisfy these assumptions, measurements must be integrated over physical scales \(\gtrsim\) 1 kpc (Calzetti et al., 2012). To avoid the smoothing inherent to measurements of star formation relations in other galaxies, some recent studies instead focus on individual star-forming regions in the local Milky Way, where it is possible to identify and count individual forming stars with high completeness. Since YSOs provide a direct measurement of the SFR, a simple estimate of the total mass converted to stars per time is given by \[\dot{M}=\frac{m_{\rm YSO}n_{\rm YSO}}{t_{\rm avg}}, \tag{2}\] where \(m_{\rm YSO}\) is the average mass of a YSO, \(n_{\rm YSO}\) is the number of YSOs, and \(t_{\rm avg}\) is the characteristic timescale for the YSO evolutionary stage or stages considered. By utilizing YSO censuses from _Spitzer_, G11 and Pokhrel et al. (2020) (P20 hereafter) found and measured an intracloud S-G correlation with an index of N \(\approx\) 2 in several nearby GMCs. While initial measurements varied widely (N = 1.5 - 4) (G11), P20 reduced intrinsic scatter in the measurements by adopting a uniform YSO extraction from the _Spitzer_ Extended Solar Neighbor Archive (SESNA), utilizing more robust _Herschel_-based GMC gas column density maps, and by specifically using YSOs in the early stages of star formation. This led to N = 1.8 - 2.3 in 12 nearby clouds with gas masses varying over three orders of magnitude. Also, the scaling factor in the S-G correlation varies between clouds (Lada et al., 2013, G11, P20), but the scatter in the scaling factor is reduced significantly when it is normalized by the gas freefall time (Pokhrel et al., 2021). This implies that the SFE per freefall time has limited variation, which may indicate that local processes (e.g., protostellar outflows and stellar winds) govern and regulate star formation (Guszejnov et al., 2021; Pokhrel et al., 2021; Hu et al., 2022). In order to gain a better understanding of how local processes impact star formation, it is useful to turn to theoretical models and numerical simulations. However, observed S-G correlations have only recently started to become incorporated as constraints for models of star-forming molecular gas. 
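As an aside, the two relations above are simple enough to estimate directly from a YSO catalog and a gas column density map. The sketch below is ours and only illustrative: the binned surface densities, YSO counts, and areas are made-up numbers, and the mean YSO mass of 0.5 M\({}_{\odot}\) and the 0.5 Myr protostellar timescale are order-of-magnitude assumptions of the kind commonly adopted in this literature, not values taken from this paper.

```python
import numpy as np

def sfr_surface_density(n_yso, area_pc2, m_yso=0.5, t_avg=0.5e6):
    """Equation (2) per unit area, in Msun / yr / pc^2 (m_yso, t_avg assumed)."""
    return m_yso * n_yso / (t_avg * area_pc2)

def sg_powerlaw_index(sigma_gas, sigma_sfr):
    """Fit Sigma_SFR proportional to Sigma_gas^N by regression in log-log space."""
    mask = (sigma_gas > 0) & (sigma_sfr > 0)
    slope, _intercept = np.polyfit(np.log10(sigma_gas[mask]),
                                   np.log10(sigma_sfr[mask]), deg=1)
    return slope

# Illustrative contour bins: gas surface density, YSO counts, and bin areas.
sigma_gas = np.array([50.0, 100.0, 200.0, 400.0])   # Msun / pc^2
n_yso = np.array([5, 20, 80, 320])                  # protostar counts per bin
area = np.full(4, 1.0)                              # pc^2 per bin
sigma_sfr = sfr_surface_density(n_yso, area)
print("S-G power-law index N =", round(sg_powerlaw_index(sigma_gas, sigma_sfr), 2))
```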
P20 used simulations by Qian et al. (2015), run with the ORION adaptive mesh refinement code (Truelove et al., 1998; Klein, 1999), to create synthetic observations similar to observations taken by _Herschel_. That work reproduced similar S-G correlations for 12 nearby GMCs using hydrodynamic turbulent simulations and an analytical model of thermal fragmentation. While the simulation produced an S-G correlation that is very similar to observations, it did not include magnetic fields or kinematic feedback. In this work, we analyze a 20,000 M\({}_{\odot}\) run of the STAR FORmation in Gaseous Environments (starforge) Project, the first massive GMC magnetohydrodynamics simulation to resolve individual stars while including multi-band radiation, stellar winds, protostellar outflows, and supernovae (Grudic et al., 2021, 2022, etc.). In order to most effectively compare the starforge simulation to observations, we construct synthetic observations according to the data used in P20, taking into account the known specifications and limitations of _Spitzer_ and _Herschel_ data. In Section 2, we describe the specifics of the simulation snapshots and our methods for creating synthetic observations. In Section 3, we present results from our investigation into various star-gas properties in the simulation and compare to observations. Discussion is provided in Section 4, and a summary and conclusions are given in Section 5. ## 2 Methods ### starforge Simulations The starforge framework is built on the gizmo meshless finite mass magnetohydrodynamics code (Hopkins, 2015). The framework includes a variety of modifications that enable the modeling of individual forming stars and the interactions that occur with their cloud environment. In this work we analyze the starforge simulation presented in Grudic et al. (2022). We briefly summarize the simulation properties here and refer the reader to Grudic et al. (2021) for a detailed description of the starforge numerical methods. The simulation follows the evolution of a 20,000 M\({}_{\odot}\) cloud with initial radius of 10 pc. The cloud turbulence is initialized so that the cloud is virialized with \(\alpha\equiv 5\sigma_{\rm 3D}^{2}R_{\rm cloud}/(3GM_{\rm cloud})=2\), where \(\sigma_{\rm 3D}\) is the gas velocity dispersion. The initial magnetic field is uniform in the \(\hat{z}\) direction and corresponds to a mass-to-flux ratio relative to the critical value for stability \(\mu\equiv 0.4\sqrt{E_{\rm grav}/E_{\rm mag}}=4.2\), where \(E_{\rm grav}\) and \(E_{\rm mag}\) are the total gravitational and magnetic energies, respectively. The calculation follows the gas thermodynamics self-consistently, including treatment of line cooling, cosmic-ray heating, dust cooling and heating, photoelectric heating, hydrogen photoionization, and collisional excitation of both hydrogen and helium. The evolution of the dust temperature is coupled to the radiative transfer step. gizmo's radiation transfer module follows five bands, which cover the frequencies corresponding to ionizing radiation, FUV, NUV, optical-NIR, and FIR (Hopkins and Grudic, 2019; Hopkins et al., 2020). Once gas satisfies multiple criteria intended to identify centers of unstable collapse, Lagrangian sink particles are inserted, which occurs at densities of \(\rho_{\rm max}\sim 10^{-14}\,\rm g\,cm^{-3}\). The cell mass resolution is \(dm=10^{-3}\,\rm M_{\odot}\), which allows the calculation to resolve the stellar mass spectrum down to \(\sim 0.1\,\rm M_{\odot}\). 
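For reference, the virial condition quoted above fixes the cloud's initial turbulent velocity dispersion once its mass and radius are specified. The sketch below simply inverts the definition of \(\alpha\); the resulting value of roughly 3 km s\({}^{-1}\) is an illustrative derived quantity, not a number taken from the simulation papers.

```python
# Sketch: the initial 3D velocity dispersion implied by the quoted virial
# parameter alpha = 5 sigma_3D^2 R / (3 G M) = 2 for the 2e4 Msun, 10 pc cloud.
import numpy as np

G = 4.301e-3          # gravitational constant in pc (km/s)^2 / Msun
M_cloud = 2.0e4       # Msun
R_cloud = 10.0        # pc
alpha = 2.0

sigma_3d = np.sqrt(alpha * 3.0 * G * M_cloud / (5.0 * R_cloud))
print(f"sigma_3D ~ {sigma_3d:.1f} km/s")   # roughly 3 km/s
```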
The sink particles, henceforth referred to as stars, follow a sub-grid model for protostellar evolution and radiative feedback as described in Offner et al. (2009). The particles are also coupled to models describing protostellar outflow launching, stellar winds, and supernovae (Cunningham et al., 2011; Guszejnov et al., 2021; Grudic et al., 2021). The calculation continues until stellar feedback disperses the natal cloud and star formation concludes, which happens at \(\sim 9\,\rm Myr\). The simulation has a final SFE of 8% that agrees with statistical models of nearby galaxies. Protostellar jets dominate feedback for most of the simulation and are important for regulating the IMF, but they cannot wholly disrupt the cloud. Eventually, radiation and winds from massive stars create bubbles that expand and disrupt the cloud, drastically reducing SF. By following GMC evolution, Grudic et al. (2022) measure a relatively unambiguous IMF. It resembles the Chabrier IMF with a high-mass slope of \(\alpha=-2\pm 0.1\). The IMF is much more realistic than previous simulations without full feedback. Feedback from radiation/winds of massive stars limits the maximum observed mass to 55 solar masses, moderating the high-mass tail of the IMF. The integrated luminosity and ionizing photon rate are also very close to an equal-mass cluster with a canonical IMF. A more detailed study of the impact of various feedback processes and cloud initial conditions on the IMF is presented in Guszejnov et al. (2022). Grudic et al. (2022) also note the importance of directly comparing observations and simulations via synthetic observations, as we aim to do in this work. To construct the stellar surface density, we require a minimum of 11 YSOs. The first snapshot with at least this number of sources is at 1.47 Myr. Altogether our analysis uses 16 snapshots, spaced 0.49 Myr apart, which span 1.47 to 8.80 Myr. ### Constructing Synthetic Observations For our analysis to better mirror that of P20, we create synthetic observations by including various considerations to bring our data closer to that which might have been observed by _Spitzer_ and _Herschel_. We refer to analysis done with minimal adjustments, i.e., only 2D projection, age-to-class conversion, and 0.01 M\({}_{\odot}\) mass cutoff (see below) as the "fiducial analysis", while analysis with further considerations are collectively referred to as "synthetic observations." The fiducial (minimally-adjusted) case allows us to examine how well the simulation can reproduce various statistics and identify where observational biases may affect the agreement. In order to create these synthetic observations, we extract or compute the (line-of-sight-projected when applicable) molecular number density of H\({}_{2}\) and the masses, coordinates, ages, and particle indexes of the sink particles, which represent YSOs. #### 2.2.1 YSOs YSOs fall into distinct groups based on their observed properties. Historically, these have been binned into representative classes (Lada, 1987; Shu et al., 1987; Greene et al., 1994; Robitaille et al., 2007; Dunham et al., 2015), e.g., Class I, Class II, and Class III.1. Note that class does not have a direct mapping to source age, but it is often used as a proxy for evolutionary stage. YSOs in each class differ in the shape of their spectral energy distribution (SED), which depends on the characteristics of the circumstellar material around the YSO. 
Class Is are usually deeply embedded in cold, dense, and dusty gaseous envelopes, Class IIs have classical protoplanetary disks, and Class IIIs have mostly lost their disks (or the visible disk material has substantially coalesced into larger planetesimals that are generally invisible in the infrared). For the first step of our analysis, we map each of the starforge stars to an observational class. Ideally, the stellar age would be employed to directly map each source to the appropriate spectral class. However, the average age and lifetime of each class is uncertain, since the individual classes are not completely distinct and the boundaries between them are somewhat arbitrary. Class lifetimes are inferred observationally using the relative number of sources in each class and by assuming a typical disk lifetime (e.g. Dunham et al., 2014). Consequently, a full self-consistent class assignment requires constructing synthetic observations using radiative transfer to model the SEDs. Instead, we assign each star to a class based on its age (the time elapsed since the sink particle forms in the simulation) and adopt a statistical approach rather than an exact mapping. We model the transitions from Class I \(\rightarrow\) Class II and Class II \(\rightarrow\) Class III as exponential decays, adapting the models and half-lives of the transitions from Kristensen and Dunham (2018) and Mamajek (2009) to represent the age-to-class conversion. Using these half-lives, we calculate two numbers corresponding to each source, \(f_{a}\) and \(f_{b}\), which corresponds to the statistical weighting given to each source for transitions (a) (Class I \(\rightarrow\) II) and (b) (Class II \(\rightarrow\) III): \[f_{a,b}=\mathrm{e}^{-t_{\mathrm{age}}\frac{\mathrm{ln}(2)}{t_{1}/2;a,b}}, \tag{3}\] where \(t_{\mathrm{age}}\) is the age of the YSO and \(t_{1/2;a}\) and \(t_{1/2;b}\) are the half-lives of the Class I \(\rightarrow\) Class II transition (0.22 Myr) and the Class II \(\rightarrow\) Class III transition (1.7 Myr), respectively. Then, we generate two random numbers (\(r_{a}\) and \(r_{b}\)) for each source using consistent seeds and the persistent source index from starforge, so that each YSO has the same \(r_{a}\) and \(r_{b}\) for the entire run. If \(r_{a}<f_{a}\), then the YSO is assigned to Class I. If not, we check whether \(r_{b}<f_{b}\). If so, the YSO is assigned to Class II. If not, it is assigned to Class III. By fixing \(r_{a}\) and \(r_{b}\) for each source, we ensure that the sources progress forward through the classes (I to II to III) as they age in the simulation. However, in actual observations, a YSO's trajectory may not be so linear. For example, Dunham et al. (2010) used models of accreting sources to show that YSOs undergoing episodic mass accretion may transition to an earlier Class. The notion that older sources can populate the earlier classes is also supported by the work of Hernandez et al. (2007), who observed what appear to be older, "evolved" disks. Another problematic assumption is that the Class lifetimes are the same for every environment, which is unlikely since protostars in areas of high YSO density tend to have greater luminosity (Kryukova et al., 2014; Cheng et al., 2022). Despite the approximate nature of our model for Class assignment, we find that it reproduces the expected YSO distributions well. Whereas, assuming an exact one-to-one mapping between age and Class leads to sharp transitions that do not match observations as closely. 
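A compact version of this assignment scheme is sketched below. It uses the half-lives quoted above and a per-source seed as a simplified stand-in for the persistent-index seeding used in our analysis; it is meant to illustrate the logic rather than reproduce the exact implementation.

```python
# Sketch of the statistical age-to-Class assignment described above.
# Half-lives follow the values quoted in the text; the seeding is a
# simplified stand-in for the persistent particle-index seeding.
import numpy as np

T_HALF_I_TO_II = 0.22    # Myr, Class I -> II transition
T_HALF_II_TO_III = 1.7   # Myr, Class II -> III transition

def assign_class(age_myr, particle_index):
    """Return 1, 2, or 3 for Class I, II, or III."""
    f_a = np.exp(-age_myr * np.log(2.0) / T_HALF_I_TO_II)
    f_b = np.exp(-age_myr * np.log(2.0) / T_HALF_II_TO_III)
    # Fixed per-source random draws, so a source can only move forward
    # through the Classes as it ages from snapshot to snapshot.
    rng = np.random.default_rng(particle_index)
    r_a, r_b = rng.random(2)
    if r_a < f_a:
        return 1
    if r_b < f_b:
        return 2
    return 3

# Example: assign Classes to four sources of increasing age (Myr)
classes = [assign_class(age, idx) for idx, age in enumerate([0.1, 0.5, 2.0, 5.0])]
```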
Next, in order to model source confusion present in _Spitzer_ observations, we inject Active Galactic Nuclei (AGN) contaminants. In _Spitzer_ observations, background AGN can appear as YSOs of Class I and II with roughly equal probability (Gutermuth et al., 2008, 2009). To simulate this effect, we randomly place \(N\) Class Is and IIs within the dataset, where \(N\) was determined to be \(\sim\)9 per square degree (P20). This has the immediate effect of introducing many sources with low spatial density. This is especially significant for the synthetic clouds at closer distances due to the commensurately larger angular size of the cloud (see Figure 1, where it is clear that AGN dominate over YSOs in low gas density regions). We then correct for these contaminants following the method used by G11: we adopt a threshold of log \(\Sigma_{\mathrm{gas}}>1.3\) M\({}_{\odot}\) pc\({}^{-2}\) for points on the S-G plot (see Section 3.4). We adopt the same distribution of AGN contaminants for all snapshots to ensure that the AGN stay the same (i.e. same position and Class). We also model instrumental detection limits to account for undetectable low-luminosity sources. To replicate this in the synthetic observations, we implement a simple mass cutoff, where we remove sources below 0.08 M\({}_{\odot}\) (200 and 400 pc distance) or 0.2 M\({}_{\odot}\) (800 pc).2 Footnote 2: Note that we implement a _global_ 0.01 M\({}_{\odot}\) mass cutoff for all of our analysis, including the fiducial case, in order to avoid spurious sources with extremely low masses that were the result of a known bug in that simulation that has since been fixed, eliminating the erroneous sources. Last, we model _Spitzer_'s limited angular resolution by removing stars in close proximity. When a source and its nearest neighbor (YSO or AGN) are within the adopted beam size threshold of 5\(\arcsec\), we remove the lower-mass source. We assign AGN a mass of 1.1 M\({}_{\odot}\) to avoid losing them to the mass cutoff. We do only one pass to remove sources, but this is sufficient to remove the vast majority of close neighbors. #### 2.2.2 Gas We construct 2D projected column density maps with cloud distances of 200, 400, and 800 pc, which are chosen to model the majority of the clouds in the P20 sample. Figure 1 shows one of these maps with a spatial distribution plot of YSOs and AGN contaminants. The _Spitzer_ and _Herschel_ fields of view focus on regions of high column density (clumps) within the clouds. To simulate this, we crop the gas maps to the bounds set by a 10\({}^{21}\) cm\({}^{-2}\) column density contour on a 120\({}^{\prime\prime}\)-smoothed gas map constructed specifically for this purpose. This map is not used again in the further analysis. We smooth to keep small overdensities from artificially enlarging the cropping area.3 This greatly reduces the field of view compared to the full view, as shown in Figure 2, and makes our maps more similar to the _Spitzer_ and _Herschel_ data we compare with. Ad Figure 1: Projected N(H\({}_{2}\)) column density map of a 200 pc-distance cloud with N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\) contour over-plotted in green. Colored circles indicate the locations of YSOs and AGN. a) Full field of view of the simulation at \(\sim\)5.4 Myr. b) Zoomed (\(\sim\)20-pc) field of view cropped to the furthest extent of the green contour at \(\sim\)5.4 Myr. AGN contaminants dominate the source counts in the low-column density regions. 
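For concreteness, the catalog-level steps described in this subsection (AGN injection, the distance-dependent mass cutoff, and the single-pass removal of close neighbors) can be sketched as follows. The array layout, function names, and the Poisson draw for the AGN count are illustrative assumptions rather than the exact code used here, and the sketch omits the assignment of injected AGN to Class I or II, which is done with roughly equal probability in the actual procedure.

```python
# Sketch of the synthetic-observation filtering applied to the YSO catalog:
# AGN injection, a distance-dependent mass cutoff, and removal of close
# neighbors unresolved at Spitzer resolution. Details are illustrative.
import numpy as np
from scipy.spatial import cKDTree

ARCSEC_PER_RAD = 206265.0

def inject_agn(x, y, mass, field_deg2, rate_per_deg2=9.0, seed=0):
    """Append uniformly distributed AGN 'sources' with a 1.1 Msun placeholder mass."""
    rng = np.random.default_rng(seed)          # fixed seed: same AGN in every snapshot
    n_agn = rng.poisson(rate_per_deg2 * field_deg2)
    xa = rng.uniform(x.min(), x.max(), n_agn)
    ya = rng.uniform(y.min(), y.max(), n_agn)
    ma = np.full(n_agn, 1.1)                   # keeps AGN above any mass cutoff
    return np.append(x, xa), np.append(y, ya), np.append(mass, ma)

def apply_filters(x, y, mass, distance_pc, beam_arcsec=5.0):
    """Mass cutoff plus a single pass of close-neighbor removal (positions in pc)."""
    m_min = 0.08 if distance_pc <= 400 else 0.2
    keep = mass >= m_min
    x, y, mass = x[keep], y[keep], mass[keep]
    beam_pc = beam_arcsec / ARCSEC_PER_RAD * distance_pc
    tree = cKDTree(np.column_stack([x, y]))
    drop = np.zeros(len(x), dtype=bool)
    for i, j in tree.query_pairs(r=beam_pc):   # pairs closer than the beam
        drop[i if mass[i] < mass[j] else j] = True
    return x[~drop], y[~drop], mass[~drop]
```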
Figure 2: Projected N(H\({}_{2}\)) column density map with N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\) contour over-plotted in green and N(H\({}_{2}\)) = 10\({}^{22}\) cm\({}^{-2}\) contour in magenta at (a) \(\sim\)2.4 Myr. (b) \(\sim\)5.4 Myr. (c) \(\sim\)8.3 Myr. The green contour outlines the likely _Spitzer_ field of view for an equivalent cloud. Note that the high-density (magenta) region coalesces as star formation increases and eventually breaks apart due to stellar feedback, which is in the process of dispersing the cloud in (c). ditionally, this significantly reduces the amount of low-density AGN contamination (see Figure 1 and Section 4 for more details). In order to simulate the angular resolution of _Herschel_, the gas maps are smoothed with a 36\({}^{\prime\prime}\) Gaussian kernel. ## 3 Analysis ### Overview Statistics To better compare with observations, We first define a few bulk cloud properties. We define the total cloud gas mass, M\({}_{\rm gas}\), as the combined mass of gas at column densities 10\({}^{21}\) cm\({}^{-2}\) and above. Similarly, the dense gas mass, M\({}_{\rm dense}\), is the total gas at column densities 10\({}^{22}\) cm\({}^{-2}\) and above. The dense gas mass fraction is then the ratio of dense to cloud gas mass. This metric gives an indication of the fraction of the cloud that is most likely to form clusters (Battisti & Heyer, 2014; Heyer et al., 2016). We define the disk fraction as the ratio of the number of Class I and II YSOs to the total number of YSOs regardless of circumstellar material. Disk fraction can be used as a proxy for the population age (Haisch et al., 2001; Hernandez et al., 2007). A similar statistic, the Class II to Class I ratio, is generally believed to be a good relative evolution indicator for YSOs, especially for earlier evolution (G11, P20). Figure 3 shows the evolution of the gas properties with time for the fiducial case. The cloud mass and dense gas mass increase steadily over time and peak at \(\sim\) 5.4 Myr. The maximum mass reaches about half of the 20,000 M\({}_{\odot}\) of gas that makes up the entire simulated GMC. After this point, the cloud mass decreases rapidly to less than 1/10 of the initial GMC mass. The dense gas mass fraction exhibits a similar trend, peaking at \(M_{\rm dense}/M_{\rm cloud}\sim 0.6\) at the same time. Figure 4 shows that the number of Class I and II sources evolve in a similar way to the gas. Star formation increases steadily for the first 3.43 Myr as indicated by the rising number of Class Is. After 3.43 Myr, star formation declines to 63% of the maximum in 2.45 Myr and then drops to only half this value in the next snapshot. The number of Class IIs evolves more gradually, peaking at 5.88 Myr, after which point it steadily decreases to about half its maximum value by the end of the simulation. Figure 4 also shows an analytical model adapted from Megeath et al. (2022) created to predict the populations of Class I, II, and III objects. The original model is semi-empirically developed to generate the ensemble of Class II:I ratios observed in Gutermuth et al. (2009), and thus we expect it to describe the progression of star formation in nearby clustered regions reasonably well. The idea of a strong central peak in SFR is characteristic of a cluster formation event, which supports the use of a cluster-derived model. It is shown with a vertical stretch and time axis shift to make the model more visible without adjusting the main model parameters. 
We also plot a tweaked version of the model with minor adjustments to better fit our assumptions and outputs. Namely, we shift the time axis by 1.47 Myr to match our snapshots, increase the SFR from 100 to 435 (unitless metric which changes the vertical scale of the model), lengthen the rise and decay times for the Class Is from 0.5 to 1.7 Myr and 0.5 to 1.5 Myr, respectively, and shorten the lifetimes of Class Is and IIs to be closer to (but not exactly the same as) the half-lives for our adopted Class transitions (0.5 to 0.3 Myr and 2.0 to 1.5 Myr, respectively). With these parameters, the model reproduces the fiducial starforge data re Figure 3: Evolution of gas in the starforge simulation versus time: (a) Cloud mass at column densities N(H\({}_{2}\)) \(>\) 10\({}^{21}\) cm\({}^{-2}\), (b) Mass of the dense gas at column densities N(H\({}_{2}\)) \(>\) 10\({}^{22}\) cm\({}^{-2}\), (c) Dense gas mass fraction. markably well. This suggests the starforge simulation provides a good representation of cluster formation. As we shall see below, the simulation appears to agree less well with star formation observed in full GMCs, which generally contain multiple distinct star-forming regions and have longer and more complex star formation histories. Figure 5 shows the evolution of the Class II:I ratio and the disk fraction in this simulation. The disk fraction starts near 1.0 and then decreases nearly linearly to 0.21 in the final snapshot. This is more drawn-out than the traditional disk fraction versus stellar age plot (e.g. Mamajek, 2009). The starforge calculation exhibits a broad range of Class II:I ratios, which span 1.3-19.0. For comparison, P20 recorded the Class II:I ratio and the cloud mass for 12 clouds at distances between 140 and 1400 pc. They found that the Class II:I ratio remained between \(\sim\)3.5-9.7 for each cloud observed, which is a much narrower range than we find in the starforge snapshots. However, the P20 values are uncorrected for AGN and edge-on disk contamination, which would likely change the Class II:I ratios, as will be seen below. Using publicly available _Herschel_ data (Andre et al., 2010, P20), we calculate the dense gas mass fraction of the clouds and clusters P20 and Gutermuth et al. (2009) observed. We adopt the publicly available YSO lists from SESNA, correct for AGN and edge-on disk contamination, and crop for coverage consistency and to the N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\) limit. In the case of the Gutermuth et al. (2009) data shown, we adopt all "cluster cores" that overlap with clouds from the P20 sample, and crop to square areas that are twice the diameter implied by the \(R_{\rm circ}\) radii listed in that paper, once converted to the most recent heliocentric distances reported in P20. Some of the selected areas of adjacent cluster cores overlap significantly. The assumed and computed data for these plots are listed in Tables 3 & 4. Figure 6a shows that starforge and the clouds in P20 occupy different regions of the Dense Gas Mass Fraction - Class II:I ratio parameter space. The trajectory agrees better with the clusters from Gutermuth et al. (2009), except for the earliest snapshots. We could correct for this by assuming some amount of ambient star formation occurs in the cloud before the main cluster forms, which would increase the Class II:I ratio, more noticeably in the early and late snapshots that have few Class I and II sources. 
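The bulk statistics defined in Section 3.1 reduce to simple sums over the column density map and counts over the Class labels. The sketch below assumes a mean molecular weight of 2.8 per H\({}_{2}\) molecule for the mass conversion; that value and the unit handling are illustrative and may differ in detail from the conversion adopted for the maps used here.

```python
# Sketch of the bulk statistics of Section 3.1, computed from a projected
# N(H2) map (cm^-2) and a list of integer Class labels. Constants and unit
# handling are illustrative assumptions.
import numpy as np

M_H = 1.6735575e-24        # g, hydrogen atom mass
MSUN = 1.989e33            # g
PC_CM = 3.0857e18          # cm per pc
MU_H2 = 2.8                # assumed mean molecular weight per H2 (including He)

def gas_masses(n_h2_map, pixel_pc):
    """Return (M_cloud, M_dense, dense gas mass fraction) in Msun."""
    pix_cm2 = (pixel_pc * PC_CM) ** 2
    pix_mass = n_h2_map * MU_H2 * M_H * pix_cm2 / MSUN
    m_cloud = pix_mass[n_h2_map >= 1e21].sum()
    m_dense = pix_mass[n_h2_map >= 1e22].sum()
    return m_cloud, m_dense, m_dense / m_cloud

def class_stats(classes):
    """Class II:I ratio and disk fraction from integer class labels (1, 2, 3)."""
    classes = np.asarray(classes)
    n1, n2, n3 = [(classes == c).sum() for c in (1, 2, 3)]
    return n2 / n1, (n1 + n2) / (n1 + n2 + n3)
```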
This supports the implication that starforge more closely models the formation of a large cluster rather than star formation in a GMC. Inspection of Tables 3 & 4 indicates that the total gas mass and dense gas mass in the simulation are also more consistent with the ranges reported for the Gutermuth et al. (2009) clusters. We next apply a correction for AGN to the synthetic observations by removing 4.5 sources per square degree for both Class Is and IIs. We find that the synthetic observation trajectories exhibit strong agreement with each other and the fiducial (Figure 6b). This is expected, since we add that same density of AGN contaminants at the beginning of the synthetic analysis. Figure 4: Evolution of each Class of starforge-derived YSO counts in this work (black points) overlaid with analytical models adapted from Megeath et al. (2022): (a) Number of Class I sources versus time, (b) Number of Class II sources versus time, (c) Number of Class III sources versus time. The orange lines are shifted and rescaled versions of the Megeath et al. (2022) models using their parameter selections, while the blue lines adopt parameter value adjustments to achieve strong agreement with the STARFORGE data. ### Evolution of the Star-Gas Fraction The calculation of the S-G correlation in this work emulates the treatment from P20, allowing us to better compare the outcomes of the two. We calculate the \(n^{\rm th}\) nearest neighbor distance (NND) for each Class I YSO, for \(n=11\), using scipy.spatial.KDTree. KDTree uses the algorithm described by Maneewongvatana & Mount (1999) to create a binary tree of 3-dimensional nodes (the positions of the sources). This allows for the quick lookup of nearest neighbors. We use \(n=11\) because it is a good compromise of spatial resolution (typically 0.1-2 pc smoothing-scale in nearby clouds) and low relative uncertainty (33%, Casertano & Hut 1985). This choice is consistent with Casertano & Hut (1985), G11, and P20. Using a circular mask with a radius equal to the NND, we calculate the area \(A_{n}\) of each mask, the mean column density in each circle \(\Sigma_{\rm C}\), and the ratio \(C\) of covered area to total area within each circle. \(C\) corrects for edge effects and is thus almost always unity. From this, we calculate \(\Sigma_{\rm gas}\), the gas mass surface density. \(\Sigma_{\rm YSO}\), the surface density of YSOs, is calculated as \[\Sigma_{\rm YSO}=\frac{n-1}{A_{n}C}M_{\rm YSO} \tag{4}\] (Casertano & Hut 1985) where \(M_{\rm YSO}\) is the adopted mean mass per YSO and \(n\), \(A_{n}\) and \(C\) are defined above. Except for our fiducial analysis, where we try to avoid as much observational bias as possible, we fix \(M_{\rm YSO}\) at 0.5 \(\rm M_{\odot}\) to keep the analysis consistent with P20. Figure 7 shows the median \(\Sigma_{\rm YSO}/\Sigma_{\rm gas}^{2}\) versus time, which captures the vertical offset and spread around the power-law fit (G11). While little more than a general positive trend with time (increasing stellar density as a function of gas density) is immediately apparent, the Class I and II values are close to each other for the first \(\sim 6\) Myr. After this point, the populations no longer appear correlated. This points to a large-scale decoupling of the YSOs from their surrounding gas at around 6-6.5 Myr. This is supported by visual examination of the snapshots. Figure 8 shows a snapshot before decoupling occurs (3.42 Myr) and a snapshot after decoupling occurs (7.82 Myr). 
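For reference, the estimator of Equation 4 and the power-law fit used throughout Section 3 can be sketched as below. Extracting the mean column density within each aperture from the gas map is omitted, the coverage correction \(C\) is taken to be unity, and the unweighted least-squares fit is a simplification of the actual fitting procedure; the optional column-density cut corresponds to the AGN-motivated threshold discussed in Section 3.4.

```python
# Sketch of the nth-nearest-neighbor surface density estimator (Equation 4)
# and a log-log power-law fit for the S-G correlation. Map handling and the
# fitting details are schematic simplifications.
import numpy as np
from scipy.spatial import cKDTree

def sigma_yso(xy_class1, n=11, m_yso=0.5):
    """Sigma_YSO (Msun/pc^2) at each Class I position from its nth NND (pc).

    Assumes full map coverage, i.e. the coverage correction C = 1.
    Also returns r_n, the aperture radius used to average the gas map.
    """
    tree = cKDTree(xy_class1)
    # k = n + 1 because the closest "neighbor" returned is the source itself.
    d, _ = tree.query(xy_class1, k=n + 1)
    r_n = d[:, n]
    area = np.pi * r_n**2
    return (n - 1) / area * m_yso, r_n

def sg_slope(sigma_gas, sigma_star, min_log_sigma_gas=None):
    """Unweighted least-squares power-law index of Sigma_YSO vs Sigma_gas."""
    keep = np.isfinite(sigma_gas) & np.isfinite(sigma_star)
    if min_log_sigma_gas is not None:          # e.g. 1.3 to suppress AGN points
        keep &= np.log10(sigma_gas) > min_log_sigma_gas
    slope, intercept = np.polyfit(np.log10(sigma_gas[keep]),
                                  np.log10(sigma_star[keep]), 1)
    return slope, intercept
```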
It is clear that nearly all YSOs reside near or within dense gas before the decoupling, but afterwards the two groups are significantly less correlated. The dense gas mass fraction peaks at \(\sim\)5.38 Myr (Figure 3). This is when feedback begins to disperse the cloud (see Figure 2), and there is a \(\sim\)1 Myr lag before the effects are seen in the other statistics. For example, Figure 5 shows that the number of Class Is drastically declines and the number of Class IIs peaks at \(\sim\)6.36 Myr. This causes the Class II:I ratio to rise significantly. And, as mentioned above, this is also the time when Class Is and IIs in Figure 7 appear to decouple. ### Star-Gas Correlation versus Time Figure 9 shows the slopes and uncertainties of the S-G correlations for the fiducial analysis along with the three sets of synthetic observations as a function of time. Most of the slopes lie relatively close to 2.0, however the well-correlated slopes either lie above or below 2.0, usually localized around 2.4-2.5 or 1.7-1.8. Over half of the fiducial snapshots visually appear to have a tight YSO and gas surface density correlation, with an uncertainty in the slope of \(\leq 0.2\). This provides significant evidence that the power-law relationship for the S-G correlation is a real effect that is a result of underlying physics and not a result of observational bias (see Figure 13 in the Appendix for the fiducial S-G correlation plots). However, many of the snapshots are not well-correlated, appearing as a clump of points that lie on the Figure 5: (a) Class II:I ratio versus time increases steadily until about 6 Myr, at which point it jumps up and does not follow a consistent trend. (d) Disk fraction, i.e., ratio of Class I and Class II sources to total number of sources, decreases steadily, but slower than comparable observations based on mean stellar ages in real clouds. Error bars are calculated through standard error propagation. expected line, but do not span a significant range of surface densities. This is especially true for snapshots with fewer than \(\sim\)100 Class I sources, since this often leads to poorly-constrained slopes with error bars as large as 0.6. This difficulty hinders comparison with previous observations, as many of the observed clouds in G11 and P20 have many more sources and more completely populate the S-G space and thus the S-G correlation. However, the addition of synthetic observation effects, especially adding AGN or removing close neighbors, can artificially compensate for this by filling out the low-density region and depleting the high-density region of the plot, respectively. This is discussed in Section 3.4 below. In addition, the S-G correlations for each snapshot of the simulation in the fiducial and 200, 400, and 800 pc synthetic analyses can be found in the Appendix. Figure 9 illustrates the evolution of the S-G slope as a function of time in the simulation. While the shape, slope, or scatter of the S-G correlation do not change monotonically with time, there are several features that are roughly independent of distance and the presence of the synthetic considerations. This implies that the synthetic observation effects don't obscure the underlying physics, except for in snapshots where low-number (of Figure 6: Dense Gas Mass Fraction versus Class II:I ratio. Left: Blue triangles are values from nearby molecular clouds in P20. Orange triangles represent clusters in those clouds from Gutermuth et al. (2009). 
All observed data have been corrected for AGN and edge-on disk contamination, and cropped for coverage consistency and N(H\({}_{2}\)) \(>10^{21}\) cm\({}^{-2}\), so they differ to varying degrees from the raw values reported in those works. Black points and line represent the time evolution trajectory for the fiducial analysis of this work, starting from the bottom left. Right: fiducial trajectory overlaid with trajectories from the synthetic analyses at different distances, corrected for AGN contamination. Note that points at high Class II:I ratio are highly uncertain (e.g. Figure 5c). Figure 7: Median value of \(\Sigma_{\rm{YSO}}/\Sigma_{\rm{gas}}^{2}\) versus time for both Class I and Class II sources. In addition to showing the increasing stellar density as a function of gas density, these values are closely correlated until \(\sim\)6 Myr (dashed vertical line). At this time, feedback clearly begins to disrupt the gas (see Figure 3) thereby inducing decoupling of the gas structure and the YSO distribution. YSOs) statistics are significant (e.g., the poorly correlated snapshot in Figure 12). Figure 9 shows the S-G slope declines until around \(\sim 4\) Myr (most noticeable at closer distances), at which point the number of Class I sources peaks. From \(\sim\)4-6 Myr the S-G slope increases as the number of Class II sources continues to rise; the peak in the S-G slope at 6 Myr coincides with the peak in the number of Class II sources. After 6 Myr the S-G slope declines sharply as feedback begins to disperse the cloud in earnest. Many of the snapshots around this time also have poor S-G correlations. Even though much of the cloud gas is dispersed from the central region, the YSOs' dynamics take longer than the gas to respond to the changing gravitational potential. However, star formation still occurs in the remaining pockets of dense gas, maintaining some degree of S-G correlation in the later snapshots (Figure 1). While it is clear that some star-gas statistics evolve with time, the slope of the S-G correlation appears to be relatively constant across the history of the cloud. This is consistent with the observations of G11 and P20 who found little variation in the slopes across a wide collection of GMCs with very different ages. The spread in the S-G slopes and fit uncertainties are significantly larger for the simulations than for the observations of G11 and P20, however. This comparison may not be fully equal, as there is a selection effect on which clouds are actually observed and included for analysis. For this reason the observed clouds may span a narrower range in the cloud lifetime: young clouds with little star formation may not be identified as distinct and/or interesting star-forming regions and thus will be excluded, while older clouds that are in the process of dispersing are excluded since they have little remaining dense material. As a result, the especially-poorly-populated early and late snapshots are not well represented in real data, as it is difficult to find and observe very young and very old clouds. More observational work will need to be done to more effectively compare with these snapshots. ### Demonstration of Synthetic Effects on the Star-Gas Correlation In this section, we explore how each synthetic effect impacts the apparent S-G correlation. Figures 11 and 12 compare the fiducial S-G correlation with those obtained for five different synthetic effects. The first effect we add to the synthetic observation is the adoption of a uniform YSO mass. 
Figure 10 shows the mean and spread of YSO masses in the simulation, and as can clearly be seen, adopting a fixed average mass does not well-represent the true average mass, which varies by a factor of \(\sim 10\) over time. However, since in Figure 8: Projected N(H\({}_{2}\)) column density map with N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\) contour over-plotted in green and N(H\({}_{2}\)) = 10\({}^{22}\) cm\({}^{-2}\) contour in cyan at (a) \(\sim\)3.4 Myr. (b) \(\sim\)7.8 Myr. Note in (a) that most YSOs, especially Class I sources, remain close to or within the dense gas cyan contour. In (b) however, many of the YSOs are no longer correlated with the locations of the denser gas at either contour level, indicating YSO-gas decoupling. Existing YSOs (mainly Class IIs and IIIs) remain relatively stationary for the first few million years as the gas dissipates, being bound together by gravity. However, new Class Is continue to form in the denser gas (almost all Class Is are within the cyan contours). dividual real YSO masses cannot be directly measured, observational analyses such as P20 must adopt some approximation. Figures 11b and 12b illustrate the S-G correlation assuming a uniform YSO mass of 0.5 M\({}_{\odot}\). Comparing panels (a) and (b) indicates that using the true mass of the sources has little effect on the S-G correlation. While the points move slightly vertically, the slopes change by less than 0.1. Consequently, source mass appears to have a relatively minor impact on the S-G correlation. Considering the minor effects, the uniform mass is used in the rest of the demonstration. The impact of the removal of close neighbors is more significant. When multiple close sources appear as a single source, the effect is to remove many of the highest density points in the S-G relationship shown in Figure 9c. This, in turn, has a flattening effect on the power-law slope, for example, bringing the slopes of most snapshots (all snapshots between 2 and 6.5 Myr) within 2\(\sigma\) of 2.0 (see Figure 9). The earlier and later snapshots tend to be (often significantly) less well-sampled, which likely explains their inconsistent slopes (as shown in Figure 12). The impact of this effect increases with distance, as the 5\({}^{\prime\prime}\) minimum separation imposed on the YSO lists translates to larger physical separations. This can be observed qualitatively in Figures 13 - 16 in the Appendix. Figure 11d illustrates the effect of detection limits on the S-G correlation. We find that implementing a mass sensitivity limit significantly decreases the number of sources at all densities, which increases the fit uncertainties across all snapshots. However, this does not significantly change the slope in well-correlated snapshots. In contrast, Figure 12 shows that the addition or removal Figure 9: Slope and uncertainty of the S-G correlation for each snapshot. The shaded region corresponds to the range of values observed by P20. In many of the earlier snapshots, undersampling causes large uncertainties and produces slopes that are discrepant with observations. We limit the \(y\) axis to better compare differences between the runs. This obscures the (unreasonable) points of some of the earlier snapshots. See Figures 13-16 for full slope and uncertainty values. The leftmost point in the bottom right panel is missing since there is no slope for that individual snapshot (see Figure 16). of a single point can significantly change the slope in a snapshot with fewer YSOs. 
This effect is also more extreme at larger distances. Specifically, at 200 and 400 pc, the mass limit is the Hydrogen-burning limit: 0.08 M\({}_{\odot}\), while the limit at 800 pc is 0.2 M\({}_{\odot}\), significantly reducing the number of sources with which to calculate the S-G correlation. Compare Figure 16 with Figures 14 and 15 in the Appendix for a visualization of how the number of sources decreases with increasing distance. Next we investigate the impact of AGN contamination on the S-G correlation. The addition of AGN has a significant impact on the S-G correlation as shown in Figures 11e and 12e. Since the AGN are uniformly randomly distributed throughout the field of view, they add a relatively constant (\(\Sigma_{\rm YSO}=\sim\)0.3, \(\sim\)0.1, \(\sim\)0.03 M\({}_{\odot}\)/pc\({}^{2}\) at 200, 400, and 800 pc, respectively) "foot" of points to the bottom of the S-G correlation. This disproportionately affects low \(\Sigma_{\rm YSO}\) regions and artificially flattens the power law. The flattening increases with distance, so much so that the slope of the S-G correlation for every snapshot at 200 pc and 400 pc would lie below 2 at all times. However, we follow observational convention and implement a column density cutoff when fitting the slope as described below. For nearby clouds, the number of YSOs observed at relatively low column densities is small, which causes observations of those regions to be dominated by AGN contaminants. To deal with the similar issue of our synthetic AGN, we adopt the same approach as G11 and remove YSOs in our catalog in regions with log(\(\Sigma_{\rm gas}\)) \(<\) 1.3 M\({}_{\odot}\) pc\({}^{-2}\) and refit the remainder. This is demonstrated in Figures 11e,f and 12e,f. Applying this treatment to the synthetic observations with AGN confirms that such a cut is justified to minimize the bias of the fit caused by AGN contamination. After applying this cut, most slopes steepen and approach the expected value of \(\sim\)2.0 (Figure 9). ## 4 Discussion ### Implications for the S-G Correlation The presence of many well-correlated S-G relationships for the fiducial starforge run implies that the S-G correlation is a physical phenomenon and not solely the result of observational biases. However, the addition of synthetic observation considerations does artificially lower the slope of the S-G correlation for many of the snapshots, generally increasing agreement with observations. This begs the question of whether the very consistent value of 2 determined by P20 is partially caused by observational effects. In this case, the S-G correlation slope is not as invariant and universal as it appears in P20. However, in the latest snapshots, which have better agreement in Class II:I ratio versus dense gas mass fraction with P20 clouds (Figure 6), the S-G correlation slopes are much lower than observed. Nonetheless, it is striking that the broad range of evolutionary stages spanned by one cloud, modeled with all key physical effects, produce a relatively uniform power-law slope. Once star formation is underway, stellar feedback helps to regulate the relationship between dense gas and YSOs. Clouds with particularly high Class II to I ratios are likely dominated by stellar feedback and in the process of cloud dispersal. Follow-up observations that minimize observational effects are required to fully constrain if and the extent to which these biases conspire to produce an S-G power-law slope of \(\sim 2\). ### Comparison to Previous Work Chevance et al. 
(2022) find that GMCs in nine nearby disc galaxies usually disperse within \(\sim 3\) Myr after unembedded high-mass stars emerge. While not directly measured in this work, we believe the first high-mass stars likely emerge shortly before feedback begins dispersing the cloud in earnest. We estimate dispersal to become qualitatively significant sometime between \(\sim 5.4-6.4\) Myr, as described in Section 3. And, considering the GMC is nearly completely dispersed by 8.8 Myr, the simulations are consistent with the observed \(\lesssim 3\) Myr time frame, as well as the \(\sim 10\) Myr total cloud lifetime they estimate. Figure 10: Average combined Class I and II mass for each snapshot. Error bars represent 95th percentile. It is clear that an assumed mass of 0.5 M\({}_{\odot}\) does not accurately represent YSO masses at all times. However, this has little effect on the calculation of the S-G correlation (see Section 3). As mentioned in the Introduction, P20 adapted HD simulations by Qian et al. (2015) to create synthetic observations of their 12 observed clouds. These synthetic observations included 2D projection and neighbor removal. The HD simulations produced slopes between \(2.3-2.7\), higher than the observed \(1.8-2.3\). The simulations are also limited in density dynamic range compared to some of the clouds they model (see Figure 6 in P20). However, the simulated slopes are similar to the values of \(2.0-3.0\) we observe in the fiducial run (before cloud dispersal and excluding the first two snapshots, see 2). Caution is required when comparing with these simulations since P20 modeled 12 different clouds at a single time, while this work models one cloud at many different times. Regardless, the main improvement of starforge over the simulations in Qian et al. (2015) is more realistic physics, especially magnetic fields and kinematic feedback. While magnetic fields do not play a very significant role in setting the slope of the S-G correlation, kinematic feedback allows starforge to evolve the GMC without driven turbulence (which was necessary for the simulations in Qian et al. 2015). While the Figure 11: Comparison of different synthetic observational effects on a well-correlated snapshot. a) The “fiducial” analysis with no extra considerations. Each panel b) through e) demonstrates 1 synthetic effect each. b) In the calculation of the S-G correlation, uniform 0.5 M\({}_{\odot}\) mass for each source is used. There is a slight vertical shift in the points on the plot, but it does not significantly change the slope or the uncertainty of the S-G correlation. We adopt uniform mass for the rest of the panels. c) The removal of close neighbors that would have been indistinguishable by _Spitzer_. This predominantly removes high-density points, lowering the slope. d) The removal of low-mass sources that would have been undetected by _Spitzer_. This removes points without visible bias towards density, increasing the uncertainty in the slope. e) The addition of AGN. This predominantly adds low-density sources, lowering the slope. f) All previous synthetic effects at once. The slope is much closer to 2. Black dashed lines represent a density cutoff imposed to account for the presence of AGN. The slopes and best-fit lines for e) and f) are only based on points to the right of the black line. STARFORGE simulation starts with an initial turbulent setup, the turbulence, evolution, and dispersal of the cloud are regulated entirely by stellar feedback. 
### Model and Analysis Caveats While starforge faithfully reproduces the S-G correlation in many snapshots, there are some areas for improvement. For example, Figure 6, which displays the Class II to I ratio versus dense gas mass fraction, shows that the simulation exhibits poor agreement with the P20 clouds. In contrast, the simulation agrees better with the cluster data from Gutermuth et al. (2009). This implies that the starforge simulation analyzed here produces something that is closer to a smaller denser structure (i.e., a cluster) than the stellar complexes formed in the GMCs of the P20 sample, which may be characterized by a longer and richer star formation history. The drastic evolution in the number of Class Is with time highlights that the simulation SFR is not constant, in contrast with the assumption of constant SFR made by G11, P20, and others when using class ratios (to infer age, for example). Figure 5 shows that this leads to Class II:I ratios that vary much more than in the P20 observations. Megeath et al. (2022) argued that a variable SFR similar to that produced by the starforge simulation is necessary to explain the ensemble of Class II:I ratios and disk fractions in nearby clusters. The agreement between this model and the starforge data (Figure 4) provides more evidence that starforge produces something more similar to a monolithic cluster than a full GMC (i.e., with several smaller, distinct clusters). However, we caution that here we only analyze one simulation that aims to model the typical conditions of a Milky Way cloud. Future work is needed to explore the Figure 12: Comparison of different synthetic observational effects on a poorly-correlated snapshot. Features of figure the same as in Figure 11. Note that this snapshot is particularly sensitive to the removal of a single high-density point. broad range of conditions modeled in the starforge simulation suite, which includes clouds with varying initial magnetic field, turbulence, interstellar radiation field, surface density, cloud size, and cloud initialization (Guszejnov et al., 2022). In particular, the initial cloud setup, that of a uniform density sphere, is a significant oversimplification of the complexity of forming and accreting molecular clouds. Overall, agreement with both data sets would likely be improved by using more realistic initial conditions. For example, a slower star formation start could increase the Class II:I ratio in the early snapshots, improving agreement. Simulations that begin with more realistic cloud initialization, such as a driven-turbulence sphere (Lane et al., 2022) or models that follow cloud formation from galaxy scales (Hu et al., 2023; Ganguly et al., 2023, Hopkins et al. 2023 in prep.), are likely necessary to advance agreement between the starforge framework and observations. One recent interesting aspect of starforge comes from Grudic et al. (2023), who ran 100 2000 M\({}_{\odot}\) STARFORGE simulations and found a sharp mass cutoff on the IMF at 28 M\({}_{\odot}\). This is in contrast to a simulation with similar parameters but 10 times the mass, which generated a 44 M\({}_{\odot}\) star, and a simulation with 10 times the gas surface density which generated a 107 M\({}_{\odot}\) star. They suggest that the STARFORGE IMF has a high-mass cutoff that depends on the environment. 
This cutoff is generally different from the canonical 100\(-\)150M\({}_{\odot}\) cutoff, which they conclude implies that the IMF cannot be reproduced in small clouds simply by randomly sampling from the full IMF. Here we also outline some inconsistencies in our data processing. The bounds on the cropped field of view (see Figure 1) are set by the furthest extent of the N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\) contour. This occasionally causes larger-than-intended fields of view when an area of gas denser than N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\) is present away from the central cluster. The only major impact this has on the analysis is to increase the number of AGN when calculating the S-G correlation. However, this impact is largely mitigated by the low density cut discussed previously. We also neglect a number of steps that would be needed to complete a fully "apples-to-apples" comparison with the observations. For example, we do not use radiative post-processing to construct the YSO SEDs (e.g., Offner et al., 2012). Nor do we construct synthetic dust continuum maps in order to compute the column density (e.g., Juvela, 2019). These steps would allow us to apply the observational biases, such as the detection limits, more directly. However, we expect any impact on the S-G slope to be minimal, since the YSO positions and relative amounts of dense gas would be unchanged. ## 5 Conclusions In this study, we examine a 20,000 M\({}_{\odot}\) star-forming cloud in the starforge simulation suite in order to investigate the presence and evolution of the S-G correlation. To effectively do so, we create synthetic observations to compare with previous observational work, specifically P20 and G11. These synthetic observations include 2D projection of gas and star particle distributions at multiple distances, an age-to-Class conversion for the simulated stars using an exponential decay model, AGN contamination, a low stellar mass cutoff, the removal of close (unresolved) neighbors, gas map smoothing to mimic limited angular resolution, and a field-of-view crop at a gas column density of N(H\({}_{2}\)) = 10\({}^{21}\) cm\({}^{-2}\). Since most of these effects depend on distance, we place each cloud at 200, 400 and 800 pc to mimic the distances of star-forming regions observed by G11 and P20. This changes the angular size of the cloud, the number of AGN, the mass sensitivity limit, and the neighbor threshold. From these synthetic observations, we examine the dense gas fraction, YSO distribution and frequency, and the S-G correlation for the fiducial analysis and for the synthetic analyses at each distance. We find that the starforge simulation successfully reproduces the S-G correlation in many snapshots and exhibits a typical S-G slope within 1\(\sigma\) of the observed slope of 2. The presence of the S-G correlation both with and without accounting for observational effects implies that this is a real relationship that is a product of the underlying physical processes. However, observational biases, such as AGN contamination, appear to strengthen the S-G correlation, reduce time variation and promote a slope closer to 2. We find that the Class II:I ratios and dense gas fraction characteristic of the starforge simulation exhibit better agreement with those of the clusters in the Gutermuth et al. (2009) sample than the stellar complexes forming in the clouds in P20. No regions in either observational study match the low Class II:I ratios found at early times (\(<\) 3 Myr) in the simulation. 
This implies that the P20 and Gutermuth et al. (2009) clouds/clusters form stars at a low rate for a few million years. Thus, bias in cloud selection, which favors actively star-forming clouds with significant amounts of dense gas, possibly also contributes to the apparent universality of the S-G correlation. The present study only considers the S-G correlation under one set of typical simulated cloud conditions. Future work is needed to examine the impact of cloud properties and more realistic initial conditions on the S-G correlation. ## Acknowledgements SSRO and RG acknowledge funding support for this work from NSF AAG grants 2107340 and 2107705. SSRO acknowledges support by NSF through CAREER award 1748571, AST-2107340 and AST-2107942, by NASA through grants 80NSSC20K0507 and 80NSSC23K0476, and by the Oden Institute through a Moncrief Grand Challenge award. Support for MYG was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51479 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. The simulation was run on the Frontera supercomputer with LRAC award AST21002. This research is part of the Frontera computing project at the Texas Advanced Computing Center. Frontera is made possible by National Science Foundation award OAC-1818253. SM acknowledges the Massachusetts Space Grant Consortium in their support of his participation in the Summer 2022 Five-College Astronomy Undergraduate Internship Program. This work relies on products from SESNA, the Spitzer Extended Solar Neighborhood Archive, and associated Herschel data, developed with generous support from NASA ADAP grants NNX11AD14G, NNX13AF08G, NNX15AF05G, and NNX17AF24G, and NASA JPL/Caltech Herschel support grant 1489384. SM and RG acknowledge and thank Ronald Snell and S.T. Megeath for their support in constructing the paper. We would also like to thank the referee for their helpful comments. Astropy (Astropy Collaboration et al., 2013), h5py ([http://www.h5py.org/](http://www.h5py.org/)), Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), SciPy (Virtanen et al., 2020), yt (Turk et al., 2011)
2305.15594
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs.
Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch
2023-05-24T22:06:08Z
http://arxiv.org/abs/2305.15594v1
# Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models ###### Abstract Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that _soft_ prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for _discrete_ prompts. Thus, we orchestrate a noisy vote among _an ensemble of LLMs_ presented with different prompts, i.e., _a flock of stochastic parrots_. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of \(92.7\%\) on the sst2 dataset with \((\varepsilon=0.147,\delta=10^{-6})\)-differential privacy vs. \(95.2\%\) for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs. ## 1 Introduction Large language models (LLMs) exhibit strong capabilities for in-context learning [6; 40]. By prepending the adequate prompt to an LLM's input, the model can perform a myriad of natural language downstream tasks without any modifications to its parameters [41]. While the data used to train an LLM is usually assumed to be public, downstream data used in the prompt is often more sensitive. This can elicit confidentiality issues, for instance, if prompts contain information that represents valuable intellectual property [34]. At the same time, it also raises privacy concerns when the data involves personal information about individuals. In this paper, our first contribution is to show that these concerns are valid. We are the first to instantiate a highly effective membership inference attack (MIA) [7; 45] against prompts. Our attack is able to determine if a given data point was used within the prompt of the LLM. The only existing solution to mitigate this privacy risk would be to forego prompting and instead fine-tune the LLM with a privacy-preserving training algorithm [26; 54]. Yet, fine-tuning lacks the efficiency and practicality of prompting. Indeed, fine-tuning requires significantly more data [42], computational resources [26], and storage space [25]. Additionally, fine-tuning requires access to the LLM parameters. However, many of the state-of-the-art LLMs are proprietary models deployed behind an API which only allows its users to query the LLMs [3; 6; 10; 17; 35]. To leverage the benefits of prompting while at the same time protecting the data contained in prompts, we propose the first algorithms for prompt learning with privacy. Our algorithms offer rigorous guarantees expressed using differential privacy [15]. Perhaps closest to existing work on fine-tuning, we propose to leverage the canonical DPSGD algorithm [1] to learn soft prompts with differential privacy guarantees. 
Our PromptDPSGD algorithm performs a private gradient descent on the soft prompt embeddings prepended to the LLM's private input. Since these embeddings have very few parameters in comparison to LLMs, our PromptDPSGD is efficient and yields competitive privacy utility trade-offs at a fraction of the training complexity of private fine-tuning. However, learning soft prompts with DPSGD may not always be possible because it requires computing gradients with respect to the prompt input. As mentioned previously, current APIs [3; 6; 10; 17; 35] usually do not provide these gradients. We thus turn to _discrete_ prompts which consist of natural language tokens. Discrete prompts address the aforementioned limitations while being more data-efficient. Our insight is to observe that LLMs with discrete prompts naturally lend themselves to another canonical approach of differentially private learning known as the private aggregation of teacher ensembles (PATE) [37]. We introduce PromptPATE, which creates an ensemble of LLMs with different discrete prompts from the private dataset which we refer to as _a flock of stochastic parrots_[5]. Since interacting with the flock directly can leak private information about the prompts, as we demonstrate with our MIA, PromptPATE additionally performs a knowledge transfer. Therefore, each model in the flock generates a next token prediction for a short input sequence of some public data. By performing a noisy majority vote over all models' token output, we generate a single output that, due to the noise addition, implements differential privacy guarantees while incorporating knowledge from the flock. The public input together with the noisy aggregated output form a new single example for the discrete _student prompt_ that can be prepended to the LLM in lieu of the individual prompts which contain private information. In addition to providing rigorous privacy guarantees, our PromptPATE is highly efficient, since, instead of having to query every model from the flock at inference time, it suffices to query the LLM prepended with the student prompt _once_. We perform extensive experiments against multiple popular LLMs, such as GPT3 [6] and Claude [3], that are deployed behind commercial black-box APIs. Our results highlight that PromptPATE provides high downstream performance that matches the one of non-private prompting even at very strong privacy guarantees. On the sst2 dataset with GPT3, for instance, we reach an accuracy of \(92.7\%\) with privacy costs as little as \((\varepsilon=0.147,\delta=10^{-6})\)-differential privacy, even when the public data used during PromptPATE's knowledge transfer stem from a different distribution than sst2. Our results closely matches the non-private baseline accuracy (\(95.2\%\)). Thus, we conclude that prompt learning for LLMs is not only more efficient and practical than fine-tuning but can also achieve high utility even with strong and practical privacy protection in place. In summary, we make the following contributions: Figure 1: **Our methods for private prompt learning.**_Left:_PromptDPSGD obtains the input gradients from the LLM, and performs DPSGD to update the soft prompt embedding while keeping the LLM frozen. _Right:_PromptPATE creates a noisy ensemble of private discrete prompts, and then transfers knowledge by selecting a student prompt that can be publicly released. PromptPATE only needs black-box access of the LLM and, thus, can be easily deployed with commercial APIs. 
* We instantiate the first MIA on prompted LLMs and show that we can effectively infer membership of the prompted data points with high success. * We propose a lightweight alternative to DP fine-tuning, namely PromptDPSGD, which optimizes orders of magnitude fewer parameters while keeping the original LLM frozen. * We propose PromptPATE, the first method for DP learning with LLMs that requires only black-box access to the model--making it easily deployable for commercial LLM APIs. * Our experiments on multiple state-of-the-art commercial APIs [6; 3] highlight that our methods achieve both high utility and strong privacy protections in various setups. ## 2 Background and Related Work Prompts for LLMs.The success of LLMs, such as BERT, Claude, OPT, or different versions of GPT and their exceptional in-context learning capacities gave rise to prompt-based learning [14; 6; 39; 40; 35; 56]. Prompts serve as _demonstrations_ of the downstream task, which the model can then generalize from. There are two paradigms for LLM prompting, namely _discrete_ and _soft_ prompts. Discrete prompts [6; 16; 18; 27; 44] are natural-language instructions that contain examples from the downstream task in a well-crafted template. Tuning discrete prompts is often done by prompting the model with different combination of examples, assessing their performance on the downstream task, and choosing the combination that yields the highest performance as the final prompt. In contrast to discrete prompts, soft prompts [24; 27] prepend trainable continuous embeddings to the inputs of LLMs. These embeddings are initialized either at random or with embedding vectors that correspond to tokens from the dictionary. During tuning, the embeddings are updated through gradient descent to minimize the loss of the prompted model on the private downstream task. To increase performance further, trainable embeddings can be prepended not only to the input but also to every LLM layer, a technique known as _prefix_[25; 28; 29]. Both soft prompts and prefix train end-to-end without any human involvement through backpropagation over the LLM. On the other hand, discrete prompts have to be designed manually through careful prompt engineering. Yet, prompt engineering only needs inference passes over the LLM which makes discrete prompt more computationally lightweight. Our work provides privacy protection for all of these paradigms: discrete prompts, as well as for soft prompts and prefix. Privacy Leakage in LLMs.LLMs have been shown to memorize data both from their original large training corpora [8; 20; 23; 32; 48; 55] and from smaller private datasets used to fine-tune them for downstream tasks [33]. The only prior work around privacy leakage in prompt-based learning utilizes prompts to extract knowledge from trained LLMs [13; 22; 38]. In contrast, we study the privacy of the prompting data itself. To do so, we investigate the canonical privacy attack known as **membership inference attacks (MIA)**[7; 45]. Its use as a practical means to demonstrate leakage of private information in ML was recently popularized by a line of work on quantifying memorization [9; 43; 47]. While prior work utilizes MIAs to assess whether a given data point was used to train an LLM, we instantiate a MIA to assess whether a given data point was used within the prompt prepended to the inputs of a trained LLM. Defending Against Privacy Leakage in LLMs.Prior work either focuses on training [2; 19] or fine-tuning [26; 54] LLMs with privacy guarantees. 
These approaches rely on the mathematical framework of **differential privacy (DP)**[15] and in particular the **DPSGD** algorithm for private stochastic gradient descent [1]. Here, DPSGD is applied to guarantee that one outputs approximately the same model parameters whether or not any given data point was used to train or fine-tune the model. To achieve this, DPSGD clips the per-example gradients that are computed during training and adds well-calibrated noise to each model update. These two operations typically increase the computational complexity of training and decrease the utility of the resulting model [1; 4; 49]. To counteract these effects, state-of-the-art methods for full DP fine-tuning in LLMs require extensive hyperparameter tuning and vast computational resources [26]. Alternative approaches refrain from updating the large number of model parameters and instead introduce additional layers into the model architecture and only fine-tune these layers with DPSGD [54]. To the best of our knowledge, no prior work attempted to provide DP guarantees for prompt data in LLMs. Setup and Notation. We denote by \(P\) the soft or discrete prompt that is prepended to any input sequence \(x_{i}\) when querying the language model \(L\). For brevity, we denote \(L([P,x_{i}])\) by \(L_{P}(x_{i})\).3 The output \(y_{i}\) of \(L_{P}(x_{i})\) is an \(M\)-dimensional probability vector, with \(M\) being the size of the model's vocabulary. Each component of \(y_{i}\) corresponds to the probability that \(L_{P}\) assigns to the respective token for being the next token in the sequence \(x_{i}\). The semantic meaning of the next token varies depending on the given downstream task. For instance, for classification, the index with the highest probability indicates the token of the class that \(L_{P}\) assigns to \(x_{i}\). Footnote 3: In the prefix method, \(L_{P}\) denotes prepending trainable parameters to every layer’s input, and not only to the model input \(x_{i}\). ## 3 Private Information about Prompt Data Leaks from Prompted LLMs By instantiating a MIA against prompted LLMs, we want to highlight that the private data used within a prompt (which we refer to as _prompt data_ from here on) can be subject to a substantial privacy risk. We showcase this risk using the example of LLMs that are prompted with discrete prompts \(P\) containing tuples of demonstrations from classification downstream tasks as prompt data \(p=\{(p_{x},p_{y})\}\). For example, in a prompt with one demonstration (_one-shot learning_), the prompt data \(p\) may be specified as \(p=\{\)_("The movie was great.", "positive")_\(\}\). Our prompts follow a consistent template in which one or multiple demonstrations are combined with instructions as \(P=[\textit{Instruction},(\textit{text sequence }p_{x},\textit{class-label token }p_{y}),\ldots]\). For our MIA, we consider an adversary who aims to infer whether a given private demonstration \((p_{x},p_{y})\) was used within the prompt data \(p\). The adversary holds \(n\) candidate demonstrations of text sequences and corresponding labels \(l_{i}\) and queries the text sequences \((x_{1},\cdots,x_{n})\) to \(L_{P}\) with black-box access. The prompted model \(L_{P}\) then returns the output probability vectors \((y_{1},\cdots,y_{n})\). Following prior work [21; 53], we analyze the model's output probability at token \(y_{i,l_{i}}\) that corresponds to the _correct_ target class label of every \(x_{i}\). 
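To make this membership test concrete, the scoring step just described can be written in a few lines. The sketch below is only illustrative: it assumes black-box access to a prompted model that returns next-token probabilities, and the function names, candidate format, and threshold value are our own choices rather than the paper's implementation.

```python
import numpy as np

def mia_scores(prompted_model, candidates):
    """Score each candidate demonstration (x_i, l_i) by the prompted model's
    output probability at the token of the correct class label l_i."""
    scores = []
    for text, label_token_id in candidates:
        y = prompted_model(text)  # M-dimensional next-token probability vector
        scores.append(y[label_token_id])
    return np.array(scores)

def predict_members(scores, threshold=0.9):
    # Candidates whose correct-class probability exceeds the (illustrative)
    # threshold are predicted to be members of the prompt data.
    return scores > threshold
```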
The intuition to distinguish between members and non-members is that the output probabilities at the correct class \(l_{i}\) will be significantly higher for demonstrations that were used within the prompt, _i.e.,_ members with \((p_{x},p_{y})=(x_{i},l_{i})\). We show that even with this simple MIA, we can reliably determine membership for the prompt data. Experimental Setup. We prompt GPT3-Babbage [6] with multiple _one-shot examples_ to solve four standard downstream text classification tasks, namely _dbpedia_[57], _sst2_[46], _agnews_[57] and _trec_[50]. The template of our prompts follows [58]. To evaluate our MIAs, we consider the single data point used within the prompt as a member and \(50\) other randomly selected data points from the respective task's training dataset as non-members. This skewed distribution between members and non-members (1 vs 50) corresponds to a realistic scenario where only a small proportion of the candidate data targeted by the adversary are members [21]. To quantify the success of our attack, we report the AUC-ROC curves of \(100\) random trials. Figure 2: **MIA Risk. We study GPT3 prompted with \(100\) different one-shot examples (dbpedia). _left_: We present the prediction probabilities at the correct class for members (the one-shot example) and non-members (\(50\) randomly sampled private points). The output probability for members is significantly higher than for non-member data points. _right_: We present the AUC-ROC curves of our MIA against the \(100\) prompts (gray lines) and the blue line as an average over all attacks. Given that each prompt has only one member, the resulting TPRs can only be 0% or 100% which leads to the step-shape of the gray curves. The result indicates that our attack is significantly more successful than random guessing (the red dashed line).** Results. Before evaluating the success of the MIA, we analyze the probability output from GPT3 for the correct target class between member and non-member data points. Figure 2 (left) shows for the dbpedia dataset that the prediction probabilities for non-members are significantly lower than for members. Figure 2 (right) shows that this leads to a high MIA risk in terms of an average AUC score of \(0.84\) for the prompt data. Similar results for other datasets and models are presented in Appendix D. These results highlight that private information can leak from prompt data easily and thus motivate the urgent need for defenses, which we develop in the rest of this paper. ## 4 Methods for Privacy Preserving Prompts As of now, if we want to protect the private downstream data, we have to forego prompting altogether because, to the best of our knowledge, no algorithms for private prompt learning exist. The only alternative to privately adapt the LLM would be to perform DP fine-tuning [26; 54]. However, this approach is only feasible when we have direct access to the LLM to update its parameters with DPSGD [26] or to even change the model architecture to insert additional parameters--fine-tuned with DPSGD [54]. This is prohibitively expensive and mostly impossible with commercial APIs; thus, we propose the first algorithms that enable differentially private prompt learning. We consider two main paradigms of prompting: soft prompts and discrete prompts. To learn private soft prompts, we introduce PromptDPSGD. PromptDPSGD is a parameter-efficient alternative to DP fine-tuning that does not require modifying the parameters or architecture of the LLM. 
However, many popular APIs [3; 6; 10; 17; 35] do not support soft prompts yet as it requires gradients with respect to the input. Therefore, we propose PromptPATE for discrete prompts. PromptPATE requires only black-box access to an LLM without any knowledge of the LLM's architecture or mode of operation. Instead, the algorithm only needs the next-token prediction of the LLM. This, to our knowledge represents the first solution for privately adapting LLMs in restricted API setups. ### PromptDPSGD: DPSGD for Private Soft Prompt Learning In general, all discrete input tokens to LLMs are internally transformed into continuous input embeddings that the LLM then operates on. Soft prompts are just additional continuous input embeddings that can be prepended to the original input embeddings before passing them through the LLM. To train (or _tune_) soft prompts, we require training data from a potentially private downstream task. After prepending the continuous soft prompt embeddings to input examples from the training data, we can calculate the gradients for the loss of the prompted LLM with respect to these soft prompt embeddings. The gradients provide information about how the soft prompt should be updated in order to minimize the loss on the training data. If we can obtain the gradients for soft prompts, we can learn these prompts with privacy guarantees by applying the canonical DPSGD algorithm [1]. The same applies to prefix, therefore, when we talk about soft prompts in the following, we implicitly also include prefix. We call this approach PromptDPSGD. The algorithm yields soft prompts with DP guarantees that can be deployed with the LLM to solve the respective downstream task. The privacy analysis of PromptDPSGD follows the one of the standard DPSGD. Note, however, that while conceptually similar to fine-tuning the LLM's parameters with DPSGD [54; 26], PromptDPSGD differs in a crucial aspect. In DP-SGD fine-tuning, we require the gradients with respect to all or a subset of the model parameters and update these parameters to minimize the loss. In contrast, in PromptDPSGD, we use the gradients with respect to the soft prompt embeddings and only alter these. We highlight this difference in our PromptDPSGD-algorithm that we present in Appendix C. While this difference seems subtle, it has far-reaching consequences. First, there are orders of magnitude fewer parameters that need to be updated which increases training efficiency. Second, and most importantly, it allows us to keep operating on the original LLM. We discuss the resulting advantages, such as storage efficiency, and the ability to process multiple different tasks simultaneously at the end of this section (in 4.3). These advantages make PromptDPSGD conceptually superior to private fine-tuning. At the same time, as we show in our evaluation, despite the small number of trainable parameters, PromptDPSGD, for simpler tasks, matches the performance of private fine-tuning. Yet, current APIs [3; 6; 10; 17; 35] do not support soft prompting, prefix, or private fine-tuning and only provide black-box access through discrete prompts. For these setups, we propose PromptPATE. ### PromptPATE: PATE for Privacy Preserving Discrete Prompts PATE [36; 37] enables learning classifiers with DP guarantees. It first trains an ensemble of _teacher_ models on disjoint subsets of the private data. Second, through a noisy labeling process, the ensemble privately transfers its knowledge to an unlabeled public dataset. 
Finally, a separate _student_ model is trained on this labeled public dataset for release. The noisy knowledge transfer in the second step relies on the Confident GNMAX algorithm [37] that we detail in Appendix C. It consists of three main parts: for any input data point from the public unlabeled dataset, each teacher votes for the most likely class. Then, the consensus over the teachers' votes is determined and queries with low consensus are rejected to avoid revealing too much information about the private decision boundary. Finally, the returned class label for any non-rejected data point is determined as a noisy argmax over all teachers' vote counts--where the added noise is sampled from a Gaussian distribution to implement the DP guarantees. For each rejected or labeled data point from the public dataset, privacy costs are accumulated and the ensemble stops labeling once a target privacy budget is reached. Our PromptPATE follows the general flow of standard PATE: training the _teacher models_, _private knowledge transfer_, and training a _student model_. However, due to the significant differences between in-context learning for LLMs and supervised learning in the original PATE and how these different paradigms leverage private and public data, we had to redesign each of these building blocks. This allows to leverage both the data-efficiency of prompts and the rigorous privacy protection from PATE. In the following, we present the building blocks in our PromptPATE. Teacher Models (Flock of Stochastic Parrots).Instead of _training_ teacher models on disjoint partitions of the private data, we use the private data to create disjoint prompts for the LLM. More specifically, we use examples, for instance {(_"The movie was great.", "positive"_),...}, from the private training data to create prompts that can then be deployed with the LLM as teachers. Private Knowledge Transfer.During the private knowledge transfer, the teachers label public data sequences, such as (_"I did enjoy it.", _._). Each teacher votes with the most likely class labels for the private downstream task. In Appendix D, we show that PromptPATE can also operate directly on pure next token predictions from Claude [3] without access to per-token probabilities--enabling full black-box private prompts. By performing the private voting process according to standard PATE with the Confident GNMAX algorithm, we turn our per-teacher predictions into a final class label token that will be appended to the sequence, _e.g._, (_"I did enjoy it"_, "positive"_). The privacy accounting and analysis of our PromptPATE exactly follows the one of standard PATE [37]. Student.The most naive way to obtain a student model following standard PATE would be to label many public sequences and train a language classifier using supervised learning on this data. However, due to the relatively high number of data needed for supervised learning, and the fact that each query to the private teachers consumes privacy, this process would incur high privacy costs. We propose a better approach building on the data-efficiency of prompting [42] by using labeled public sequences to create new discrete student prompts. The selected prompt can then be deployed with the LLM as the PromptPATE student model. In theory, labeling one public sequence by the ensemble would be sufficient to create such a prompt. This approach yields negligible privacy costs, but the resulting prompt might not have good utility due to the high variance in the performance of prompts [58]. 
Therefore, we generate multiple prompts based on different labeled public sequences and perform prompt tuning to select the best student prompt. Care must be taken during selection: utility cannot be evaluated on the private data anymore given that the prompt will be publicly deployed and selecting based on the private data would incur additional privacy costs. We solve this tension by using parts of the newly-labeled public data as validation data to assess utility of the student prompts. By selecting the prompt with the highest validation accuracy, we deploy the student prompt that most resembles the private teachers. ### Advantages of (Private) Prompting over (Private) Fine-Tuning Our private prompt learning enables us to leverage the general advantages of prompting over fine-tuning while preserving privacy. Private prompting requires significantly less storage than private fine-tuning. While fine-tuning requires storing a separate copy of the LLM model for each downstream task [24], prompts operate only on the input level of LLMs without adapting model parameters, such that only a small task-specific prompt needs to be stored for each downstream task. For example, each copy of the fine-tuned RoBERTa base model requires 125M parameters (\(\sim\)500MB). This becomes prohibitively expensive, especially as the number of parameters for state-of-the-art LLMs rapidly increases. In contrast, soft-prompts and prefix, as the one generated by PromptDPSGD (using implementation from [29]) with the standard prompt length of 10 tokens require less than 10K parameters (40KB) for the soft-prompt and 100K parameters (400KB) for the prefix. A discrete prompt, such as the one generated in PromptPATE, requires less than 1 KB of prepended text. Prompts also enable processing many examples from different tasks in a single batch [25], called mixed-task inference. This allows more efficient use of LLMs since we do not have to wait for a sufficient number of requests for a single task before processing them. This is not possible with any form of fine-tuning, where the fine-tuned model can serve solely a single task. ## 5 Experimental Evaluation We evaluate both PromptDPSGD and PromptPATE and show that they match the performance of non-private prompting while providing strong privacy guarantees. ### PromptDPSGD Experimental Setup.To train soft-prompts and prefix, we follow the experimental setup from prior work on DP fine-tuning. Specifically, we use differentially-private optimization engines for transformers, such as models from the BERT family for the language understanding tasks. The experimental results for classification were performed on the RoBERTa models [30], using the standard NLP datasets, namely sst2, qnli, qqp, and mnli, from the GLUE benchmark [51]. Our implementation for soft-prompt and prefix is based on P-Tuning v2 [29]. To tune the (hyper)parameters for PromptDPSGD, we adjust the length of the soft-prompt or prefix in the private setting (with the default value of \(10\), which commonly yields good performance). For the privacy parameters, we set the \(\delta=1/N\), where \(N\) is the number of data points in a given dataset, The clipping threshold of per-example gradients is set to \(0.1\) in most cases. We use a batch size of 1024. The detailed selection of (hyper-)parameters is presented in Appendix E. Results.We compare our PromptDPSGD against state-of-the-art approaches for private fine-tuning on multiple private downstream datasets. Our results are shown in Table 1. 
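For illustration, the per-step mechanics of PromptDPSGD under the setup above (per-example clipping, Gaussian noise, a frozen LLM) can be sketched as follows. This is a minimal, manual DPSGD loop and not the authors' implementation: the `llm(x, soft_prompt=...)` interface and the `noise_multiplier` value are hypothetical, and a real run would rely on a DP library for privacy accounting.

```python
import torch

def promptdpsgd_step(llm, soft_prompt, batch, loss_fn, optimizer,
                     clip_norm=0.1, noise_multiplier=1.0):
    """One DPSGD update applied only to the soft prompt; the LLM stays frozen.
    Per-example gradients w.r.t. the prompt are clipped, summed, perturbed
    with Gaussian noise, and averaged before the optimizer step."""
    grad_sum = torch.zeros_like(soft_prompt)
    for x, y in batch:
        # `llm` is assumed (hypothetically) to prepend `soft_prompt` to the
        # input embeddings and return logits for the downstream task.
        loss = loss_fn(llm(x, soft_prompt=soft_prompt), y)
        (g,) = torch.autograd.grad(loss, soft_prompt)
        g = g * torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)  # clip
        grad_sum += g
    noise = noise_multiplier * clip_norm * torch.randn_like(grad_sum)
    soft_prompt.grad = (grad_sum + noise) / len(batch)
    optimizer.step()      # the optimizer holds only [soft_prompt]
    optimizer.zero_grad()
```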
We highlight that both soft prompts and prefix provide competitive privacy-utility trade-offs. For example, the difference in accuracy between the non-private baseline and the private soft prompt ranges from 3% (for the simplest sst2 dataset) up to 7% (for the most difficult mnli dataset). This mirrors results for other private methods, such as the private fine-tuning of LoRA [54]. We also observe that, similarly, for simple tasks, such as sst2 or qnli, the performance of soft prompt or prefix matches that of fine-tuning. For the more difficult tasks, namely qqp and mnli, the performance of prefix and soft prompts is also relatively close to fine-tuning. The results obtained for these methods are highly influenced by the number of optimized parameters. For example, for the SST2 task and the RoBERTa-Base model, the prefix requires 19970 additional parameters while the soft prompt adds solely 2306 parameters. On the other hand, the number of privately tuned parameters is a few orders of magnitude bigger for fine-tuning and equal to the size of the trained model, namely 125M for the method proposed in [26], while the fine-tuning approach from [54] optimizes around 1.2M parameters. Our results reflect a general trend, where prompts are suited for small downstream tasks while fine-tuning, with its bigger number of parameters, can also cater to more complex tasks with larger training data sets. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{3}{*}{Dataset} & \multirow{3}{*}{ \begin{tabular}{c} **M** \\ **P** \\ **G** \\ \end{tabular} } & \multicolumn{2}{c}{Soft-Prompt (Our)} & \multicolumn{2}{c}{Prefix (Our)} & \multicolumn{2}{c}{Full-Tuning [26]} & LoRA-Tuning [54] \\ \cline{3-10} & & \multicolumn{2}{c}{<10K} & \multicolumn{2}{c}{<100K} & \multicolumn{2}{c}{125M} & \multicolumn{2}{c}{1.2M} \\ \cline{3-10} & & \(\varepsilon=8\) & \(\varepsilon=\infty\) & \(\varepsilon=8\) & \(\varepsilon=\infty\) & \(\varepsilon=8\) & \(\varepsilon=\infty\) & \(\varepsilon=8\) & \(\varepsilon=\infty\) \\ \hline sst2 & 92.31 & 95.64 & 91.97 & 96.33 & 85.89 & 96.40 & 92.97 & 96.60 \\ qnli & 84.11 & 89.48 & 87.17 & 94.84 & 84.81 & 94.70 & 88.59 & 94.70 \\ qqp & 81.52 & 86.56 & 82.58 & 91.42 & 86.15 & 92.20 & 86.26 & 92.20 \\ mnli & 75.15 & 82.49 & 80.57 & 90.34 & 83.30 & 90.20 & 82.92 & 90.20 \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance of PromptDPSGD. We report the accuracy values (%) for each dataset. All \(\varepsilon\) values are reported as standard DP guarantees. We run the experiment on RoBERTa [30]. The first row **M:** the type of the private Method, the second row **P:** the number of **P**arameters tuned for the method, and the third row **G:** DP Guarantee. We also present results for \(\varepsilon=3\) in Appendix D.** ### PromptPATE Experimental Setup. **Teachers:** Unless otherwise specified, we rely on GPT3-Babbage as the base LLM and select one-shot examples randomly without replacement from the private downstream task as prompt data. Our prompt template follows Zhao _et al._[58]. For each setting, we deploy \(200\) teacher prompts. **Private knowledge transfer:** We use the implementation of PATE's Confident GNMAX algorithm and the privacy accounting from [12] and report our algorithm's hyperparameters in Appendix E. **Student:** For each private downstream task, we experiment with two setups: (1) selecting public input sequences from the same distribution (IID) and (2) from a different distribution (OOD) than the private data. 
We introduce three new datasets for the OOD setup: imdb [31], arisetv [11] and qqp [52]. The details of preprocessing these datasets can be found in Appendix E. In both the IID and OOD setup, we limit the size of the public dataset to \(500\) input sequences from the respective datasets. After the ensemble finishes labelling, we select the best labeled public sequence as prompt data based on the validation accuracy on the labeled public set. We repeat the process three times and report average and standard deviation of the test accuracy for the selected student prompt on the private test set. To improve utility, both teachers' and students' output probabilities from GPT3 are recalibrated using contextual calibration [58]. Results.We compare PromptPATE against three baselines: the lower bound baseline represented by a zero-shot prediction (\(\varepsilon=0\)), _i.e.,_ when the LLM is only prompted with an instruction, the private ensemble accuracy (\(\varepsilon=\infty\)), and the upper bound as a non-private one-shot prediction (\(\varepsilon=\infty\)) using the best example from the private data as prompt data. (To save costs, we select from \(200\) candidates.) Table 2 shows that, over all setups, PromptPATE achieves similar utility to the non-private baseline and significantly improves over zero-shot predictions--even at very strong privacy protection (\(\varepsilon<0.3\), \(\delta=10^{-6}\)). Our results also highlight that the distribution of the public data does not need to be very close to the distribution of the private data to yield high-utility student prompts. For example, they can be collected from different domains (dbpedia holds extracts from wikipedia while its public data agnews contains news articles) and for different tasks (tree aims to classify the topic of a given answer while qqp serves to measure the similarity of two questions). Still, with dbpedia being the private downstream data and agnews as public, we achieve an accuracy of 74.6%, which is significantly higher than the zero-shot baseline with 44.2%. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & Lower & Ens. & Upper & \multicolumn{5}{c}{Our PromptPATE} \\ & Bound & Acc. & Bound & \multicolumn{3}{c}{**IID** Transfer} & \multicolumn{3}{c}{**OOD** Transfer} \\ \cline{2-10} Private & \(\varepsilon=0\) & \(\varepsilon=\infty\) & \(\varepsilon=\infty\) & \multicolumn{1}{c}{Public} & \(\varepsilon\) & Test acc & Public & \(\varepsilon\) & Test acc \\ \hline sst2 & 76.3 & \(90.0\) & \(93.8\) & sst2 & \(0.178\) & \(88.8_{\pm 2.3}\) & imdb & \(0.187\) & \(87.2_{\pm 1.9}\) \\ agnews & 62.0 & \(72.8\) & \(78.2\) & agnews & \(0.248\) & \(71.7_{\pm 0.8}\) & arisetv & \(0.258\) & \(67.9_{\pm 1.7}\) \\ tree & 40.7 & \(57.6\) & \(58.7\) & tree & \(0.281\) & \(52.8_{\pm 1.5}\) & qqp & \(0.293\) & \(50.9_{\pm 3.5}\) \\ dbpedia & 44.2 & \(81.6\) & \(85.6\) & dbpedia & \(0.194\) & \(80.3_{\pm 1.3}\) & agnews & \(0.203\) & \(74.6_{\pm 1.4}\) \\ ss2 (C) & 82.0 & \(94.0\) & \(95.2\) & sst2 & \(0.147\) & \(92.3_{\pm 1.1}\) & imdb & \(0.154\) & \(92.7_{\pm 0.8}\) \\ agnews (4) & 62.0 & \(75.8\) & \(81.0\) & agnews & \(0.145\) & \(73.5_{\pm 1.2}\) & arisetv & \(0.145\) & \(69.6_{\pm 1.8}\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance of PromptPATE. We compare PromptPATE with three baselines: zero-shot (Lower Bound), the ensemble’s accuracy (Ens. Acc), and the non-private baseline (Upper Bound) on four classification benchmarks. 
We study two settings, (IID Transfer) when the public dataset is from the same and (OOD Transfer) from a different distribution than the private data. We find that PromptPATE achieves strong privacy protection (\(\varepsilon<0.3\) at \(\delta=10^{-6}\)) and utility close to the non-private baseline and significantly higher than the zero-shot baseline. Unless otherwise specified, the experiments are performed on GPT3-Babbage with one-shot prompts. Additionally, we also run experiments on GPT3-Curie for sst2 (C) and 4-shot prompts for agnews (4).** We also provide further insights into the privacy-utility trade-offs that can be achieved with PromptPATE in Figure 3 (right). Our results highlight that the privacy consumption increases with more public sequences queried to the ensemble, while the student model's test accuracy saturates after roughly \(100\) queries, i.e., already at \(\varepsilon<0.2\). This yields very favorable privacy-utility trade-offs, which we attribute mainly to the data efficiency of discrete prompts: even from as few as \(100\) labeled examples, a high-performing student prompt can be derived. Additionally, we observe that the per-query privacy costs of PromptPATE are relatively low, further benefiting the privacy-utility trade-off. The small privacy costs result from the high consensus between the teacher predictions4 (see Figure 3, left), which might result from all teachers relying on the same underlying LLM, just with different prompts. Footnote 4: As motivated in Section 4.2, high consensus reveals less information about the private decision boundary, and hence incurs smaller privacy costs. Scalability. Finally, we also study how PromptPATE scales with larger LLMs and more examples in the prompt. We experiment with a more performant LLM (GPT3-Curie) for sst2. Due to the higher per-query costs, we are not able to repeat this experiment for all datasets. Our results show that the performance of our private prompt increases together with the performance of the public prompt (92.7% accuracy on Curie vs. 87.2% on Babbage) while the privacy budget \(\varepsilon\) decreases (from \(0.178\) to \(0.147\)). To investigate flexibility in terms of the number of private examples provided as prompt data, we also experiment for agnews with 4-shot teachers. Similar to the non-private study [58] that reports improvements for agnews in the 4-shot setting over 1-shot, we observe that this improvement also translates to the private prompt. Our results indicate that with increasingly powerful LLMs and larger context windows, private prompting will improve further in terms of privacy-utility trade-offs. ## 6 Conclusions and Outlook By instantiating the first simple yet effective membership inference attack against prompted LLMs, we show that they leak private information about their prompt data. We propose private prompt learning as a holistic and broadly applicable new approach to mitigate this risk. We first introduce PromptDPSGD, which enables training soft prompts with privacy guarantees. In contrast to fine-tuning, soft prompts optimize significantly fewer parameters and do not require any update of LLM parameters or changes to the model architecture. As the first solution to private downstream learning with LLMs in black-box access scenarios, we propose PromptPATE. PromptPATE builds on the highly data-efficient discrete prompts and implements privacy through a noisy knowledge transfer. 
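To make the noisy knowledge transfer behind PromptPATE more tangible, the voting step of the Confident GNMAX procedure summarized in Section 4.2 can be sketched as below. The threshold `T` and the noise scales are generic placeholders rather than the hyperparameters used in the experiments, and the code is a simplified illustration, not the implementation from [12].

```python
import numpy as np

def confident_gnmax_vote(teacher_votes, num_classes, T, sigma1, sigma2,
                         rng=np.random):
    """teacher_votes: array of class indices, one vote per teacher prompt.
    Returns the noisy majority label, or None if the query is rejected."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Reject queries with low (noisy) consensus to limit privacy leakage.
    if counts.max() + rng.normal(0, sigma1) < T:
        return None
    # Otherwise answer with a noisy argmax over the vote histogram; the
    # Gaussian noise implements the differential privacy guarantee.
    noisy_counts = counts + rng.normal(0, sigma2, size=num_classes)
    return int(np.argmax(noisy_counts))
```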
Through our evaluation against two popular LLMs deployed behind commercial black-box APIs (GPT3 and Claude) [6, 3], we highlight that this method yields downstream performance that matches the one of non-private prompting at very strong privacy guarantees. As LLMs rapidly improve and increase in size, prompts are achieving consistently higher performance while fine-tuning becomes more challenging at this scale. This suggests that privacy protections for prompts will become even more important, especially as context sizes expand. Figure 3: **Additional Insights of PromptPATE. We perform ablation studies on GPT3-Babbage and use dbpedia as private and agnews as public data. _Left:_ Teacher consensus as the fraction of teachers who vote for the correct class over 500 public input sequences. PromptPATE achieves overall high consensus. _Right:_ Student accuracy as a function of the public query set’s size. Already with as few as 100 queries, we observe a plateau in accuracy which highlights PromptPATE’s data efficiency. ## Acknowledgments We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Amazon, Apple, CIFAR through the Canada CIFAR AI Chair, DARPA through the GARD project, Intel, Meta, NSERC through a Discovery Grant, the Ontario Early Researcher Award, and the Sloan Foundation. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. We also thank members of the CleverHans Lab for their feedback.
2303.09233
SwinVFTR: A Novel Volumetric Feature-learning Transformer for 3D OCT Fluid Segmentation
Accurately segmenting fluid in 3D volumetric optical coherence tomography (OCT) images is a crucial yet challenging task for detecting eye diseases. Traditional autoencoding-based segmentation approaches have limitations in extracting fluid regions due to successive resolution loss in the encoding phase and the inability to recover lost information in the decoding phase. Although current transformer-based models for medical image segmentation addresses this limitation, they are not designed to be applied out-of-the-box for 3D OCT volumes, which have a wide-ranging channel-axis size based on different vendor device and extraction technique. To address these issues, we propose SwinVFTR, a new transformer-based architecture designed for precise fluid segmentation in 3D volumetric OCT images. We first utilize a channel-wise volumetric sampling for training on OCT volumes with varying depths (B-scans). Next, the model uses a novel shifted window transformer block in the encoder to achieve better localization and segmentation of fluid regions. Additionally, we propose a new volumetric attention block for spatial and depth-wise attention, which improves upon traditional residual skip connections. Consequently, utilizing multi-class dice loss, the proposed architecture outperforms other existing architectures on the three publicly available vendor-specific OCT datasets, namely Spectralis, Cirrus, and Topcon, with mean dice scores of 0.72, 0.59, and 0.68, respectively. Additionally, SwinVFTR outperforms other architectures in two additional relevant metrics, mean intersection-over-union (Mean-IOU) and structural similarity measure (SSIM).
Sharif Amit Kamran, Khondker Fariha Hossain, Alireza Tavakkoli, Salah A. Baker, Stewart Lee Zuckerbrod
2023-03-16T11:16:02Z
http://arxiv.org/abs/2303.09233v2
# SwinVFTR: A Novel Volumetric Feature-learning Transformer for 3D OCT Fluid Segmentation ###### Abstract Accurately segmenting fluid in 3D volumetric optical coherence tomography (OCT) images is a crucial yet challenging task for detecting eye diseases. Traditional autoencoding-based segmentation approaches have limitations in extracting fluid regions due to successive resolution loss in the encoding phase and the inability to recover lost information in the decoding phase. Although current transformer-based models for medical image segmentation addresses this limitation, they are not designed to be applied out-of-the-box for 3D OCT volumes, which have a wide-ranging channel-axis size based on different vendor device and extraction technique. To address these issues, we propose SwinVFTR, a new transformer-based architecture designed for precise fluid segmentation in 3D volumetric OCT images. We first utilize a channel-wise volumetric sampling for training on OCT volumes with varying depths (B-scans). Next, the model uses a novel shifted window transformer block in the encoder to achieve better localization and segmentation of fluid regions. Additionally, we propose a new volumetric attention block for spatial and depth-wise attention, which improves upon traditional residual skip connections. Consequently, utilizing multi-class dice loss, the proposed architecture outperforms other existing architectures on the three publicly available vendor-specific OCT datasets, namely Spectralis, Cirrus, and Topcon, with mean dice scores of 0.72, 0.59, and 0.68, respectively. Additionally, SwinVFTR outperforms other architectures in two additional relevant metrics, mean intersection-over-union (Mean-IOU) and structural similarity measure (SSIM). Keywords:Fluid Segmentation Optical Coherence Tomography Swin Transformer OCT Segmentation. ## 1 Introduction The macula is part of the retinal subspace primarily responsible for central vision. Fluid buildup, or macular edema in retinal layers, is a common reason for blindness and retinal degeneration [7]. Possible factors include Drusen, Choroidal neovascularization (CNV), Age-related macular degeneration (AMD), and Diabetic retinopathy (DR) [7, 26]. Age-related macular degeneration causes irreversible blindness in approximately 8.7% of people worldwide and is a leading cause of vision loss. Furthermore, it is projected to increase threefold each decade for people over 40 in developed countries [28]. Similarly, Diabetic retinopathy affects one-third of every diabetic patient [25], which is 2.8% of the world's population and the second leading cause of blindness [2]. As a result, early diagnosis, localization, and segmentation of retinal layer fluid accumulation can help with effective treatments and personalized therapy. Optical coherence tomography (OCT) is a non-invasive retinal imaging method [23] that yields 3D volumetric cross-sectional images for viewing the morphology of retinal layers and underlying pathologies. Although the image is extracted through this approach, the differential diagnosis and fluid localization are supervised by an expert ophthalmologist. Manual annotation and segmentation of sub-retinal fluid can be time-consuming, subject to error, and tedious task. Hence, experts in the past proposed and incorporated many image-processing [5, 6] and machine learning [3, 20] techniques to alleviate this problem. 
However, those traditional approaches required handcrafted feature engineering and pruning, trouble learning spatial and depth features, and less generalization ability. With the advent of deep learning, automated segmentation tasks for medical imaging has surged in popularity, given their effectiveness to pixel-wise segment with high accuracy and precise surface estimation for volumetric and spatial imaging data. For retinal fluid segmentation, 2D U-Net-like auto-encoder models have been predominantly incorporated in early [21, 22] and recent works [30, 16, 29, 10]. These models achieved quite good results for multi-layer fluid segmentation in overall metrics. However, these architectures fail when segmenting fine fluid-layer boundaries and detecting small deposits of fluids. In contrast, recent vision transformer-based auto-encoders for fluid segmentation try to address this problem by utilizing multi-headed window attention [27], or shifted-window attention [19] for capturing local context, improving tiny fluid segmentation. The biggest drawback is these approaches are trained and tested on 2D slices, which only contain spatial features and have no context regarding inter-slice depth information. Though an early work [14] has utilized a 3D U-Net model for retinal fluid segmentation, any new development since then has been stagnant. **Our Contributions:** By taking all this into account, we propose a novel architecture termed Swin Volumetric Feature-learning Transformer (SwinVFTR) that utilizes a swin-transformer as an encoder and joins it to a 3D convolution-based decoder at distinct resolutions via novel volumetric spatial and depth attention block. Moreover, we modify the swin-transformer block with a Multi-receptive field residual block instead of MLP. Our model employs a channel-wise overlapped sampling technique to crop OCT volumes only in the depth axis, while retaining the spatial information. To validate our work, we compare four different 3D convolution and transformer-based architectures for medical image segmentation on three vendor-specific OCT datasets: Spectralis, Cirrus, and Topcon [1]. From Fig. 2, it is apparent that our architecture segments retinal fluid with high dice-score and mean-IOU. ## 2 SwinVFTR ### Channel-wise Volumetric Sampling Sampling OCT B-scans at different depths can affect the outcome of recognizing retinal disease pathology for accurate diagnosis [17]. Although U-Net-like architectures are flexible in handling OCT volumes of different depths, current transformer-based architecture cannot take OCTs with smaller depths. For example, UNETR [9] and Swin-UNETR [8], two state-of-the-art models for medical image segmentation, utilize patch-merging layer to downsample \(\times 32\). As a result, any OCT with less than 64 B-scans cannot be used out-of-the-box for these models. Since we are working on diversified OCT volumes with B-scans of 49 to 128, utilizing volumetric cropping would be ideal. However, we want to retain the spatial information while sampling a section of the original B-scans. So we introduce a channel-wise sampling technique that samples a \(H\times W\times D\) dimensional cropped image from an image with \(H\times W\times C\) dimensions, where \(D<C\) and \(D=32\). We also utilize one less swin-transformer and patch-merging block to make our downsampling \(\times 16\). While producing the output, we do channel-wise overlapped volume stitching (25% overlap) which is given in Fig. 1. 
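As a rough illustration of the channel-wise sampling and overlapped stitching described above, the following sketch crops a fixed window of B-scans along the depth axis for training and averages overlapping window predictions at inference. The window size of 32 and the 25% overlap follow the text; the function names and the `predict` interface are our own, and the sketch assumes volumes with at least 32 B-scans.

```python
import numpy as np

def sample_bscan_window(volume, depth=32, rng=np.random):
    """Randomly crop `depth` consecutive B-scans along the channel axis,
    keeping the full H x W spatial resolution (volume shape: H x W x C)."""
    c = volume.shape[-1]
    start = rng.randint(0, c - depth + 1)
    return volume[..., start:start + depth]

def stitch_windows(predict, volume, num_classes, depth=32, overlap=0.25):
    """Run `predict` on overlapping depth windows (25% overlap) and average
    the per-B-scan class probabilities to reconstruct the full volume."""
    h, w, c = volume.shape
    step = max(1, int(depth * (1 - overlap)))
    probs = np.zeros((num_classes, h, w, c))
    counts = np.zeros(c)
    starts = list(range(0, c - depth + 1, step))
    if starts[-1] != c - depth:
        starts.append(c - depth)  # make sure the last B-scans are covered
    for s in starts:
        probs[..., s:s + depth] += predict(volume[..., s:s + depth])
        counts[s:s + depth] += 1
    return probs / counts  # average predictions where windows overlap
```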
### Proposed Swin-Transformer Block Regular window-based multi-head self-attention (W-MSA), which was incorporated in Vision Transformer (ViT) [4], employs a single low-resolution window for constructing a global feature map and has quadratic computation complexity. In contrast, the Swin Transformer architecture proposed in [15] integrates shifted-window multi-head self-attention (SW-MSA), which builds hierarchical local feature maps and has linear computation complexity. Recently, Swin-UNETR [8] adopted this swin-transformer block without making any fundamental changes and achieved state-of-the-art dice scores in different 3D medical image segmentation tasks. Figure 1: Proposed SwinVFTR architecture which takes 3D OCT volume input with channel-wise sampling technique and outputs a 3D segmentation map of the fluid accumulation. The SwinVFTR encoder incorporates a new swin-transformer block consisting of Shifted window attention, Multi-headed attention and Multi-Receptive Field (MRF) sub-block with both convolution and dilated convolution layers. The encoder features are sequentially added with the decoder using a skip-connection consisting of a volumetric attention (VA) block. One of the most significant drawbacks of these blocks is the use of a multi-layer perceptron (MLP) block after the post-normalization layer. The MLP utilizes two linear (dense) layers, which are computationally more expensive than a 1D convolution with a small kernel. For example, a linear embedding output from a swin-transformer layer having dimension \(X\) (where \(X=H\times W\times D\)), input channels \(C_{in}\), and output channels \(C_{out}\) will have a total number of parameters of \(X\times C_{in}\times C_{out}\). In contrast, a 1D Conv with kernel size \(k\) and the same input and output will have fewer parameters, \(k\times C_{in}\times C_{out}\). Here, we do not consider any bias parameters, and \(k\in\{1,3\}\). On the other hand, using a 1D convolution alone will drastically affect the performance, given that small receptive fields only account for local features and not global ones. Hence, we employ a multi-branch residual block with vanilla and dilated convolutions, termed Multi-Receptive Field (MRF) block, to address this. So for subsequent layers, \(l\) and \(l+1\), the proposed swin-transformer block can be defined as Eq. 1. \[\begin{split} d^{l}&=W\text{-}MSA(\psi(d^{l-1}))\\ d^{l}&=MRF(\psi(d^{l}))+d^{l}\\ d^{l+1}&=SW\text{-}MSA(\psi(d^{l}))\\ d^{l+1}&=MRF(\psi(d^{l+1}))+d^{l+1}\end{split} \tag{1}\] In Eq. 1, the first sub-block of the swin-transformer consists of a LayerNorm (\(\psi\)) layer, a multi-head self-attention module (W-MSA), a residual connection (+), and a Multi-receptive field block (MRF). In a similar manner, the second sub-block of the swin-transformer consists of a LayerNorm (\(\psi\)) layer, a shifted-window multi-head self-attention module (SW-MSA), a residual skip-connection (+), and a Multi-receptive field block (MRF). Moreover, \(l\) signifies the layer number and \(d\) is the feature-map. The MRF block can be further elaborated as given in Eq. 2. \[\begin{split} x^{1}&=\delta(Conv(x_{in}))\\ x^{2}&=\delta(Depthwise\_Conv(x^{1}))\\ x^{3}&=\delta(Dilated\_Conv(x_{in}))\\ x_{out}&=\delta(Conv(x^{1}+x^{2}+x^{3}))\end{split} \tag{2}\] In Eq. 2, we first use a convolution with kernel size \(k=1\) and stride \(s=1\) to extract local features with a small receptive field. Then, the output of this layer is inserted into a depth-wise convolution layer (\(k=1\), \(s=1\)). 
In a parallel branch, a dilated convolution (\(k=3\), \(s=1\)) with dilation rate \(d=2\) is utilized to extract features with a larger receptive field. Finally, we add the outputs from these three convolution branches and then apply a convolution (\(k=1\), \(s=1\)) to get the final result. Here, \(\delta\) signifies the GELU activation, which is applied after all convolution layers. ### Encoder Before the encoder can take the input with dimensions \(H\times W\times D\), we transform the OCT volumetric images. We employ a patch partition step to create a sequence of 3D tokens with a dimension of \(\frac{H}{P}\times\frac{W}{P}\times\frac{D}{P}\), and these features are then projected to an embedding space with dimension \(C\). Specifically, our encoder has a non-overlapping patch with a size of 2 x 2 x 2 and a feature dimension of 2 x 2 x 2 x 1 = 16 by considering one channel of the OCT. We assign the embedding space size C=24 in our encoder. So the feature output of the patch-partition layer is \(\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}\times 24\). Likewise, each encoder stage downsamples the features by utilizing two swin-transformer blocks followed by a patch-merging block. So, the feature size changes from \(\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}\times C\) to \(\frac{H}{4}\times\frac{W}{4}\times\frac{D}{4}\times 2C\), from \(\frac{H}{4}\times\frac{W}{4}\times\frac{D}{4}\times 2C\) to \(\frac{H}{8}\times\frac{W}{8}\times\frac{D}{8}\times 4C\), and from \(\frac{H}{8}\times\frac{W}{8}\times\frac{D}{8}\times 4C\) to \(\frac{H}{16}\times\frac{W}{16}\times\frac{D}{16}\times 8C\), successively. We incorporate two swin-transformer blocks after the last patch-merging layer to finalize the encoder. ### Volumetric Attention Block In 3D UNet-like architectures [9, 14, 13], skip connections concatenate the encoder and decoder features to compensate for the loss of information. However, to make these features more robust, Swin-UNETR incorporated a residual attention block with two convolution layers similar to [18, 11]. The problem with this approach is that it utilizes regular convolution, which only applies attention spatially and ignores any channel-wise attention. To alleviate this, we propose a volumetric attention (VA) block consisting of separate branches. In the first branch, we have a \(3\times 3\times 3\) followed by a \(1\times 1\times 1\) convolution for spatial attention. In the following branch, we have a \(1\times 1\times 1\) depth-wise convolution followed by a \(1\times 1\times 1\) point-wise convolution for channel-wise attention. In the final branch, we have an identity function that copies the input features. Consequently, we add all of these features to generate our final output feature. ### Decoder Similar to our encoder, we design a symmetric decoder composed of multiple transposed convolution blocks and a volumetric concatenation layer between each stage of the encoder and decoder features. At each stage n (\(n\in\{1,2,3\}\)) in the encoder and bottleneck (\(n=4\)), the volumetric feature representations are reshaped to \(\frac{H}{2^{n}}\times\frac{W}{2^{n}}\times\frac{D}{2^{n}}\) and inserted into a residual convolution block with two \(3\times 3\times 3\) convolutions followed by an instance normalization layer. Each decoder's feature maps are doubled in size using a transposed convolution layer. Moreover, each encoder's skip feature maps through the VA blocks are concatenated with the outputs of the previous decoder. 
Finally, a residual convolution block is applied to the feature with two \(3\times 3\times 3\) convolutions followed by an instance normalization layer. The final segmentation output is generated using a \(1\times 1\times 1\) convolutional layer and a softmax activation function. ### Objective Function For our segmentation output we utilize the Dice coefficient loss given in Eq. 3. For dice-coefficient we use \(\epsilon=1.0\) in numerator and denominator for addressing the division by zero. Here, \(\mathbb{E}\) signifies, expected values given, \(y^{\prime}\) (prediction) and \(y\) (ground-truth). \[\mathcal{L}_{dice}=\mathbb{E}_{y^{\prime},y}\big{[}1-\frac{2\sum_{i=1}^{N}y^{ \prime}_{i}y_{i}+\varepsilon}{\sum_{i=1}^{N}y^{\prime}_{i}+\sum_{i=1}^{N}y_{i}+ \varepsilon}\big{]} \tag{3}\] ## 3 Experiments ### Dataset and Preprocessing For benchmarking, we use the RETOUCH public dataset [1], which contains three image sets from three unique vendor devices and, in total, has 70 volumes. Out of this, 24 volumes were obtained with Cirrus (Zeiss), 24 volumes with Spectralis (Heidelberg), and 22 volumes with T-1000 and T-2000 (Topcon) devices. The numbers of B-scans (volume depths) were 128, 49, and 128, with resolutions of 512\(\times\)1024, 512\(\times\)496, and 512\(\times\)885, respectively, for each of these vendor devices. We only resize Cirrus and Topcon volumes to \(512\times 512\) resolution. The volume contained three different fluids such as intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelial detachment (PED), which were manually annotated by an expert as separate ground truth volumes. As test datasets are no more publicly available, we separate the original image sets into training and test set. So for Cirrus and Spectralis, we had 19 training and 5 test volumes, whereas, for Topcon, we had 18 training and 4 test volumes. We further utilize 5-fold cross-validation to find the model with the highest dice score. For image transformations, we apply Random Intensitity Shift ( +/- 10 with 50% probability) and Random Channel-wise volumetric cropping for training our model. **Data use declaration and acknowledgment:** The dataset was released as part of Retouch Challenge. For usage, please refer to the data confidentiality agreement. Figure 2: SwinVFTR segments fluid with better precision than other 3D CNN and Transformer architectures. The row contains Cirrus, Spectralis and Topcon data-sets. Whereas the column contains ground-truths and segmentation maps for SwinVFTR, SwinUNETR, UNETR, ResUNet-3D and Attention-UNet-3D. Here, IRF, SRF, and PED fluids are colored as Red, Yellow and Blue. ### Hyper-parameter Initialization We used Adam optimizer [12], with learning rate \(\alpha=0.0001\), \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). We train with mini-batches with batch size, \(b=1\) for 600 epochs using PyTorch and MONAI library monai.io. It took between 8-12 hours to train our model on NVIDIA A30 GPU, depending on the data-set. Because Spectralis has a lower number of B-scans compared to Cirrus and Topcon, it takes less amount to train. The inference time is \(0.5\) second per volume. We provide ablation for hyper-parameter selections in subsection 3.4 and in the supplementary materials. The code repository is provided in this link. ### Quantitative Evaluation We compared our architecture with some best-performing 3D CNN and Transformer architectures, including ResUNet-3D [11], AttentionUNet-3D [18], UNETR [9] and SwinUNETR [8] as illustrated in Fig. 2. 
We trained and evaluated all four architectures using their publicly available source code on the three datasets. SwinUNETR utilizes a swin-transformer encoder as a backbone and a step-wise decoder with transposed convolution and residual blocks to upsample the features. In contrast, the UNETR employs a vision transformer with self-attention layers as encoders and deconvolution layers for upsampling. ResUnet-3D and Attention-UNet-3D are simple modifications of UNet 3D architectures, with the first using residual layers in the encoder and decoders and the second incorporating attention in the skip connections between them. In Fig. 2, we visualize segmentation results for intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelium detachments (PED). It is apparent from the figure that our model's prediction is more accurate than other transformer and CNN-based architectures, and the segmentation boundary is finer and less coarse than SwinUNETR and UNETR. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multirow{2}{*}{Year} & \multirow{2}{*}{SSIM} & \multirow{2}{*}{Mean-IOU} & \multicolumn{3}{c|}{Dice Score} & \multicolumn{2}{c}{Mean Dice} \\ \cline{6-10} & & & & & IRF & SRF & PED & w/o BG & w/ BG \\ \hline \multirow{8}{*}{Spectralis} & Attention-UNet-3D [18] & 2018 & 0.914 & 0.439 & 0.608 & 0.394 & 0.096 & 0.366 & 0.517 \\ & ResUNet-3D[11] & 2019 & 0.985 & 0.595 & 0.602 & 0.574 & 0.602 & 0.592 & 0.694 \\ & UNETR [9] & 2022 & 0.984 & 0.546 & 0.567 & 0.493 & 0.544 & 0.534 & 0.651 \\ & SwinUNETR [8, 24] & 2022 & 0.985 & 0.613 & 0.601 & 0.544 & 0.662 & 0.602 & 0.701 \\ & **SwinVETR** & 2023 & **0.987** & **0.625** & **0.624** & **0.578** & **0.670** & **0.624** & **0.718** \\ \hline \multirow{8}{*}{Cirrus} & Attention-UNet-3D [18] & 2018 & 0.928 & 0.446 & 0.664 & 0.472 & 0.011 & 0.382 & 0.527 \\ & ResUNet-3D [11] & 2019 & 0.983 & 0.490 & 0.648 & **0.622** & 0.012 & 0.427 & 0.570 \\ & UNETR [9] & 2022 & 0.987 & 0.487 & 0.635 & 0.594 & 0.081 & 0.436 & 0.577 \\ & SwinUNETR [8, 24] & 2022 & 0.986 & 0.452 & 0.682 & 0.338 & 0.131 & 0.384 & 0.537 \\ & **SwinVETR** & 2021 & **0.988** & **0.492** & **0.691** & 0.507 & **0.146** & **0.448** & **0.587** \\ \hline \multirow{8}{*}{Topconn} & Attention-UNet-3D [18] & 2018 & 0.894 & 0.412 & 0.526 & 0.542 & 0.083 & 0.383 & 0.519 \\ & ResUNet-3D [11] & 2019 & 0.974 & 0.526 & 0.534 & 0.648 & 0.419 & 0.534 & 0.649 \\ \cline{1-1} & UNETR [9] & 2022 & 0.979 & 0.495 & 0.592 & 0.416 & 0.427 & 0.478 & 0.607 \\ \cline{1-1} & SwinUNETR [8, 24] & 2022 & 0.980 & 0.483 & 0.583 & 0.451 & 0.331 & 0.455 & 0.590 \\ \cline{1-1} & **SwinVETR** & 2023 & **0.981** & **0.553** & **0.638** & 0.523 & **0.548** & **0.571** & **0.678** \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison on Spectralis, Cirrus, & Topcon [1]. Next, we quantitatively evaluate all five models using mean-intersection-over-union (mIOU), dice scores, and structural similarity index (SSIM) as shown in Table. 1. We also provide fluid-wise dice scores for IRF, SRF, and PED. Table. 1 shows that our model's overall dice score, SSIM, and mIOU far exceed other architectures. Although for Cirrus and Topcon, our model's segmentation performance is a little worse for SRF fluid against ResUNet-3D, the dice score PED is almost \(10\times\) better for Cirrus and \(1.2\times\) better for Topcon. 
Another essential evaluation we did was calculating the dice score with (w/ BG) and without background (w/o BG), as the background contains the majority of the pixels and can skew the results with high false-positive rates. As the table shows, our model outperforms other architectures with a higher dice score both with and without background. ### Ablation Study **Hyper-parameter search:** In supplementary Table 1, we provide the effects of different hyper-parameters such as the epoch \(e\), learning rate \(\alpha\), and batch size \(b\). Our search space is epoch \(e=\{100,300,600\}\), learning rate \(\alpha=\{10e-4,5e-5,10e-5\}\), and batch size \(b=\{1,2,3\}\). We found \(e=600\), \(\alpha=10e-4\), and \(b=1\) to be the best-performing hyper-parameters for SwinVFTR. **Effects of Architectural Change:** In supplementary Table 2, we provide a comparison of SwinVFTR's performance with and without the VA (Volumetric Attention) and MRF (Multi-receptive field) blocks. As the results show, our model with VA and MRF achieves the highest dice score and mean-IOU, and our model with VA only performs second best. The model without VA and MRF blocks incorporates residual convolutions for the skip connections instead of VA and a regular swin-transformer layer with MLP instead of MRF. ## 4 Conclusion In this paper, we proposed a new 3D transformer-based fluid segmentation architecture called SwinVFTR. Combining our novel channel-wise sampling technique, incorporating volumetric attention, and utilizing a multi-receptive field swin-transformer block, the architecture segments fluid volumes with high precision on three relevant metrics. We provide comparisons with state-of-the-art 3D CNN and Transformer models, which our model surpasses. Clinical experts can efficiently employ our architecture in various segmentation applications and ophthalmic modalities. The model is best suited for locating inter- and sub-retinal layer fluid deposits for early disease diagnosis and monitoring future prognosis. We hope to extend this work to other ophthalmic modalities. ## Acknowledgement This material is based upon work supported by the **** under Grant No. **** issued through ****.
2302.06229
Link Prediction with Attention Applied on Multiple Knowledge Graph Embedding Models
Predicting missing links between entities in a knowledge graph is a fundamental task to deal with the incompleteness of data on the Web. Knowledge graph embeddings map nodes into a vector space to predict new links, scoring them according to geometric criteria. Relations in the graph may follow patterns that can be learned, e.g., some relations might be symmetric and others might be hierarchical. However, the learning capability of different embedding models varies for each pattern and, so far, no single model can learn all patterns equally well. In this paper, we combine the query representations from several models in a unified one to incorporate patterns that are independently captured by each model. Our combination uses attention to select the most suitable model to answer each query. The models are also mapped onto a non-Euclidean manifold, the Poincar\'e ball, to capture structural patterns, such as hierarchies, besides relational patterns, such as symmetry. We prove that our combination provides a higher expressiveness and inference power than each model on its own. As a result, the combined model can learn relational and structural patterns. We conduct extensive experimental analysis with various link prediction benchmarks showing that the combined model outperforms individual models, including state-of-the-art approaches.
Cosimo Gregucci, Mojtaba Nayyeri, Daniel Hernández, Steffen Staab
2023-02-13T10:07:26Z
http://arxiv.org/abs/2302.06229v1
# Link Prediction with Attention Applied on Multiple Knowledge Graph Embedding Models ###### Abstract. Predicting missing links between entities in a knowledge graph is a fundamental task to deal with the incompleteness of data on the Web. Knowledge graph embeddings map nodes into a vector space to predict new links, scoring them according to geometric criteria. Relations in the graph may follow patterns that can be learned, e.g., some relations might be symmetric and others might be hierarchical. However, the learning capability of different embedding models varies for each pattern and, so far, no single model can learn all patterns equally well. In this paper, we combine the query representations from several models in a unified one to incorporate patterns that are independently captured by each model. Our combination uses attention to select the most suitable model to answer each query. The models are also mapped onto a non-Euclidean manifold, the Poincare ball, to capture structural patterns, such as hierarchies, besides relational patterns, such as symmetry. We prove that our combination provides a higher expressiveness and inference power than each model on its own. As a result, the combined model can learn relational and structural patterns. We conduct extensive experimental analysis with various link prediction benchmarks showing that the combined model outperforms individual models, including state-of-the-art approaches. Knowledge graph embedding, link prediction, ensemble, geometric integration + Footnote †: Both authors contributed equally to this research. Structural patterns refer to the arrangements of elements in a graph. A relation forms a hierarchical pattern when its corresponding graph is close to tree-like (Bordes and Senn, 2015), e.g., _(eagle, type-of, bird)_. For example, RotatE defines transformations as rotations \(g_{r}^{\text{RotatE}}(\mathbf{h})=\mathbf{h}\circ\mathbf{r}\) in Complex space (\(\circ\) is an element-wise complex product). In this way, RotatE can enforce both \(\mathbf{h}\circ\mathbf{r}=\mathbf{t},\mathbf{t}\circ\mathbf{r}=\mathbf{h}\) if \(\mathbf{r}^{2}=1\) and, thus, it is able to model symmetric relations. In Table 1, we present a summary of the query representations of some state-of-the-art baselines. We indicate whether a KGE can or cannot model a specific pattern. If it can model a pattern, we further include the number of constraints it has to satisfy to express this pattern. For instance, antisymmetry for RotatE requires two constraints, \(\mathbf{r}\neq\mathbf{-1}\) and \(\mathbf{r}\neq\mathbf{1}\), to be expressed. Further explanation of Table 1 can be found in Appendix A.4. Beyond the KGEs surveyed in Table 1, further works have defined query representations successfully dealing with different subsets of patterns, such as 5*E (Kang et al., 2017), AttE/H (Bordes and Senn, 2015), TransH (Shen et al., 2017), or ProjE (Kang et al., 2018). However, there is neither a single transformation function that can model all patterns nor a single approach that can take advantage of all the different transformation functions. In this paper, we tackle this problem and propose a general framework to integrate different transformation functions from several KGE models, \(\mathbb{M}\), in a low-dimensional geometric space such that heterogeneous relational and structural patterns are well represented. In particular, we employ spherical geometry to unify different existing representations of KGE queries, \((h,r,?)\). 
In our framework, representations of KGE queries, \(g_{r}^{i}(\mathbf{h})\) with \(i\in\mathbb{M}\), define the centers of hyperspheres, and candidate answers lie inside or outside of the hyperspheres, whose radii are derived during training. Plausible answers mostly lie inside the convex hull formed by the centers of the hyperspheres. Based on this representation, we learn how to pay attention to the most suitable representations of a KGE query. In this way, the learned attention adheres to the applicable patterns (see Figure 1 and Figure 2). For instance, given a KGE query (_Leader1_, _coAuthor_, \(?\)), attention will focus on the representation of this query defined by RotatE, as our framework has learned about the symmetry of the relation _coAuthor_. Likewise, TransE and RotatE will be preferred for the KGE query (_Student1_, _supervisedBy_, \(?\)), accounting for the pattern (\(X\), _isPhDIn_, \(Y\)), (\(Y\), _ledBy_, \(Z\)) \(\rightarrow\) (\(X\), _supervisedBy_, \(Z\)), while TransE will be favored for the KGE query (_Leader1_, _supervisedBy_, \(?\)) due to the anti-symmetry of _supervisedBy_. Furthermore, we also project our model onto a non-Euclidean manifold, the Poincare ball, to facilitate structural preservation. In summary, our key contributions are as follows: * We propose a spherical geometric framework for combining several existing KGE models. To our knowledge, this is the first approach to integrate KGE models taking advantage of the different underlying geometric transformations. * We utilize an attention mechanism to focus on query representations depending on the characteristics of the underlying relation in the query. Therefore, our method can support various relational patterns. Furthermore, structural patterns are captured by projecting the model onto the Poincare ball. * We present various theoretical analyses to show that our model subsumes various existing models. ## 2. Related Work We review the related works in three parts, namely the baseline models we used for combination, the models which provide other approaches for combinations, and models that combine spaces. ### KGE Model Baselines Various models (Kang et al., 2017; Kang et al., 2018; Kang et al., 2019) have been proposed for KGE in the last few years. Each KGE defines a score function \(f(h,r,t)\) which takes the embedding vectors of a triple \((\mathbf{h},\mathbf{r},\mathbf{t})\) and scores the triple. In our work, we integrate and compare against the following baselines: * TransE (Bordes and Senn, 2015) computes the score of a triple by computing the distance between the tail and the _translated_ head. Thanks to the translation-based transformation, this KGE is particularly suited for modeling inverse and composition patterns. * RotatE (Shen et al., 2017) uses a relation-specific rotation \(\mathbf{r}_{i}=e^{i\theta}\) to map each element of the head to the corresponding tail. RotatE can infer symmetrical patterns if the rotation angle is either \(0\) or \(\pi\). Besides, rotations are also effective in capturing antisymmetry, composition, or inversion. * DistMult (Shen et al., 2018) represents each relation as a diagonal matrix. Its score function captures the pairwise interaction between _the same dimension_ of the head and tail embeddings. Thus, DistMult treats symmetric relations well, but it also scores the inverse links of non-symmetric and antisymmetric relations just as highly. 
Figure 1. Subgraph exhibiting heterogeneous patterns (Kang et al., 2017).
* ComplEx (Shen et al., 2018) extends DistMult in the complex space to effectively capture symmetric and antisymmetric patterns. * AttH (Bordes and Senn, 2015) combines relation-specific rotations and reflections using hyperbolic attention and applies a hyperbolic translation. Rotation can capture antisymmetrical and symmetrical patterns, reflection can naturally represent symmetrical relations, while the hyperbolic translation can capture hierarchy. We also compared our models against AttE (Bordes and Senn, 2015), a variant of AttH with curvature set to zero. ### KGEs Combination _Combinations between KGEs of the same kind._ Authors in (Kolmogorov, 2017) showed that, under some conditions, the ensemble generated from the combination of multiple runs of low-dimensional embedding models _of the same kind_ outperforms the corresponding individual high-dimensional embedding model. Unlike our approach, the ensemble model will still be able to express only a subset of existing logical patterns. _Combination between different KGE models._ Prior works (Kolmogorov, 2017) proposed to combine different knowledge graph embeddings through score concatenation to improve the performance in link prediction. (Kolmogorov, 2017) proposed a relation-level ensemble, where the combination of individual models is performed separately for each relation. A recent work (Sutskever et al., 2019) proposed to combine the scores of different embedding models by using a weighted sum. Such methods combine scores either per model or per relation, while we provide a query attention mechanism for the combination. A different approach has been proposed in MulDE (Mullen et al., 2019), where link prediction is improved by _correcting_ the prediction of a "student" embedding through the use of several pre-trained embeddings that act as "teachers". The student embedding can be considered to constitute an ensemble model. However, this ensemble cannot steer decisions towards the strengths of individual models but can only decide randomly or based on majority guidance by teachers. Further ensemble approaches between KGEs and machine learning models can be found in Appendix A.3. ### Combination Of Spaces A different line of research aims at improving link prediction performance by combining different geometrical spaces. (Kolmogorov, 2017) improves link prediction by combining Hyperbolic, Spherical, and Euclidean spaces. Similarly, (Kolmogorov, 2017) embedded knowledge graphs into an Ultra-hyperbolic manifold, which generalizes Hyperbolic and Spherical manifolds. On the other hand, we combine queries rather than geometric spaces. ## 3. Proposed Approach In this section, we present our geometric query integration model using Euclidean and Hyperbolic geometries, and introduce our approach in the following four items: a) entity, relation, and query representation, b) spherical query embedding, c) Riemannian attention-based query combination, and d) expressivity analysis. a) **Entity, Relation and Query Embedding.** Let \(\mathcal{E},\mathcal{R}\) be the entity and relation sets. We represent each entity \(e\in\mathcal{E}\) and relation \(r\in\mathcal{R}\) as \(d_{e}\)- and \(d_{r}\)-dimensional vectors, which are denoted by \(\mathbf{e}\) and \(\mathbf{r}\), respectively. Thus, each triple \((h,r,t)\) has a vector representation \((\mathbf{h},\mathbf{r},\mathbf{t})\), where \(\mathbf{h},\mathbf{t}\) are the corresponding entity embeddings. 
We split each triple \((h,r,t)\) into two parts, namely the tail query \(q=(h,r,?)\) and the candidate answer \(t\), and represent their embeddings by \(\mathbf{q},\mathbf{t}\) respectively. In our model, we aim at combining the queries from several existing KGE models that are specified in Table 1. We denote the query representation set by \(\mathcal{Q}=\{\mathbf{q}_{i}|\mathbf{q}_{i}=g_{r}^{i}(\mathbf{h}),i\in\mathbb{M}\}\) where \(\mathbb{M}\) is a set of several existing KGE models such as TransE, RotatE, ComplEx, DistMult, etc, and the function \(g_{r}^{i}(\mathbf{h})\) is a relation-specific transformation from a head embedding to a query representation for model \(i\). Note that we assume that \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Model & Query & Embeddings & Symmetry & Antisymmetry & Inversion & Composition & Hierarchy \\ \hline TransE (Sutskever et al., 2019) & \(\mathbf{q}=\mathbf{h}+\mathbf{r}\) & \(\mathbf{q},\mathbf{h},\mathbf{r}\in\mathbb{R}^{d}\) & \(\mathbf{\times}\) & \(\mathbf{\checkmark}=\mathbf{0}\) & \(\checkmark-\mathbf{0}\) & \(\checkmark-\mathbf{0}\) & \(\mathbf{\times}\) \\ RotatE (Sutskever et al., 2019) & \(\mathbf{q}=\mathbf{h}\circ\mathbf{r}\) & \(\mathbf{q},\mathbf{h},\mathbf{r}\in\mathbb{C}^{d}\) & \(\checkmark-\mathbf{2}\) & \(\checkmark-\mathbf{2}\) & \(\checkmark-\mathbf{2}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ ComplEx (Sutskever et al., 2019) & \(\mathbf{q}=\mathbf{h}\times\mathbf{r}\) & \(\mathbf{q},\mathbf{h},\mathbf{r}\in\mathbb{C}^{d}\) & \(\checkmark-\mathbf{2}\) & \(\checkmark-\mathbf{2}\) & \(\checkmark-\mathbf{2}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ DistMult (Sutskever et al., 2019) & \(\mathbf{q}=\mathbf{h}\cdot\mathbf{r}\) & \(\mathbf{q},\mathbf{h},\mathbf{r}\in\mathbb{R}^{d}\) & \(\checkmark-\mathbf{0}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ RefH (Dong et al., 2019) & \(\mathbf{q}=\mathbf{Ref}(\mathbf{\theta}_{r})\mathbf{h}\) & \(\mathbf{q},\mathbf{h}\in\mathbb{H}^{d}\) & \(\checkmark-\mathbf{0}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\checkmark-\mathbf{0}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Specification of query representation of baseline and state-of-the-art KGE models and respective pattern modeling and inference abilities. Atte/H include both rotation (RotatE) and reflection (RefH), hence are not mentioned in the table to avoid repetitions. \(\circ\) is element-wise complex product together with relation normalization. Figure 2. The overall architecture of our proposed model with spherical geometry. We combine query representations of TransE, RotatE, AttE (with Reflection), and DistMult (per dimension scaling). The left part shows query integration with attention to TransE model. The right part represents query combination without attention. different models lie on the same space. In this paper, we stay in Euclidean space for query combination. In this regard, we can combine models lying either directly in Euclidean space (e.g., TransE and DistMult) and models that can be rewritten to lie in the Euclidean space (e.g., models lying in Complex or Hypercomplex spaces as ComplEx, RotatE, and QuatE by assuming \(\mathbb{R}^{4}=\mathbb{R}^{2}\times\mathbb{R}^{2}=\mathbb{C}^{1}\times\mathbb{ C}^{1}\), where \(\mathbb{C}^{d},\mathbb{R}^{d}\) are \(d\)-dimensional Complex and Euclidean spaces). We then project such query vectors on a hyperbolic manifold to handle hierarchical patterns. 
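As an illustration of the relation-specific transformations \(g_{r}^{i}(\mathbf{h})\) summarized in Table 1, the following is a minimal sketch of the TransE, DistMult, and RotatE query functions, with the complex rotation realized over pairs of real dimensions. The dimension, the random embeddings, and the distance-based scoring at the end are assumptions made purely for illustration, not the training setup used in the paper.

```python
import numpy as np

d = 8                      # illustrative embedding dimension (even, so the rotation can pair dims)
h = np.random.randn(d)     # head entity embedding
r = np.random.randn(d)     # relation embedding

def q_transe(h, r):
    """TransE query: q = h + r."""
    return h + r

def q_distmult(h, r):
    """DistMult query: q = h * r (element-wise scaling by the diagonal relation)."""
    return h * r

def q_rotate(h, theta):
    """RotatE query: q = h o r with |r| = 1, viewing R^d as C^{d/2}."""
    hc = h[0::2] + 1j * h[1::2]       # pair up real dimensions as complex coordinates
    qc = hc * np.exp(1j * theta)      # rotate each complex coordinate by its angle theta
    q = np.empty_like(h)
    q[0::2], q[1::2] = qc.real, qc.imag
    return q

theta = np.random.uniform(-np.pi, np.pi, size=d // 2)
queries = {
    "TransE": q_transe(h, r),
    "DistMult": q_distmult(h, r),
    "RotatE": q_rotate(h, theta),
}
# A candidate tail t can then be scored against each query, e.g. by the distance ||q - t||.
t = np.random.randn(d)
scores = {name: np.linalg.norm(q - t) for name, q in queries.items()}
```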
b) **Spherical Query Embedding.** In this part, first, we propose a spherical query embedding to represent each query as a sphere whose center is the vector embedding of the query. This sphere defines the answer space of the query. Second, we propose an approach to combine the query representations of several already existing embedding models into one spherical query representation to enhance the modeling of heterogeneous patterns. In _"radius and ranking"_, we will show that the spherical representation is connected to the ranking metric Hits@k. In particular, the top k candidate answers for a query \(q\) are embedded in a sphere whose center is a combination of the vector embeddings \(\mathbf{q}_{i}\) of query \(q\). To practically enforce this, the radius in our spherical query embedding needs to be set. Therefore, in _"radius and loss"_, we will show that a loss function can enforce the improvement of Hits@k by pushing the top k candidate answers of a query inside the sphere. Here, we formalize the combination of \(n\) spherical KGEs. Let \(\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{n}\in\mathcal{Q}\) be the \(n\) vector query embeddings of a query \(q=(h,r,?)\) from \(n\) distinct KGE models, and \(\mathbf{a}=\mathbf{t}\) be the embedding of the candidate answer. We represent each query as a hypersphere with a pair \(\mathbf{q}_{i}^{c}=(\mathbf{q}_{i},\epsilon_{i}),\ \mathbf{q}_{i}\in\mathcal{Q}\), where \(\mathbf{q}_{i}\in\mathbb{R}^{d}\) is the center of the \(i\)th sphere associated with the \(i\)th model and \(\epsilon_{i}\) is the radius. By using the function \[p(\mathbf{q}_{i},\mathbf{a})=\|\mathbf{a}-\mathbf{q}_{i}\|,\quad\mathbf{q}_{i}\in\mathcal{Q}, \tag{1}\] we define the answer space \(\mathcal{A}\) and non-answer space \(\mathcal{N}\) as decision boundaries in the embedding space for each query as follows: \[\begin{cases}\mathcal{A}_{i}&=\{\mathbf{e}\in\mathbb{R}^{d}\ |\ \|\mathbf{e}-\mathbf{q}_{i}\|\leq\epsilon_{i}\},\\ \mathcal{N}_{i}&=\{\mathbf{e}\in\mathbb{R}^{d}\ |\ \|\mathbf{e}-\mathbf{q}_{i}\|>\epsilon_{i}\}.\end{cases} \tag{2}\] In this case, all embeddings of answers \(a\) are supposed to lie on or inside a sphere with a radius of \(\epsilon_{i}\) and center \(\mathbf{q}_{i}\), i.e., \(\mathbf{a}\in\mathcal{A}_{i}\), and the ones which are not answers lie outside of the sphere [24; 47]. We combine the spherical query embeddings of several existing KGE models into one spherical query embedding as follows: _Combination._ Given the vector embeddings \(\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{n}\in\mathcal{Q}\), we can set a radius \(\epsilon_{i}^{k}\) for each \(\mathbf{q}_{i}\) such that the answer space \(\mathcal{A}_{i}\) covers the top \(k\) candidate answers: \[\begin{cases}\mathcal{A}_{1}&=\{\mathbf{e}\in\mathbb{R}^{d}\ |\ \|\mathbf{e}-\mathbf{q}_{1}\|\leq\epsilon_{1}^{k}\},\\ &\vdots\\ \mathcal{A}_{n}&=\{\mathbf{e}\in\mathbb{R}^{d}\ |\ \|\mathbf{e}-\mathbf{q}_{n}\|\leq\epsilon_{n}^{k}\}.\end{cases} \tag{3}\] Summing up the above inequalities, we have \(\|\mathbf{a}-\mathbf{q}_{1}\|+\ldots+\|\mathbf{a}-\mathbf{q}_{n}\|\leq\epsilon_{1}^{k}+\ldots+\epsilon_{n}^{k}\). 
Because of the triangle inequality of the metric \(\|\cdot\|\), this can be extended to the inequality \(\|(\mathbf{a}-\mathbf{q}_{1})+\ldots+(\mathbf{a}-\mathbf{q}_{n})\|\leq\epsilon_{1}^{k}+\ldots+\epsilon_{n}^{k}\), which concludes \(\left\|\mathbf{a}-\frac{\mathbf{q}_{1}+\ldots+\mathbf{q}_{n}}{n}\right\|\leq\frac{\epsilon_{1}^{k}+\ldots+\epsilon_{n}^{k}}{n}\). Therefore, the combined spherical query embedding is the spherical embedding \(\mathbf{q}_{E}^{c}=(\mathbf{q}_{E},\epsilon_{E})\) where \[\begin{cases}\mathbf{q}_{E}&=\frac{\mathbf{q}_{1}+\ldots+\mathbf{q}_{n}}{n},\\ \epsilon_{E}&=\frac{\epsilon_{1}^{k}+\ldots+\epsilon_{n}^{k}}{n}.\end{cases} \tag{4}\] This leads to the following top \(k\) candidate answer space of the combined spherical query embedding: \[\mathcal{A}_{E}=\{\mathbf{e}\in\mathbb{R}^{d}\ |\ \|\mathbf{e}-\mathbf{q}_{E}\|\leq\epsilon_{E}\}. \tag{5}\] Figure 2 (top right) shows the query representations and candidate answer spaces of TransE, RotatE, RefE, and DistMult, together with the combined query (without attention to a particular model). The combined query mainly lies inside the convex hull of all the models within the answer space. We later show that most answers lie within the convex hull covered by the combined query. Therefore, the combined model takes advantage of all models. Before theoretically justifying this, we relate the radius \(\epsilon\) in the spherical query embedding to ranking metrics, as well as to the practical way of modeling the radius using the loss function, in the following parts. _Radius and Ranking._ Most KGE models are evaluated based on ranking metrics such as Hits@k [5]. Here we explain the connection between the ranking metrics and the radius in our spherical query embedding. Because the overall ranking is computed by taking the average over the ranks of all test triples, we explain the connection between ranking and our model by considering an individual test triple. During testing, for each given positive test triple \((h,r,t)\), the tail \(t\) is replaced one by one with all entities \(e\in\mathcal{E}\). We denote by \(\mathbb{T}_{e}=(h,r,e)\) the corrupted triple generated by replacing \(t\) by \(e\). Therefore, \(\mathcal{T}=\{\mathbb{T}_{e}|e\in\mathcal{E}-\{t\}\}\) is the set of all corrupted triples generated from the correct triple \((h,r,t)\). After computing the score of each triple in \(\mathcal{T}\) and sorting them based on their scores in descending order, we select the top \(k\) highest-scoring samples and generate a new set \(\mathcal{T}_{k}\) containing these samples. The spherical query embedding \(\mathbf{q}_{E}^{c}=(\mathbf{q}_{E},\epsilon_{E})\) associated with a query \(q=(h,r,?)\) defines a spherical answer space \(\mathcal{A}_{E}\) that contains the vector embeddings \(\mathbf{e}\) of the top \(k\) entities \(e\in\mathcal{T}_{k}\). That is, \(\mathcal{T}_{k}\) contains the top \(k\) candidates for a query \(q\), and \(\mathcal{A}_{E}\) in Equation 5 is the candidate answer embedding space. We want the vectors of the answers in \(\mathcal{T}_{k}\) to lie inside \(\mathcal{A}_{E}\), and to be as close as possible to the query center to improve ranking results. To enforce this, we define a loss function to optimize the embeddings, as explained below. _Radius and Loss Function._ In this part, we show that the existing loss functions implicitly enforce a particular radius around the vector query embedding \(\mathbf{q}_{E}\). 
Let us focus on the widely used loss function shown in the following [6]: \[\mathcal{L}=\sum_{e\in\mathcal{E}}\log(1+\exp(y_{e}(-p(\mathbf{q}_{i},\mathbf{e})+\delta_{h}+\delta_{e}))), \tag{6}\] where \(y_{e}=1\) if \(e=a\), and \(y_{e}=-1\) if \(e\neq a\), and \(\delta_{h}\) and \(\delta_{e}\) are trainable entity biases. Minimization of this loss function leads to maximizing the function \(-p(\mathbf{q}_{i},\mathbf{e})+\delta_{h}+\delta_{e}\). This can be approximately represented as \(-p(\mathbf{q}_{i},\mathbf{e})+\delta_{h}+\delta_{e}\geq M\), where \(M\) is a large number. Therefore, we have \(p(\mathbf{q}_{i},\mathbf{e})\leq\delta_{h}+\delta_{e}-M=\delta_{he}-M=\epsilon_{i}\), which forms boundaries for classification as well as ranking. In the next part, we theoretically show that \(\mathbf{q}_{E}\) lies within the convex hull of the set of vectors \(\{\mathbf{q}_{1},\ldots,\mathbf{q}_{n}\}\). Thus, the combined model takes advantage of each model in ranking. _Theoretical Analysis._ Equation 5 indicates that if the query is represented by \((\mathbf{q}_{E},\epsilon_{E})\), then the score given by the combined model to a plausible answer is lower than the average of the scores given by the individual models, and higher than the lowest individual model score because, without loss of generality, we have \[\min(p(\mathbf{q}_{1},\mathbf{e}),p(\mathbf{q}_{2},\mathbf{e}))\leq p(\mathbf{q}_{E},\mathbf{e})\leq\max(p(\mathbf{q}_{1},\mathbf{e}),p(\mathbf{q}_{2},\mathbf{e})). \tag{7}\] This equation shows that for a particular \(k\), the combined model gets a better score than the worst model, but it gets a lower score than the best one. However, by increasing \(k\), the combined model covers the answers provided by both models because most of the answers lie in the convex hull between the queries (as will be proved later), and the combined model with arbitrarily large \(k\) covers the answers represented by each model. Therefore, the combined model improves Hits@k with a sufficiently large \(k\). Later in this section, we present the attention-based model which enables us to improve Hits@k for small \(k\). The following proposition states that the best embedding for an answer to a query lies in the convex hull of the query embeddings given by two models. This implies that if two models are trained jointly with the combined model, the answers of each query lie between the centers of the two spheres associated with the two embeddings of the query. This allows the answer space of the combined spherical query embedding to cover the answer embedding from each individual model. This can be generalized to an arbitrary number of models. **Proposition 3.1**: _Let \(\mathbf{q}_{1}\) and \(\mathbf{q}_{2}\) be two query embeddings for a query \(q\). Then, the following two statements are equivalent for every vector \(\mathbf{a}\) in the vector space:_ \[\begin{cases}\mathbf{a}=\operatorname*{argmin}_{\mathbf{e}}(p(\mathbf{q}_{1},\mathbf{e})+p(\mathbf{q}_{2},\mathbf{e})),\\ \mathbf{a}\text{ lies in the convex hull of vectors }\mathbf{q}_{1}\text{ and }\mathbf{q}_{2}.\end{cases} \tag{8}\] c) **Riemannian Attention-Based Query Combination.** _Weighted Combined Query Embedding._ A consequence of Proposition 3.1 is that the combined query embedding can improve the performance when \(k\) is sufficiently large (e.g., Hits@20). However, for a low \(k\) (e.g., Hits@1) the performance is degraded because one model gets a better ranking, and the combined model with an average query does not cover it. 
In addition, among several models, it is possible that some models return wrong answers, which might also influence the combined model. Therefore, allowing the combined spherical query embedding \(\mathbf{q}_{E}\) to slide towards \(\mathbf{q}_{1}\) or \(\mathbf{q}_{2}\) is beneficial. Hence, without loss of generality, we combine two query embeddings as the convex combination of the inequalities: \[\begin{cases}\alpha\|\mathbf{a}-\mathbf{q}_{1}\|\leq\alpha\epsilon_{1}^{k},\\ \beta\|\mathbf{a}-\mathbf{q}_{2}\|\leq\beta\epsilon_{2}^{k},\quad\alpha,\beta\geq 0,\ \alpha+\beta=1.\end{cases} \tag{9}\] By computing this convex combination, we have \(\alpha\|\mathbf{a}-\mathbf{q}_{1}\|+\beta\|\mathbf{a}-\mathbf{q}_{2}\|\leq\alpha\epsilon_{1}^{k}+\beta\epsilon_{2}^{k}\). This inequality implies \(\|\alpha\mathbf{a}-\alpha\mathbf{q}_{1}+\beta\mathbf{a}-\beta\mathbf{q}_{2}\|\leq\alpha\|\mathbf{a}-\mathbf{q}_{1}\|+\beta\|\mathbf{a}-\mathbf{q}_{2}\|\leq\alpha\epsilon_{1}^{k}+\beta\epsilon_{2}^{k}\), which subsequently leads to \[\|\mathbf{a}-(\alpha\mathbf{q}_{1}+\beta\mathbf{q}_{2})\|\leq\alpha\epsilon_{1}^{k}+\beta\epsilon_{2}^{k}. \tag{10}\] Therefore, the combined spherical query embedding is \(\mathbf{q}_{E}^{c}=(\mathbf{q}_{E},\epsilon_{E}^{k})\) where \(\mathbf{q}_{E}=(\alpha\mathbf{q}_{1}+\beta\mathbf{q}_{2})\) and \(\epsilon_{E}^{k}=\alpha\epsilon_{1}^{k}+\beta\epsilon_{2}^{k}\). This combination is generalized to \(n\) models: \[\|\mathbf{a}-\sum\alpha_{i}\mathbf{q}_{i}\|\leq\sum\alpha_{i}\epsilon_{i}^{k}. \tag{11}\] _Attention Calculation._ Given a combined spherical query embedding \(\mathbf{q}_{E}^{c}=(\mathbf{q}_{E},\epsilon_{E})\) with \[\mathbf{q}_{E}=\sum\alpha_{i}\mathbf{q}_{i},\quad\epsilon_{E}^{k}=\sum\alpha_{i}\epsilon_{i}^{k}, \tag{12}\] we can compute \(\alpha_{i}\) by providing an attention mechanism [6] \[\alpha_{i}=\frac{\exp(g(\mathbf{q}_{i}))}{\sum_{j}\exp(g(\mathbf{q}_{j}))}, \tag{13}\] where \(g(\mathbf{x})=\mathbf{w}\mathbf{x}\) is a function with a trainable parameter \(\mathbf{w}\). We call this version of our model Spherical Embedding with Attention (SEA). _Riemannian Query Combination._ We next extend our attention-based query combination to Riemannian manifolds to model both relational patterns (via the various transformations used in different models) and structural patterns, such as hierarchy, via the manifold (e.g., the Poincare ball). Similarly to [6], we perform attention on the tangent space. We consider all models in Euclidean space and combine their query embeddings. The resulting query embedding on the tangent space is then projected to the manifold via the exponential map. This attention-based model combination is defined as follows: \[\begin{cases}\mathbf{q}_{E}^{euc}&=\sum_{i}\frac{\exp(g(\mathbf{q}_{i}))}{\sum_{j}\exp(g(\mathbf{q}_{j}))}\mathbf{q}_{i},\\ \mathbf{q}_{E}^{M}&=\exp_{0}(\mathbf{q}_{E}^{euc}).\end{cases} \tag{14}\] We compute the score as \(p(q,a)=d(\mathbf{q}_{E}^{M}\oplus\mathbf{r},\mathbf{a})\), where \(\mathbf{h},\mathbf{r},\mathbf{t},\mathbf{q}\) are points on a manifold \(\mathcal{M}\), \(\exp_{0}(\cdot)\) is the exponential map from the origin, and \(\oplus\) is the Mobius addition. 
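To illustrate Equations 12–13, the following is a minimal sketch of the Euclidean attention-based combination: softmax attention over the model-specific query centers yields the combined center and radius, which can then be used for the membership test of Equation 5. The random embeddings and the untrained attention vector \(\mathbf{w}\) are placeholders for illustration only; in the actual model, these quantities are learned jointly during training.

```python
import numpy as np

def softmax(x):
    x = x - x.max()          # numerically stable softmax
    e = np.exp(x)
    return e / e.sum()

def combine_queries(queries, radii, w):
    """Attention-based combination (Eq. 12-13): alpha_i proportional to exp(w . q_i)."""
    Q = np.stack(queries)                     # (n_models, d) query centers q_i
    alpha = softmax(Q @ w)                    # attention weights over the model-specific queries
    q_E = alpha @ Q                           # combined center: sum_i alpha_i q_i
    eps_E = float(alpha @ np.asarray(radii))  # combined radius: sum_i alpha_i eps_i^k
    return q_E, eps_E, alpha

d = 8
queries = [np.random.randn(d) for _ in range(3)]   # e.g. q_i from TransE, RotatE, DistMult
radii = [1.0, 0.8, 1.2]                            # illustrative eps_i^k values
w = np.random.randn(d)                             # trainable attention parameter (untrained here)

q_E, eps_E, alpha = combine_queries(queries, radii, w)
candidate = np.random.randn(d)
inside = np.linalg.norm(candidate - q_E) <= eps_E  # answer-space membership test (Eq. 5)
```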
In terms of the Poincare ball, the manifold, exponential map, distance, and Mobius addition are defined as follows [2, 6]: \[\begin{cases}\mathcal{M}&=\{\mathbf{p}\in\mathbb{R}^{d}\ |\ c\|\mathbf{p}\|^{2}<1\},\\ \exp_{0}(\mathbf{v})&=\tanh(\sqrt{c}\|\mathbf{v}\|)\frac{\mathbf{v}}{\sqrt{c}\|\mathbf{v}\|},\\ d^{c}(\mathbf{p},\mathbf{q})&=\frac{2}{\sqrt{c}}\tanh^{-1}(\sqrt{c}\|-\mathbf{p}\oplus\mathbf{q}\|),\\ \mathbf{p}\oplus\mathbf{q}&=\frac{(1+2c\langle\mathbf{p},\mathbf{q}\rangle+c\|\mathbf{q}\|^{2})\mathbf{p}+(1-c\|\mathbf{p}\|^{2})\mathbf{q}}{1+2c\langle\mathbf{p},\mathbf{q}\rangle+c^{2}\|\mathbf{p}\|^{2}\|\mathbf{q}\|^{2}}.\end{cases} \tag{15}\] It is important to notice that a model can infer a pattern inherently or infer a pattern under a certain condition (see Table 1). Our model aims to take advantage of the inference power of multiple models on heterogeneous patterns with a minimum of such conditions by providing attention mechanisms per relation type forming different patterns. Note that our model is not influenced by models that are incapable of handling particular patterns, because the attention can be learned as zero for those models. Overall, our combined model can inherit the capabilities mentioned in Table 1 and ignore the incapabilities of the other models, as shown in Theorem 3.2 and Theorem 3.3. Hence, if our model is executed on a dataset containing only a single pattern, we do not expect to outperform the individual models being combined, but rather to achieve performance competitive with the best of them. Proofs of the propositions can be found in Appendix A.1. ## 4. Experiments In this section, we conduct extensive evaluations to show the effectiveness of our proposed approach. To do so, we first introduce the utilized datasets, followed by the baselines selected for the combination and the comparison. We then present the experimental setup and hyper-parameter settings. The results and analysis are presented in three parts: comparison with the individual baselines, comparison with other combination models, and comparison with models in the Ultrahyperbolic space. Finally, we provide several analyses to show the role of attention in learning and inference over various patterns for different kinds of relations and models. ### Dataset We use the following standard benchmarks for the evaluation: * **Wordnet**: WN18RR (Kang et al., 2017) is a subset of WN18, which contains a mixture of symmetric and antisymmetric relational patterns, as well as hierarchical structural patterns. _see also_ and _hypernym_ are examples of symmetry and hierarchy in this dataset. WN18RR contains 11 relations, 86,835 training triples, and 40,943 entities. Compared to the other datasets in the KGE literature, WN18RR is considered sparse; * **FreeBase**: FB15k-237 (Zhou et al., 2018) is the subset of FB15k obtained by removing the leakage of inverse relations (Kang et al., 2017). FB15k-237 is less sparse than WN18RR, and mainly contains composition patterns. 
It contains 237 relations, 272,115 triples, and 14,541 entities. * **NELL**: NELL-995 (Zhou et al., 2018) contains 75,492 entities and 200 relations, having \(\sim 22\%\) hierarchical relations. We use a subset of NELL-995 with 100% hierarchy, created in (Beng et al., 2017). ### Baseline In this section, we aim to show experimentally that the geometric combination of several existing KGE models improves their performance. To this end, we select a subset of KGEs in Euclidean, Complex, and Hyperbolic space with different capabilities to show that we can combine a wide range of models. In particular, we select a subset of TransE, DistMult, ComplEx, RotatE, AttE (only reflection), and AttH (hyperbolic projection operator) and compare our combined models against such baselines. We also compare our models with two additional state-of-the-art KGEs: in high dimension, TuckER (Beng et al., 2017), and in low dimension, MuRP (Beng et al., 2017), to show that our models can outperform models that were not combined. Furthermore, we also compare our model with a recent top model for combining several KGEs, namely MulDE (Zhou et al., 2018), because it uses a similar set of KGEs for the combination, similar dimensions, and some of the benchmarks we used. Additionally, we will show that our model gets comparable performance with UltraE (Zhou et al., 2018), a model on the Ultrahyperbolic space. ### Experimental Setup #### Evaluation Metrics We use the popular ranking metrics (Zhou et al., 2018), namely Mean Reciprocal Rank (MRR) and Hits@k, k = 1, 3, 10. Given a set of test triples \(\mathcal{T}=\{(h,r,t)\}\), for each test triple \(p=(h,r,t)\), we compute its rank as follows: we first corrupt the head entity by replacing it with all possible entities in the KG, say \(e^{\prime}\in\mathcal{E}\), and generate a set of candidate corrupted triples for \(p\), i.e., \(p_{\text{c}}=\{p^{\prime}=(e^{\prime},r,t)\}\). We filter \(p_{\text{c}}\) by removing all generated candidates that already appear in the train, validation, and test sets, together with removing the cycle. After computing the scores of the candidate triples and sorting them, we find the rank of the test triple \(p\), and call it \(r_{p}\). The same procedure is performed to compute the right rank by corrupting the tail entity. The average of the left and right ranks is considered the final rank of the test triple. We then compute the average reciprocal rank over all test triples and report it as MRR. Hits@k is computed by reporting the percentage of the test triples ranked within the top \(k\). #### Hyperparameters The hyperparameters corresponding to our model are the embedding dimension \(d\), models to combine \(m\), optimizer \(o\), learning rate \(lr\), number of negative samples \(n\), batch size \(b\), dtype \(dt\), and double_neg \(dn\). We additionally used \(\alpha^{2}\) as attention parameters (in place of \(\alpha\)), playing the role of a simple kind of regularization mechanism (\(ar\)), to further penalize the models with less contribution in the attention. Following the common practice of KGEs, we use both a low dimension \(d=32\) and a high dimension \(d=500\) for the evaluation of our model. For the other hyperparameters, we use the following ranges: \(m=\{\)TransE, DistMult, ComplEx, RotatE, AttE (only reflection)\(\}\), \(o=\{\)Adam, Adagrad\(\}\), \(lr=\{0.1,0.05,0.001\}\), 
\(n=\{-1,50,100,150,200,250\}\), where \(-1\) refers to full negative sampling (Hamilton et al., 2017), \(b=\{100,500\}\), \(dt=\{single,double\}\), \(ar=\{yes,no\}\), and \(dn=\{yes,no\}\). We also add reciprocal triples to the training set as the standard data augmentation technique (Hamilton et al., 2017). The optimal hyperparameters for each dataset are specified in Appendix A.2. Figure 3. Comparison between the importance given by each model to a symmetric (in green) and antisymmetric (in blue) relation. ### Link Prediction Results And Analysis The results of comparing SEA and SEPA to the models they combine on FB15k-237, WN18RR, and NELL-995-h100 are shown in Table 3 (\(d=32\)) and in Table 4 (\(d=500\)). As expected, while the hyperbolic version of our combined model (SEPA) outperforms all baselines in low-dimensional settings, the Euclidean one (SEA) is the best model in high-dimensional space. Comparing SEPA and SEA in low-dimensional space, we can see that the performance improvements on WN18RR and NELL-995-h100 are much larger than on FB15k-237. This is due to the presence of a significant number of hierarchical relations in WordNet and NELL compared to Freebase. We still observe that SEPA outperforms SEA on the FB15k-237 dataset. The main reason is that SEPA combines hyperbolic manifolds with the various transformations used in the queries of different models, so it is capable of capturing the mixture of structural and logical patterns in a low dimension (e.g., compositional patterns in Freebase). Even though we did not combine AttE and AttH directly, but only used reflection and the hyperbolic projection, respectively, we were still able to outperform them. Similarly, SEPA outperforms MuRP in low dimensions, and SEA outperforms TuckER in high dimensions in all metrics apart from the H@1 of FB15k-237. More details are available in Appendix A.6. Our combination model increases the expressiveness of the individual models (Proposition 3.2), having the best performance gain in low-dimensional space. Besides, our model takes advantage of the inference power of the base models with fewer constraints (Table 1) by utilizing the attention mechanism. On the other hand, in high-dimensional space, Euclidean models are proven to be fully expressive (Zhu et al., 2017). Hence, even though SEA outperforms all baselines, the performance gain is not as significant as in low dimensions. ### Further analyses We additionally make a series of further analyses to evaluate the performance of our attention-based combination function. First, we want to show that our model is able to increase the precision of predictions for both symmetric and antisymmetric relations. Table 2 shows the H@1 results in WN18RR, in the low-dimensional setting of SEPA, compared to the individual KGEs being combined. Further results on H@10 can be found in Appendix A.5. For example, if we look at the symmetric relation _derivationally related form_, we can see that the H@1 of TransE is very low compared to those of ComplEx and DistMult, and yet, our model was able to improve this metric. Similarly, when we look at an antisymmetric relation (e.g., _member of domain usage_), we have the opposite situation, with high performance for TransE and lower performance for ComplEx and DistMult. The intuition is that the attention-based combination can effectively give more importance to the best models for the specific kind of relation involved in the query. 
Such intuition is reinforced in Figure 3, which shows the (averaged) attention values among the individual models for the above-mentioned relations. It shows that the attention function can effectively select the correct proportion among the models for the two different kinds of relations. Besides, the importance of the attention function is highlighted by our ablation study, which consists of turning off the attention in our best models, SEPA at dimension 32 and SEA at dimension 500. We obtained two new versions of the models, namely **SEP** and **SE**. Tables 3 and 4 show that SEPA and SEA outperform SEP and SE. Although UltraE models the Ultrahyperbolic space as a sophisticated manifold containing several sub-manifolds, our models get results competitive with the state-of-the-art in the Ultrahyperbolic space (Table 6). In particular, SEPA gets competitive results in low dimensions, while SEA does so in high dimensions. One may consider using our idea to integrate approaches such as (Zhu et al., 2017; Zhang et al., 2018) with other baselines. However, because they involve multiple geometric spaces, such integration will require a substantial revision of the combinations of transformations and, hence, is left for future work. ## 5. Conclusion In this paper, we propose a new approach that facilitates the combination of the query representations from a wide range of popular knowledge graph embedding models designed in different spaces, such as Euclidean, Hyperbolic, and Complex spaces. We presented a spherical approach together with attention to queries to capture heterogeneous logical and structural patterns. We presented a theoretical analysis to justify such characteristics in expressing and inferring patterns and provided experimental analysis on various benchmark datasets with different rates of patterns to show that our models perform uniformly well in link prediction tasks across datasets with diverse pattern characteristics. Our ablation studies, relation analysis on WN18RR, and analysis of the learned attention values show that our models mainly take advantage of the best-performing models in link prediction tasks. By doing that, we achieved state-of-the-art results in Euclidean and Hyperbolic spaces. In future work, we will combine various manifolds besides combining the queries in knowledge graph embedding. Additionally, the proposed approach could be applied to other tasks. For example, it could be possible to use an attention mechanism to combine multi-hop queries computed using different complex query answering methods (Zhu et al., 2017; Zhang et al., 2018). ###### Acknowledgements. This work has received funding from the following projects: The European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No: 860801; BMWi Servicemeister (01MK20008f); DFG - COFFEE (STA 572_15-2); and DFG Excellence Strategy in the Clusters of Excellence IntCDC at the University of Stuttgart, BP 20. 
\begin{table} \begin{tabular}{|l|l|l l l l|l l l l|} \hline \multirow{2}{*}{**Elements**} & \multirow{2}{*}{**Model**} & \multicolumn{6}{c|}{**WN18RR**} \\ \cline{3-10} & & MRR & H@1 & H@3 & H@10 & H@3 & H@10 \\ \hline \multirow{3}{*}{\(d\)=32} & UltraE (\(\texttt{q-4}\)) & **0.488** & 0.440 & **0.503** & 0.558 \\ & SEPA & 0.481 & **0.441** & 0.496 & **0.562** \\ & SEA & 0.466 & 0.425 & 0.482 & 0.542 \\ \hline \hline \multirow{3}{*}{\(d\)=500} & UltraE (\(\texttt{q-4}\)) & **0.501** & 0.450 & 0.515 & **0.592** \\ & SEPA & 0.480 & 0.436 & 0.498 & 0.570 \\ \cline{1-1} & SEA & 0.500 & **0.454** & **0.518** & 0.591 \\ \hline \end{tabular} \end{table} Table 6. Comparison between our proposed models and the best UltraE (Zhu et al., 2017) in WN18RR. Best score in bold and second best underlined. \begin{table} \begin{tabular}{|l|l|l l l l l l l l l l l l|} \hline \multirow{2}{*}{**Elements**} & \multirow{2}{*}{**Model**} & \multicolumn{6}{c|}{**WN18RR**} & \multicolumn{6}{c|}{**FB15k-237**} & \multicolumn{6}{c|}{**NELL-995-h100**} \\ \cline{3-14} & & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline \multirow{3}{*}{Individual Models} & TransE & 0.356 & 0.256 & 0.419 & 0.531 & 0.336 & 0.243 & 0.369 & 0.524 & 0.300 & 0.212 & 0.340 & 0.469 \\ & DistMult & 0.443 & 0.412 & 0.453 & 0.504 & 0.343 & 0.249 & 0.380 & 0.533 & 0.322 & 0.238 & 0.359 & 0.486 \\ & RotatE & 0.387 & 0.376 & 0.392 & 0.409 & 0.266 & 0.188 & 0.289 & 0.422 & 0.322 & 0.238 & 0.359 & 0.493 \\ & ComplEx & 0.487 & 0.443 & 0.503 & 0.573 & 0.265 & 0.186 & 0.290 & 0.422 & 0.323 & 0.237 & 0.362 & 0.492 \\ & AttE & 0.491 & 0.444 & 0.507 & 0.583 & 0.359 & **0.264** & 0.395 & 0.548 & 0.377 & 0.292 & 0.419 & 0.539 \\ \cline{1-1} & AttH & 0.482 & 0.434 & 0.502 & 0.576 & 0.356 & 0.262 & 0.393 & 0.546 & 0.366 & 0.279 & 0.412 & 0.532 \\ \hline \hline Our models & SEPA & 0.480 & 0.436 & 0.498 & 0.570 & 0.354 & 0.259 & 0.390 & 0.545 & 0.347 & 0.260 & 0.387 & 0.520 \\ & SEA & **0.500** & **0.454** & **0.518** & **0.591** & **0.360** & **0.264** & **0.398** & **0.549** & **0.384** & **0.294** & **0.432** & **0.554** \\ \hline \hline Ablation & SE & 0.495 & 0.448 & 0.513 & 0.587 & 0.353 & 0.259 & 0.389 & 0.542 & 0.381 & 0.292 & 0.427 & 0.548 \\ \hline \end{tabular} \end{table} Table 4. Link prediction evaluation on datasets for d-500. 
\begin{table} \begin{tabular}{|l|l|l l l l l l l l l l l l|} \hline \multirow{2}{*}{**Elements**} & \multirow{2}{*}{**Model**} & \multicolumn{6}{c|}{**WN18RR**} & \multicolumn{6}{c|}{**FB15k-237**} & \multicolumn{6}{c|}{**NELL-995-h100**} \\ \cline{3-14} & & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline \multirow{3}{*}{Individual Models} & TransE & 0.356 & 0.256 & 0.419 & 0.531 & 0.336 & 0.243 & 0.369 & 0.524 & 0.300 & 0.212 & 0.340 & 0.469 \\ & DistMult & 0.443 & 0.412 & 0.453 & 0.504 & 0.343 & 0.249 & 0.380 & 0.533 & 0.322 & 0.238 & 0.359 & 0.486 \\ & RotatE & 0.387 & 0.376 & 0.392 & 0.409 & 0.266 & 0.188 & 0.289 & 0.422 & 0.322 & 0.238 & 0.359 & 0.493 \\ & ComplEx & 0.487 & 0.443 & 0.503 & 0.573 & 0.265 & 0.186 & 0.290 & 0.422 & 0.323 & 0.237 & 0.362 & 0.492 \\ & AttE & 0.491 & 0.444 & 0.507 & 0.583 & 0.359 & **0.264** & 0.395 & 0.548 & 0.377 & 0.292 & 0.419 & 0.539 \\ & AttH & 0.482 & 0.434 & 0.502 & 0.576 & 0.356 & 0.262 & 0.393 & 0.546 & 0.366 & 0.279 & 0.412 & 0.532 \\ \hline \hline Our models & SEPA & 0.480 & 0.436 & 0.498 & 0.570 & 0.354 & 0.259 & 0.390 & 0.545 & 0.347 & 0.260 & 0.387 & 0.520 \\ & SEA & **0.500** & **0.454** & **0.518** & **0.591** & **0.360** & **0.264** & **0.398** & **0.549** & **0.384** & **0.294** & **0.432** & **0.554** \\ \hline \hline Ablation & SE & 0.495 & 0.448 & 0.513 & 0.587 & 0.353 & 0.259 & 0.389 & 0.542 & 0.381 & 0.292 & 0.427 & 0.548 \\ \hline \end{tabular} \end{table} Table 3. Link prediction evaluation on datasets for d-32. Best score and best baseline are in bold and underlined, respectively.
2307.05840
Assessing individual risk and the latent transmission of COVID-19 in a population with an interaction-driven temporal model
Interaction-driven modeling of diseases over real-world contact data has been shown to promote the understanding of the spread of diseases in communities. This temporal modeling follows the path-preserving order and timing of the contacts, which are essential for accurate modeling. Yet, other important aspects were overlooked. Various airborne pathogens differ in the duration of exposure needed for infection. Also, from the individual perspective, Covid-19 progression differs between individuals, and its severity is statistically correlated with age. Here, we enrich an interaction-driven model of Covid-19 and similar airborne viral diseases with (a) meetings duration and (b) personal disease progression. The enriched model enables predicting outcomes at both the population and the individual levels. It further allows predicting individual risk of engaging in social interactions as a function of the virus characteristics and its prevalence in the population. We further showed that the enigmatic nature of asymptomatic transmission stems from the latent effect of the network density on this transmission and that asymptomatic transmission has a substantial impact only in sparse communities.
Yanir Marmor, Alex Abbey, Yuval Shahar, Osnat Mokryn
2023-07-11T23:26:49Z
http://arxiv.org/abs/2307.05840v1
Assessing individual risk and the latent transmission of COVID-19 in a population with an interaction-driven temporal model ###### Abstract Interaction-driven modeling of diseases over real-world contact data has been shown to promote the understanding of the spread of diseases in communities. This temporal modeling follows the path-preserving order and timing of the contacts, which are essential for accurate modeling. Yet, other important aspects were overlooked. Various airborne pathogens differ in the duration of exposure needed for infection. Also, from the individual perspective, Covid-19 progression differs between individuals, and its severity is statistically correlated with age. Here, we enrich an interaction-driven model of Covid-19 and similar airborne viral diseases with (a) meetings duration and (b) personal disease progression. The enriched model enables predicting outcomes at both the population and the individual levels. It further allows predicting individual risk of engaging in social interactions as a function of the virus characteristics and its prevalence in the population. We further showed that the enigmatic nature of asymptomatic transmission stems from the latent effect of the network density on this transmission and that asymptomatic transmission has a substantial impact only in sparse communities. ## Introduction The SARS-CoV-2 pandemic, like other diseases, spread differently in different countries and communities [1]. Disease progression results from the interplay between the population's complex interaction dynamics, which are associated with the population's physical contacts' network, and the disease dynamics [2, 3], as well as characteristics such as the population age [4, 5]. Here, we present in detail an SEIR-like _interactions-based contagion model_ (ICM) of airborne diseases for COVID-19 [6, 7] over real-world interaction data that is enriched with personal disease progression details that depend on the individual _susceptibility_ to the disease. The model is termed _Interactions-based Contagion Model with Individual outcomes_ (ICMI). The individual susceptibility determines the severity of the disease for an infected individual. For COVID-19, the probability of an individual contracting a severe form of the disease, referred to as their susceptibility to the disease, is a function of a person's age [4]. Accurately modeling disease progression requires considering the time-respecting paths, which are the sequence and order of these interactions between the members [8, 9, 10]. Contacts' temporal ordering and dynamics are crucial for understanding the transmission of infectious diseases. The interactions' temporal path ordering was shown to affect the spreading dynamics [11, 12, 13, 14, 15, 16, 17]. Further, considering the accurate structure of human interactions is pivotal for correctly predicting the spread of epidemics, such as the Covid-19 disease [18]. To model the disease progression in a real-life community, we use the real-world encounters data from the Copenhagen Networks Study (CNS) dataset [19, 20, 21]. The use of real-world interaction data as the dataset for our analysis allows for a person-to-person spread of disease at the level at which it occurs in its actual temporal and local contexts [22, 23]. An investigation of the transmission of a severe acute respiratory syndrome (SARS) in a 2003 Toronto-area community outbreak found that "longer and closer proximity exposures incurred the highest rate of disease" [24, 22]. 
Thus, the duration of the interaction is crucial for correctly simulating the airborne transmission of various pathogens. A key feature of ICM is that the interaction's duration is positively correlated with the latent transmitted viral load if the encounter is with an infected person. Duration of interaction as a proxy for the transmitted viral load was used for estimating viral infection between animals [25] and humans [26, 27, 28, 29, 30]. Smieszek [26] noted that the time people spend interacting with each contact person decreases as the number of contact persons increases. The suggested model takes the length of the exposure into account while following the real-world interactions of various scales that exist in the CNS dataset, as the proximity information enables the detection of gatherings [31, 32]. Combining the above with individual susceptibility, the ICMI model presents three significant contributions. First, it extends the capabilities of ICM [6, 7] by taking individual outcomes into account, enabling it to predict outcomes at the community and individual levels. With ICMI, the daily and average expected percentage of needed hospital beds can be predicted for communities that leverage digital proximity tracing applications [33, 34]. In addition, the model considers the probability of symptomatic or severe infection in correlation with the individual's age. It models age using a personal susceptibility parameter, \(s_{i}\), a normalized parameter that correlates with age and denotes the individual's susceptibility to the disease. Thus, lower values of \(s_{i}\) correspond to younger people, who are more likely to be asymptomatic. Second, ICMI enables us to model and predict the individual risk of infection given personal daily exposure. Individual risk projections are provided as a function of different variants' virality and individual vulnerability due to personal immune levels. This prediction of individual risk not only enables individuals to plan their schedules to reduce their risk of infection but also presents a complementary paradigm to the current government-imposed non-pharmaceutical interventions (NPIs), providing an additional layer of personal control. Third, the ICMI model contributes to the study of disease spread driven by asymptomatic transmission. Asymptomatic transmission of diseases such as COVID-19 is considered to be the Achilles' heel of pandemic control [35, 36], since asymptomatic individuals continue their normal social and travel activities while being hard to trace [37]. The exact extent and impact of asymptomatic transmission are debatable [38], and differences in asymptomatic cases' generation time may affect the transmission and spreading factor estimations [39]. Here, we find that asymptomatic transmission is significant and influential only in relatively sparse networks; in dense networks, its effect is mitigated, so it influences the progression of the disease only in sparse communities. Thus, taking into account the macroscopic daily outcomes of the microscopic interactions while considering individual susceptibility to severe disease enables us to predict outcomes at both the population and the personal level. The model further allows predicting the individual risk of conducting meetings as a function of the virus characteristics and its prevalence in the population. 
We further showed that the enigmatic nature of asymptomatic transmission stems from the latent effect of the network density on this transmission and that asymptomatic transmission has a substantial effect only in sparse communities. ## Results We ran numeric simulations over the CNS real-world interaction dataset, combining two processes. The first is the interaction model (ICM). The model considers the duration of the interactions, as described in Equation 2. Figure 1 shows the average daily meetings' duration in the CNS dataset. Figure 1: The average daily meetings duration histogram of the CNS social networks. The x-axis depicts a log-scale measure of meetings' duration; the y-axis denotes the average daily number of meetings of various durations in the CNS network, measured over 24-hour intervals. Considering that the dataset consists of close to 700 individuals connected daily, the data demonstrate a skewed distribution, with many short meetings and a few very long ones. The model then accounts for the circadian nature of human behavior by considering the daily probability of not being infected in any of the encounters (Equation 3). The second process incorporated is the personal disease progression modeling according to the individual susceptibility, as described in the infection model presented in Fig 7. The CNS dataset was recorded in a university and contained the readings of 700 students. The interactions are probably denser than those of a small metropolitan area [6]. The ICMI model, however, assigns heterogeneous personal vulnerability values as described in each experiment. Personal vulnerability correlates with age: \(s_{i}\in[0\ldots 1]\) is a normalized parameter, where low values of \(s_{i}\) denote very young people (kids) and high values denote the very old. The two processes are combined in each of the simulations, and each experiment is the result of 200 iterations. The simulation code is written in Python and is freely available: [https://github.com/ScanLab-ossi/covid-simulation](https://github.com/ScanLab-ossi/covid-simulation). ### COVID-19 disease progression with individual outcomes We start with a simulation of the disease progression over the real-world CNS temporal network. Given a variant with a minimum exposure latency \(D_{\min}\) (i.e., each encounter with an infectious node is long enough to infect), we compute the disease progression as well as individual outcomes in a community. Here, we start with one infectious initial node, i.e., a single patient zero. \(P_{\max}\), the maximal probability of getting infected given exposure, is set to 0.5, and the population is young (\(s_{i}\) values are small), s.t. 80% of the population is asymptomatic (all parameters are configurable). During each day, for each node \(i\), interactions with Infectious nodes, either presymptomatic or asymptomatic, that are longer than \(D_{\min}\) are examined. At the end of each day, according to Eq. 2 and Eq. 3, a node either stays in the Susceptible state or enters the Exposed and infected state. The personal progression of the disease then follows the individual state machine depicted in Figure 7. Figure 2 depicts the disease progression over the CNS real-world temporal network for two different COVID-19 variants. Each experiment was the result of 200 iterations. 
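Before turning to the figures, the per-day infection step just described can be sketched in code. Since Equations 2 and 3 are not reproduced in this excerpt, the sketch below only assumes their qualitative structure: the per-meeting infection probability is zero below the exposure threshold \(D_{\min}\), grows with meeting duration, and saturates at \(P_{\max}\), while the daily outcome combines all infectious meetings of that day. The specific saturating form, the duration units, and the numbers are illustrative assumptions, not the published formulas.

```python
import numpy as np

def meeting_infection_prob(duration, d_min, d_max, p_max):
    """Per-meeting infection probability (stand-in for Eq. 2): zero below the
    exposure threshold d_min, then growing with duration and saturating at p_max."""
    if duration < d_min:
        return 0.0
    return p_max * min(1.0, (duration - d_min + 1) / max(d_max - d_min + 1, 1))

def daily_infection_prob(durations, d_min, d_max, p_max):
    """Daily infection probability (stand-in for Eq. 3): one minus the probability
    of escaping infection in every infectious meeting of that day."""
    p_escape = 1.0
    for dur in durations:
        p_escape *= 1.0 - meeting_infection_prob(dur, d_min, d_max, p_max)
    return 1.0 - p_escape

# One illustrative day: durations (in scan units) of meetings with infectious nodes.
durations_with_infectious = [1, 3, 12]
p_day = daily_infection_prob(durations_with_infectious, d_min=2, d_max=12, p_max=0.5)
becomes_exposed = np.random.rand() < p_day   # Susceptible -> Exposed at the end of the day
```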
Figure 2a shows the progression of a fast variant (that is, \(D_{\min}\) is low), for which every encounter with an infectious individual is long enough to infect, and Figure 2b shows the disease progression and community outcome for a slower variant, in which only long exposures can infect (that is, \(D_{\min}\) is high). Individual outcomes can predict the number of daily hospital beds needed, denoted in red in the figures, and the expected death toll, if any. We further see that more contagious variants, as is the case shown in Figure 2a, create, even in young populations in which the majority of patients are asymptomatic, a higher load on the community hospitals. ### Individual projection of infection as a function of the daily exposure Considering the changing daily number of meetings and their lengths, we can approximate the individual probability of infection, given their daily exposure. We project here individual infection outcomes as a function of the daily number of meetings, their lengths, and contextual information. Contextual information can be considered a proxy for the virality of a variant and for the susceptibility of an individual's immune system to the infection. When considering meetings' duration, the minimum duration in the model, \(D_{\min}\), is used as a proxy for the virality of a variant, with very low \(D_{\min}\) values corresponding to very infectious variants, and vice-versa. The other global parameter in the model, \(P_{\max}\), is used here on a per-individual level, \(P_{\max,i}\), to denote the personal vulnerability of a person \(i\) to the virus. In this approximation, high \(P_{\max,i}\) values correspond to highly vulnerable, e.g., immune-compromised people. Lower values correspond to a lower probability of getting infected, given maximal exposure. In both experiments, we examine the probability of becoming infected; the severity of the infection would then depend on the individual susceptibility. This probability differs from the probability of being symptomatic or severely ill, which correlates with the personal susceptibility parameter, \(s_{i}\); the probability of getting infected itself does not correlate with this parameter [40]. We generate random exposure data sampled from distributions fitted to the CNS real-world aggregated data. Aggregated information was taken such that an average daily outcome could be calculated. The average number of infected individuals differs on different days, depending on the spread of the virus in the community. Typically, a person would not be aware of the precise situation in their community. Hence, we consider the average of all the days in the dataset as the typical day over which we calculate individual outcomes. In a time window \(\tau\), the information for the possible \(N\) people encountered by an average person during that time window was generated by randomly sampling from a distribution \(N\sim ax^{b}\) fitted to the distribution of the CNS dataset (\(a\approx 0.051,b\approx-0.635\), \(\max\{\mathrm{Cov}[N]\}\leq 4.4e^{-5}\)). Meeting lengths, \(d_{ik}^{j}\), were generated from a discrete probability distribution based directly on the distribution of all encounter durations in the original dataset. Each experiment is the result of 200 iterations, and the results are depicted in Figures 3 and 4, showing the personal projection of infection as a function of the daily exposure. 
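To make the generation of these synthetic "typical days" concrete, the following minimal Python sketch samples one day of encounters in the spirit described above. Only the power-law coefficients are the ones reported in the text; the support of the contact distribution and the meeting-duration distribution are placeholder values of ours, not the fitted CNS distributions. The sampled encounters would then be fed to the encounter-level infection model (Eq. 2 and Eq. 3 in the Methods).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Power-law fit for the number of people met in a time window (coefficients from the text).
a, b = 0.051, -0.635
support = np.arange(1, 51)                      # assumed support, for illustration only
pmf = a * support.astype(float) ** b
pmf /= pmf.sum()                                # normalize into a probability mass function

# Placeholder discrete meeting-duration distribution (in minutes); the paper instead
# samples from the empirical distribution of all encounter durations in the CNS data.
durations = np.array([5, 10, 20, 40, 80])
duration_pmf = np.array([0.50, 0.25, 0.15, 0.07, 0.03])

def sample_typical_day():
    """Sample one synthetic day: the number of contacts and the duration of each meeting."""
    n_contacts = rng.choice(support, p=pmf)
    meeting_durations = rng.choice(durations, size=n_contacts, p=duration_pmf)
    return n_contacts, meeting_durations

n, ds = sample_typical_day()
print(n, ds.sum())   # number of contacts and total daily exposure (minutes)
```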
Understandably, the number of contacts an individual is exposed to per day correlates highly with that individual's total exposure per day. The most basic configuration (\(D_{\min}=0\) and \(P_{\max}=1\)) for an individual shows a close-to-linear relationship between the daily number of nodes an individual is exposed to, the sum of exposure durations, and \(P_{\text{infected}}\). Figure 3 depicts the projected individual outcome for various levels of exposure to variants, approximated using the amount of viral load needed for infection. Higher \(D_{\min}\) values correspond to a higher viral load needed for infection and, thus, to a less transmissible variant. Lower \(D_{\min}\) values correspond to a highly transmissible variant. More encounters, even if short, increase the risk of infection even when the virus is less infectious. However, individuals can lower their risk of infection by lowering their daily exposure to less transmissible viruses. Figure 4 depicts the projected infection probability as a function of the daily exposure for various individual vulnerability levels. Here, the most transmissible variant was considered, i.e., enough viral load is transmitted even in very short exposures. In this setting, recovered or vaccinated people have a significantly lower probability of infection given similar exposure. Lowering \(P_{\max}\) reduces \(P_{\text{infected}}\) by that factor, with the slope of the linear correlation dropping accordingly. ### Temporal density mitigates the effect of asymptomatic infection Modeling asymptomatic transmission can be achieved using the individual susceptibility parameter at the community level. ICMI's detailed individual disease progression encompasses the individual susceptibility factor per node, \(s_{i}\). At the community level, we define \(\vec{S}_{c}\) as the vector of personal susceptibility levels in a community. \(\vec{S}_{c}\) is defined as the vector \(s_{i},i\in[1..n]\), where \(n\) is the total number of individuals in the simulation. The distribution of community personal susceptibility levels, \(\vec{S}_{c}\), determines the percentage of symptomatic and asymptomatic individuals in the simulation. Thus, it can be used to examine the effect of different levels of asymptomatic carriers in the population on the progress of the disease. As susceptibility correlates with age, younger populations will yield more asymptomatic patients. To examine the effect of asymptomatic transmission in a population, we ran the simulation over the CNS dataset while varying the average age of the population, s.t. the percentage of asymptomatic individuals among the infected varies between 10% and 90%. Each experiment was performed 200 times, with random placement of the initial patient zero. Figure 5 depicts the results of the experiment. Surprisingly, we find that asymptomatic transmission does not have a significant effect in this experiment. This result aligns with the difficulty of assessing the true impact of asymptomatic transmission [41]. This is also a sensitivity test for how transmission in a community differs when the population age varies. In a previous study [6], we demonstrated that the CNS network is very dense. Nearly two-thirds (\(\sim 64\%\)) of the days have a temporal density of 0.2 or lower. That is, the overall number of interactions is at most 20% of the possible number of interactions. A quarter of the days have a temporal density of 25%, and the remaining 11% of days are very dense, up to half the maximal possible density, in which everybody meets everybody. 
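As a small illustration of the temporal-density notion used in the following experiment, the snippet below computes the density of a single daily snapshot as the fraction of realized interactions out of all possible pairs; the example graph is a random stand-in, not the CNS data.

```python
import networkx as nx

def temporal_density(g_day):
    """Density of one daily snapshot: realized interactions over all possible pairs."""
    n = g_day.number_of_nodes()
    possible = n * (n - 1) / 2
    return g_day.number_of_edges() / possible if possible else 0.0

# Stand-in daily snapshot with ~700 nodes, mimicking the scale of the CNS data.
g_day = nx.gnp_random_graph(700, 0.2, seed=1)
print(round(temporal_density(g_day), 3))   # ~0.2, i.e. about 20% of all possible interactions
```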
We perform the following experiments to understand the effect of temporal density on latent transmission due to asymptomatic people. We reduce temporal density by splitting each day into \(k\in\{2,3\}\) pseudo-days with \(1/k\) of the interactions of the original day. Thus, for example, a network with half the density will be depicted as twice as long in terms of pseudo-days. We then repeat the experiment described above, changing the percentage of asymptomatic individuals in the population over the longer, less dense networks. Figure 6 demonstrates the effect of temporal density on the latent transmissibility of a disease. In each experiment, we changed the percentage of asymptomatic carriers in the population between 10% and 90%. Each experiment was performed 200 times, with random placement of the initial patient zero. Figure 6(a) depicts the spread of the disease on a network with half the temporal density of the original one, showing a significant effect of the latent, asymptomatic transmission. Similarly, when the temporal density is one-third that of the original network, as is the case in Figure 6(b), the more asymptomatic the population, the more they contribute to the total infection rate. Hence, we find here that asymptomatic transmission influences the total infection rate in sparse networks. However, increased density mitigates the effect of asymptomatic transmission, and in highly dense communities, we could expect the disease to spread fast, regardless of age distribution. ## Discussion Here, we presented the ICMI model, which assesses the disease progression over real-world community interactions. The model follows the temporal dynamics of the interactions while taking into account the virus parameters and individual susceptibility to the disease. The model combines three different aspects: following temporal interactions, considering the interaction duration as a proxy for the transmitted viral load, and incorporating individual susceptibility. We discuss here each of these aspects. The last decade's abundance of temporal information paved the way to a further understanding of the temporal dynamics of networks [42, 43, 8]. Temporal networks have become the playground for inferring behavior in a plethora of areas, including, but not limited to, the inference of the effects of changes on the evolution of networks [44, 45]; revealing hidden structures, such as the structure of "co-presence" in a metropolis or university from temporal daily encounters [46]; the temporal, spatial diffusion of information in social media [47], and the behavior of viral processes over it [48]. Incorporating temporal dynamics into existing viral models has proven challenging. Recently, Holme [10] suggested a fast implementation of a temporal SIR over temporal networks. The model, however, does not allow for dynamics representative of real-world interactions, e.g., it assumes exponential time to recovery. Accounting for large infection events, Cooper _et al._[49] extended the SIR model to consider the surges in the size of the susceptible population over time. Yet, they do not consider other population dynamics. Recent agent-based models incorporated human dynamics such as inter-arrival times and heterogeneity of interactions [50, 51]. For example, OpenABM [52] created a simulated city environment of 1 million people and their dynamics, typical of households, schools, social interactions, etc. Their agent-based modeling was then used to evaluate different social distancing techniques in that environment. 
Unlike the presented ICMI model, they do not consider personal differences, i.e., individual susceptibility, yet discuss their importance for accurate modeling. Agent-based models, however, do not maintain other real-world dynamics such as temporal path ordering. Several recent models considered that the transmitted viral load can differ between interactions. Given an estimated load, the rate of the shedding of viral load in metropolitan transportation was estimated [53]. Viral load and meeting duration were considered to understand the interplay between biological and social factors in the asymptomatic spread of the disease [54]. The importance of considering personal differences was discussed by many. For example, age and comorbidity factors contributed to the appearance of symptoms and to early isolation after infection [55, 56]. Specifically, the SIDARTHE model [57] differentiated between symptomatic and asymptomatic patients and considered the severity of their symptoms as a proxy for isolation. SIDARTHE used the following states: susceptible (S), infected (I), diagnosed (D), ailing (A), recognized (R), threatened (T), healed (H), and extinct (E). However, the model did not consider the temporal dynamics of human interactions nor the effect of the duration of the encounters. Our model agrees with the findings of Peirlink _et al._[58] that if infectiousness is the same for both symptomatic and asymptomatic patients, the size of the asymptomatic population does not largely affect the overall outbreak dynamics. Other complementary findings to ours are those of Park _et al._[39] and Subramanian _et al._[41], who have shown that a faster asymptomatic transmission rate increases the realized proportion of asymptomatic transmission. Yet these findings did not take into account how asymptomatic contagion is affected by the density of the population. In here, it was shown that asymptomatic transmission has a substantial impact only in sparse communities. ICMI considers the interaction's length as a proxy for the transmitted viral load. It computes the probability of infection at the end of the day. ICMI aggregates viral load only in the case of higher-order interactions, like group meetings. Otherwise, it does not consider different interactions as having an additive effect, as there is no evidence that viral load "remains" in-between interactions [59]. The CNS dataset was similarly used by Hambridge _et al._[60], who devised a temporal interaction-based SEIR model to assess the effect of various interventions on the CNS data. Unlike ICMI, the work does not consider interactions' length, and assumes that multiple exposures on the same day increase the risk. The ICMI model is the first to consider the interaction between the daily macroscopic dynamics and the microscopic interactions to predict the spreading dynamics in a population, given the various outcomes of getting infected within the population. Given today's abundance of digital traces, the ICMI model enables policymakers to assess the disease progression using real-world interactions typical within their communities while considering the various pathogens. We have shown that by incorporating the individual outcomes and the community age distribution, policymakers could receive an estimation of the expected number of hospital beds required as the disease progresses in the community. 
Devising a method for predicting individual outcomes as a function of daily exposure to a pathogen further gives individuals a personal planning tool to assess their risk when taking meetings of various lengths, given the spread of the virus in the community. The method enables policymakers to decide when would be the right time to restrict large gatherings and long meetings and to decide on general guidelines for the public. The work has several limitations. The CNS dataset is the result of students' interactions in a university. The interactions were denser than typical social interactions, which may result in a faster-than-reality infection process [6]. We thus showed that the effect of asymptomatic latent transmission is negligible in dense networks but not in sparser ones. An additional limitation is that the analysis of latent transmission due to asymptomatic infections does not consider that symptomatic and asymptomatic people shed different viral loads [61, 62]. This option is implemented in the code, and exploring it is part of our future work. ## Conclusions The paper describes an SEIR-like interaction-driven contagion model of airborne disease for COVID-19 with individual outcomes (ICMI) that depend on age. It shows that ICMI, encompassing daily macroscopic dynamics with the microscopic level of interaction duration, enables outcome prediction at both the population and the individual levels. It further allows for individual assessment of risk levels in different populations and can be used as a tool by policymakers. Using ICMI, we further showed that the effect of latent transmission due to asymptomatic infections depends highly on the structure and interaction density of the population's social network and is higher in sparser structures. In future work, we intend to use additional datasets obtained from community digital contact tracing or community structure as provided in [52] to explore the model further. The data would be augmented with the corresponding population age distribution to enable community-level predictions for real-life communities. Here, age was considered a proxy for personal vulnerability to the disease since we considered the recent COVID-19 pandemic a key example. However, in the case of other pathogens, other individual factors can be used. ## Methods ### Interaction-driven contagion SEIR-like model with personal outcomes (ICMI) The ICM with Individual outcomes (ICMI) model encompasses the emergent effects of the following three modeling dimensions: 1. Real-life temporal interactions, modeled at a _macroscopic_ level (daily). Here, we consider the topological structure of the interactions: who met whom, and when, during each time window. Every day, we assess the likelihood of each node being exposed and infected within a given time window. This probability is the complement of the chance that the node avoids exposure during all of its encounters with infectious nodes in that time window. 2. The _duration_ of all interactions: how long each interaction lasted, and was it long enough to result in infection? Interaction duration is modeled at a _microscopic_ level and correlates positively with the latent transmitted viral load, which, in turn, is positively correlated with the probability of getting infected [62, 24]. 3. Individual disease progression modeling: a personal susceptibility parameter mediates the progression and severity of the disease for infected individuals. Following the literature, for COVID-19, this parameter correlates with age [4] (see the sketch following this list). 
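As a compact illustration of the third dimension, the following sketch shows one way a single susceptibility value \(s_{i}\) can mediate every branch point of the individual progression (symptomatic vs. asymptomatic, light vs. severe symptoms, recovery vs. death). It is a condensed, illustrative rendering of the state machine of Figure 7 described later in the Methods, not the released simulation code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def individual_outcome(s_i):
    """Sample the final outcome of an infected individual.

    A single susceptibility parameter s_i (in [0, 1], correlated with age)
    mediates every branch point; the ICU severe/stable split of the full
    state machine is collapsed here for brevity.
    """
    if rng.random() >= s_i:        # asymptomatic with probability 1 - s_i
        return "asymptomatic, recovers"
    if rng.random() >= s_i:        # light symptoms with probability 1 - s_i
        return "light symptoms, quarantines and recovers"
    if rng.random() >= s_i:        # severe symptoms, hospitalized, recovers with probability 1 - s_i
        return "hospitalized, recovers"
    return "hospitalized, deceased"

# A young individual (low s_i) versus an old individual (high s_i).
print(individual_outcome(0.1), "|", individual_outcome(0.9))
```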
Infected individuals, whether symptomatic or asymptomatic [63, 35], become infectious and can transmit the infection to others. Symptomatic individuals are removed from the network once symptoms appear. Recovered individuals cannot be reinfected with the same variant. The individual disease progression encompasses the population diversity, found to critically affect the spread of Covid-19 [64]. ### The ICM model: contagion process without individual outcomes Interacting nodes can be in one of the following states: _S_usceptible, _E_xposed, _I_nfectious, and either _R_ecovered or _R_emoved. All nodes begin in state \(S\), except for some initial infectious patient-zero nodes in state \(I\). Nodes that interact with infectious nodes might become infected, entering state \(E\). State \(E\) stands for _exposed and infected_. Infected nodes become Infectious, thus entering state \(I\). Infectious nodes in state \(I\) transition to state \(R\). The transition between states \(S\) and \(E\) is probabilistic and immediate. The transition between states \(E\) and \(I\) and from \(I\) to \(R\) is merely a function of time. In the model, the probability of being exposed is calculated at the end of each time window \(\tau\) as the complement of the probability of not getting exposed and infected at any of the interactions during that day. \[P_{i}^{\tau}(S\to E)=1-\prod_{k\in N_{i}^{\tau}}(1-P_{\max}) \tag{1}\] Where \(N_{i}^{\tau}\) is the subset of infected nodes in the time window \(\tau\) that interacted with node \(i\) during that time window and thus might potentially expose it to the infection, and \(P_{\max}\) is the probability of being infected during a maximal exposure. ### The ICM model: encounters duration heterogeneity The probability of infection is inversely correlated with distance, decreasing dramatically with it, and is positively correlated with the duration spent at the exposure distance [62, 65, 24]. Hence, the likelihood of getting exposed and infected during each interaction with an infectious node is modeled as a Sigmoid function of the duration of the interaction. At each encounter with an infectious node in time window \(\tau\), there is a probability for node \(i\) to get exposed and infected that is calculated as follows. Let \(d_{i,k}\) be a non-zero value for the strength of an edge that enters the focal node \(i\) from an _infected node \(k\)_, where \(k\in K\). \(K\) is the set of infectious nodes that \(i\) encounters in time window \(\tau\). Here, the strength of an edge, \(d_{i,k}\), corresponds to the duration of a meeting between the focal node \(i\) and an infectious neighbor node \(k\in K\). Thus, the probability of node \(i\) becoming exposed and infected during an encounter with an infectious node \(k\) is as follows: \[\forall k\in K,\quad P_{i,k}=\begin{cases}P_{\text{e}}&d_{i,k}<D_{\min}\\ \frac{d_{i,k}}{D_{\max}}&D_{\min}\leq d_{i,k}\leq D_{\max}\\ 1&d_{i,k}>D_{\max}\end{cases} \tag{2}\] The model assigns an insignificant infection probability, \(P_{\text{e}}\), to encounters with infectious nodes that are shorter than the minimal time to infect, \(D_{\min}\). \(D_{\min}\) is a property of a pathogen. For meetings longer than the minimal time to infect, the probability of infection is linear in the meeting duration, denoted as the strength of the link, \(d_{i,k}\). Meetings longer than a maximal value, \(D_{\max}\), are considered as having the maximal probability of infection. 
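The following is a minimal sketch of the encounter-level probability of Eq. 2 and of the daily aggregation over encounters with infectious nodes (the complement-of-products form of Eq. 1, extended with durations in Eq. 3 below). Variable names and the example values are ours, chosen for illustration rather than taken from the released code.

```python
def p_encounter(d, d_min, d_max, p_eps=1e-3):
    """Eq. 2: per-encounter infection probability given meeting duration d."""
    if d < d_min:
        return p_eps          # negligible probability for too-short encounters
    if d > d_max:
        return 1.0            # saturates at the maximal probability
    return d / d_max          # linear in the duration in between

def p_exposed_today(durations_with_infectious, p_max, d_min, d_max):
    """Daily probability of moving S -> E: the complement of avoiding infection
    in every encounter with an infectious node during the time window."""
    p_not_infected = 1.0
    for d in durations_with_infectious:
        p_not_infected *= 1.0 - p_encounter(d, d_min, d_max) * p_max
    return 1.0 - p_not_infected

# Example: three encounters with infectious nodes lasting 2, 15 and 45 minutes,
# with D_min = 5, D_max = 60 minutes and P_max = 0.5 (illustrative values).
print(p_exposed_today([2, 15, 45], p_max=0.5, d_min=5, d_max=60))
```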
As COVID-19 variants, such as Alpha and Delta, are associated with different exposure levels of transmitted viral load [66, 67], the virality of such pathogens is correlated in the model with the minimum exposure latency, \(D_{\min}\). \(D_{\max}\) denotes the duration of the exposure for which the probability of infection is maximal. If the interaction is shorter than \(D_{\min}\), the infection probability is set to a minimal value, \(P_{\text{e}}\), that greatly reduces the probability of being infected due to this encounter. At the end of each time window \(\tau\), the probability of a node \(i\) becoming exposed and infected (state \(E\)) is calculated as the complement of the probability of not being exposed in any of the encounters with infectious nodes during that time window, as follows: \[P_{i}^{\tau}(S\to E)=1-\prod_{k\in K}(1-P_{i,k}\cdot P_{\max}) \tag{3}\] Where \(P_{i}^{\tau}(S\to E)\) is the probability of node \(i\) to transition from state _Susceptible_ to state _Exposed_ following the interactions during the time window \(\tau\). In the case of a gathering, it is probable that node \(i\) may encounter more than one infectious node \(k\). We consider that in gatherings in which a node interacts with several infectious others, it is exposed to a higher viral load. The duration of the interaction in a gathering is calculated as the gathering's duration multiplied by the number of infectious nodes participating in the gathering. _ICMI: adding a detailed personal disease progression modeling_ Once infected, personal reactions to the infection differ based on age, comorbidity, and other latent features. To trade off accuracy with simplicity, we encompass all of the above in a personal _susceptibility_ parameter and correlate it with age [68, 4]. Hence, the higher the individual susceptibility, the higher the probability of becoming symptomatic and severely ill. Disease progression is currently set to follow a known timeline [69, 70]. We will now explain the individual medical progression model. Once a person is exposed and infected, the disease progression and timeline depend on their _personal susceptibility probability_, referred to for node \(i\) as \(s_{i}\). Figure 7 depicts the model. The initial state is _S_usceptible, and following Eq. 3, a node can transition to state _E_xposed and infected. However, in this state, individuals may be either _Symptomatic_ or _Asymptomatic_, depending on their _personal susceptibility_ parameter, \(s_{i}\). Hence, the transition from state \(S\) to \(E\) would be to _Symptomatic_ with probability \(s_{i}\) and to _Asymptomatic_ with probability \(1-s_{i}\). The incubation period from exposure to infection differs between asymptomatic and symptomatic people; in the simulation, both are modifiable parameters and rely on reported COVID-19 infection parameters [69]. Asymptomatic individuals recover and are placed back into the interactions temporal simulation in the Recovered state, in which they are not susceptible to the disease for the run of the simulation. Symptomatic individuals are removed from the interactions temporal network; however, we continue to model their disease progression. Sick individuals have severe symptoms with a probability of \(s_{i}\) and light symptoms with a probability of \(1-s_{i}\). People with light symptoms quarantine until they have recovered. People with severe symptoms are hospitalized in the _ICU_ in a severe state with probability \(s_{i}\) or a Stable state with probability \(1-s_{i}\). 
In both these states, individuals may deteriorate and die with probability \(s_{i}\) or recover with probability \(1-s_{i}\). Individuals who have recovered are re-introduced into the simulation as 'Recovered'. In this state, they are immune to the disease for the remainder of the simulation. ### Real-world contact information An ideal dataset for simulating the viral spread of a disease would be the actual population contact tracing. Manual contact tracing is slow and prone to high delays [71]. Digital contact tracing can aid in that; however, as seen in many countries, it is highly inaccurate if it is done by digital tracking. Digital apps are widely suggested [72, 73]. However, the use of their data entails severe privacy issues [74], and information is sparse unless the app is widely adopted [75]. To overcome these hurdles, we use data from the Copenhagen Networks Study (CNS) [21]. This data includes over 700 students' contact information for over one month, recorded using Bluetooth sensors in mobile phones provided to the participants. The dataset describes socio-physical activity at a high, non-aggregated resolution; it is temporal, following over thirty days of interactions of several hundred people. The CNS proximity information was registered as a function of the Received Signal Strength Indicator (RSSI). Extracting exact distance information from RSSI data is a difficult task [76, 77, 78]. For infection probability, distance is just one of the dimensions in calculating the chance of infection - the directions of standing, ventilation, and the environment are equally significant parameters [79, 80, 81]. As this information was unavailable, we chose to model proximity rather than exact distance and direction. Stronger signals correlate roughly with high proximity. Hence, we mined the CNS network for interactions for which the RSSI \(\geq-90\), which is the threshold value. We model the social network of interactions \(\Gamma\) as a sequence of \(T\) consecutive undirected weighted temporal graphs \(\{G_{\tau}\in\Gamma,\tau\in T\}\), where each temporal snapshot graph \(G_{\tau}=(V_{\tau},E_{\tau})\) denotes the subset of interacting nodes \(V_{\tau}\) during the \(\tau_{th}\) temporal window and the weighted edges \(E_{\tau}\) the interactions during this time [8, 31]. Each edge is a distinct interaction. Edge weight corresponds to the _duration_ of the interaction used in the model described above. We further detect gatherings, as in [31], as the effect of gatherings on contagious epidemic processes was recently researched and found significant [82, 32]. ## Author contributions statement O.M., Y.S., Y.M., and A.A. designed the experiments; Y.M. and A.A. wrote the code and performed all the experiments; All authors analyzed the results; O.M. wrote the paper; All authors reviewed the manuscript. ## Additional information **Code availability:** All code and data used in this research are freely available: [https://github.com/ScanLab-ossi/covid-simulation](https://github.com/ScanLab-ossi/covid-simulation). **Competing interests:** The authors declare no competing interests.
2310.11222
Fast Node Vector Distance Computations using Laplacian Solvers
Complex networks are a useful tool to investigate various phenomena in social science, economics, and logistics. Node Vector Distance (NVD) is an emerging set of techniques allowing us to estimate the distance and correlation between variables defined on the nodes of a network. One drawback of NVD is its high computational complexity. Here we show that a subset of NVD techniques, the ones calculating the Generalized Euclidean measure on networks, can be efficiently tackled with Laplacian solvers. In experiments, we show that this provides a significant runtime speedup with negligible approximation errors, which opens the possibility to scale the techniques to large networks.
Michele Coscia, Karel Devriendt
2023-10-17T12:55:14Z
http://arxiv.org/abs/2310.11222v1
# Fast Node Vector Distance Computations using Laplacian Solvers ###### Abstract Complex networks are a useful tool to investigate various phenomena in social science, economics, and logistics. Node Vector Distance (NVD) is an emerging set of techniques allowing us to estimate the distance and correlation between variables defined on the nodes of a network. One drawback of NVD is its high computational complexity. Here we show that a subset of NVD techniques, the ones calculating the Generalized Euclidean measure on networks, can be efficiently tackled with Laplacian solvers. In experiments, we show that this provides a significant runtime speedup with negligible approximation errors, which opens the possibility to scale the techniques to large networks. ## 1 Introduction Complex networks are useful for a number of tasks. One prominent example is tracking the propagation of a phenomenon through a complex system. Examples range from diseases [3, 26, 42], memes/behaviors [19, 47, 16, 14, 22], or product adoption [28, 45] through a social network; productive knowledge in international trade [21, 41]; or goods in network modeling problems in logistics [33]. The Node Vector Distance (NVD) term has been recently used to group these tasks under a common structure [13]. In NVD, the phenomenon is represented as a vector recording one value per node. Then two vectors from different phenomena, or from the same phenomenon at different observation times, can be compared. Specifically, with NVD one can calculate their distance, network variance, or correlation. Most useful NVD techniques share a drawback: they are computationally complex to calculate. This severely limits their practical applicability to networks containing a handful of thousands of nodes, a far cry from the (tens or hundreds of) millions of nodes of the most interesting complex networks. In this paper we focus specifically on those NVD techniques based on the inversion of the graph Laplacian [10]. We do so for two reasons. First, these measures are among the most intuitive available. Second, because it turns out that the most computationally intensive part of calculating such measures is not necessary. We show how the already existing collection of techniques known as "Laplacian solvers" [37] can be directly applied to the Generalized Euclidean NVD technique, greatly reducing its computational complexity and allowing the analysis of really large complex networks. In our experiments we show how much runtime we gain in synthetic networks of growing sizes, showing an empirical estimation of the new computational complexity. We also do a brief analysis of the memory consumption. Finally, we show the practical applicability on a number of real world networks. The latter experiments also show that, even if Laplacian solvers do not provide exact solutions, the approximation they induce is negligible for all practical purposes. All the experiments we run can be reproduced with the material we provide1. Footnote 1: [https://www.michelecoscia.com/?page_id=1733#nvdfast](https://www.michelecoscia.com/?page_id=1733#nvdfast) ## 2 Related Works ### Node Vector Distance Node Vector Distance (NVD) is a collection of techniques to estimate the network distance between node vectors - vectors recording one value per node [13]. NVD has a number of applications in network science: it can be used to track disease spreading [3], estimate the complexity of a country's economy [21], or quantify ideological polarization on social media [22]. 
The techniques at the basis of NVD can also be used to estimate how dispersed a variable is in a network [15], as well as to calculate the correlations between node vectors on a network [12]. There are a number of different approaches one can take. One can apply graph signal processing techniques via the graph Fourier transform [35; 36]. Another popular approach is to compute the optimal way to transport the weights of one vector to another with respect to the distance in the network, giving rise to the Earth Mover Distance [33; 50]. In this paper we focus on a different class of solutions, which we label "Generalized Euclidean". In this class, one adapts the classical Euclidean distance to the graph setting. In the case of regular Euclidean distance, the node vectors are embedded in a space where all dimensions contribute equally - here, the distance is induced by the inner product represented by the identity matrix, so there is no distinction between the nodes. In the case of Generalized Euclidean distance, the node vectors are embedded in a complex space represented by the graph; more precisely, the distance between node vectors in this space is given by computing the quadratic product of their difference with the pseudoinverse Laplacian matrix as in Section 3.1. Note that using the pseudoinverse Laplacian is not the only possible choice, as there are other ways to take into account the graph structure in the Euclidean formula [10]. Since the pseudoinverse Laplacian is the technique we focus on in this paper, we will provide more details about this approach in Section 3. For the purpose of this section, we only need to mention that pseudoinverting the Laplacian is computationally complex, but not necessary. One can achieve an approximate result by using Laplacian solvers, which we discuss now. ### Laplacian Solvers Laplacian solvers are a class of solutions to problems in the form \(Lx=b\), where \(L\) is the Laplacian of an undirected graph [43]. These solvers have a number of applications in graph partitioning and sparsification. Laplacian solvers make use of a number of techniques to solve the \(Lx=b\) problem in near linear time [37; 38; 39; 40]. Examples include sparse approximate Gaussian elimination [27], building a chain of progressively sparser graphs [24; 25], and recursive graph preconditioning [40]. One major drawback of the methods cited so far is that they only work with undirected graphs. However, there is a collection of techniques that work on directed graphs as well [8; 7]. ## 3 Methods ### Generalized Euclidean Let us assume we are working with a graph \(G=(V,E)\), with \(V\) being the set of nodes and \(E\subseteq V\times V\) the set of edges - pairs of nodes. For this paper we assume we work with undirected graphs: if \(u,v\in V\) and \((u,v)\in E\), then \((u,v)=(v,u)\). The graphs can be weighted, i.e. each edge can have a positive real weight \(w>0\) - although, in this paper, we ignore weights (including them does not change any of our conclusions). We can define a number of useful matrices. \(A\) is the adjacency matrix of \(G\), with \(A_{uv}=1\) if \((u,v)\in E\) and \(A_{uv}=0\) otherwise. \(D\) is the degree matrix, the degree being the number of connections a node has. \(D\) contains the degree of a node on the main diagonal and zero elsewhere. The Laplacian matrix is defined as \(L=D-A\), i.e. it contains the degree of a node on the main diagonal and \(L_{uv}=-1\) if \((u,v)\in E\). The Laplacian is useful to solve a number of problems. 
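To make the computational contrast concrete before the formal definition, the following sketch computes the Generalized Euclidean quadratic form of Section 3.1 both through an explicit pseudoinverse and through a conjugate-gradient solve of \(Lz=a-b\). This is a Python/SciPy illustration of the idea only - the paper's own implementation (Section 4.1.1) is written in Julia on top of Laplacians.jl - and it assumes a connected graph.

```python
import numpy as np
import networkx as nx
from scipy.sparse.linalg import cg

# Toy connected graph; any undirected graph would do.
G = nx.erdos_renyi_graph(2000, 0.01, seed=0)
L = nx.laplacian_matrix(G).astype(float)        # sparse Laplacian L = D - A
rng = np.random.default_rng(0)
a, b = rng.random(G.number_of_nodes()), rng.random(G.number_of_nodes())

d = a - b
d = d - d.mean()                                # project out the all-ones nullspace of L

# Exact route: dense pseudoinverse, the O(|V|^alpha) bottleneck discussed below.
ge_exact = np.sqrt(d @ np.linalg.pinv(L.toarray()) @ d)

# Solver route: solve L z = d iteratively, then the quadratic form is d^T z.
z, info = cg(L, d)
ge_fast = np.sqrt(d @ z)

print(ge_exact, ge_fast)                        # the two values agree up to solver tolerance
```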
For instance, it can be used to solve the discrete heat exchange problem. If \(h\) contains the heat value for each node of the network, we can use the Laplacian to estimate how heat propagates through the graph. This is done by solving the differential equation \(\frac{\partial h}{\partial t}=-Lh\) [9]. It can also be used for spectral clustering [44]. It follows that the Laplacian is helpful to understand the relationships between nodes. Previous work has exploited this fact to use the Laplacian as the matrix defining the space in which a Generalized Euclidean (GE) distance measure lives. If we are given two vectors \(a\) and \(b\), each with \(|V|\) entries, then their network distance is: \[\delta_{G,a,b}=\sqrt{(a-b)^{T}L^{\dagger}(a-b)}.\] where \(L^{\dagger}\) is the (Moore-Penrose) pseudoinverse of \(L\). \(L\) cannot be inverted directly, because it is singular. To calculate \(L^{\dagger}\) one needs to perform a singular value decomposition (SVD) of \(L\). Herein lies the main issue with this measure: SVD requires \(\mathcal{O}(|V|^{\alpha})\) time to be solved, with \(\alpha\) larger than \(2\) and smaller than \(3\). This makes GE intractable for all but trivially sized graphs. ### Laplacian Solvers A Laplacian solver is a technique that is able to solve systems of linear equations in the form of \(Lx=b\) in near linear time. Explaining each Laplacian solver technique in depth goes beyond the scope of this paper. In Section 2.2 we provide further references. In this section, we briefly mention how some of these solvers work. Sparse approximate Gaussian elimination [27] works by performing an approximate sparse Cholesky decomposition. The Cholesky decomposition is an efficient algorithm for solving systems of linear equations. The issue is that, locally, \(L\) does not satisfy the sparsity assumption for the Cholesky decomposition - e.g. in case of large cliques. Thus, cliques need to be sampled and then the regular Cholesky decomposition can be applied. Spectral graph sparsification [24; 25] works by taking \(G\) and sparsifying it to \(G^{\prime}\) in such a way that \(G\) and \(G^{\prime}\) have very similar spectra. This is done iteratively via a preconditioning chain. After the first spectral sparsification, \(G^{\prime}\) is then contracted by eliminating nodes of degree \(1\) and \(2\). This can be done efficiently, because the spectrum of the Laplacian is related to the cut problem, and it is possible to sparsify the graph while preserving its cuts. Recursive graph preconditioning [40] puts together the previous two approaches by recursively sparsifying \(G\) via a partial Cholesky factorization, ensuring a low condition number at every step in the recursion. ## 4 Experiments ### Setup Details #### 4.1.1 Implementation We implement the GE function in Julia (version 1.8.0). We use the Laplacians.jl package2 for Julia to access implementations of the Laplacian solvers (version 1.3.0). We use the methods' names provided in the package to refer to the various methods we use here. We run our code on an Intel Xeon Platinum 8358 at 2.60GHz. Footnote 2: [https://danspielman.github.io/Laplacians.jl/dev/](https://danspielman.github.io/Laplacians.jl/dev/) #### 4.1.2 Synthetic Data We test the Laplacian solvers on a number of synthetic networks, which allow us to vary both the number of nodes \(|V|\) and the graph's density by changing the average degree. We use different models because each of them can reproduce some of the common properties we find in real world networks. 
Specifically, we use: * Erdos-Renyi (ER): each pair of nodes is connected independently with a fixed probability - the resulting networks have small diameters [11]. * Barabasi-Albert (BA): we grow this network by adding one node at a time. Each node connects to \(k\) already existing nodes, with \(k\) being a parameter. Existing nodes receive new connections with a probability directly proportional to their degree. This network reproduces well both the small world property and broad degree distributions [4]. * Watts-Strogatz (WS): this network starts from a circle graph where nodes are connected to all of their \(k\) closest neighbors. Then, each edge is rewired randomly with probability \(p\), with \(k\) and \(p\) as parameters. This model reproduces the high clustering and small world features [46]. * Stochastic Blockmodel (SBM): in this model, as an input, the user partitions nodes into groups and specifies two probabilities. \(p_{in}\) determines the probability of connecting to a node inside the same group, and \(p_{out}\) regulates the connections to nodes outside the group. This model can generate network communities [23]. For all models, we make sure to directly compare networks with roughly the same number of edges. #### 4.1.3 Real World Data For our applications section (Section 5) we also make use of real world data, to showcase the usefulness of GE - and, as a consequence, the need for efficient ways to estimate it. Specifically we use: * Section 5.1 (Congress): co-voting networks of the US Congress - one network per congress edition - using data from Voteview.com [31]. Each node is a representative and they are connected if the two representatives have co-voted on bills more often than the average same-party pair. The procedure to build these networks has been used multiple times in the literature [1, 22]. * Section 5.2 (Various): networks that come from a variety of papers, retrieved via the network catalogues SNAP [30] and Netzschleuder [32]. We use small networks in Section 5.1 because we want to show how accurate the Laplacian solvers can be in quantifying the GE values against the exact result obtained via SVD - thus we need to be able to run SVD. The larger networks in Section 5.2 are used to showcase the possibilities opened by the Laplacian solvers that are not available to the exact solutions via the Laplacian pseudoinverse. ### By Network Size In this section we test the effect of the size of the network on the running time and memory consumption of the Laplacian solvers against the baseline using the pseudoinversion via SVD. We split the size test first by increasing the number of nodes while keeping the density of the network constant, and then by keeping the number of nodes fixed but increasing the network density. #### 4.2.1 Runtimes (\(|V|\)) For the runtimes, we exclude outlier runs which took more than twice the average runtime. This is done to exclude compilation time from the estimate - this issue only affects very small input sizes where compilation could take significantly longer than running time. All plots report average runtimes over ten independent runs. The exception is Baseline, for which we make a single run for \(|V|=10^{4}\) and we do not run for larger \(|V|\) at all, due to its excessively long runtimes. We start by analyzing the runtimes for increasing number of nodes. Figure 1 reports the results. The first evident result is that any Laplacian solver has both a constant running time advantage and a better asymptotic complexity. Even for tiny networks of \(100\) nodes, regardless of the network topology, all Laplacian solvers are at least one order of magnitude faster than the baseline. 
From these plots we can infer that the empirical asymptotic complexity of the baseline is \(\sim\mathcal{O}(|V|^{2.6})\). For the Laplacian solvers the exact combination of solver and topology matters, but in general the empirical asymptotic complexity is between \(\mathcal{O}(|V|^{1.2})\) and \(\mathcal{O}(|V|^{1.4})\), in all cases decisively below \(\mathcal{O}(|V|^{2})\). The practical result is that the baseline takes at least one order of magnitude more time to compute a \(|V|=10^{4}\) network than any Laplacian solver takes for a network two orders of magnitude larger. Among the Laplacian solvers there is no clear overall winner. CG is the fastest for the Erdos-Renyi and SBM topologies, but it ties with ApproxChol for the Barabasi-Albert model, while ApproxChol is also fastest for Watts-Strogatz. The topology in general has different effects on different solvers. Picking CG as an example, Figure 2(a) shows that indeed CG runs slower for Watts-Strogatz than it does for all other topologies. Other solvers also experience different strengths and weaknesses depending on the topology - not shown here for space issues. Figure 1: The running time (y axis) against \(|V|\) (x axis) for all the methods (line color) on different synthetic networks. #### 4.2.2 Runtimes (Density) Efficient Laplacian solvers exploit, among other things, the sparseness of a graph. It is interesting to investigate what happens to the runtime when the graphs we investigate get denser and denser. In this experiment, we fix \(|V|=10,000\), and we increase the average degree of the network from \(1\) to \(64\). The vast majority of real world networks have low average degrees in the single digit realm [17], thus this domain covers the most realistic scenarios. Also in this case we report the average of ten runs, taking out outliers and ignoring compilation time. Figure 3 shows the results. Since the baseline works with dense matrices anyway, there is no real effect of density on its running time, which is roughly constant. Most Laplacian solvers have longer runtimes for denser networks - as expected. All Laplacian solvers are orders of magnitude faster than the baseline, and thus represent a significant advantage. CG shows a peculiar pattern: it takes longer for extremely sparse networks - with average degree close to one - then gets faster and faster for middle values of average degree between four and eight. After this, the runtimes increase with density as expected. This pattern is consistent, independently from the topology of the network. It seems that, for extremely sparse networks with average degree lower than four, CG might not be the best choice. For denser networks, however, CG can be one or two orders of magnitude faster than the other Laplacian solvers. #### 4.2.3 Memory For the memory test we show only a single run per method, due to limitations in memory benchmarking. However, memory consumption should not be variable across runs and the results of a single run are still indicative of the overall trends. We also run a single Laplacian solver, CG, because the memory consumption for all solvers is indistinguishable in all cases. From Figure 4 we can see that there is a basic memory consumption coming from simply running the program. For \(|V|=10^{4}\), the Laplacian solvers do not add any memory consumption to this basic rate. However, the baseline needs to shift from sparse to dense matrix representations to calculate the pseudoinverse of the Laplacian. 
This means that its memory consumption is already between \(5\) and \(6\)GB in our implementation even for these small networks. If we exclude the warm-up phase for \(|V|<10^{3}\), the memory consumption of the baseline scales exponentially. This is not true for Laplacian solvers, here represented by CG. The total memory consumption at \(|V|=10^{4}\) is still in the neighborhood of the basic cost of running the program. Even for \(|V|=10^{6}\), the memory required is below 2GB in all but one case. Asymptotically, the best function describing the growth in memory consumption by CG is linear, not exponential. Figure 2(b) does not show any significant difference in memory consumption for CG depending on the topology of the network. Figure 2: Time and memory consumption (y axis) against \(|V|\) (x axis) for all synthetic networks (line color) for the CG Laplacian solver. Figure 3: The running time (y axis) against average degree (x axis) for all the methods (line color) on different synthetic networks. Figure 4: Memory consumption (y axis) against \(|V|\) (x axis) for all the methods (line color) on different synthetic networks. ## 5 Applications ### Polarization The GE measure can be used to estimate polarization on social media, or any networked system where we have information about the opinions of the nodes [22]. This is done by calculating the distance between the vector recording the opinions of nodes on one side of the spectrum - e.g. Democrats - and the one recording the opinions of the nodes on the other side of the spectrum - e.g. Republicans. For this task we use the Congress networks described in Section 4.1.3. Specifically, we focus on the 85th, 105th, and 113th Congress, since they show the lowest, average, and highest value of polarization, respectively. Table 1 shows the results. First, the Baseline method confirms the differences in scores between the three networks. Then we show how the four Laplacian solvers estimate the level of polarization to be practically identical to the exact one we compute via the Baseline. The largest error is in the neighborhood of \(10^{-8}\), which is far below the level of precision required for such an analysis. ### Various We look at a variety of networks which would all benefit from a GE analysis, in increasing size to show the speedup of the Laplacian solvers in real world scenarios. We simplify all networks to an undirected, unweighted, simple graph version, even if the original network was either directed, weighted, or multilayer. All runtimes exclude I/O operations and preprocessing, so they ignore the time it takes to read the graph from disk. Below we briefly explain what the node vectors are in each case. * _Hiring_: we can calculate the distance between the regions in which universities are located, by analyzing the hiring patterns. * _EUAir_: we can calculate the distance between airlines depending on which airports they serve. * _EUCore_: we have communities based on email exchange, and GE could tell the distance between community pairs. \begin{table} \begin{tabular}{l|c c c} Method & 85th & 105th & 113th \\ \hline \hline Baseline & 1.006 & 3.664 & 8.330 \\ ApproxChol & \(2.1e^{-14}\) & \(2.1e^{-14}\) & \(1.3e^{-13}\) \\ Aug Tree & \(6.9e^{-9}\) & \(1.1e^{-14}\) & \(5.0e^{-14}\) \\ KMP & \(4.4e^{-16}\) & \(2.7e^{-15}\) & \(1.1e^{-13}\) \\ CG & \(4.9e^{-10}\) & \(2.7e^{-15}\) & \(2.1e^{-13}\) \\ \end{tabular} \end{table} Table 1: The polarization scores for three US Congress networks (top row) and the difference between exact and approximate solution using a specific Laplacian solver (bottom four rows). 
* _Open Flights_: we can calculate the distance between countries based on how the airlines connect their airports. * _LastFm_: we can calculate the distance between user groups defined by a metadata we have about the users - based on their friendships on the platform. * _Wiki RFA_: we can calculate the distance between admins and non-admins in the voting network. * _Fly Brain_: we can calculate the distance between neuron types in the neural network. * _Twitter15m_: we can measure the distance between two hashtags in the user network. * _Patents_: we can measure the distance between patent categories in the patent citation patterns. * _DBpedia_: we do not have node metadata, so we calculate distances between random vectors, but this network could be used, e.g., to calculate distances between different page categories in the encyclopedia, whose pages are connected by hyperlinks. Table 2 reports the running times. These are also summarized in Figure 5. For comparison purposes, we estimate the scaling of the two methods with a power relation with the number of nodes. The best function approximating the runtime of ApproxChol is \(\mathcal{O}(|V|^{1.12})\) (Figure 5(a)). On the other hand, the best function approximating the runtime of the exact SVD-based solution is \(\mathcal{O}(|V|^{2.51})\) (Figure 5(b)). However, it would be more appropriate to estimate the scaling of the Laplacian solver with the number of edges. This is because their advantage becomes less and less relevant the denser the network is. Note the difference in runtimes, e.g., in Hiring and EUAir. Notwithstanding the fact that Hiring has fewer nodes and fewer edges, it is much more dense than EUAir (21% dense vs 3% dense) and thus the Laplacian solvers actually take longer to run on this smaller network. On the other hand, it is remarkable that the Laplacian solver can process the DBpedia network (18M nodes) in the same time it takes the baseline to process LastFm (7.6k nodes). \begin{table} \begin{tabular}{l|r r r|r r|r} Network & \(|V|\) & \(|E|\) & Dens & ApproxChol (s) & Baseline (s) & Ref \\ \hline Hiring & 145 & 2,266 & 0.2170 & 0.0023 & 0.0040 & [6] \\ EUAir & 450 & 2,953 & 0.0292 & 0.0011 & 0.0454 & [5] \\ EUCore & 1,005 & 16,064 & 0.0318 & 0.0068 & 0.4496 & [29] \\ Open Flights & 3,214 & 18,858 & 0.0036 & 0.0079 & 13.623 & [32] \\ LastFm & 7,624 & 27,806 & 0.0009 & 0.0149 & 244.20 & [34] \\ Wiki RFA & 11,381 & 194,592 & 0.0030 & 0.0976 & 853.00 & [48] \\ Fly Brain & 21,739 & 2,897,925 & 0.0122 & 2.6151 & 6181.3 & [49] \\ Twitter15m & 87,569 & 4,708,274 & 0.0012 & 4.3299 & & [18] \\ Patents & 3,774,768 & 16,518,947 & 2.31e\({}^{-6}\) & 59.311 & & [20] \\ DBpedia & 18,268,992 & 136,537,566 & 8.18e\({}^{-7}\) & 247.10 & & [2] \\ \end{tabular} \end{table} Table 2: The runtimes of the ApproxChol solver against the exact solution in number of seconds for a collection of networks of different sizes (in number of nodes \(|V|\) and edges \(|E|\)). We terminate the process after one hour, thus we do not report runtimes longer than that. Figure 5: Runtime (y axis) on real networks by number of nodes (x axis). ## 6 Conclusions In this paper we showed that using Laplacian solvers will bring massive speedups in the calculation of the Generalized Euclidean measure and other related measures in the Node Vector Distance class of problems. The speedup is relative to calculating an exact solution via the pseudoinverse of the Laplacian. 
Since Laplacian solvers scale with the number of edges, these speedups are more noticeable for sparse networks. Besides an improved time efficiency, these methods also require fewer resources in terms of memory, since the process to obtain the pseudoinverse of the Laplacian involves using dense matrices, while all Laplacian solvers work with sparse structures. We failed to notice significant differences between different Laplacian solvers in synthetic networks. As the network grows in number of nodes, they all increase their runtimes approximately at the same rate. The only potential difference comes when we densify the network. The CG solver is the slowest for very sparse networks, but it scales better as the network becomes denser and denser. This paper can be used as an argument to use Laplacian solvers to efficiently solve GE and related NVD problems.
2307.04586
Timbre transfer using image-to-image denoising diffusion implicit models
Timbre transfer techniques aim at converting the sound of a musical piece generated by one instrument into the same one as if it was played by another instrument, while maintaining as much as possible the content in terms of musical characteristics such as melody and dynamics. Following their recent breakthroughs in deep learning-based generation, we apply Denoising Diffusion Models (DDMs) to perform timbre transfer. Specifically, we apply the recently proposed Denoising Diffusion Implicit Models (DDIMs) that enable to accelerate the sampling procedure. Inspired by the recent application of DDMs to image translation problems we formulate the timbre transfer task similarly, by first converting the audio tracks into log mel spectrograms and by conditioning the generation of the desired timbre spectrogram through the input timbre spectrogram. We perform both one-to-one and many-to-many timbre transfer, by converting audio waveforms containing only single instruments and multiple instruments, respectively. We compare the proposed technique with existing state-of-the-art methods both through listening tests and objective measures in order to demonstrate the effectiveness of the proposed model.
Luca Comanducci, Fabio Antonacci, Augusto Sarti
2023-07-10T14:28:56Z
http://arxiv.org/abs/2307.04586v2
# Timbre Transfer using Image-to-Image Denoising Diffusion Implicit Models ###### Abstract Timbre transfer techniques aim at converting the sound of a musical piece generated by one instrument into the same one as if it was played by another instrument, while maintaining as much as possible the content in terms of musical characteristics such as melody and dynamics. Following their recent breakthroughs in deep learning-based generation, we apply Denoising Diffusion Models (DDMs) to perform timbre transfer. Specifically, we apply the recently proposed Denoising Diffusion Implicit Models (DDIMs) that enable to accelerate the sampling procedure. Inspired by the recent application of DDMs to image translation problems we formulate the timbre transfer task similarly, by first converting the audio tracks into log mel spectrograms and by conditioning the generation of the desired timbre spectrogram through the input timbre spectrogram. We perform both one-to-one and many-to-many timbre transfer, by converting audio waveforms containing only single instruments and multiple instruments, respectively. We compare the proposed technique with existing state-of-the-art methods both through listening tests and objective measures in order to demonstrate the effectiveness of the proposed model. ## 1 Introduction Timbre is an extremely important perceptual aspect of music, yet it is hard to both model and define. The concept of musical timbre can be defined as the perceived characteristics of a musical sound that are different from pitch and amplitude contours [1]. Timbre Transfer concerns the task of converting a musical piece from one timbre to another while preserving the other music-related characteristics. While this operation is not trivial, it is of extreme interest for several applications, from the development of plugins to be used in Digital Audio Workstations (DAW) to enabling the possibility of playing sounds corresponding to not widely available musical instruments. In this paper, we present DiffTransfer, a technique for timbre transfer which is tested both between single and multiple instruments and is based on a continuous Denoising Diffusion Implicit Model (DDIM) with deterministic sampling [2], a modified version of Denoising Diffusion Probabilistic Models (DDPMs) that are trained using the same procedure, but allow for faster sampling times. Specifically, in [2] it was empirically shown that DDIMs allow for \(10\times-50\times\) faster wall-clock time performances with respect to DDPMs. In order to be able to convert one timbre into another, we use a procedure similar to the recently proposed image-to-image technique Palette [3]. Specifically, we use as input to the diffusion model the noise and condition it with the chosen input timbre spectrogram, then, through the denoising procedure, the model learns to reconstruct spectrograms of the desired timbre. We consider the scenario where the timbre-transfer task is _paired_, which means that the desired and input spectrograms have the same melodic/harmonic content, but differ in terms of timbre. We experiment both with the possibility of converting between tracks containing only single instruments and also mixtures of instruments, with no prior separation step, while making no modifications to the model in order to take into account both configurations. 
In order to demonstrate the effectiveness of the proposed model, we compare DiffTransfer with state-of-the-art techniques, both through objective measures and by performing a user-based listening test. The source code and audio excerpts can be found at [https://lucacoma.github.io/DiffTransfer/](https://lucacoma.github.io/DiffTransfer/). ## 2 Related Work Several types of timbre transfer techniques have been proposed in the literature. In [4] a CycleGAN [5] is applied in order to perform an unpaired transfer using the Constant-Q transform, and the audio is then recovered through a WaveNet [6] model. In [7] an attention-based architecture is applied in order to convert mel spectrograms, which are then inverted through a MelGAN architecture [8]. Gaussian mixture-based variational autoencoders are applied in [9] in order to learn a latent space where pitch and timbre representations are disentangled. Another class of methods, instead, extracts musical parameters such as pitch and loudness from the input audio tracks and performs the transfer by resynthesizing sound through a network that has learned to generate tracks with the desired timbre. The best-known example of these techniques is the Differentiable Digital Signal Processing (DDSP) [10] model. Other similar techniques were proposed, such as [11], where a hierarchical model is used in order to reconstruct the signal at increasing resolutions. Recently, models that work directly on the audio waveform have also been proposed, such as [12], where music pieces are translated to specific timbre domains. The only model that, to the best of our knowledge and except for the one proposed in this paper, is tested on multi-instrument timbre transfer without any source separation pre-processing is the Music-STAR network, presented in [13]. In Music-STAR, a WaveNet autoencoder [14] is trained by applying teacher-forcing [15] to the decoders in order to recover the desired timbre. Denoising Diffusion Probabilistic Models (DDPMs) [16] have recently become the state of the art in deep learning-based generation, replacing Generative Adversarial Networks (GANs) [17] and Variational Autoencoders [18], due to their easier training procedure and the increased quality of the produced results. DDPMs have been successfully applied to a wide variety of image-related tasks such as generation [19] and translation [3]. More recently, DDPMs have also been used for audio-related tasks. In [20] a diffusion model is applied in order to convert MIDI tracks to spectrograms, while in [21] a text-to-music diffusion model is proposed. DDPMs have also been applied to symbolic music generation [22], speech synthesis [23] and singing voice extraction [24]. While DDPMs have extremely powerful generation capabilities, they suffer from slow sampling times. To ameliorate this issue, Denoising Diffusion Implicit Models (DDIMs) [2] were recently proposed; they allow for faster sampling times and have been applied to image inpainting [25]. ## 3 Proposed Model In this section, we describe the proposed DiffTransfer technique for timbre transfer. Instead of working directly with raw audio signals, we convert them into log mel-scaled spectrograms, due to their easier handling by deep learning models. We then propose a model that, given as input the spectrogram corresponding to the conditioning instrument, generates the corresponding target spectrogram that would have been obtained by playing the same piece of music with the target instrument. 
Operatively, we achieve this through a conditional continuous-time DDIM, which learns to denoise the target instrument spectrogram, while conditioned on the input instrument spectrogram, as depicted in Fig. 1. At inference time, the model is fed with the input conditioning instrument concatenated with Gaussian noise and generates the corresponding target spectrogram. We retrieve the audio signal by applying the SoundStream 1 model [26] to the log mel spectrograms; the model is provided by [20], where it was trained on a custom music dataset. Footnote 1: [https://tfhub.dev/google/soundstream/mel/decoder/music/1](https://tfhub.dev/google/soundstream/mel/decoder/music/1) In the following, we provide a brief overview of the DDIM framework and of the notation used in this paper, in order to keep the treatment as compact as possible; for additional and more thorough formulations, we refer the reader to [2] and [3]. We aim at giving a general overview of the process and we use a slight abuse of notation to describe the diffusion process in the continuous-time framework, in order to make it more similar to the more common literature regarding DDPMs and DDIMs. Figure 1: Training scheme of the proposed DiffTransfer technique. The target instrument spectrogram is summed with noise following a simplified cosine schedule. The decoder, conditioned on the conditioning instrument spectrogram and on the sinusoidal embedding representing the current time instant, estimates the added noise. The decoder parameters are estimated by computing the L1 loss between the ground truth and the estimated diffusion noise. ### Diffusion Decoder We adopt a procedure similar to the Palette [3] image-to-image translation technique in order to train the timbre transfer decoder as a Denoising Diffusion Implicit Model (DDIM) [2]. Broadly speaking, DDIMs work by learning how to generate data from noise in a two-part procedure. The first part is denoted as the _forward process_, where Gaussian noise \(\gamma\sim\mathcal{N}(0,1)\) is progressively added to the input until the result is indistinguishable from noise. The second part consists of the _reverse process_, where a decoder learns how to invert the forward process, effectively reconstructing data from the noise. DDIMs can be seen as a generalization of DDPMs that shares the same training procedure; however, they differ in the modeling of the reverse process, which uses a non-Markovian diffusion process that allows for faster generation times. #### 3.1.1 Forward Process Let us define \(\mathbf{X}\) and \(\mathbf{Y}\) as the log mel spectrograms corresponding to the conditioning and target instruments, respectively. We choose a continuous diffusion time [27, 28, 29] in order to be able to change the number of desired sampling steps. If we consider \(T\) steps, then the diffusion time can be defined as \(t\in[0,1]\), where consecutive times are separated by \(\Delta_{t}=1/T\). Then, the forward process is defined similarly to the case of DDPMs by subsequently adding noise to the target spectrogram for \(T\) steps: \[\begin{split} q(\mathbf{Y}_{t}|\mathbf{Y}_{t-\Delta_{t}})&=\mathcal{N}(\mathbf{Y}_{t};\sqrt{\alpha_{t}}\,\mathbf{Y}_{t-\Delta_{t}},\beta_{t}\mathbf{I}),\\ q(\mathbf{Y}_{1:T}|\mathbf{Y}_{0})&=\prod_{t=1}^{T}q(\mathbf{Y}_{t}|\mathbf{Y}_{t-\Delta_{t}}),\end{split} \tag{1}\] where \(\alpha\) and \(\beta\) are parameters defined by a simplified cosine schedule [30]. 
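To make the forward process concrete, the following is a minimal NumPy sketch of noising a target log mel spectrogram to a continuous diffusion time \(t\). The cosine-schedule form and its clipping rates are illustrative assumptions rather than the exact schedule of [30], and the function names are introduced here.

```python
import numpy as np

def cosine_schedule(t, min_signal_rate=0.02, max_signal_rate=0.95):
    # Simplified cosine schedule: map diffusion time t in [0, 1] to a pair of
    # signal/noise rates (the clipping values are illustrative choices).
    start_angle = np.arccos(max_signal_rate)
    end_angle = np.arccos(min_signal_rate)
    angle = start_angle + t * (end_angle - start_angle)
    return np.cos(angle), np.sin(angle)   # play the role of sqrt(alpha_t), sqrt(1 - alpha_t)

def forward_diffuse(y0, t, rng=np.random.default_rng(0)):
    # Noise the target spectrogram Y_0 up to diffusion time t; the sampled noise
    # gamma ~ N(0, I) is also the regression target of the decoder.
    gamma = rng.standard_normal(y0.shape)
    signal_rate, noise_rate = cosine_schedule(t)
    return signal_rate * y0 + noise_rate * gamma, gamma
```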
#### 3.1.2 Reverse Process In the case of DDIMs, the reverse diffusion process is operated by introducing an additional distribution \(p_{\theta}\), where a sample \(\mathbf{Y}_{t-\Delta t}\) can be generated from a sample \(\mathbf{Y}_{t}\) as \[\begin{split}\mathbf{Y}_{t-\Delta t}=&\sqrt{\beta_{t-\Delta t}}\left(\frac{\mathbf{Y}_{t}-\sqrt{\beta_{t}}\,\gamma_{\theta}^{(t)}(\mathbf{Y}_{t},\mathbf{X})}{\sqrt{\alpha_{t}}}\right)+\\ &\sqrt{1-\alpha_{t-\Delta_{t}}}\cdot\gamma_{\theta}^{(t)}(\mathbf{Y}_{t},\mathbf{X}),\end{split} \tag{2}\] where \(\gamma\) is the noise estimated by a network with parameters \(\theta\). The noise at time \(t\), \(\gamma_{\theta}^{(t)}\), is estimated by a network that is also conditioned on the input timbre spectrogram \(\mathbf{X}\), similarly to the formulation proposed in Palette [3]. #### 3.1.3 Training Procedure The denoising process is operated through a U-Net architecture which is conditioned on \(\mathbf{X}\) and trained to predict the added noise in order to minimize the L1 loss \[\mathbb{E}=||\gamma_{\theta}^{(t)}(\mathbf{Y}_{t},\mathbf{X})-\gamma||_{1}, \tag{3}\] where \(\gamma\) is the true perturbation, while \(\gamma_{\theta}^{(t)}(\mathbf{Y}_{t},\mathbf{X})\) is the estimate of the noise added to the target spectrogram at time \(t\), conditioned on the input spectrogram \(\mathbf{X}\). ### Architecture The decoder architecture is based on a U-Net model. The building element is made of residual blocks; in each of these, the input is processed by (i) a 2D convolutional layer with swish activation, followed by batch normalization, and by (ii) a convolutional layer with no activation. Both convolutional layers have kernel size \(3\). The output of this procedure is then summed with the residual, which is obtained by processing the input with a convolutional layer with kernel size \(1\). The encoder part of the network consists of \(3\) downsampling blocks, each consisting of \(4\) residual blocks having filter sizes \(64,128,256\). The output of each downsampling block is followed by average pooling, with pool size \(2\), in order to compress the dimension of the spectrograms. The last block of the encoder is followed by a self-attention block. The bottleneck obtained through the encoder is processed by a residual block with \(512\) filters and is then processed by the decoder, which is a specular version of the encoder. The only difference lies in the use of transposed convolutions in order to create the upsampling layers needed to increase the dimension of the features. The last downsampling layer of the encoder, the bottleneck and the first upsampling layer of the decoder are followed by self-attention. ### Deployment The proposed model takes as input spectrograms of a fixed size, therefore audio tracks longer than the ones used for training need to be sliced accordingly. The decoder takes as input the conditioning spectrogram \(\mathbf{X}\) and the diffusion noise and retrieves an estimate of the latter, which can then be subtracted in order to obtain an estimate of the desired output timbre spectrogram \(\hat{\mathbf{Y}}\). The output waveform \(y\) can then be obtained by feeding the pre-trained SoundStream model with \(\hat{\mathbf{Y}}\). ## 4 Experiments In this section, we describe experiments performed with the aim of demonstrating the capabilities of the proposed DiffTransfer technique both in the single-instrument and multi-instrument application scenarios. In Fig. 
3 we show an example of input, generated and ground-truth spectrograms, obtained via the DiffTransfer model when converting from a Clarinet to Strings. ### Dataset In order to train the model we considered the StarNet dataset [31], which contains a set of tracks that are played with two timbre-domains, namely strings-piano and vibraphone-clarinet. The dataset consists of roughly 22 hours of audio. We used the reduced version of the dataset, where tracks are resampled to \(16000\ \mathrm{Hz}\) and converted them to mono. In order to perform the evaluation, we use the same ten tracks considered in [13], in order to ease the comparison with their model. ### Techniques Under Comparison We consider two baselines in order to compare the performances of the proposed DiffTransfer architecture. For what concerns the single-instrument timbre transfer task, we consider the Universal Network [12] fine-tuned on the StarNet dataset as done in [13]. For what concerns the multi-timbre task, we consider the mixture-supervised version of the Music-STAR network proposed in [13]. We perform three different types of timbre transfer tasks: _single_, where only single instruments are converted, _single_/_mixed_ where the separate conversions of single instruments are mixed in order to create the desired mixture track and _mixture_, where the mixture is directly converted. These nomenclatures are used just to ease the presentation of the results, we would like to point out that, for what concerns the DiffTransfer architecture, no specific changes are required for the various types of applications, except for the choice of desired input data. ### Experiment Setup The Universal Network and Music-STAR architectures are trained with the procedure described in [13]. The DiffTransfer network is trained for \(5000\) epochs using a batch size of \(16\), with the AdamW optimizer [32] with learning rate \(2e-5\) and weight decay \(1e-4\). The epoch that minimizes the \(L1\) noise prediction loss is chosen in order to retain the model used to compute the results. We train a total of six models, performing the following timbre transfer conversions: vibraphone to piano, piano to vibraphone, clarinet to strings, strings to clarinet vibraphone/clarinet to piano/strings and piano/strings to vibraphone/clarinet. The network input features are computed by first applying the Short-Time Fourier Transform (STFT) with a Hann window of size \(0.020\ \mathrm{s}\) and \(50\%\) overlap to normalized audio tracks. Then the log mel spectrogram is computed over \(128\) bins corresponding to the range of \(0-16000\) Hz. We do not feed the entire audio tracks as input to the network, instead, during each epoch we extract \(128\) frames from the log mel spectrogram, corresponding to \(\approx\ 2\ \mathrm{s}\). Each spectrogram slice is normalized between \(-1\) and \(1\) before being given as input to the network and the output spectrograms are denormalized before being fed to the SoundStream model in order to recover the audio waveform. Since the tracks considered for the test are of length \(10\ \mathrm{s}\) and the model gets as input a fixed \(128\) frames spectrogram we slice the conditioning spectrogram before feeding into the model and we keep the input noise fixed for all slices, in order to ensure consistency in the generation. All spectrogram slices are normalized in the range \([-1,1]\) and denormalized before being fed to the SoundStream decoder. 
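As a rough illustration of the feature pipeline described above (20 ms Hann window with 50% overlap, 128 mel bins, 128-frame slices normalized to \([-1,1]\)), one could proceed as in the sketch below; `librosa` is used only for convenience, and the log offset and per-slice normalization statistics are assumptions not specified in the text.

```python
import numpy as np
import librosa

SR, N_FFT = 16000, int(0.020 * 16000)        # 16 kHz audio, 20 ms Hann window (320 samples)
HOP, N_MELS, FRAMES = N_FFT // 2, 128, 128   # 50% overlap, 128 mel bins, ~2 s slices

def log_mel(audio):
    mel = librosa.feature.melspectrogram(y=audio, sr=SR, n_fft=N_FFT,
                                         hop_length=HOP, window="hann",
                                         n_mels=N_MELS)
    return np.log(mel + 1e-6)                # small offset to avoid log(0)

def slice_and_normalize(spec):
    # Fixed-size slices for the network; keep the stats so outputs can be denormalized.
    for start in range(0, spec.shape[1] - FRAMES + 1, FRAMES):
        s = spec[:, start:start + FRAMES]
        lo, hi = s.min(), s.max()
        yield 2.0 * (s - lo) / (hi - lo + 1e-8) - 1.0, (lo, hi)
```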
### Objective Evaluation We evaluate the model objectively in order to analyze the perceptual similarity and content preservation capabilities of the generated tracks with respect to the ground truth audio. In order to evaluate the perceptual similarity, we compute the Fréchet Audio Distance (FAD) [33] using the VGGish embeddings [34], through a PyTorch implementation2. FAD is a reference-free metric for music enhancement algorithms, which views the embeddings as a continuous multivariate Gaussian and is computed between the real and generated data as \[\mathrm{FAD}=||\mu_{r}-\mu_{g}||^{2}+\mathrm{tr}(\Sigma_{r}+\Sigma_{g}-2\sqrt{\Sigma_{r}\Sigma_{g}}), \tag{4}\] where \((\mu_{r},\Sigma_{r})\) and \((\mu_{g},\Sigma_{g})\) are the means and covariances of the embeddings corresponding to the real and generated data, respectively. Figure 2: Deployment scheme of the proposed DiffTransfer technique. The decoder is fed with Gaussian noise and with the conditioning instrument spectrogram. The noise estimate provided by the decoder is then subtracted from the input noise in order to provide an estimate of the desired target spectrogram, from which the audio is estimated via the SoundStream model [20, 26]. Similarly to [20], we compute FAD in order to analyze the perceptual similarity between the generated audio and the ground truth, corresponding to the original StarNet dataset. To understand the content-preservation capabilities of the model, following [35], we compute how dissimilar the pitch contours of the generated and ground-truth audio tracks are, by calculating the mismatch between two sets of pitches \(A\) and \(B\) through the Jaccard Distance \[JD(A,B)=1-\frac{|A\cap B|}{|A\cup B|}, \tag{5}\] where a lower value corresponds to a lower mismatch and thus to a higher degree of similarity between the generated pitch contours. Pitch contours are computed using a multi-pitch version of the MELODIA algorithm [36] as implemented in the Essentia library [37], rounding pitches to the nearest semitone. We report the values obtained by computing the metrics on the test dataset in Table 1. ### Subjective Evaluation In order to subjectively evaluate the timbre transfer capabilities, we perform a listening test with 18 human participants. The web page of the test is available at 3. The test was split into two parts corresponding to the single and multiple instrument application scenarios, respectively. Footnote 3: https://listening-test-ismir-ttd. During the single instrument part of the test, the users listened to four tracks, corresponding to the four types of conversions performed, namely: clarinet to strings, strings to clarinet, piano to vibraphone, vibraphone to piano. Each example consisted of two conditions, one obtained via the DiffTransfer model and the other through the Universal Network. In the second part of the test, concerning multiple instrument timbre transfer, a total of four tracks were considered, two for the conversion from vibraphone/clarinet to piano/strings waveforms and two for the reverse conversion. Each example consisted of four conditions, namely DiffTransfer (single/mixed), Universal Network (single/mixed), DiffTransfer (mixture) and Music-STAR (mixture). Both the order of conditions and the order of examples in each separate part of the test were randomized. The participants were asked to rate the conditions in terms of similarity with respect to the reference track on a 5-point Likert scale, where \(1\) corresponds to bad and \(5\) to excellent. 
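For reference, the two objective metrics introduced in the objective evaluation above can be computed directly from embedding matrices and pitch sets. The sketch below assumes VGGish-style embeddings stacked row-wise and pitch contours already rounded to semitones; it is a generic implementation of Eqs. (4)-(5), not the specific PyTorch package used for the reported numbers.

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_real, emb_gen):
    # Eq. (4): Frechet distance between Gaussians fitted to (n_samples, dim) embeddings.
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    cov_r, cov_g = np.cov(emb_real, rowvar=False), np.cov(emb_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g).real     # matrix square root of the product
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

def jaccard_distance(pitches_a, pitches_b):
    # Eq. (5): mismatch between the two sets of detected pitches.
    a, b = set(pitches_a), set(pitches_b)
    return 0.0 if not (a | b) else 1.0 - len(a & b) / len(a | b)
```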
We report the results obtained through the listening test in Table 2. ### Discussion By briefly inspecting both the objective and subjective results, reported in Tables 1 and 2, respectively, it is clear that the proposed DiffTransfer model outperforms the Universal Network and Music-STAR baselines on both the single and multiple timbre transfer tasks. When considering single timbre results, DiffTransfer is able to achieve significantly better performances in terms of FAD, Jaccard Distance and Perceived Similarity with respect to the Universal Network. The gap between the two methods becomes even more evident when considering the single/mixed case, i.e. when single timbre transfer tracks are mixed in order to form the desired mixture audio. For what concerns the Music-STAR method, the gap with respect to DiffTransfer remains high in terms of FAD, but becomes less noticeable when considering JD and the perceived subjective similarity. \begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{**Objective Evaluation**} \\ \hline **Method** & **FAD \(\downarrow\)** & **JD \(\downarrow\)** \\ \hline Universal Network (single) & 7.09 & 0.53 \\ \hline DiffTransfer (single) & 2.58 & 0.28 \\ \hline \hline Universal Network (single/mixed) & 10.47 & 0.64 \\ \hline DiffTransfer (single/mixed) & 4.73 & 0.46 \\ \hline \hline Music-STAR (mixture) & 8.93 & 0.57 \\ \hline DiffTransfer (mixture) & 4.37 & 0.38 \\ \hline \end{tabular} \end{table} Table 1: Objective Evaluation of the proposed DiffTransfer Method compared to the baselines, in terms of Fréchet Audio Distance (FAD) and Jaccard Distance (JD). Results are averaged over all participants and over all the tracks considered for each part of the test. Figure 3: Example of Timbre Conversion log mel Spectrograms using the DiffTransfer architecture, obtained when converting Clarinet (a) to Strings (b). The ground truth Strings spectrogram is shown in (c). ## 5 Conclusion In this paper, we have presented DiffTransfer, a technique for both single- and multi-instrument timbre transfer using Denoising Diffusion Implicit Models. The novelty of the proposed approach lies in the fact that, in addition to being, to the best of our knowledge, the first application of diffusion models to timbre transfer, it is the first model to be tested on both single and multi-timbre transfer without varying the architecture depending on which application is chosen. We compared the proposed model with the state-of-the-art Universal Network and Music-STAR baselines through both objective evaluation measures and a listening test, demonstrating the better capabilities of the proposed DiffTransfer approach. Future work will involve increasing the quality of the generated audio by taking into account the consistency of subsequent generated spectrograms. Furthermore, we plan on modifying the model in order to be able to perform unpaired timbre transfer, which greatly eases the dataset requirements and applicability of the technique.
2303.08893
A Multifidelity deep operator network approach to closure for multiscale systems
Projection-based reduced order models (PROMs) have shown promise in representing the behavior of multiscale systems using a small set of generalized (or latent) variables. Despite their success, PROMs can be susceptible to inaccuracies, even instabilities, due to the improper accounting of the interaction between the resolved and unresolved scales of the multiscale system (known as the closure problem). In the current work, we interpret closure as a multifidelity problem and use a multifidelity deep operator network (DeepONet) framework to address it. In addition, to enhance the stability and accuracy of the multifidelity-based closure, we employ the recently developed "in-the-loop" training approach from the literature on coupling physics and machine learning models. The resulting approach is tested on shock advection for the one-dimensional viscous Burgers equation and vortex merging using the two-dimensional Navier-Stokes equations. The numerical experiments show significant improvement of the predictive ability of the closure-corrected PROM over the un-corrected one both in the interpolative and the extrapolative regimes.
Shady E. Ahmed, Panos Stinis
2023-03-15T19:25:38Z
http://arxiv.org/abs/2303.08893v2
# A Multifidelity Deep Operator Network Approach to Closure for Multiscale Systems ###### Abstract Projection-based reduced order models (PROMs) have shown promise in representing the behavior of multiscale systems using a small set of generalized (or latent) variables. Despite their success, PROMs can be susceptible to inaccuracies, even instabilities, due to the improper accounting of the interaction between the resolved and unresolved scales of the multiscale system (known as the closure problem). In the current work, we interpret closure as a multifidelity problem and use a multifidelity deep operator network (DeepONet) framework to address it. In addition, to enhance the stability and accuracy of the multifidelity-based closure, we employ the recently developed "in-the-loop" training approach from the literature on coupling physics and machine learning models. The resulting approach is tested on shock advection for the one-dimensional viscous Burgers equation and vortex merging using the two-dimensional Navier-Stokes equations. The numerical experiments show significant improvement of the predictive ability of the closure-corrected PROM over the un-corrected one both in the interpolative and the extrapolative regimes. Reduced order models DeepONet In-the-loop training Differentiable physics Multifidelity learning ## 1 Introduction High fidelity simulations produce invaluable information to augment our understanding of the world and physical processes around us. However, their use has been limited in multi-query and outer-loop applications such as design optimization, model predictive control, and uncertainty quantification. This is simply because they are too computationally demanding and we do not often have the computing infrastructure that enables multiple forward runs within the allowable turnaround time. Therefore, there is a need to build computationally-lightweight models that describe the system's behavior with acceptable accuracy. The last few decades have witnessed increased interest in model order reduction (MOR) developments. Among these, projection-based reduced order models (PROMs) have shown promise in representing the behavior of multiscale systems using a small set of generalized (or latent) variables. The derivation of PROM commonly involves two steps: (1) constructing a few basis functions that encapsulate the dominant features of the system, and (2) defining a model to estimate the leading coefficients (weights) of these basis functions at different times/parameters as they are used to expand the solution of the system under investigation. The combination of proper orthogonal decomposition (POD) [1] and Galerkin methods [2] has been a main driver for PROM developments in fluid dynamics, structural mechanics, and other fields. The basic idea of POD is to represent a data set as a linear combination of its dominant modes, which are calculated from the data set itself. These modes are hierarchically sorted, based on the eigenvalues of an appropriate covariance operator, signifying the relative importance of individual modes to the high dimensional data reconstruction in the \(\ell_{2}\)-sense. An underlying assumption of scale-separation is often implied by considering the analogy to Fourier basis wherein the low-index modes represent the largest scales while the high-index modes correspond to the smaller scales [3]. To reduce the computational complexity of the system, only a handful of the leading POD modes are retained while the remaining ones are truncated. 
In the second step of Galerkin method, the high fidelity governing equations are projected onto the span of the selected POD modes to derive a reduced set of dynamical equations that defines the Galerkin-POD (GPOD) model. Although GPOD models perform well in many quasi-periodic and statistically steady state cases, they usually fail in long-time predictions especially for systems whose solution is convection-dominated as we all as for systems representing turbulent flows. One source of the failure modes in GPOD predictions is related to their lack of stability guarantees and this line of research has been at the center of several research efforts. Barone et al. [4] demonstrated that the stability of PROM predictions is closely tied to the type of inner product used to define the projection. In particular, carefully-designed inner products that preserve symmetry and satisfy boundary conditions have been shown to provide better performance in terms of stability, compared to standard \(L^{2}\) inner product. Similar findings have been reported in [5, 6, 7] using other forms of inner products. The inaccuracy (and even instability) of the GPOD models can also be attributed to the severe truncation of the POD modes. Although the small scales themselves might not be of interest, the error due to the neglect of these scales can grow rapidly and infect the accuracy of the prediction at the larger scales. Lorenz [8] attributed the finite-time weather predictability barrier to the propagation of errors at small scales to large scales, coined as the real butterfly effect [9]. Subspace rotation techniques (e.g., using a mixture of large energetic and small dissipative modes [10, 11, 12]) have been shown to effectively address the modal truncation effects. Compensating accurately for the contribution of the truncated scales onto the resolved ones (as a function of the resolved state variables and parameters) is known as the "closure problem" [13]. A large body of literature has focused on developing closure models in multiscale phenomena, using physical arguments [14, 15, 16] and mathematical formulations [17, 18, 19]. More recently, there has been a surge in adopting machine learning (ML) tools to build novel closure models, e.g., [20, 21, 22, 23, 24, 25]. However, most of these developments have two fundamental issues that limit their applicability (and we aim to address in this study) as follows: 1. They mostly rely on deep neural network (DNN) capabilities as universal approximators of arbitrary functions. Nonetheless, a common theme in DNN-based implementations is their limited applicability when it comes to variations in physical parameters as well as initial and boundary conditions. There have been some successes in adopting DNNs for time-dependent parametric PROMs, e.g., using an ensemble of DNNs trained on clustered regions of interest in the parameter space or feeding the neural network with extra information about the system. For instance, Xie et al. [26] trained a residual neural network (ResNet) to learn the closure terms in GPOD models and enriched the input vector with parameter values to improve the predictive capabilities in parametric settings. However, the critical view of the need to re-train DNNs for new cases, specifically in extrapolative contexts, has been lingering unaddressed rigorously in the scientific community for a while. 2. Previous works on ML-based predictions often employ an idealized training environment that does not reflect the actual operation. 
For instance, the DNN models are usually trained as standalone components using _clean and curated_ data (e.g., noise-free) as opposed to the testing/deployment conditions where inputs are inevitably contaminated with all sorts of errors, e.g., measurement noise and numerical approximations. Furthermore, the ML prediction at one time step has no effect on the input data at the next time steps during the training phase. However, during testing, the inaccuracies at one step manifest themselves in the following steps. This can lead to severe inaccuracy, and even instability, when the ML model is coupled with physical models across scales [27, 28, 29, 30] (e.g., failure in _a posteriori_ tests despite performing well in the _a priori_ setting [31, 32, 33, 34]). For the first pitfall, operator networks appear as a viable solution as they learn the mappings between infinite-dimensional function spaces. Deep operator networks (DeepONet) [35] and Fourier neural operator (FNO) [36] are the most popular frameworks in this area nowadays, showing varying levels of success in a wide range of benchmark problems [37]. However, the training of operator networks requires large amounts of high fidelity data, which is often hard to collect in practice. Multifidelity operator network (MFON) [38] leverages the use of an array of data and physical models with different fidelities to learn accurate mappings between the input and output spaces. _In this work, we formulate the closure problem (equivalently the subgrid scale correction and physical parameterization problem) as a multifidelity operator learning problem_. In particular, the simulation model resolving the large scales (the GPOD model in this case) represents the low fidelity model and the objective is to learn the corresponding high-fidelity correction terms to maintain accurate (and stable) predictions for the POD expansion coefficients. In the present study, we adopt the multifidelity implementation of the DeepONet architecture proposed by Howard et al. [38]. To address the second issue, differentiable programming tools provide the computing support to train the ML model in conjunction with other routines/solvers that interact with it. This mode of training is referred to as "in-the-loop" training [39] and has resulted in significant accuracy and stability improvement in coupled physics-ML models [40, 41, 42]. _We extend the "in-the-loop" training paradigm to the closure modeling problem using MFONs._ Although this incurs a computational overhead for the gradient computations during the backpropagation, we argue that exposing the MFON model to its own predictions during the training phase is beneficial for long forecast lead times. It is particularly important to understand (1) how the inaccuracies of the low-fidelity GPOD model feed back into the estimated correction (high-fidelity) terms, and (2) how the uncertainty in ML outputs at one step propagates through the GPOD model and the ML model itself during the following time steps. In Section 2, we consider a class of dynamical systems governed by unsteady partial differential equations (PDEs) along with their PROM formulation. We also show the need for the correction term to account for the modal truncation (coarsening) of the system. In Section 3, we describe the MFON framework and its adaptation for the closure modeling problem in PROMs. Then, we dedicate Section 4 to differentiate between the traditional "offline" training of multifidelity DeepONets and its "in-the-loop" version. 
Numerical experiments using prototypical flow problems are given in Section 5. Finally, concluding remarks and ideas for future work are provided in Section 6. ## 2 Problem formulation We consider a family of dynamical systems governed by PDEs as follows: \[\frac{\partial u}{\partial t}=\mathcal{N}(u,\frac{\partial u}{\partial x}, \frac{\partial^{2}u}{\partial x^{2}},\dots;\mu), \tag{1}\] where \(u(x,t):\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}^{r}\) is the solution, \(x\in\mathbb{R}^{d}\) is the spatial dimension (e.g., \(d\in\{1,2,3\}\)), \(t\in\mathbb{R}\) is the time, and \(\mu\in\mathbb{R}^{p}\) denotes the model's parameters. Equation (1) is often solved numerically by defining a grid and applying one of the classical methods for solving PDEs (e.g., finite difference methods, finite volume methods, finite element method, spectral methods, etc.). This results in a semi-discretized system of equations for the solution state vector \(\mathbf{u}\in\mathbb{R}^{N}\) as follows: \[\frac{d\mathbf{u}}{dt}=F(\mathbf{u};\mu), \tag{2}\] where \(F:\mathbb{R}^{N}\times\mathbb{R}^{p}\rightarrow\mathbb{R}^{N}\) represents the system's dynamics and \(N\) is the number of degrees of freedom (e.g., number of grid points). For complex systems, the number of grid points, \(N\), required to accurately resolve the underlying dynamics is usually very large. The associated computational costs make it unfeasible to embed such methods in multi-query applications, where the system is solved repetitively at different times and/or parameter values. Therefore, alternative methods to efficiently _approximate_ the solution \(\mathbf{u}\) are sought-after. ### Reduced order modeling PROMs have gained popularity in physical science disciplines (e.g., fluid dynamics). These methods are based on the Galerkin ansatz where the solution is defined as a linear superposition of a finite set of basis functions as follows: \[\mathbf{u}(t)\approx\mathbf{u}^{\text{ROM}}(t):=\sum_{i=1}^{R}a_{i}(t)\phi_{i}, \tag{3}\] where \(\{\phi_{i}\}_{i=1}^{R}\) denote the basis functions (or spatial modes) and \(\{a_{i}\}_{i=1}^{R}\) are the accompanying modal coefficients. POD has been the main driver for defining optimal sets of basis functions over the last few decades. The POD algorithm starts with an ensemble of system's realizations at different times as follows: \[\mathcal{U}:=\{\mathbf{u}(t_{1}),\mathbf{u}(t_{2}),\dots\mathbf{u}(t_{M})\}, \tag{4}\] where \(\mathbf{u}(t_{n})\) is a representation of the solution state vector at time \(t_{n}\) that can be obtained from experimental measurements and more commonly from the high fidelity solution of Eq. (2), denoted as full order model (FOM) hereafter. For parametric systems, this ensemble can be enriched with snapshot data at different parameter values as follows: \[\mathcal{U}:=\{\mathcal{U}^{\mu_{1}},\mathcal{U}^{\mu_{2}},\dots,\mathcal{U }^{\mu_{P}}\}. \tag{5}\] Equation (3) can be further equipped with an affine transformation as follows: \[\mathbf{u}^{\text{ROM}}(t)=\bar{\mathbf{u}}+\sum_{i=1}^{R}a_{i}(t)\phi_{i}, \tag{6}\] where \(\bar{\mathbf{u}}\) is a reference mode, usually taken as the ensemble-average. Therefore, the ensemble of shifted snapshots \(\tilde{\mathcal{U}}\) is constructed from \(\widetilde{\mathbf{u}}(t)=\mathbf{u}(t)-\bar{\mathbf{u}}\). A correlation matrix can be defined using \(\tilde{\mathcal{U}}\) and an eigenvalue decomposition reveals hierarchically-sorted basis functions to approximate \(\mathbf{u}(t)\). 
Equivalently, a singular value decomposition of \(\tilde{\mathcal{U}}\) can be effected as follows: \[\tilde{\mathcal{U}}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\mathsf{T}}, \tag{7}\] where \(\mathbf{U}\) and \(\mathbf{V}\) are the matrices of left and right singular vectors, respectively, and \(\mathbf{\Sigma}\) is a diagonal matrix of the associated singular values. The first \(R\) columns of \(\mathbf{U}\) are used to define the POD basis functions \(\{\phi_{i}\}_{i=1}^{R}\). The _true_ POD coefficients, which correspond to the best low-rank representation of \(\mathbf{u}(t)\), can be computed as follows: \[\mathbf{a}(t_{n})=\mathbf{\Phi}^{\mathsf{T}}(\mathbf{u}(t_{n})-\bar{\mathbf{u}}). \tag{8}\] However, Eq. (8) requires access to the FOM solution \(\mathbf{u}\), which is not possible in practice when the solution is queried at different times/parameters. Thus, we need to derive a model for the time-varying modal coefficients, \(\mathbf{a}(t)=[a_{1}(t),a_{2}(t),\ldots,a_{R}(t)]^{\mathsf{T}}\), without solving Eq. (2). This can be obtained by the orthogonal projection of the FOM operators in Eq. (2) onto the POD basis functions \(\mathbf{\Phi}=[\phi_{1},\phi_{2},\ldots,\phi_{R}]\), a process known as Galerkin projection. This results in a Galerkin POD (GPOD) model as follows: \[\dot{\mathbf{a}}=\mathbf{\Phi}^{\mathsf{T}}F(\bar{\mathbf{u}}+\mathbf{\Phi}\mathbf{a};\mu). \tag{9}\] The GPOD predictions can be computed by numerically integrating Eq. (9) (e.g., using Runge-Kutta methods). Thus, the sequence of GPOD model predictions can be written as \[\overset{\text{GPOD}}{\mathbf{a}}(t_{n+1})=G(\overset{\text{GPOD}}{\mathbf{a}}(t_{n});\mu),\qquad\forall n\geq 0, \tag{10}\] where \(G(\cdot;\cdot)\) is the GPOD one-step mapping (flow map) from \(t_{n}\) to \(t_{n+1}:=t_{n}+\Delta t\). ### Closure modeling Due to the truncation of the POD basis functions (i.e., \(R\ll N\)), the accuracy of Eq. (9) for representing the dynamics of the resolved POD modes can be compromised. Thus, the GPOD predictions often deviate significantly from the optimal values in Eq. (8), and closure models are required to reduce this gap. In general, closure terms can appear as a correction to the right-hand side of Eq. (9) at the continuous-time level, which is also known as the memory. Instead, we follow a predictor-corrector approach to compensate for the effect of truncated scales as follows: \[\begin{split}\widehat{\mathbf{a}}(t_{n+1})&=G(\mathbf{a}(t_{n});\mu),\\ \mathbf{a}(t_{n+1})&=\widehat{\mathbf{a}}(t_{n+1})+\mathbf{c}(t_{n+1}).\end{split} \tag{11}\] A substantial body of work in the literature has been devoted to modeling the closure term \(\mathbf{c}\). ML approaches, and particularly DNNs, have been utilized to develop closure models from data. A recent survey of such methods can be found in [13]. However, DNNs' capabilities are largely restricted to learning functions, which requires re-training the neural network for new parameter values and/or initial (or boundary) conditions. Instead, operator learning methods have been recently proposed to allow the training/testing of neural networks with varying settings (e.g., different parameters) [35]. In the following section, we briefly describe the DeepONet architecture and its multifidelity extension. Then, we address the closure modeling problem as a multifidelity learning task. ## 3 Multifidelity operator networks for closure modeling Lu et al. [35] proposed the DeepONet framework to learn operators between infinite-dimensional spaces. 
Inspired by the universal approximation theorem for operators by Chen and Chen [43], DeepONet approximates the action of an operator \(\mathcal{Q}(\mathbf{g})(\mathbf{y})\) as the inner product of the outputs of two neural networks, namely the branch net and trunk net. Although variations of DeepONet have been presented in [35], the _unstacked_ implementation is widely considered the "standard" DeepONet where the branch net takes the input function \(\mathbf{g}\) sampled at points \(\{x_{i}\}_{i=1}^{S}\) and produces the output \(\{B_{i}\}_{i=1}^{L}\). On the other hand, the coordinates and/or parameters, denoted as \(\mathbf{y}\), go to the trunk net whose output is denoted as \(\{T_{i}\}_{i=1}^{L}\). Thus, the branch and trunk nets are trained simultaneously and operator \(\mathcal{Q}(\mathbf{g})(\mathbf{y})\) is approximated as follows: \[\mathcal{Q}(\mathbf{g})(\mathbf{y})\approx\mathcal{Q}^{\Theta}(\mathbf{g})( \mathbf{y}):=\sum_{i=1}^{L}B_{i}T_{i}, \tag{12}\] where \(\Theta\) represents the aggregated parameters (weights) of the branch and trunk nets. We note that the only restriction is that the input function \(\mathbf{g}\) is sampled at the same locations \(\{x_{i}\}_{i=1}^{S}\). However, there are no constraints on the output query points \(\mathbf{y}\) nor the input \(\mathbf{g}\) itself. To achieve such powerful approximation and generalization capabilities, DeepONet usually requires larger amounts of data sets and the training is more expensive than a typical DNN. Modified architectures and training algorithms have been proposed to mitigate this burden [44], including the use of physics-based loss [45] similar to physics-informed neural networks (PINNs) [46]. Multifidelity learning algorithms leverage the existence of data and models with disparate levels of fidelities to build a framework that outperforms those who could be built using single fidelity data/models. In the context of DeepONet, two recent developments include the MFONs proposed by Howard et al. [38] and Lu et al. [47], where bifidelity DeepONets are trained simultaneously using the high fidelity and low fidelity data sets. Lu et al. [47] proposed various architectures to combine the low fidelity operator network (LFON) and the high fidelity operator network (HFON), including learning the residual of the LFON and using LFON prediction to augment the inputs of the HFON. On the other hand, Howard et al. [38] proposed the use of three composite blocks, starting with the LFON, followed by a linear and nonlinear sub-networks to learn the linear and nonlinear correlations between low fidelity and high fidelity data. In addition, Howard et al. [38] demonstrated that a non-composite MFON can benefit from low-fidelity physical models to replace the LFON block. Along similar lines, De et al. [48] used DeepONet to learn the discrepancy between the true system's response and low fidelity model's predictions. Inspired by the non-composite MFON setup from [38] and the residual learning approach in [47, 48], we view the PROM closure modeling in Section 2.2 as a multifidelity learning problem. We hypothesize that there is a correlation between the low fidelity model predictions (for the resolved scales) and the contribution of the truncated scales onto the resolved scales (the closure terms). We use the GPOD model \(G(\cdot;\cdot)\) to define the low fidelity physical model and HFON learns the residual terms to correct the PROM predictions. 
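A minimal unstacked DeepONet realizing Eq. (12) can be sketched in PyTorch as follows; the latent dimension, widths and depth are placeholders (they roughly mirror the small networks reported later for the Burgers case), and the class is a generic sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Unstacked DeepONet: the branch encodes the sampled input function g(x_1..x_S),
    the trunk encodes the query y, and the output is the inner product of the two."""
    def __init__(self, n_sensors, y_dim, latent=10, width=10, depth=3):
        super().__init__()
        def mlp(d_in):
            layers, d = [], d_in
            for _ in range(depth):
                layers += [nn.Linear(d, width), nn.Tanh()]
                d = width
            return nn.Sequential(*layers, nn.Linear(d, latent))
        self.branch, self.trunk = mlp(n_sensors), mlp(y_dim)

    def forward(self, g, y):
        B = self.branch(g)               # (batch, latent) branch features B_i
        T = self.trunk(y)                # (batch, latent) trunk features T_i
        return (B * T).sum(dim=-1)       # Eq. (12): sum_i B_i T_i
```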
In particular, we use the POD coefficients at current time, \(\mathbf{a}(t_{n})\), and the low-fidelity predictions, \(G(\mathbf{a}(t_{n});\mu))\) as inputs to the branch net. This also introduces a time-memory effect similar to the non-Markovian terms in the Mori-Zwanzig formalism. The model's parameter \(\mu\) and the mode index \(k\in\{1,2,\dots,R\}\) are fed to the trunk net. Thus, the MFON approximation of the closure term can be written as follows: \[\mathbf{a}(t_{n+1})=\underbrace{G(\mathbf{a}(t_{n});\mu)}_{\text{ low fidelity model}}+\underbrace{\mathcal{Q}^{\Theta}\overbrace{\left([\mathbf{a}(t_{n})^{\mathsf{T}},G( \mathbf{a}(t_{n});\mu)^{\mathsf{T}}]^{\mathsf{T}}\right)}^{\text{branch net}} \overbrace{\left([\mu,k]^{\mathsf{T}}\right)}^{\text{trunk net}}}_{\text{ residual}}. \tag{13}\] While we are not using time explicitly as an input variable to MFON in Eq. (13), we are learning the closure values as a function of the modal coefficients which themselves depend on time. Thus, at different times, the predicted corrections are different. It is also worth noting that the proposed MFON framework is not restricted to GPOD as the low fidelity model. For example, Grimberg et al. [49] identified culprits in the traditional Galerkin framework and recommended the use of Petrov-Galerkin framework instead. In particular, the least-squares Petrov-Galerkin (LSPG) has shown outstanding performance in many challenging problems [50, 51] and can be an viable choice for the low fidelity model in MFON. ## 4 Training of multifidelity operator networks In this section, we differentiate between two paradigms for training MFONs. ML-based emulators are often trained as a standalone component where the training data are sampled from high fidelity simulations or experiments. This mode of training is denoted as "offline" training and has been predominantly adopted in scientific ML (SciML) for computational cost considerations. However, recent advances in differentiable programming and automatic differentiation (AD) tools have led to another mode of training, called "online" training. In particular, AD allows the computation of the gradients of arbitrary models' outputs with respect to their inputs (or parameters) so that the gradient-based optimizer can backpropagate through both the ML and physical models in an end-to-end setup [52, 53]. For time-dependent problems, we refer to the online training as "in-the-loop" training where both the physical and ML models are embedded and coupled in the time-integration loop. ### Offline training Given the POD basis functions \(\mathbf{\Phi}\) and the reference field \(\bar{\mathbf{u}}\), the GPOD model \(G(\cdot;\cdot)\) can be constructed (examples can be found in Section 5). Two consecutive snapshots at time \(t_{n}\) and time \(t_{n+1}\) can be used to compute the _true_ POD coefficients as follows: \[\mathbf{a}(t_{n}) =\mathbf{\Phi}^{\mathsf{T}}(\mathbf{u}(t_{n})-\bar{\mathbf{u}}), \tag{14}\] \[\mathbf{a}(t_{n+1}) =\mathbf{\Phi}^{\mathsf{T}}(\mathbf{u}(t_{n+1})-\bar{\mathbf{u}}).\] Meanwhile, the _low-fidelity_ prediction at \(t_{n+1}\), given the _true_ coefficients at time \(t_{n}\) can be written as follows: \[\widehat{\mathbf{a}}(t_{n+1})=G(\mathbf{a}(t_{n});\mu), \tag{15}\] and the correction can be defined as \(\mathbf{c}(t_{n+1})=\mathbf{a}(t_{n+1})-\widehat{\mathbf{a}}(t_{n+1})\). 
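Putting Eqs. (14)-(15) together, one offline training sample could be assembled as in the sketch below, where `Phi` and `u_bar` are the POD basis and reference mode obtained from the snapshot SVD, and `gpod_step` is a placeholder for the GPOD flow map \(G\); all names are introduced here for illustration.

```python
import numpy as np

def offline_sample(u_n, u_np1, u_bar, Phi, gpod_step, mu):
    # True coefficients from two consecutive FOM snapshots, Eq. (14).
    a_n = Phi.T @ (u_n - u_bar)
    a_np1 = Phi.T @ (u_np1 - u_bar)
    # Low-fidelity (GPOD) prediction started from the true state, Eq. (15).
    a_hat = gpod_step(a_n, mu)
    # Closure/correction target the operator network has to learn.
    c = a_np1 - a_hat
    return a_n, a_hat, c
```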
Therefore, the training samples can be formed from \(\{\mathbf{a}(t_{n}),\mathbf{a}(t_{n+1})\}\), where the branch net takes \(\mathbf{a}(t_{n})\) and \(\widehat{\mathbf{a}}(t_{n+1})=G(\mathbf{a}(t_{n});\mu)\) as input and the trunk net is fed with the parameters \(\mu\) and the mode index \(\{k\}_{k=1}^{k}\). Finally, the output of MFON can be compared against \(\mathbf{a}(t_{n+1})\) using the following \(\ell_{2}\) loss function: \[l(t_{n},t_{n+1})=\bigg{\|}\mathbf{a}(t_{n+1})-\widehat{\mathbf{a}}(t_{n+1})- \mathcal{Q}^{\Theta}\bigg{(}[\mathbf{a}(t_{n})^{\mathsf{T}},\widehat{\mathbf{ a}}(t_{n+1})^{\mathsf{T}}]^{\mathsf{T}}\bigg{)}\bigg{(}[\mu,k]^{\mathsf{T}} \bigg{)}\bigg{\|}_{2}^{2}, \tag{16}\] where the parameters \(\Theta\) can be optimized using a (stochastic) gradient descent-based optimizer (e.g., Adam). This mode of _fully_ supervised learning is denoted as _offline training_. The underlying assumption here, which is often overlooked, is that the _true_ coefficients are available at current time \(t_{n}\). Therefore, the ML model aims to learn a single-step correction. The rationale behind that stems from the assumption that the ML-based correction would always steer the _low fidelity_ predictions to match the _high fidelity_ solution. However, this is not often the case and the ML prediction can, at best, be considered an approximation of the true residual. Therefore, the sequence of the MFON predictions can be written as follows: \[\begin{array}{l}\overset{\text{MFON}}{\widehat{\mathbf{a}}}(t_{n+1})=G( \mathbf{a}(t_{n});\mu)\\ \overset{\text{MFON}}{\mathbf{a}}(t_{n+1})=\overset{\text{MFON}}{\widehat{ \mathbf{a}}}(t_{n+1})+\widehat{\mathbf{c}}(t_{n+1}),\end{array} \tag{17}\] where \(\widehat{\mathbf{c}}\) denotes the MFON _estimate_ of the closure term \(\mathbf{c}\). This discrepancy between the training and deployment conditions of MFON might cause a severe accumulation of the error, eventually leading to significantly inaccurate predictions. In addition, several studies have reported unstable behavior of the ML predictions after they are placed into the operation cycles despite their superior performance when tested separately (e.g., for single time steps). ### In-the-loop training In order to create consistent training and testing environments, we leverage the differentiable programming capabilities of modern SciML tools (e.g., TensorFlow, PyTorch, and JAX) to create a feedback loop between the low fidelity and high fidelity models across multiple time steps. Instead of always feeding the MFON with _true_ data points at the current time, we use the predictions of MFON to define the input at next time step. This can be also thought of as a way to enforce _temporal_ causality. Figure 1 shows a schematic representation comparing "offline training" where true coefficients are given at the input against "in-the-loop training" where MFON predictions at one time step are fed back as input for the next step. Although in-the-loop training shares some features with recurrent neural networks, the key benefit of in-the-loop training lies in its combination with the physics-based solver (GPOD in our case) in a way that allows their interactions during the _training_ phase rather than using the solver only to pre-prepare the training data, e.g., through I/O operations. Figure 1: A schematic diagram for conventional _offline training_ (left) and _in-the-loop training_ (right) of DeepONet in a multifidelity setting to correct Galerkin POD models predictions. 
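In code, the single-step ("offline") loss of Eq. (16) could look like the sketch below; `predict_closure` is a helper name introduced here that queries the operator network once per retained mode with the trunk input \([\mu,k]\), and a mean squared error stands in for the squared \(\ell_{2}\) norm.

```python
import torch

def predict_closure(deeponet, a_n, a_hat, mu):
    # Branch input: [a(t_n), G(a(t_n); mu)]; trunk input: (mu, mode index k).
    R = a_n.shape[0]
    branch_in = torch.cat([a_n, a_hat]).unsqueeze(0).repeat(R, 1)
    trunk_in = torch.stack([torch.full((R,), float(mu)),
                            torch.arange(1, R + 1, dtype=torch.float32)], dim=1)
    return deeponet(branch_in, trunk_in)          # one correction per mode

def offline_loss(deeponet, a_n, a_hat, a_np1, mu):
    # Eq. (16): the network only ever sees *true* coefficients at its input.
    pred = predict_closure(deeponet, a_n, a_hat, mu)
    return torch.mean((a_np1 - a_hat - pred) ** 2)
```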
For the offline training, the MFON is fed with _true_ values at the inputs. However, in-the-loop training allows the MFON to see the _effect_ of its previous output as input in the next step during the training process and thus accounts for the long-term interplay between the DeepONet and the GPOD model. Ideally, MFON should be trained given only the initial condition at time \(t_{0}\) and the output of each time step is looped back as input till the final time \(t_{M}\). However, this is not often feasible due to the memory and compute costs of backpropagation algorithms as well as numerical precision errors resulting in exploding or vanishing gradients. Instead, we define a time window of \(\tau\) time steps where the _true_ POD coefficients are only given at the first step after which MFON predictions are used recursively as follows: \[\begin{array}{ll}\widehat{\mathbf{a}}(t_{n+1})=G(\mathbf{a}(t_{n});\mu),& \qquad\qquad\begin{array}{l}\text{\small{MFON}}\\ \mathbf{a}(t_{n+1})=\widehat{\mathbf{a}}(t_{n+1})+\widehat{\mathbf{c}}(t_{n+ 1}),\\ \widehat{\mathbf{a}}(t_{n+2})=G(\mathbf{a}(t_{n+1});\mu),&\qquad\mathbf{a}(t_{n+ 2})=\widehat{\mathbf{a}}(t_{n+2})+\widehat{\mathbf{c}}(t_{n+2}),\\ \vdots\\ \widehat{\mathbf{a}}(t_{n+\tau})=G(\mathbf{a}(t_{n+\tau-1});\mu),&\qquad \qquad\begin{array}{l}\text{\small{MFON}}\\ \mathbf{a}(t_{n+\tau})=\widehat{\mathbf{a}}(t_{n+\tau})+\widehat{\mathbf{c}}(t _{n+\tau}).\end{array}\end{array} \tag{18}\] The resulting loss function can be written as follows: \[l(t_{n},t_{n+\tau})=\frac{1}{\tau}\!\sum_{k=1}^{\tau}\!\left\|\!\mathbf{a}(t_{ n+k})-\widehat{\mathbf{a}}(t_{n+k})-\mathcal{Q}^{\Theta}\!\left(\!\left[ \mathbf{a}(t_{n+k-1})^{\mathsf{T}},\widehat{\mathbf{a}}(t_{n+k})^{\mathsf{T }}\right]^{\mathsf{T}}\!\right)\!\right\|_{2}^{2}, \tag{19}\] where \(\mathbf{a}(t_{n})=\mathbf{a}(t_{n})\) and \(\widehat{\mathbf{a}}(t_{n+k})=\widehat{\mathbf{a}}(t_{n+k})\) as given in Eq. (18). We note that the definition in Eq. (19) can be modified using a weighting scheme depending on the problem under consideration, e.g., to give higher importance to earlier predictions. In addition, the framework is extensible to cases where intermediate data points are not available for training MFON. For example, assuming that _true_ data are only available at the end of \(\tau\) time steps, the loss function can be written as follows: \[l(t_{n},t_{n+\tau})=\left\|\mathbf{a}(t_{n+\tau})-\widehat{\mathbf{a}}(t_{n+ \tau})-\mathcal{Q}^{\Theta}\!\left(\!\left[\mathbf{a}(t_{n+\tau-1})^{\mathsf{ T}},\widehat{\mathbf{a}}(t_{n+\tau})^{\mathsf{T}}\right]^{\mathsf{T}}\! \right)\!\right\|_{2}^{2}. \tag{20}\] The unrolled representation of _in-the-loop_ training is shown in Fig. 2 for \(\tau=3\). The idea of in-the-loop training for time-dependent problems can be also related to the recent windowed least-squares approaches (e.g., see [54, 55]), in the sense that the objective is to minimize the trajectory error rather than the instantaneous (single-step) error. However, in windowed PROM approaches, multiple (localized) models are often constructed for different _non-overlapping_ windows. During the prediction, these models are queried sequentially based on the corresponding point in space/time. In contrast, we build a single MFON model that is trained to minimize the solution error for \(\tau\) steps and there is no restriction on where the time window begins. 
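An unrolled ("in-the-loop") loss in the spirit of Eqs. (18)-(19) is sketched below, reusing `predict_closure` from the previous sketch; the essential difference from the offline loss is that the corrected state is fed back as the next input, so gradients propagate through all \(\tau\) steps by automatic differentiation.

```python
import torch

def in_the_loop_loss(deeponet, gpod_step, a_true, mu, tau):
    # a_true: (tau + 1, R) true coefficients over the window; only a_true[0] seeds the rollout.
    a = a_true[0]
    loss = torch.zeros(())
    for k in range(1, tau + 1):
        a_hat = gpod_step(a, mu)                              # low-fidelity predictor
        a = a_hat + predict_closure(deeponet, a, a_hat, mu)   # MFON state, fed forward
        loss = loss + torch.mean((a_true[k] - a) ** 2)        # Eq. (19); keep only k == tau for Eq. (20)
    return loss / tau

# One optimizer step over a sampled window (optimizer, deeponet, gpod_step assumed defined):
#   loss = in_the_loop_loss(deeponet, gpod_step, window, mu, tau=10)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```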
For example, a training batch can include _overlapping_ time windows as follows: \(0\rightarrow\tau\Delta t\), \(\Delta t\rightarrow(\tau+1)\Delta t\), \(2\Delta t\rightarrow(\tau+2)\Delta t\), etc. Even though the MFON is trained to minimize the error for \(\tau\) steps, having different overlapping trajectories in the training batches improves the generalizability of the trained MFON model. Therefore, during the prediction, the MFON model is called recursively as many time steps as needed to even extrapolate to times not seen during the training. Although we only cover the MFON in discrete time setup with constant step size, it can be extended to continuous time to allow adaptive time stepping and sampling schemes. The value of the time window \(\tau\) over which the MFON is trained is typically selected to balance two opposing effects. On one hand, \(\tau\) should be as large as possible to impose higher temporal causality between the predictions of consecutive time steps. This minimizes the error along longer solution trajectories rather than just instantaneous (single-step) corrections. On the other hand, in practice, using large values of \(\tau\) leads to lengthy computational graphs, which makes the computations of gradients (using automatic differentiation) more challenging. In addition, the selection of the optimizer, e.g., first order stochastic gradient descent-based optimizer, plays a role in the selection of \(\tau\). Another important factor is related to the dynamical characteristics of the system itself. For example, for a chaotic system, it might be more challenging to use large values of \(\tau\), in which cases the value of \(\tau\) might be limited to a few Lyapunov times. ## 5 Numerical experiments We demonstrate the MFON framework for closure modeling in multiscale systems using two test problems showing strong convective dynamics. The first problem is the viscous Burgers problem corresponding to an advecting shock wave in one-dimensional (1D) setting. Then, we consider the two-dimensional (2D) vortex merger problem that has been used as a simplified model for many dynamical phenomena in large scale geophysical flows. ### Burgers problem The 1D viscous Burgers problem is defined using the following equation: \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\frac{1}{\text{ Re}}\frac{\partial^{2}u}{\partial x^{2}}, \tag{21}\] where \(u(x,t):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) is the velocity field and Re represents the Reynolds number (i.e., the ratio of inertial forces to viscous forces). The GPOD for Eq. (21) can be written as follows: \[\dot{\mathbf{a}}=\mathcal{C}+\mathcal{L}\mathbf{a}+\mathbf{a}^{\mathsf{T}} \mathcal{N}\mathbf{a}, \tag{22}\] where \(\mathcal{C}\in\mathbb{R}^{R}\), \(\mathcal{L}\in\mathbb{R}^{R\times R}\), and \(\mathcal{N}\in\mathbb{R}^{R\times R\times R}\) denote the constant, linear, and nonlinear terms as follows: \[\begin{split}[\mathcal{C}]_{i}&=\bigg{(}\phi_{i},- \bar{u}\frac{\partial\bar{u}}{\partial x}+\frac{1}{\text{Re}}\frac{\partial^ {2}\bar{u}}{\partial x^{2}}\bigg{)},\\ [\mathcal{L}]_{ij}&=\bigg{(}\phi_{i},-\bar{u}\frac{ \partial\phi_{j}}{\partial x}-\phi_{j}\frac{\partial\bar{u}}{\partial x}+ \frac{1}{\text{Re}}\frac{\partial^{2}\phi_{j}}{\partial x^{2}}\bigg{)},\\ [\mathcal{N}]_{ijk}&=\bigg{(}\phi_{i},-\phi_{j} \frac{\partial\phi_{k}}{\partial x}\bigg{)},\end{split} \tag{23}\] where \(\big{(}\cdot,\cdot\big{)}\) denotes an inner product and \(\bar{u}\) is the mean velocity field (see Eq. (6)). 
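Once the operators of Eq. (23) are precomputed, evaluating the tensorial GPOD right-hand side of Eq. (22) reduces to two contractions; a NumPy sketch is shown below, with the operators assumed given and any explicit Runge-Kutta scheme (classical RK4 is used here purely as an example) usable on top of it.

```python
import numpy as np

def gpod_rhs(a, C, L, N):
    # Eq. (22): da/dt = C + L a + a^T N a, with C: (R,), L: (R, R), N: (R, R, R).
    return C + L @ a + np.einsum("j,ijk,k->i", a, N, a)

def step_rk4(a, dt, C, L, N):
    # One explicit RK4 step for the reduced system (illustrative integrator choice).
    k1 = gpod_rhs(a, C, L, N)
    k2 = gpod_rhs(a + 0.5 * dt * k1, C, L, N)
    k3 = gpod_rhs(a + 0.5 * dt * k2, C, L, N)
    k4 = gpod_rhs(a + dt * k3, C, L, N)
    return a + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
```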
The form of Eq. (22) is often denoted as a tensorial GPOD and it takes advantage of the polynomial nonlinearity in governing equations to precompute Figure 2: An unrolled representation of Fig. 1 for _in-the-loop training_ of DeepONet over \(\tau=3\) time steps. In top panel, the intermediate values of true POD coefficients are used to compute increments of the loss while only the final values after \(\tau\) time steps are used to compute the loss in the bottom panel. the GPOD model terms. As mentioned before, the proposed MFON framework is not restricted to a specific form of the low fidelity model and can easily incorporate cases with other types of nonlinearity. However, in those cases, hyper-reduction techniques [56, 57, 58] should be implemented to reduce the computational cost of the low fidelity model. We consider a domain of length \(1\) and impose zero Dirichlet boundary conditions (i.e., \(u(0,t)=u(1,t)=0\)) and define a family of initial conditions corresponding to a single square pulse of height \(1\) and parameterized by the pulse width \(w_{p}\) as follows: \[u(x,0)=\begin{cases}1,&\text{if}\quad x\in[0,w_{p}],\\ 0,&\text{if}\quad x\in(w_{p},1].\end{cases} \tag{24}\] In particular, our training data set corresponds to initial conditions with \(w_{p}\in[0.25,0.75]\) with increment of \(0.05\). Also, training data are generated at \(\text{Re}\in[2500,10000]\) with increment of \(2500\). For the FOM solution, we utilize a family of compact finite difference schemes for spatial discretization and the third order total variation diminishing Runge-Kutta (TVD-RK3) scheme for temporal integration. We divide the spatial domain into \(4096\) equally spaced intervals and use a fixed time step of \(\Delta t_{\text{FOM}}=10^{-4}\). Snapshots of velocity field are stored every \(100\) time steps for \(t\in[0,1]\) to build the training data set. Figure 3 shows the time evolution of the resulting wave at \(\text{Re}=10,000\) starting from three different pulse widths. For the GPOD models, we use a time step of \(\Delta t_{\text{ROM}}=10^{-2}\) that is \(100\) times larger than \(\Delta t_{\text{FOM}}\). For the design of the DeepONet architecture as well as selection of optimizer learning rate and number of iterations, we followed a manual trail-and-error approach since the dimensionality of the problem is relatively small. In particular, we use identical branch and trunk network architectures, except for the width of the input layer in each of them. For the branch net, the input layer width is \(2R\) to accommodate the POD coefficients at the current time and the low fidelity predictions at the next time step. On the other hand, the width of the trunk input layer is 2, corresponding to Reynolds number and the mode index. Three hidden feedforward layers with \(10\) neurons each are used for both the branch and trunk nets. For the activation function, we found that the hyperbolic tangent activation performs better than the rectified linear unit. For the optimizer, we use Adam with a decaying learning rate starting from \(10^{-3}\). Howard et al. [38] include two DeepONets to learn the linear and nonlinear correlations between the low fidelity and high fidelity data. However, due to the _in-the-loop_ training framework, we find that the linear DeepONet causes the gradient descent optimizer to blow up due to the repetitive multiplications of weight matrices resulting in exploding gradients. 
Although such performance can be improved by enforcing stability conditions (e.g., using matrix decomposition [59]), we only use the nonlinear DeepONet with the hyperbolic tangent activation function. #### 5.1.1 Interpolative regime First, we test the MFON for closure modeling at parameter values that fall in the interpolation regime compared to the training data sets. In particular, we consider the Burgers problem with an initial condition defined by a pulse width of \(0.675\) at a Reynolds number of \(4000\). We retain 10 modes to build the GPOD model, corresponding to \(\sim 95\%\) of the total energy in the system quantified using the relative information content (RIC), as shown in Fig. 4 and defined as follows: \[\text{RIC}(R)=\frac{\sum_{i=1}^{R}\sigma_{i}^{2}}{\sum_{i=1}^{M}\sigma_{i}^{2} }\times 100. \tag{25}\] In addition, we compare the performance of MFON with offline training versus in-the-loop training. For the 1D Burgers problem, we found that both approaches of in-the-loop training (see Fig. 2) give similar results. Therefore, we consider the more challenging case where training data points are available only at the end of \(\tau\) steps (i.e., the bottom panel in Fig. 2). Figure 3: Velocity field at \(\text{Re}=10,000\) starting with an initial condition of a pulse with a width \(w_{p}\in\{0.25,0.50,0.75\}\). Figure 5 depicts the predictions of the first and last POD coefficients. "FOM Projection" curves refer to the projection of the FOM snapshots onto the POD basis as defined in Eq. (8) while "Low Fidelity" refers to the predictions of the GPOD model without adding corrections as in Eq. (10). It is clear that the GPOD predictions deviate from their true values, especially for the high-index modes (e.g., \(a_{10}\)) that are closer to the truncated modes. This observation is consistent with the locality of modal interaction and energy transfer that has been motivating the development of variational multiscale closure techniques [60]. The MFON results are shown for different values of \(\tau\), which distinguish the proposed in-the-loop training procedure. We also highlight that \(\tau=1\) is equivalent to offline training, as we shall refer to it from now on. Figure 4: Relative information content for the Burgers problem training data set. Figure 5: The evolution of the \(1^{\text{st}}\) and \(10^{\text{th}}\) POD modal coefficients for the Burgers problem with an initial pulse width of \(w_{p}=0.675\) at \(\text{Re}=4000\), corresponding to an interpolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). Figure 6 shows the reconstructed velocity field at \(t=0.5\) and \(t=1.0\), obtained by plugging the coefficients predicted by different models into Eq. (6), while FOM denotes the solution of the Burgers equation, Eq. (21), using finite difference schemes. The relative error with respect to \(\mathbf{u}^{\text{FOM}}\) is presented in Fig. 7 where we can see that low fidelity GPOD predictions can lead to more than \(30\%\) error in the velocity field. On the other hand, MFON accuracy levels are comparable to the FOM projection, which represents the _maximum_ reconstruction quality that can be obtained for the retained POD modes. We also notice that _in-the-loop_ training yields slightly better MFON models compared to offline training, especially toward the end of the time interval.
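To make the in-the-loop training procedure of Fig. 2 concrete, the following is a minimal JAX sketch (with hypothetical function and parameter names) that unrolls the MFON over \(\tau\) steps, feeding each corrected state back as the input of the next step; the loss corresponds to the bottom panel of Fig. 2, where a true data point is available only at the end of the window. Here `deeponet_apply` stands in for the branch/trunk DeepONet evaluation described above, `gpod_step` for one \(\Delta t_{\text{ROM}}\) update of the low fidelity GPOD model, and the network output is assumed to be an additive correction to the low fidelity prediction.

```python
import jax
import jax.numpy as jnp

def mfon_step(params, a_n, mu, deeponet_apply, gpod_step):
    """One MFON step: a low fidelity GPOD prediction followed by the learned correction.
    `deeponet_apply` takes the stacked branch input [a_n, a_lf] (width 2R) and the trunk
    input [mu, k] for each retained mode index k (mu being the Reynolds number here)."""
    a_lf = gpod_step(a_n)                        # low fidelity estimate of the next state
    branch_in = jnp.concatenate([a_n, a_lf])
    ks = jnp.arange(1, a_n.shape[0] + 1, dtype=a_n.dtype)
    correction = jax.vmap(lambda k: deeponet_apply(params, branch_in, jnp.array([mu, k])))(ks)
    return a_lf + correction                     # corrected (multifidelity) state

def rollout_loss(params, a0, a_true_end, mu, tau, deeponet_apply, gpod_step):
    """In-the-loop loss over a window of tau steps, supervised only at the final step."""
    a = a0
    for _ in range(tau):                         # unrolled feedback loop through the MFON
        a = mfon_step(params, a, mu, deeponet_apply, gpod_step)
    return jnp.mean((a - a_true_end) ** 2)

# Gradients through the entire unrolled window follow from automatic differentiation, e.g.,
# grads = jax.grad(rollout_loss)(params, a0, a_true_end, mu, tau, deeponet_apply, gpod_step)
```

Offline training corresponds to calling `rollout_loss` with \(\tau=1\), while the variant in the top panel of Fig. 2 would simply accumulate the mismatch at every intermediate step of the loop.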
#### 5.1.2 Extrapolative regime Next, we test the performance of MFON under extrapolation conditions. In particular, we use a Reynolds number of \(15000\), which is \(1.5\) times the largest value in the training data sets, and we also consider a larger pulse width of \(0.85\) to define the initial condition. More importantly, we explore the performance of MFON for larger time intervals. While the training data correspond to \(t\in[0,1]\), we perform longer time predictions up to \(t=2\). Figure 8 shows the predictions of the POD coefficients for the first and last modes. Although MFON with offline training performs well up to \(t=1\), its accuracy significantly deteriorates for longer time predictions. On the other hand, MFON with in-the-loop training gives more accurate predictions with \(\tau=5\) and \(\tau=10\). However, \(\tau=20\) results in significantly longer computational graphs for the backpropagation where exploding and vanishing gradient issues arise. Figure 6: The predicted velocity field at \(t=0.5\) (top) and \(t=1.0\) (bottom) for the Burgers problem with an initial pulse width of \(w_{p}=0.675\) at \(\text{Re}=4000\), corresponding to an interpolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). Figure 7: The relative error in the predicted velocity field as a function of time for the Burgers problem with an initial pulse width of \(w_{p}=0.675\) at \(\text{Re}=4000\), corresponding to an interpolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). The space-time contour plots for the reconstructed velocity fields are shown in Fig. 9 and the corresponding relative errors are given in Fig. 10. We see that GPOD can lead to prediction errors of around \(100\%\). The accuracy levels of MFON with offline training are comparable to the FOM projection up to \(t=1\). Nonetheless, a drastic increase in the error occurs for temporal extrapolation. On the other hand, imposing temporal causality by in-the-loop training improves the predictability of MFON models with \(\tau=5\) and \(\tau=10\). It is also clear that there is an optimal value for \(\tau\) with respect to the trade-off between the gain from introducing the feedback loop in the training and the cost of computing the gradients with longer computational graphs needed for backpropagation. We reiterate that the value of \(\tau\) corresponds to the training phase only. During testing, it is assumed that only the initial condition at \(t=0\) is known. In addition, we compare the performance of MFON against a single-fidelity DeepONet to predict the evolution of the POD coefficients without taking any information from the low fidelity GPOD model. We consider two variations of such a single-fidelity time integration model as shown in Fig. 11. The first one corresponds to an autoregressive model that directly evolves the time-dependent coefficients as follows: \[\begin{array}{c}\text{\tiny Autoregressive}\\ a_{k}(t_{n+1})=\mathcal{Q}^{\Theta}\big{(}\mathbf{a}(t_{n})\big{)}\big{(}[\mu,k] ^{\mathsf{T}}\big{)}.\end{array} \tag{26}\] The predictions in Fig. 9 and the corresponding errors in Fig. 10 reveal that such a single-fidelity (physics-agnostic) model does not perform well.
In order to improve this performance, we consider an incremental DeepONet integrator as follows: \[\begin{array}{c}\text{\tiny Incremental}\\ a_{k}(t_{n+1})=a_{k}(t_{n})+\mathcal{Q}^{\Theta}\big{(}\mathbf{a}(t_{n})\big{)} \big{(}[\mu,k]^{\mathsf{T}}\big{)}.\end{array} \tag{27}\] Figures 9 and 10 show substantial improvement in predictions using the incremental integrator compared to the autoregressive one. However, the performance drops beyond \(t=1\), corresponding to extrapolation in time. We highlight that the incremental time integrator can be considered as a special case of multifidelity modeling, where the current state (at time \(t_{n}\)) represents the low fidelity estimate of the future state (at \(t_{n+1}\)). Nonetheless, using a physics-based GPOD as a low fidelity model in the MFON framework significantly improves the predictive capabilities in unseen regimes, compared to the fully data-driven models in Fig. 11. Figure 8: The evolution of the \(1^{\text{st}}\) and \(10^{\text{th}}\) POD modal coefficients up to \(t=2\) for the Burgers problem with an initial pulse width of \(w_{p}=0.85\) at \(\text{Re}=15000\), corresponding to an extrapolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). Since MFON corrections are applied in the POD subspace, the lower bound for the MFON error corresponds to the projection error of the POD basis. In order to further reduce this error, either an increased number of POD basis functions should be retained in the PROM or an alternative set of basis functions should be adopted. For the latter, adaptive, localized, or custom-made basis functions can provide viable alternatives, but this is outside the scope of the present study. In order to assess the effect of varying the number of POD modes, Fig. 12 depicts the performance of MFON with \(R=5\) and \(R=20\), compared to the baseline of \(R=10\) in the current study. We observe that the MFON error is close to the FOM projection error in the different cases, implying a reduction in the closure error compared to the low fidelity GPOD predictions with the same number of POD modes. On the other hand, the upper bound for the MFON error depends on several factors. First, the specific architecture (e.g., number of layers, neurons, and activation functions), in addition to the optimizer options, plays a significant role. In this regard, there have been recent theoretical studies of the error estimates of DeepONet [61], which is the key component in MFON. However, the extension of such analysis to multifidelity DeepONet settings is still missing. Second, the accumulation of time-integration error between different time steps plays another role, which depends on the order of the time integration scheme. Figure 9: The predicted velocity field for the Burgers problem with an initial pulse width of \(w_{p}=0.85\) at \(\text{Re}=15000\) using different operator learning approaches. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). Figure 11: Single fidelity DeepONet time integrators for the POD modal coefficients. Figure 12: The predicted velocity field and the relative error for the Burgers problem with an initial pulse width of \(w_{p}=0.85\) at \(\text{Re}=15000\), with different numbers of POD modes for the reduced order model.
Figure 10: The relative error in the predicted velocity field as a function of time for the Burgers problem with an initial pulse width of \(w_{p}=0.85\) at \(\text{Re}=15000\) with different modeling approaches. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). The computational cost of MFON steps compared to GPOD is reported in Fig. 13 for different test cases at different values of the Reynolds number and different numbers of POD modes. We find that the computational overhead (in terms of compute time) of MFON over GPOD is about \(5\%\), which is minimal. We have benefited from JAX capabilities, including just-in-time (JIT) compilation and GPU porting, to accelerate both the GPOD solver and the MFON computations. ### Vortex merger problem Our second test problem is the 2D vortex merger problem, describing the evolution of two co-rotating vortices, initially in close proximity and eventually merging into a single larger vortex. It models some of the fundamental processes in fluid motion that occur in many fields such as astrophysics, meteorology, and geophysics. We consider a spatial domain of dimensions \((2\pi\times 2\pi)\) with periodic boundary conditions in both the \(x\) and \(y\) directions. The flow is initiated with a pair of Gaussian vortices with equal strengths centered at \((x_{1},y_{1})\) and \((x_{2},y_{2})\) as follows: \[\omega(x,y,0)=\exp\big{(}-\rho\left[(x-x_{1})^{2}+(y-y_{1})^{2}\right]\big{)}+ \exp\big{(}-\rho\left[(x-x_{2})^{2}+(y-y_{2})^{2}\right]\big{)}, \tag{28}\] where \(\omega\) is the vorticity field and \(\rho\) is a parameter that controls the mutual interactions between the two vortical motions, set as \(\rho=\pi\) in the present study. We consider the two vortices' axes to be initially placed on a circle of radius \(\pi/4\), separated by a \(180^{\circ}\) angle as follows: \[x_{1} =\pi+\frac{\pi}{4}\cos(\theta), y_{1} =\pi+\frac{\pi}{4}\sin(\theta) \tag{29}\] \[x_{2} =\pi+\frac{\pi}{4}\cos(\theta+180^{\circ}), y_{2} =\pi+\frac{\pi}{4}\sin(\theta+180^{\circ}).\] The dynamics of the vortex merger problem can be described by the following vorticity transport equation: \[\frac{\partial\omega}{\partial t}+J(\omega,\psi)=\frac{1}{\text{Re}}\nabla^{2}\omega, \tag{30}\] where \(\psi\) denotes the streamfunction field that is linked with the vorticity field by the following kinematic relationship: \[\nabla^{2}\psi=-\omega. \tag{31}\] The Jacobian operator, \(J(\cdot,\cdot)\), is defined as: \[J(\omega,\psi)=\frac{\partial\omega}{\partial x}\frac{\partial\psi}{\partial y }-\frac{\partial\omega}{\partial y}\frac{\partial\psi}{\partial x}. \tag{32}\] Figure 13: Pareto front plot for the online compute time per step versus the time-averaged relative error in the reconstructed velocity field from different ROM approaches. Different data points (with the same color and marker) refer to testing at different Reynolds numbers. Points with the same compute time (vertical lines) correspond to a fixed number of POD modes for each case, starting from \(R=5\) on the left, \(R=10\) in the middle, and \(R=20\) on the right. For the FOM simulations, we define a regular Cartesian grid with a resolution of \(256\times 256\) (i.e., \(\Delta x=\Delta y=2\pi/256\)) and we use the TVD-RK3 scheme with a time step of \(10^{-3}\). We run the FOM up to \(t=40\). However, training data comprise vorticity snapshots that are collected every \(100\) time steps only for \(t\in[0,20]\).
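As an illustration of the initial condition in Eqs. (28)-(29), the following minimal sketch (with hypothetical function names) builds the initial vorticity field for a given initial orientation \(\theta\) on the \(256\times 256\) periodic grid used for the FOM.

```python
import jax.numpy as jnp

def initial_vorticity(theta_deg, n=256, rho=jnp.pi):
    """Two co-rotating Gaussian vortices, Eqs. (28)-(29): equal-strength vortices placed on a
    circle of radius pi/4 around the domain centre, separated by 180 degrees."""
    x = jnp.linspace(0.0, 2.0 * jnp.pi, n, endpoint=False)   # periodic grid, dx = 2*pi/n
    xx, yy = jnp.meshgrid(x, x, indexing="ij")
    theta = jnp.deg2rad(theta_deg)
    x1, y1 = jnp.pi + 0.25 * jnp.pi * jnp.cos(theta), jnp.pi + 0.25 * jnp.pi * jnp.sin(theta)
    x2, y2 = (jnp.pi + 0.25 * jnp.pi * jnp.cos(theta + jnp.pi),
              jnp.pi + 0.25 * jnp.pi * jnp.sin(theta + jnp.pi))
    return (jnp.exp(-rho * ((xx - x1) ** 2 + (yy - y1) ** 2))
            + jnp.exp(-rho * ((xx - x2) ** 2 + (yy - y2) ** 2)))

# For example, omega0 = initial_vorticity(60.0) gives the interpolative test case of Section 5.2.1.
```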
The evolution of the vortex merger problem is depicted in Fig. 14, which illustrates the convective and interactive mechanisms affecting the transport and development of the two vortices. This makes it a challenging problem for standard ROM approaches and a good testbed for the proposed MFON framework. We apply the POD analysis in Section 2.1 to the vorticity field since it is the prognostic variable in Eq. (30). Similar to Eq. (6), we approximate \(\omega\) using the span of the first \(R\) POD modes as follows: \[\omega(\cdot,t)\approx\omega^{\text{ROM}}(\cdot,t)=\bar{\omega}(\cdot)+\sum_{i =1}^{R}a_{i}(t)\phi_{i}(\cdot), \tag{33}\] On the other hand, the streamfunction can be approximated as follows: \[\psi(\cdot,t)\approx\psi^{\text{ROM}}(\cdot,t)=\bar{\psi}(\cdot)+\sum_{i=1}^{R }a_{i}(t)\theta_{i}(\cdot), \tag{34}\] where the mean field and basis functions for the streamfunction can be obtained using Eq. (31) as follows: \[\nabla^{2}\bar{\psi} =-\bar{\omega}, \tag{35}\] \[\nabla^{2}\theta_{i} =-\phi_{i},\quad i=1,2,\dots,R.\] It should be noted that enforcing the kinematic relationship in Eq. (31) does not guarantee that the resulting basis functions \(\theta\) for the streamfunction are orthogonal. However, it allows us to use the same coefficients \(\{a_{i}\}_{i=1}^{R}\) in Eq. (33) and Eq. (34). We set \(R=10\) to define the total number of resolved scales and hence the dimensionality of the GPOD system. The GPOD for Eq. (30) is similar to Eq. (22) with the following terms: \[\begin{split}[\mathcal{C}]_{i}&=\bigg{(}\phi_{i},- J(\bar{\omega},\bar{\psi})+\frac{1}{\text{Re}}\nabla^{2}\bar{\omega}\bigg{)}, \\ [\mathcal{L}]_{ij}&=\bigg{(}\phi_{i},-J(\bar{\omega}, \theta_{j})-J(\phi_{j},\bar{\psi})+\frac{1}{\text{Re}}\nabla^{2}\phi_{j} \bigg{)},\\ [\mathcal{N}]_{ijk}&=\bigg{(}\phi_{i},-J(\phi_{j}, \theta_{k})\bigg{)}.\end{split} \tag{36}\] #### 5.2.1 Interpolative regime We first demonstrate the performance of the proposed MFON for an interpolative test case in both the initial condition and the Reynolds number. We consider an initial vorticity field corresponding to \(\theta=60^{\circ}\) and \(\text{Re}=1500\) and run the FOM, GPOD, and MFON models up to \(t=20\). The evolution of the first and last POD coefficients is shown in Fig. 15. The low fidelity predictions, resulting from the GPOD model, exhibit large deviations from the FOM projected values, especially for the \(10^{\text{th}}\) mode. Meanwhile, using MFON with offline training (\(\tau=1\)) worsens the predictions despite giving high accuracy levels for single-step predictions (not shown). On the other hand, adopting in-the-loop training (\(\tau>1\)) significantly improves the results. However, it is important to select \(\tau\) values that enforce temporal causality while keeping the training feasible with the available data and optimizer. For instance, we compare the performance with \(\tau\in\{1,5,10,20\}\) and we find that \(\tau=20\) gives the best results while \(\tau=10\) yields the worst predictions for this particular problem. We also highlight that these observations correspond to having training data available at the end of \(\tau\) steps (bottom panel in Fig. 2). Figure 16 displays the propagation of the relative error in the predicted vorticity fields from various models. It is clear that offline training can yield unreliable predictions even for interpolative test cases. In addition, it is evident that the choice of \(\tau\) is an important hyperparameter in our in-the-loop framework for training MFON.
The reconstruction of the vorticity field at \(t=20\) (corresponding to the end of the training time interval) is shown in Fig. 17. The MFON with \(\tau=20\) is in close agreement with the FOM projection field, which defines the optimal reconstruction with \(10\) POD modes. In addition, the time step size in GPOD and MFON is \(100\) times larger than that of the FOM. Therefore, we not only reduce the number of degrees of freedom from \(256^{2}\) to \(10\), but also use a much coarser time stepping. Figure 14: The evolution of the vorticity field for the vortex merger problem at \(\text{Re}=2000\) starting from different initial conditions. Figure 15: The evolution of the \(1^{\text{st}}\) and \(10^{\text{th}}\) POD modal coefficients up to \(t=20\) for the vortex merger problem with an initial orientation with \(\theta=60^{\circ}\) at \(\text{Re}=1500\), corresponding to an interpolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). Figure 16: The relative error in the predicted vorticity field as a function of time for the vortex merger problem with an initial orientation with \(\theta=60^{\circ}\) at \(\text{Re}=1500\), corresponding to an interpolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). Figure 17: The predicted velocity field at \(t=20\) for the vortex merger problem with an initial orientation with \(\theta=60^{\circ}\) at \(\text{Re}=1500\), corresponding to an interpolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Offline training is equivalent to setting \(\tau=1\). #### 5.2.2 Extrapolative regime The extrapolative performance of MFON is corroborated for the vortex merger problem by considering \(\text{Re}=3000\), which is \(1.5\) times the highest value in the training data set. In addition, we carry out predictions up to \(t=40\), which is twice the time interval in the training data. We emphasize that this is a transient and highly convective flow problem and is often challenging for ROMs. Figure 18 shows the predictions of the \(1^{\text{st}}\) and \(10^{\text{th}}\) POD coefficients from \(t=0\) to \(t=40\) while the relative error in the reconstructed vorticity field is presented in Fig. 19. The GPOD model without the closure term fails to capture the true dynamics of the resolved scales. Meanwhile, the MFON with offline training deviates significantly from the target solution. Introducing the feedback loop in the training environment (i.e., \(\tau>1\)) improves the results, especially for longer time predictions. We also notice that \(\tau=10\) gives a worse model than both \(\tau=5\) and \(\tau=20\), similar to Section 5.2.1. Finally, we illustrate the effect of using in-the-loop training while penalizing deviations at the intermediate points (top panel in Fig. 2). In particular, we suppose that training data are available at each time step as defined in Eq. (19). However, this can be extended to arbitrary points that are unequally spaced in time. Figure 18: The evolution of the \(1^{\text{st}}\) and \(10^{\text{th}}\) POD modal coefficients up to \(t=40\) for the vortex merger problem with an initial orientation with \(\theta=45^{\circ}\) at \(\text{Re}=3000\), corresponding to an extrapolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. Figure 19: The relative error in the predicted vorticity field as a function of time for the vortex merger problem with an initial orientation with \(\theta=45^{\circ}\) at \(\text{Re}=3000\), corresponding to an extrapolative test case. For in-the-loop training, true data points are assumed to be available at the end of \(\tau\) time steps. The relative error of the predicted vorticity field as a function of time is presented in Fig. 20. We notice that the error of MFON with \(\tau>1\) is about half that of the GPOD model without closure. Interestingly, the \(\tau=10\) case gives very good results, compared to Fig. 19 where the loss function is defined using data points at the end of \(\tau\) steps. The choice of the optimal value of \(\tau\) and its dependence on the availability of intermediate data points is still an open research question. The contour plots of the reconstructed vorticity fields are presented in Fig. 21, where we can visually verify that the MFON predictions with \(\tau=10\) and \(\tau=20\) are the closest to the projection of the FOM solution. Thus, viewing the closure problem as a multifidelity learning problem gives promising results. Moreover, using the DeepONet framework allows extrapolation to new parameter/time regimes where traditional PROMs usually fail. It is worth noting here that the POD bases are constructed with data up to \(t=20\), which are less representative of the solution at \(t=40\) due to the convective nature of the vortex merger problem. Hence, we can spot a relatively large discrepancy between the FOM solution (e.g., Fig. 14) and the FOM projection in Fig. 21. Although the representability of the ROM basis functions is beyond the scope of the current work, we refer the interested readers to recent works on adaptive PROMs by online sampling from the FOM [62, 63], time-dependent subspaces [64, 65], and custom-made basis functions using DeepONets [66]. Figure 21: The predicted velocity field at \(t=40\) for the vortex merger problem with an initial orientation with \(\theta=45^{\circ}\) at \(\text{Re}=3000\), corresponding to an extrapolative test case. For in-the-loop training, true data points are assumed to be available also at the intermediate steps within the \(\tau\) time steps. Figure 20: The relative error in the predicted vorticity field as a function of time for the vortex merger problem with an initial orientation with \(\theta=45^{\circ}\) at \(\text{Re}=3000\), corresponding to an extrapolative test case. For in-the-loop training, true data points are assumed to be available also at the intermediate steps within the \(\tau\) time steps. ## 6 Concluding remarks and future work We have introduced a multifidelity operator learning framework for closure modeling in multiscale systems. We have used the combination of proper orthogonal decomposition (POD) and Galerkin projection to define the low fidelity model and trained a deep neural network (DeepONet) to learn correction terms. We have demonstrated that augmenting the Galerkin POD (GPOD) models with multifidelity operator learning (MFON) improves their generalizability, enabling predictions for situations with varying parameters and initial conditions. The extrapolation to different initial conditions is a particular advantage compared to state-of-the-art projection-based reduced order models (PROMs).
Furthermore, we have leveraged differentiable programming tools to enable in-the-loop training of MFONs and provide a feedback loop from the MFON predictions at one time step to the inputs at the next step. In-the-loop training can be seen as a way of imposing temporal causality in time-dependent problems. Two test cases of convection-dominated flow problems have been considered, which corroborate the efficacy of MFONs with in-the-loop training for closure modeling. Our numerical results support the conclusion that exposing the MFON to its own output during the training phase leads to more accurate predictions and expands the predictive skill horizon. As promising as the results are, many research questions remain open. As detailed in Section 5, there is still a need to understand the effect of the length of the time window for in-the-loop training and how to optimize it with regard to the specific problem at hand, the size of the training data, the complexity of the architecture, etc. Analogies between the time window for in-the-loop training and the assimilation window in four-dimensional variational (4D-VAR) data assimilation can potentially provide insights into the selection of \(\tau\). It is also important to analyze the interplay between the low fidelity model of resolved scales and the MFON. Questions such as the following should be answered: (1) how do inaccuracies and uncertainties in the low fidelity model feed back into the MFON predictions of the closure and vice versa; (2) how does the error at one time step propagate to the next steps and how does it affect the predictability limits of MFONs; and (3) how does improving/worsening the low fidelity model (e.g., by including more or fewer modes) change the MFON predictions. Although we demonstrate the use of MFON to account for the effect of truncated scales on the dynamics of resolved ones in the PROM setting, the closure problem manifests itself in an array of applications that can take advantage of the MFON framework. This could be due to coarsening the model resolution to the scales of interest (as in large eddy simulations), parameterization of physical processes (e.g., clouds) in climate modeling, or using physical assumptions to derive simplified governing equations and/or analytical solutions (e.g., self-similarity solutions of PDEs). ## Acknowledgments We would like to thank Dr Amanda Howard for helpful discussions and comments. The work of SA is supported by the Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) through the Pacific Northwest National Laboratory Distinguished Fellowship in Scientific Computing (Project No. 71268). The work of PS was supported by DOE's Scientific Discovery through Advanced Computing (SciDAC) program via a partnership on Earth system model development between the Office of Biological and Environmental Research (BER) and ASCR (Project No. 79699). Pacific Northwest National Laboratory is operated by Battelle Memorial Institute for DOE under Contract DE-AC05-76RL01830.
2303.01109
Gradient estimates for nonlinear elliptic equations involving the Witten Laplacian on smooth metric measure spaces and implications
This article presents new local and global gradient estimates of Li-Yau type for positive solutions to a class of nonlinear elliptic equations on smooth metric measure spaces involving the Witten Laplacian. The estimates are derived under natural lower bounds on the associated Bakry-\'Emery Ricci curvature tensor and find utility in proving general Harnack inequalities and Liouville-type theorems to mention a few. The results here unify, extend and improve various existing results in the literature for special nonlinearities already of huge interest and applications. Some important consequences are presented and discussed.
Ali Taheri, Vahideh Vahidifar
2023-03-02T09:42:53Z
http://arxiv.org/abs/2303.01109v1
Gradient estimates for nonlinear elliptic equations involving the Witten Laplacian on smooth metric measure spaces and implications ###### Abstract. This article presents new local and global gradient estimates of Li-Yau type for positive solutions to a class of nonlinear elliptic equations on smooth metric measure spaces involving the Witten Laplacian. The estimates are derived under natural lower bounds on the associated Bakry-Emery Ricci curvature tensor and find utility in proving general Harnack inequalities and Liouville-type theorems to mention a few. The results here unify, extend and improve various existing results in the literature for special nonlinearities already of huge interest and applications. Some important consequences are presented and discussed. Key words and phrases:Smooth metric measure spaces, Witten Laplacian, Gradient estimates, Nonlinear elliptic equations, Harnack inequalities, Li-Yau estimates, Liouville-type theorems 2010 Mathematics Subject Classification: 53C44, 58J60, 58J35, 60J60 ###### Contents * 1 Introduction * 2 Statement of the main results * 2.1 A local and a global Li-Yau type gradient estimate for (1.1) * 2.2 A local and a global elliptic Harnack inequality for (1.1) * 2.3 A Liouville-type theorem and some applications * 3 Proof of the Li-Yau type gradient estimate in Theorem 2.1 * 3.1 Some technical lemmas and identities * 3.2 Proof of Theorem 2.1 * 4 Proof of the elliptic Harnack inequality in Theorem 2.3 * 5 Proof of the Liouville result in Theorem 2.5 ## 1. Introduction Suppose \((M,g,d\mu)\) is a smooth metric measure space by which it is meant that \(M\) is a complete Riemannian manifold of dimension \(n\geq 2\) endowed with a weighted measure \(d\mu=e^{-f}dv_{g}\), \(f\) is a smooth potential on \(M\), \(g\) is the Riemannian metric and \(dv_{g}\) is the usual Riemannain volume measure. In this paper we derive gradient estimates of local and global Li-Yau type along with Harnack inequalities and Liouville-type results for positive smooth solutions \(u\) to the nonlinear elliptic equation: \[\Delta_{f}u(x)+\Sigma[x,u(x)]=0,\qquad\Sigma:M\times\mathbb{R}\to\mathbb{R}. \tag{1.1}\] Here \(\Delta_{f}u=e^{f}\text{div}[e^{-f}\nabla u]=\Delta u-\langle\nabla f,\nabla u\rangle\) is the Witten Laplacian (also known as the weighted or drifting Laplacian, or occasionally to emphasise the choice of \(f\), the \(f\)-Laplacian) where \(\nabla,\text{div}\) and \(\Delta\) are the usual gradient, divergence and Laplace-Beltrami operators associated with the metric \(g\). The Witten Laplacian is a symmetric diffusion operator with respect to the invariant weighted measure \(d\mu=e^{-f}dv_{g}\) and arises in many contexts ranging from probability theory, geometry and stochastic processes to quantum field theory and statistical mechanics [2, 4, 20]. It is the natural generalisation of the Laplace-Beltrami operator to the smooth metric measure space setting and it coincides with the latter precisely when the potential \(f\) is a constant. The nonlinearity \(\Sigma=\Sigma[x,u]\) in (1.1) is a sufficiently smooth multi-variable function depending on both the spatial variable \(x\in M\) and the independent variable (solution) \(u\). In what follows, in order to better orient the reader and showcase the results, we discuss various examples of such nonlinearities arising from different contexts, e.g., from conformal geometry and mathematical physics, each presenting a different phenomenon whilst depicting a corresponding singular or regular behaviour. 
As for the curvature properties of the triple \((M,g,d\mu)\) we introduce the generalised Bakry-Emery Ricci curvature tensor (_see_[2, 3, 30]), by writing, for \(m\geq n\), \[\mathscr{R}ic_{f}^{m}(g)=\mathscr{R}ic(g)+\text{Hess}(f)-\frac{\nabla f\otimes \nabla f}{m-n}. \tag{1.2}\] Here \(\mathscr{R}ic(g)\) denotes the Riemannian Ricci curvature tensor of \(g\), \(\text{Hess}(f)\) stands for the Hessian of \(f\), and \(m\geq n\) is a fixed constant. For the sake of clarity we point out that in the event \(m=n\), by convention, \(f\) is only allowed to be a constant, resulting in \(\mathscr{R}ic_{f}^{n}(g)=\mathscr{R}ic(g)\), whilst we also allow for \(m=\infty\) in which case by formally passing to the limit in (1.2) we write \(\mathscr{R}ic_{f}^{\infty}(g)=\mathscr{R}ic(g)+\text{Hess}(f):=\mathscr{R}ic_{ f}(g)\). According to the weighted Bochner-Weitzenbock formula, for every smooth function \(u\) on \(M\) we have, \[\frac{1}{2}\Delta_{f}|\nabla u|^{2}=|\text{Hess}(u)|^{2}+\langle\nabla u, \nabla\Delta_{f}u\rangle+\mathscr{R}ic_{f}(\nabla u,\nabla u). \tag{1.3}\] Hence by an application of the Cauchy-Schwarz inequality giving \(\Delta u\leq\sqrt{n}|\text{Hess}(u)|\) and upon recalling the identity \(\Delta_{f}u=\Delta u-\langle\nabla f,\nabla u\rangle\) it is evident that \[|\text{Hess}(u)|^{2}\geq\frac{(\Delta u)^{2}}{n},\qquad\frac{(\Delta u)^{2}}{ n}+\frac{\langle\nabla f,\nabla u\rangle^{2}}{m-n}\geq\frac{(\Delta_{f}u)^{2}}{ m}, \tag{1.4}\] and so it follows from (1.3) and (1.4) that \[\frac{1}{2}\Delta_{f}|\nabla u|^{2}-\langle\nabla u,\nabla\Delta_{f}u\rangle \geq\frac{1}{m}(\Delta_{f}u)^{2}+\mathscr{R}ic_{f}^{m}(\nabla u,\nabla u). \tag{1.5}\] In particular, subject to a curvature lower bound \(\mathscr{R}ic_{f}^{m}(g)\geq\mathsf{k}g\), the operator \(L=\Delta_{f}\) is seen to satisfy the curvature-dimension condition \(\text{CD}(\mathsf{k},m)\) (_cf_. [2, 3, 4, 44]). Our principal objective in this paper is to develop local and global gradient estimates of Li-Yau type and Harnack inequalities for positive smooth solutions to (1.1). It is well-known that these estimates form the basis for deriving various qualitative properties of solutions and are thus of great significance and utility (_see_, e.g., [26, 28, 35, 49]). Such properties include (but are not restricted to) Holder regularity and higher order differentiability, sharp spectral asymptotics and bounds, heat kernel bounds, Liouville-type results and many more (_cf._[1, 6, 8, 18, 20, 29, 37, 38, 39, 53]). Whilst in proving gradient estimates one typically works with an explicit nonlinearity with a specific structure (of singularity, regularity, decay and growth), in this paper we keep the analysis and discussion on a fairly general level without confining ourselves to specific examples, in order to firstly provide a unified treatment of the estimates and secondly to see more clearly how the structure and form of the nonlinearity influences the estimates and subsequent results. As such our approach and analysis largely unify, extend and in places improve various existing results in the literature for specific choices of nonlinearities (see below for more).
Gradient estimates for positive solutions to (1.1) in the special case of the nonlinearity being a superposition of a logarithmic and a linear term with variable coefficients: \[\Delta_{f}u(x)+\mathsf{p}(x)u(x)\log u(x)+\mathsf{q}(x)u(x)=0, \tag{1.6}\] along with its parabolic counterpart, have been the subject of extensive studies (_see_, e.g., Ma [31], Ruan [34], Wu [46] and Yang [50] and the references therein). The interest in such problems originates from their natural links with gradient Ricci solitons. Recall that a Riemannian manifold \((M,g)\) is said to be a gradient Ricci soliton _iff_ there exists a smooth function \(f\) on \(M\) and a constant \(\lambda\in\mathbb{R}\) such that (_cf._[11, 15, 30]) \[\mathscr{R}ic_{f}(g)=\mathscr{R}ic(g)+\operatorname{Hess}(f)=\lambda g. \tag{1.7}\] A gradient Ricci soliton can be shrinking (\(\lambda>0\)), steady (\(\lambda=0\)) or expanding (\(\lambda<0\)). The notion is a generalisation of an Einstein manifold and has a fundamental role in the analysis of singularities of the Ricci flow [23, 53]. Taking the trace of both sides of (1.7) and using the contracted Bianchi identity leads one to a simple form of (1.6) with constant coefficients: \(\Delta u+2\lambda u\log u=(A_{0}-n\lambda)u\) for a suitable constant \(A_{0}\) and \(u=e^{f}\) (_see_[31] for details). Other types of equations closely related to (1.6), including: \[\Delta_{f}u(x)+\mathsf{p}(x)u^{a}(x)|\log u|^{b}(x)=0, \tag{1.8}\] for real exponents \(a,b\), or more generally, for a nonlinear function \(\gamma=\gamma(s)\) on \(\mathbb{R}\): \[\Delta_{f}u(x)+\mathsf{p}(x)u^{a}(x)\gamma(\log u)(x)+\mathsf{q}(x)u^{b}(x)=0, \tag{1.9}\] have been studied in detail in [9, 17, 40, 41, 46, 47]. Yamabe type equations \(\Delta u+\mathsf{p}(x)u^{s}+\mathsf{q}(x)u=0\) are also of the form (1.1) with a power-like nonlinearity. Bidaut-Veron and Veron [5] studied the equation \(\Delta u+u^{s}+\mathsf{q}u=0\) on a compact manifold and under suitable conditions on the Ricci tensor, \(n\) and \(s,\mathsf{q}\) showed that it only admits constant solutions. Gidas and Spruck [19] considered \[\Delta u(x)+\mathsf{p}(x)u^{s}(x)=0,\qquad 1\leq s<(n+2)/(n-2), \tag{1.10}\] and showed that when \(\mathscr{R}ic(g)\geq 0\) any non-negative solution to this equation must be zero. Yang [51] showed that the same equation with constant \(\mathsf{p}>0\) and \(s<0\) admits no positive solution when \(\mathscr{R}ic(g)\geq 0\). Note that the case \(s=3\) in (1.10) is related to the Yang-Mills equation (_cf._ Caffarelli, Gidas and Spruck [9]) and the case \(s<0\) is related to the steady states of the thin film equation (_cf._ Guo and Wei [21]). For more related results see Brandolini, Rigoli and Setti [7], Li [25], Li, Tam and Yang [27] and Zhang [52] and for a more detailed account of the Yamabe problem in geometry see [24, 32]. The natural form of the Yamabe equation in the setting of smooth metric measure spaces is \[\Delta_{f}u(x)+\mathsf{p}(x)u^{s}(x)+\mathsf{q}(x)u(x)=0. \tag{1.11}\] For gradient estimates, Harnack inequalities and other counterparts of the above results we refer the reader to Case [12], Wu [47], Zhang and Ma [54]. A more general form of the Yamabe equation is the Einstein-scalar field Lichnerowicz equation (see, e.g., Choquet-Bruhat [14], Chow [15], Zhang [53]).
When the underlying manifold has dimension \(n\geq 3\) this takes the form \(\Delta u+\mathsf{p}(x)u^{\alpha}+\mathsf{q}(x)u^{\beta}+\mathsf{r}(x)u=0\) with \(\alpha=(n+2)/(n-2)\) and \(\beta=(3n-2)/(n-2)\) while when \(n=2\) this takes the form \(\Delta u+\mathsf{p}(x)e^{2u}+\mathsf{q}(x)e^{-2u}+\mathsf{r}(x)=0\). The Einstein-scalar field Lichnerowicz equation in the setting of smooth metric measure spaces can be further generalised and written as: \[\Delta_{f}u+\mathsf{p}(x)u^{\alpha}+\mathsf{q}(x)u^{\beta}+\mathsf{r}(x)u\log u +\mathsf{h}(x)u=0, \tag{1.12}\] and \[\Delta_{f}u+\mathsf{p}(x)e^{2u}+\mathsf{q}(x)e^{-2u}+\mathsf{r}(x)=0. \tag{1.13}\] For gradient estimates, Harnack inequalities and Liouville-type results in this and related contexts see Dung, Khanh and Ngo [16], Song and Zhao [36], Taheri [40, 41], Wu [48] and the references therein. Let us end this introduction by briefly describing the plan of the paper. In Section 2 we present the main results of the paper, namely, a local and global gradient estimate for equation (1.1), followed by both local and global Harnack inequalities and a general Liouville-type result. The subsequent sections, namely, 3, 4 and 5 are then devoted to the detailed proofs respectively. **Notation** Fixing a base point \(p\in M\) we denote by \(d=d_{p}(x)\) the Riemannian distance between \(x\) and \(p\) with respect to the metric \(g\) and by \(r=r_{p}(x)\) the geodesic radial variable with origin at \(p\). We denote by \(\mathcal{B}_{R}(p)\subset M\) the closed geodesic ball of radius \(R>0\) centred at \(p\). When the choice of the point \(p\) is clear from the context we often abbreviate and write \(d(x)\), \(r(x)\) or \(\mathcal{B}_{R}\) respectively. We write \(s_{+}=\max(s,0)\) and \(s_{-}=\min(s,0)\) and so \(s=s_{+}+s_{-}\) with \(s_{+}\geq 0\) and \(s_{-}\leq 0\). For given function \(\Sigma=\Sigma[x,u]\) we denote its partial derivatives by subscripts, e.g., \(\Sigma_{x}\), \(\Sigma_{u}\), _etc._ and we reserve the notation \(\Sigma^{x}\) for the function \(\Sigma[\cdot,u]\) obtained by freezing the argument \(u\) and viewing it as a function of \(x\); e.g., below we frequently use \(\nabla\Sigma^{x}\) and \(\Delta_{f}\Sigma^{x}\). For the sake of reader's convenience we recall that in local coordinates \((x^{i})\) we have the following formulae for the Laplace-Beltrami operator, Riemann and Ricci curvature tensors respectively: \[\Delta=\frac{1}{\sqrt{|g|}}\frac{\partial}{\partial x_{i}}\left(\sqrt{|g|}g^{ ij}\frac{\partial}{\partial x_{j}}\right), \tag{1.14}\] and \[[\mathscr{R}m(g)]^{\ell}_{ijk}=\frac{\partial\Gamma^{\ell}_{jk}}{\partial x_ {i}}-\frac{\partial\Gamma^{\ell}_{ik}}{\partial x_{j}}+\Gamma^{p}_{jk}\Gamma^ {\ell}_{ip}-\Gamma^{p}_{ik}\Gamma^{\ell}_{jp}, \tag{1.15}\] and \[[\mathscr{R}ic(g)]_{ij}=\frac{\partial\Gamma^{k}_{ij}}{\partial x_{k}}-\frac {\partial\Gamma^{\ell}_{\ell j}}{\partial x_{i}}+\Gamma^{k}_{ij}\Gamma^{\ell }_{\ell k}-\Gamma^{\ell}_{ik}\Gamma^{k}_{\ell j}. \tag{1.16}\] Note that here \[\Gamma^{k}_{ij}=\frac{1}{2}g^{k\ell}\left(\frac{\partial g_{j\ell}}{\partial x_{i }}+\frac{\partial g_{i\ell}}{\partial x_{j}}-\frac{\partial g_{ij}}{\partial x _{\ell}}\right), \tag{1.17}\] are the Christoffel symbols and \(g_{ij}\), \(|g|\) and \(g^{ij}=(g^{-1})_{ij}\) are respectively the components, determinant and the components of the inverse of the metric tensor \(g\). ## 2. Statement of the main results In this section we present the main results of the paper. The proofs are delegated to the subsequent sections. 
We emphasise that throughout the paper, the curvature lower bounds are expressed in the form \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\) with \(k\geq 0\) and \(m\) a suitable constant \(n\leq m<\infty\). As the estimates here are of Li-Yau type, it is well-known that a lower bound on \(\mathscr{R}ic_{f}(g)\) is not sufficient for the purpose. ### A local and a global Li-Yau type gradient estimate for (1.1) **Theorem 2.1**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and assume that \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\) in \(\mathcal{B}_{2R}=\mathcal{B}_{2R}(p)\) for suitable constants \(m\geq n\) and \(k\geq 0\). Let \(u\) be a positive solution to (1.1) in \(\mathcal{B}_{2R}\). Then for every \(\mu>1\) and \(\varepsilon\in(0,1)\) and every \(x\in\mathcal{B}_{R}\),_ \[\frac{|\nabla u|^{2}}{\mu u^{2}}+\frac{\Sigma[x,u]}{u} \leq \frac{m\mu}{2R^{2}}\left[(c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2 })+\frac{mc_{1}^{2}\mu^{2}}{4(\mu-1)}\right]\] \[+\frac{\sqrt{m}}{2}\bigg{[}\frac{m\mu^{2}\mathsf{A}_{\Sigma}^{2}} {(1-\varepsilon)(\mu-1)^{2}}+\left[\frac{27m\mu^{2}\mathsf{B}_{\Sigma}^{4}}{4 \varepsilon(\mu-1)^{2}}\right]^{1/3}+2\mu\mathsf{C}_{\Sigma}\bigg{]}^{1/2}\] \[+\frac{m\mu}{2}\sup_{\mathcal{B}_{2R}}\left\{\frac{(u\Sigma_{u} [x,u]-\Sigma[x,u])_{+}}{u}\right\}, \tag{2.1}\] _where the quantities \(\mathsf{A}_{\Sigma}\), \(\mathsf{B}_{\Sigma}\) and \(\mathsf{C}_{\Sigma}\) are given by_ \[\mathsf{A}_{\Sigma}=\sup_{\mathcal{B}_{2R}}\left\{\frac{2(m-1)ku+(-\Sigma[x,u ]+u\Sigma_{u}[x,u]-\mu u^{2}\Sigma_{uu}[x,u])_{+}}{2u}\right\}, \tag{2.2}\] \[\mathsf{B}_{\Sigma}=\sup_{\mathcal{B}_{2R}}\left\{\frac{|\Sigma_{x}[x,u]-\mu u \Sigma_{xu}[x,u]|}{u}\right\},\qquad\mathsf{C}_{\Sigma}=\sup_{\mathcal{B}_{2R }}\left\{\frac{(-\Delta_{f}\Sigma^{x}[x,u])_{+}}{u}\right\}. \tag{2.3}\] The local estimate above has a global counterpart subject to the prescribed bounds in the theorem being global. The proof follows by passing to the limit \(R\to\infty\) in (2.1) and taking into account the vanishing of certain terms as a result of the bounds being global and the relevant constants being independent of \(R\). The precise formulation of this is given in the following theorem. **Theorem 2.2**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\) on \(M\) where \(m\geq n\) and \(k\geq 0\). Assume \(u\) is a positive solution to (1.1) on \(M\). Then for every \(\mu>1\) and \(\varepsilon\in(0,1)\) and every \(x\in M\),_ \[\frac{|\nabla u|^{2}}{\mu u^{2}}+\frac{\Sigma[x,u]}{u} \leq \frac{\sqrt{m}}{2}\bigg{[}\frac{m\mu^{2}\mathsf{A}_{\Sigma}^{2}}{( 1-\varepsilon)(\mu-1)^{2}}+\bigg{[}\frac{27m\mu^{2}\mathsf{B}_{\Sigma}^{4}}{4 \varepsilon(\mu-1)^{2}}\bigg{]}^{1/3}+2\mu\mathsf{C}_{\Sigma}\bigg{]}^{1/2} \tag{2.4}\] \[+\frac{m\mu}{2}\sup_{M}\left\{\frac{(u\Sigma_{u}[x,u]-\Sigma[x,u] )_{+}}{u}\right\}.\] _Here \(\mathsf{A}_{\Sigma}\), \(\mathsf{B}_{\Sigma}\) and \(\mathsf{C}_{\Sigma}\) are as in (2.2)-(2.3) in Theorem 2.1 except that now the supremums are taken over all of \(M\)._ ### A local and a global elliptic Harnack inequality for (1.1) **Theorem 2.3**.: _Under the assumptions of Theorem 2.1 and the Bakry-Emery curvature bound \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\) on \(\mathcal{B}_{2R}\), for any positive solution \(u\) to (1.1), and any \(x_{1},x_{2}\in\mathcal{B}_{R}\) we have_ \[u(x_{2})\leq e^{2R\sqrt{\mathbb{H}}}u(x_{1}). 
\tag{2.5}\] _The positive constant \(\mathbb{H}\) can be explicitly expressed in terms of the local bounds as_ \[\mathbb{H}= \,m\mu^{2}[(c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2})+mc_{1}^{2} \mu^{2}/(4(\mu-1))]/(2R^{2})\] \[+\sqrt{m}\mu/2\{m\mu^{2}\mathsf{A}_{\Sigma}^{2}/[(1-\varepsilon) (\mu-1)^{2}]+[27m\mu^{2}\mathsf{B}_{\Sigma}^{4}/(4\varepsilon(\mu-1)^{2})]^{1 /3}+2\mu\mathsf{C}_{\Sigma}\}^{1/2}\] \[+m\mu^{2}\sup_{\mathcal{B}_{2R}}\{(u\Sigma_{u}[x,u]-\Sigma[x,u])_ {+}/(2u)\}\] \[-\mu\inf_{\mathcal{B}_{R}}\{(\Sigma[x,u])_{-}/u\}, \tag{2.6}\] _where \(\mathsf{A}_{\Sigma}\), \(\mathsf{B}_{\Sigma}\) and \(\mathsf{C}_{\Sigma}\) are as in (2.2)-(2.3) in Theorem 2.1. In particular from (2.5) we have_ \[\sup_{\mathcal{B}_{R}}u\leq e^{2R\sqrt{\mathbb{H}}}\inf_{\mathcal{B}_{R}}u. \tag{2.7}\] For the global version we can use a similar argument utilising the global bounds in Theorem 2.2 and have the counterpart of (2.5) with \(d(x_{1},x_{2})\) replacing \(2R\) and \(\mathbb{H}>0\) now being its global version from (2.4). The precise formulation is given below. **Theorem 2.4**.: _Under the assumptions of Theorem 2.2 and the Bakry-Emery curvature bound \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\) on \(M\), for any positive solution \(u\) to (1.1), and any \(x_{1},x_{2}\in M\) we have_ \[u(x_{2})\leq e^{d(x_{1},x_{2})\sqrt{\mathbb{H}}}u(x_{1}). \tag{2.8}\] _The positive constant \(\mathbb{H}\) can be explicitly expressed in terms of the global bounds as_ \[\mathbb{H}= \,\sqrt{m}\mu/2\{m\mu^{2}\mathsf{A}_{\Sigma}^{2}/[(1-\varepsilon )(\mu-1)^{2}]+[27m\mu^{2}\mathsf{B}_{\Sigma}^{4}/(4\varepsilon(\mu-1)^{2})]^{ 1/3}+2\mu\mathsf{C}_{\Sigma}\}^{1/2}\] \[+m\mu^{2}\sup_{M}\{(u\Sigma_{u}[x,u]-\Sigma[x,u])_{+}/(2u)\}-\mu \inf_{M}\{(\Sigma[x,u])_{-}/u\}, \tag{2.9}\] _where \(\mathsf{A}_{\Sigma}\), \(\mathsf{B}_{\Sigma}\) and \(\mathsf{C}_{\Sigma}\) are as in Theorem 2.2._ ### A Liouville-type theorem and some applications **Theorem 2.5**.: _Let \((M,g,d\mu)\) be a smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) satisfying \(\mathscr{R}ic_{f}^{m}(g)\geq 0\) in \(M\). Let \(u\) be a positive solution to \(\Delta_{f}u+\Sigma[u]=0\). Assume that \(\Sigma[u]\geq 0\), \(u\Sigma_{u}[u]-\Sigma[u]\leq 0\) and \(\mu u^{2}\Sigma_{uu}[u]-u\Sigma_{u}[u]+\Sigma[u]\geq 0\) for some \(\mu>1\). Then \(u\) must be a constant. In particular \(\Sigma[u]=0\)._ The proof of this theorem follows from the gradient estimates established above and is presented in Section 5. We end this section by giving two stark applications of the above theorem. To this end consider first a superposition of power-like nonlinearities with real coefficients \(\mathsf{p}_{j}\) and exponents \(a_{j}\) for \(1\leq j\leq N\) in the form \[\Sigma[u]=\sum_{j=1}^{N}\mathsf{p}_{j}u^{a_{j}}. \tag{2.10}\] A direct calculation gives \[u\Sigma_{u}[u]-\Sigma[u] =\sum_{j=1}^{N}\mathsf{p}_{j}(a_{j}-1)u^{a_{j}}, \tag{2.11}\] \[\mu u^{2}\Sigma_{uu}[u]-u\Sigma_{u}[u]+\Sigma[u] =\sum_{j=1}^{N}[\mu\mathsf{p}_{j}a_{j}(a_{j}-1)-\mathsf{p}_{j}a_{ j}+\mathsf{p}_{j}]u^{a_{j}}\] \[=\sum_{j=1}^{N}[\mathsf{p}_{j}(a_{j}-1)(\mu a_{j}-1)]u^{a_{j}}. \tag{2.12}\] Evidently for the range \(\mathsf{p}_{j}\geq 0\) we have \(\Sigma(u)\geq 0\) whilst subject to \(a_{j}\leq 1\) we have \(u\Sigma_{u}[u]-\Sigma[u]\leq 0\) and \(\mu u^{2}\Sigma_{uu}[u]-u\Sigma_{u}[u]+\Sigma[u]\geq 0\) (by choosing \(\mu>1\) suitably). Theorem 2.5 now leads to the following conclusion extending earlier results on Yamabe type problems to more general nonlinearities (_cf._[16, 19, 51, 48]). 
Further applications and results in this direction will be discussed in a forthcoming paper (_see_ also [40, 41]). **Theorem 2.6**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and \(\mathscr{R}ic_{f}^{m}(g)\geq 0\). Let \(u\) be a positive smooth solution to the equation_ \[\Delta_{f}u+\sum_{j=1}^{N}\mathsf{p}_{j}u^{a_{j}}=0. \tag{2.13}\] _If \(\mathsf{p}_{j}\geq 0\) and \(a_{j}\leq 1\) for \(1\leq j\leq N\) then \(u\) must be a constant._ _Remark 2.7_.: Note that a constant solution to (1.1) must be a zero of \(\Sigma\). Thus if \(\Sigma\) has no positive zeros then the above Liouville theorem becomes a non-existence result. In Theorem 2.6 and for (2.10) this happens when \(\mathsf{p}_{j}>0\) for at least one \(1\leq j\leq N\). As another application, again relating to the discussions in Section 1, consider a superposition of a logarithmic and a power-like nonlinearity with real coefficients \(\mathsf{p},\mathsf{q}\), exponent \(s\) and \(\gamma\in\mathscr{C}^{2}(\mathbb{R})\) in the form \[\Sigma[u]=\mathsf{p}u\gamma(\log u)+\mathsf{q}u^{s}. \tag{2.14}\] A straightforward calculation then gives \(u\Sigma_{u}[u]-\Sigma[u]=\mathsf{p}u\gamma^{\prime}(\log u)+\mathsf{q}(s-1)u^{s}\) and \(\mu u^{2}\Sigma_{uu}[u]-u\Sigma_{u}[u]+\Sigma[u]=\mathsf{p}[(\mu-1)u\gamma^{ \prime}+\mu u\gamma^{\prime\prime}]+[\mathsf{q}(s-1)(\mu s-1)]u^{s}\). The following theorem now directly results from Theorem 2.5. **Theorem 2.8**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and \(\mathscr{R}ic_{f}^{m}(g)\geq 0\). Let \(u\) be a positive smooth solution to the equation_ \[\Delta_{f}u+\mathsf{p}u\gamma(\log u)+\mathsf{q}u^{s}=0. \tag{2.15}\] _Assume that \(\mathsf{p},\mathsf{q}\geq 0\), \(s\leq 1\) and that along the solution \(u\) we have \(\gamma\geq 0\), \(\gamma^{\prime}\leq 0\) and \(\mu\gamma^{\prime\prime}+(\mu-1)\gamma^{\prime}\geq 0\) for some \(\mu>1\)\((\)with \(1<\mu<1/s\) if \(0<s<1)\). Then \(u\) must be a constant._ ## 3. Proof of the Li-Yau type gradient estimate in Theorem 2.1 This section is devoted to the proof of the main estimate for the positive solutions of (1.1) in its local form. In its global form, the estimate, as seen, follows by passing to the limit \(R\to\infty\). As this requires a number of technical lemmas and tools, we pause briefly, to present these results and tools in the next subsection, before moving on to the proof of the local estimate in Theorem 2.1 in the following subsection. ### Some technical lemmas and identities **Lemma 3.1**.: _Let \(u\) be a positive solution to the equation (1.1) and let \(h=\log u\). Then \(h\) satisfies the equation_ \[\Delta_{f}h+|\nabla h|^{2}+e^{-h}\Sigma[x,e^{h}]=0. \tag{3.1}\] Proof.: An easy calculation gives \(\nabla h=(\nabla u)/u\) and \(\Delta h=(\Delta u)/u-|\nabla u|^{2}/u^{2}\). Hence \(\Delta_{f}h=(\Delta_{f}u)/u-|\nabla u|^{2}/u^{2}=-\Sigma[x,u]/u-|\nabla u|^{2} /u^{2}\) giving the desired conclusion. **Lemma 3.2**.: _Let \(u\) be a positive solution to (1.1), \(h=\log u\) and let \(H\) be defined by_ \[H=|\nabla h|^{2}+\mu e^{-h}\Sigma[x,e^{h}], \tag{3.2}\] _where \(\mu\geq 1\) is an arbitrary constant. 
Then \(H\) satisfies the equation_ \[\Delta_{f}H= \ 2|\nabla^{2}h|^{2}+2\frac{\langle\nabla f,\nabla h\rangle^{2}}{ m-n}-2\langle\nabla h,\nabla H\rangle+2\mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)\] \[-2(\mu-1)e^{-h}\Sigma|\nabla h|^{2}+2(\mu-1)e^{-h}\langle\nabla h,\nabla\Sigma\rangle+\mu\Delta_{f}(e^{-h}\Sigma), \tag{3.3}\] _where we have abbreviated the arguments of \(\Sigma\) and its derivatives._ Proof.: Referring to the formulation of \(H\) is (3.2) an application of \(\Delta_{f}\) to both sides of the equation gives \[\Delta_{f}H=\Delta_{f}|\nabla h|^{2}+\mu\Delta_{f}(e^{-h}\Sigma[x,e^{h}]). \tag{3.4}\] Furthermore, referring to (3.1) and again to (3.2) it is evident that we have the relation: \[\Delta_{f}h=-(|\nabla h|^{2}+e^{-h}\Sigma[x,e^{h}])=-(H-(\mu-1)e^{-h}\Sigma[x, e^{h}]). \tag{3.5}\] Now as for the first term on the right-hand side of (3.4), by the generalised Bochner-Weitzenbock formula (as applied to \(h\)), we have \[\Delta_{f}|\nabla h|^{2}=2|\nabla^{2}h|^{2}+2\langle\nabla h,\nabla\Delta_{f} h\rangle+2\mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)+2\frac{\langle\nabla f,\nabla h \rangle^{2}}{m-n}. \tag{3.6}\] Hence, by substituting (3.6) into (3.4) and making note of (3.5), we have after a basic differentiation, \[\Delta_{f}H = 2|\nabla^{2}h|^{2}+2\langle\nabla h,\nabla(-H+(\mu-1)e^{-h}\Sigma[x,e^{h}])\rangle \tag{3.7}\] \[+2\mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)+2\frac{\langle\nabla f,\nabla h\rangle^{2}}{m-n}+\mu\Delta_{f}(e^{-f}\Sigma[x,e^{h}])\] \[= 2|\nabla^{2}h|^{2}-2\langle\nabla h,\nabla H\rangle-2(\mu-1)e^{ -h}\Sigma[x,e^{h}]|\nabla h|^{2}\] \[+2(\mu-1)e^{-h}\langle\nabla h,\nabla\Sigma[x,e^{h}]\rangle+2 \mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)\] \[+2\frac{\langle\nabla f,\nabla h\rangle^{2}}{m-n}+\mu\Delta_{f} (e^{-h}\Sigma[x,e^{h}]),\] which upon a rearrangement of terms gives the desired identity. **Lemma 3.3**.: _Let \(u\) be a positive solution to (1.1), \(h=\log u\) and let \(H\) be as defined by (3.2). Then, if \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\), we have_ \[\Delta_{f}H \geq \,2\frac{(\Delta_{f}h)^{2}}{m}-2\langle\nabla h,\nabla H \rangle+[e^{-h}\Sigma-\Sigma_{u}]H \tag{3.8}\] \[+\left[e^{-h}\Sigma-\Sigma_{u}+\mu e^{h}\Sigma_{uu}-2(m-1)k \right]|\nabla h|^{2}\] \[-2\langle\nabla h,[e^{-h}\Sigma_{x}-\mu\Sigma_{xu}]\rangle+\mu e ^{-h}\Delta_{f}\Sigma^{x}.\] Proof.: By virtue of the bound \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)kg\) it follows upon recalling the identity in Lemma 3.2 that \[\Delta_{f}H \geq \,2\frac{(\Delta_{f}h)^{2}}{m}-2\langle\nabla h,\nabla H \rangle-2(\mu-1)e^{-h}\Sigma|\nabla h|^{2} \tag{3.9}\] \[+2(\mu-1)e^{-h}\langle\nabla h,\nabla\Sigma\rangle-2(m-1)k| \nabla h|^{2}\] \[+\mu\Sigma\Delta_{f}e^{-h}+2\mu\langle\nabla e^{-h},\nabla \Sigma\rangle+\mu e^{-h}\Delta_{f}\Sigma.\] Note that in concluding the above inequality, specifically, the first term on the right-hand side, we have made use of the basic inequalities \[|\nabla^{2}h|^{2}+\frac{\langle\nabla f,\nabla h\rangle^{2}}{m-n}\geq\frac{( \Delta h)^{2}}{n}+\frac{\langle\nabla f,\nabla h\rangle^{2}}{m-n}\geq\frac{( \Delta_{f}h)^{2}}{m}. \tag{3.10}\] Let us now proceed by attending to some useful and straightforward calculations relating to the nonlinear term \(\Sigma=\Sigma(x,e^{h})\). Evidently \[\nabla\Sigma[x,e^{h}]=\Sigma_{x}[x,e^{h}]+e^{h}\Sigma_{u}[x,e^{h}]\nabla h, \tag{3.11}\] and thus moving on to the Laplacian we can write \[\Delta\Sigma[x,e^{h}]=\text{div}(\Sigma_{x}[x,e^{h}]+e^{h}\Sigma_{u}[x,e^{h}] \nabla h). 
\tag{3.12}\] It is convenient to do the calculations in local coordinates and so we proceed by writing \[\Delta\Sigma(x,e^{h})= \sum_{i=1}^{n}\frac{\partial}{\partial x_{i}}(\Sigma_{x_{i}}[x,e^{h }]+e^{h}\Sigma_{u}[x,e^{h}]h_{i})\] \[= \sum_{i=1}^{n}\bigg{\{}\Sigma_{x_{i}x_{i}}[x,e^{h}]+\Sigma_{x_{i}u }[x,e^{h}](e^{h})_{i}+e^{h}h_{i}\Sigma_{u}[x,e^{h}]h_{i}\] \[+e^{h}(\Sigma_{x_{i}u}[x,e^{h}]+\Sigma_{uu}[x,e^{h}](e^{h})_{i})h_ {i}+e^{h}\Sigma_{u}[x,e^{h}]h_{ii}\bigg{\}}. \tag{3.13}\] Abbreviating the arguments \([x,e^{h}]\) of \(\Sigma\) for convenience and rewriting the above we have \[\Delta\Sigma =\Delta\Sigma^{x}+e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^{h}| \nabla h|^{2}\Sigma_{u}\] \[+e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^{2h}|\nabla h|^{2} \Sigma_{uu}+e^{h}\Sigma_{u}\Delta h\] \[=\Delta\Sigma^{x}+2e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^{h} |\nabla h|^{2}(\Sigma_{u}+e^{h}\Sigma_{uu})+e^{h}\Sigma_{u}\Delta h. \tag{3.14}\] As a result the above upon substitution give \[\Delta_{f}\Sigma =\Delta\Sigma-\langle\nabla f,\nabla\Sigma\rangle=\Delta\Sigma- \langle\nabla f,(\Sigma_{x}+e^{h}\Sigma_{u}\nabla h)\rangle\] \[=\Delta\Sigma-\langle\nabla f,\Sigma_{x}\rangle-e^{h}\Sigma_{u} \langle\nabla f,\nabla h\rangle\] \[=\Delta\Sigma^{x}-\langle\nabla f,\nabla\Sigma^{x}\rangle+2e^{h} \langle\Sigma_{xu},\nabla h\rangle\] \[\quad+e^{h}|\nabla h|^{2}(\Sigma_{u}+e^{h}\Sigma_{uu})+e^{h} \Sigma_{u}\Delta h-e^{h}\Sigma_{u}\langle\nabla f,\nabla h\rangle\] \[=\Delta_{f}\Sigma^{x}+2e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^ {h}|\nabla h|^{2}(\Sigma_{u}+e^{h}\Sigma_{uu})+e^{h}\Sigma_{u}\Delta_{f}h. \tag{3.15}\] Moreover for the sake of future reference we also note that \[\Delta_{f}e^{-h} =\Delta e^{-h}-\langle\nabla f,\nabla e^{-h}\rangle\] \[=\text{div}(-e^{-h}\nabla h)+e^{-h}\langle\nabla f,\nabla h\rangle\] \[=-e^{-h}\Delta h+e^{-h}|\nabla h|^{2}+e^{-h}\langle\nabla f, \nabla h\rangle\] \[=-e^{-h}(\Delta_{f}h-|\nabla h|^{2}). \tag{3.16}\] Now returning to the inequality (3.9) and upon substituting from (3.11), (3.15) and (3.16) we obtain \[\Delta_{f}H \geq 2(\Delta_{f}h)^{2}/m-2\langle\nabla h,\nabla H\rangle-2(\mu-1)e^ {-h}\Sigma|\nabla h|^{2}\] \[+2(\mu-1)e^{-h}\langle\nabla h,\Sigma_{x}+e^{h}\Sigma_{u}\nabla h \rangle-2(m-1)k|\nabla h|^{2}\] \[-\mu e^{-h}\Sigma(\Delta_{f}h-|\nabla h|^{2})-2\mu e^{-h}\langle \nabla h,\Sigma_{x}+e^{h}\Sigma_{u}\nabla h\rangle\] \[+\mu e^{-h}[\Delta_{f}\Sigma^{x}+2e^{h}\langle\Sigma_{xu},\nabla h \rangle+e^{h}(\Sigma_{u}+e^{h}\Sigma_{uu})|\nabla h|^{2}+e^{h}\Sigma_{u} \Delta_{f}h], \tag{3.17}\] or upon rearranging \[\Delta_{f}H \geq 2(\Delta_{f}h)^{2}/m-2\langle\nabla h,\nabla H\rangle-2(\mu-1)e^{-h }\Sigma|\nabla h|^{2}\] \[+2(\mu-1)e^{-h}\langle\nabla h,\Sigma_{x}\rangle+2(\mu-1)\Sigma_{ u}|\nabla h|^{2}\] \[-2(m-1)k|\nabla h|^{2}-\mu(e^{-h}\Sigma-\Sigma_{u})\Delta_{f}h+ \mu e^{-h}\Sigma|\nabla h|^{2}\] \[-2\mu e^{-h}\langle\nabla h,\Sigma_{x}\rangle-2\mu\Sigma_{u}| \nabla h|^{2}+\mu e^{-h}\Delta_{f}\Sigma^{x}\] \[+2\mu\langle\nabla h,\Sigma_{xu}\rangle+\mu(\Sigma_{u}+e^{h} \Sigma_{uu})|\nabla h|^{2}. 
\tag{3.18}\] Next by recalling (3.2) and (3.5) we can write \[\mu\Delta_{f}h=-\mu(H-(\mu-1)e^{-h}\Sigma[x,e^{h}])=-[H+(\mu-1)|\nabla h|^{2}], \tag{3.19}\] and therefore by substituting the latter back in (3.18) and rearranging terms it follows that \[\Delta_{f}H \geq 2(\Delta_{f}h)^{2}/m-2\langle\nabla h,\nabla H\rangle\] \[+(e^{-h}\Sigma-\Sigma_{u})[H+(\mu-1)|\nabla h|^{2}]\] \[+[-2(\mu-1)e^{-h}\Sigma-2\Sigma_{u}-2(m-1)k]|\nabla h|^{2}\] \[+(\mu e^{-h}\Sigma+\mu\Sigma_{u}+\mu e^{h}\Sigma_{uu})|\nabla h| ^{2}\] \[-2e^{-h}\langle\nabla h,\Sigma_{x}\rangle+\mu e^{-h}\Delta_{f} \Sigma^{x}+2\mu\langle\nabla h,\Sigma_{xu}\rangle. \tag{3.20}\] Finally taking into account the necessary cancellations and by a further rearrangement of terms we obtain \[\Delta_{f}H \geq 2(\Delta_{f}h)^{2}/m-2\langle\nabla h,\nabla H\rangle+(e^{-h} \Sigma-\Sigma_{u})H\] \[+[e^{-h}\Sigma-\Sigma_{u}+\mu e^{h}\Sigma_{uu}-2(m-1)k]|\nabla h |^{2}\] \[-2\langle\nabla h,e^{-h}\Sigma_{x}-\mu\Sigma_{xu}\rangle+\mu e^{- h}\Delta_{f}\Sigma^{x}, \tag{3.21}\] which is the desired conclusion. The following lemma will also be used in the course of the proof of the local estimate in the next subsection. **Lemma 3.4**.: _Suppose \(a,b,z\in\mathbb{R}\), \(c,y>0\) and \(\mu>1\) are arbitrary constants such that \(y-\mu z>0\). Then for any \(\varepsilon\in(0,1)\) we have_ \[(y-z)^{2} -a\sqrt{y}(y-\mu z)-by-c\sqrt{y}\] \[\geq(y-\mu z)^{2}/\mu^{2}-a^{2}\mu^{2}(y-\mu z)/[8(\mu-1)]\] \[-(3/4)c^{4/3}[\mu^{2}/(4\varepsilon(\mu-1)^{2})]^{1/3}-(\mu^{2}b^ {2})/[4(1-\varepsilon)(\mu-1)^{2}]. \tag{3.22}\] Proof.: Starting from the expression on the left-hand side in (3.22) we can write for any \(\delta,\varepsilon\) by basic considerations \[(y-z)^{2} -a\sqrt{y}(y-\mu z)-by-c\sqrt{y}\] \[=(1-\varepsilon-\delta)y^{2}-(2-\varepsilon\mu)yz+z^{2}+( \varepsilon y-a\sqrt{y})(y-\mu z)+\delta y^{2}-by-c\sqrt{y}\] \[=(1/\mu-\varepsilon/2)(y-\mu z)^{2}+(1-\varepsilon-\delta-1/\mu+ \varepsilon/2)y^{2}+(1-\mu+\varepsilon\mu^{2}/2)z^{2}\] \[\quad+(\varepsilon y-a\sqrt{y})(y-\mu z)+\delta y^{2}-by-c\sqrt {y}. \tag{3.23}\] In particular setting \(\delta=(1/\mu-1)^{2}\) and \(\varepsilon=2-2/\mu-2(1/\mu-1)^{2}=2(\mu-1)/\mu^{2}\) gives \(1-\varepsilon-\delta-1/\mu+\varepsilon/2=0\) and \(1-\mu+\varepsilon\mu^{2}/2=0\) and so by making note of the inequality \(\varepsilon y-a\sqrt{y}\geq-a^{2}/(4\varepsilon)\) with \(\varepsilon=2(\mu-1)/\mu^{2}>0\) we can deduce from (3.23) that \[(y-z)^{2} -a\sqrt{y}(y-\mu z)-by-c\sqrt{y} \tag{3.24}\] \[\geq(y-\mu z)^{2}/\mu^{2}-a^{2}\lambda^{2}(y-\mu z)/[8(\mu-1)]+( \mu-1)^{2}y^{2}/\mu^{2}-by-c\sqrt{y}.\] Next, considering the last three terms only we can write, for any \(\varepsilon\in(0,1)\), \[(\mu-1)^{2} y^{2}/\mu^{2}-by-c\sqrt{y}\] \[\geq(\mu-1)^{2}y^{2}/\mu^{2}-(1-\varepsilon)(\mu-1)^{2}y^{2}/\mu^ {2}-(\mu^{2}b^{2})/[4(1-\varepsilon)(\mu-1)^{2}]-c\sqrt{y}\] \[\geq\varepsilon(\mu-1)^{2}y^{2}/\mu^{2}-(\mu^{2}b^{2})/[4(1- \varepsilon)(\mu-1)^{2}]-c\sqrt{y}\] \[\geq-(3/4)c^{4/3}[\mu^{2}/(4\varepsilon(\mu-1)^{2})]^{1/3}-(\mu^ {2}b^{2})/[4(1-\varepsilon)(\mu-1)^{2}] \tag{3.25}\] where above we have made use of \((1-\varepsilon)(\mu-1)^{2}y^{2}/\mu^{2}-by\geq-(\mu^{2}b^{2})/[4(1-\varepsilon )(\mu-1)^{2}]\) and \(\varepsilon(\mu-1)^{2}y^{2}/\mu^{2}-c\sqrt{y}\geq-(3/4)c^{4/3}[\mu^{2}/(4 \varepsilon(\mu-1)^{2})]^{1/3}\) to deduce the first and last inequalities respectively. Substituting back in (3.24) gives the desired inequality. 
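Since Lemma 3.4 is a purely algebraic inequality, it can be sanity-checked numerically. The short script below is an illustrative sketch of ours, not part of the argument (the sampling ranges are arbitrary); it draws admissible parameters and compares the two sides of (3.22).

```python
import random

def lhs(a, b, c, y, z, mu):
    # (y - z)^2 - a*sqrt(y)*(y - mu*z) - b*y - c*sqrt(y)
    return (y - z) ** 2 - a * y ** 0.5 * (y - mu * z) - b * y - c * y ** 0.5

def rhs(a, b, c, y, z, mu, eps):
    # (y - mu*z)^2/mu^2 - a^2*mu^2*(y - mu*z)/(8*(mu-1))
    # - (3/4)*c^(4/3)*[mu^2/(4*eps*(mu-1)^2)]^(1/3) - mu^2*b^2/(4*(1-eps)*(mu-1)^2)
    return ((y - mu * z) ** 2 / mu ** 2
            - a ** 2 * mu ** 2 * (y - mu * z) / (8 * (mu - 1))
            - 0.75 * c ** (4 / 3) * (mu ** 2 / (4 * eps * (mu - 1) ** 2)) ** (1 / 3)
            - mu ** 2 * b ** 2 / (4 * (1 - eps) * (mu - 1) ** 2))

random.seed(0)
for _ in range(10000):
    mu = 1 + 2 * random.random()            # mu > 1
    eps = 0.05 + 0.9 * random.random()      # eps in (0, 1)
    y = 0.1 + 5 * random.random()           # y > 0
    z = random.uniform(-5, y / mu - 1e-3)   # ensures y - mu*z > 0
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    c = 0.01 + 3 * random.random()          # c > 0
    assert lhs(a, b, c, y, z, mu) >= rhs(a, b, c, y, z, mu, eps) - 1e-9
print("inequality (3.22) held on all sampled points")
```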
### Proof of Theorem 2.1 This is based on the estimate established in Lemme 3.3 and a localisation argument. In order to carry out the localisation and the relevant estimates we proceed by first constructing a suitable cut-off function. Towards this end we begin by introducing a function \(\bar{\psi}=\bar{\psi}(t)\) satisfying the following conditions: * \(\bar{\psi}\) is of class \(\mathscr{C}^{2}[0,\infty)\). * \(0\leq\bar{\psi}(t)\leq 1\) for \(0\leq t<\infty\) with \[\bar{\psi}(t)=\left\{\begin{array}{ll}1&\quad t\leq 1,\\ 0&\quad t\geq 2.\end{array}\right.\] (3.26) * \(\bar{\psi}\) is non-increasing (that is, \(\bar{\psi}^{\prime}\leq 0\)), and additionally, for suitable constants \(c_{1},c_{2}>0\), its first and second order derivatives satisfy the bounds \[-c_{1}\leq\frac{\bar{\psi}^{\prime}}{\sqrt{\bar{\psi}}}\leq 0\qquad and \qquad\bar{\psi}^{{}^{\prime\prime}}\geq-c_{2}.\] (3.27) Next pick and fix a reference point \(p\) in \(M\) and with \(r=r_{p}(x)\) set \[\psi(x)=\bar{\psi}\left(\frac{r(x)}{R}\right). \tag{3.28}\] It is evident that \(\psi\equiv 1\) for when \(0\leq r(x)\leq R\) and \(\psi\equiv 0\) for when \(r(x)\geq 2R\). Let us now consider the localised function \(\psi H\) supported on \(\mathcal{B}_{2R}\). Let us also assume that \(x_{1}\) in \(\mathcal{B}_{2R}\) is the point where \(\psi H\) attains its maximum. As for \(\psi H\leq 0\) the estimate is trivially true we can assume that \([\psi H](x_{1})>0\). Furthermore by an argument of Calabi [10] we can assume that \(x_{1}\) is not in the cut locus of \(p\). It then follows that at this point: \[\nabla(\psi H)=0\qquad and\qquad\Delta(\psi H)\leq 0,\qquad\Delta_{f}(\psi H) \leq 0. \tag{3.29}\] Now starting from the basic identity \[\Delta_{f}(\psi H)=H\Delta_{f}\psi+2\langle\nabla\psi,\nabla H\rangle+\psi \Delta_{f}H \tag{3.30}\] and making note of the relations (3.29) at the maximum point \(x_{1}\), we can write \[0 \geq H\Delta_{f}\psi+2\langle\nabla\psi,\nabla H\rangle+\psi\Delta_{ f}H\] \[\geq H\Delta_{f}\psi+\frac{2}{\psi}\langle\nabla\psi,\nabla(\psi H )\rangle-2\frac{|\nabla\psi|^{2}}{\psi}H+\psi\Delta_{f}H\] \[\geq H\Delta_{f}\psi-2\frac{|\nabla\psi|^{2}}{\psi}H+\psi\Delta_ {f}H. \tag{3.31}\] We proceed now by obtaining suitable lower bounds for each of the three individual terms on the right-hand side of this inequality. As for the first term, referring to (3.28), a straightforward calculation gives \(\nabla\psi=(\bar{\psi}^{\prime}/R)\nabla r\) and \(\Delta\psi=\bar{\psi}^{\prime\prime}|\nabla r|^{2}/R^{2}+\bar{\psi}^{\prime} \Delta r/R\). Subsequently from these we have \[\Delta_{f}\psi=\Delta\psi-\langle\nabla f,\nabla\psi\rangle=\frac{\bar{\psi}^ {\prime\prime}}{R^{2}}|\nabla r|^{2}+\frac{\bar{\psi}^{\prime}}{R}\Delta_{f}r. \tag{3.32}\] For the last term on the right here the Wei-Wiley Laplacian comparison theorem (_cf_. [45]) together with \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)k\) gives \(\Delta_{f}r\leq(m-1)\sqrt{k}\coth(\sqrt{k}r)\). Hence substituting back in (3.32) and noting \(\bar{\psi}^{\prime}\leq 0\) we have: \[\Delta_{f}\psi\geq\frac{1}{R^{2}}\bar{\psi}^{\prime\prime}+\frac{(m-1)}{R} \bar{\psi}^{\prime}\sqrt{k}\coth(\sqrt{k}r). \tag{3.33}\] Moreover upon noting \(\coth(\sqrt{k}r)\leq\coth(\sqrt{k}R)\) and \(\sqrt{k}\coth(\sqrt{k}R)\leq(1+\sqrt{k}R)/R\), subject to \(R\leq r\leq 2R\) [here we are using the monotonicity of \(\coth x\) and the bound \(x\coth x\leq 1+x\) for \(x>0\)] we deduce that \[(m-1)\bar{\psi}^{\prime}\sqrt{k}\coth(\sqrt{k}r)\geq(m-1)(1+\sqrt{k}R)\bar{ \psi}^{\prime}/R. 
\tag{3.34}\] Hence substituting this back in (3.33) and making note of the assumptions on \(\bar{\psi}\), specifically, \(0\leq\bar{\psi}\leq 1\) and the lower bounds on \(\bar{\psi}^{\prime}\), \(\bar{\psi}^{\prime\prime}\) in (3.27), it follows that \[\Delta_{f}\psi \geq\frac{1}{R^{2}}\bar{\psi}^{{}^{\prime\prime}}+\frac{(m-1)}{R} \left(\frac{1}{R}+\sqrt{k}\right)\bar{\psi}^{{}^{\prime}}\] \[\geq-\frac{c_{2}}{R^{2}}-\frac{(m-1)}{R}c_{1}\left(\frac{1}{R}+ \sqrt{k}\right)\] \[=-\frac{1}{R^{2}}[c_{2}+(m-1)c_{1}(1+R\sqrt{k})], \tag{3.35}\] which can be readily utilised to bound the first term on the right-hand side in (3.31). Referring next to the middle term in the same inequality, by using the imposed bounds on \(\bar{\psi}^{\prime}\) [the first inequality in (3.27)], we have \[\frac{|\nabla\psi|^{2}}{\psi}=\frac{\bar{\psi}^{\prime 2}}{\bar{\psi}} \frac{|\nabla\varrho|^{2}}{R^{2}}=\left(\frac{\bar{\psi}^{\prime}}{\sqrt{\bar {\psi}}}\right)^{2}\frac{|\nabla\varrho|^{2}}{R^{2}}\leq\frac{c_{1}^{2}}{R^{2}}. \tag{3.36}\] Since for the third term on the right-hand side in (3.31) we already have the conclusion of Lemma 3.3 at our disposal, by substituting the above fragments back, we obtain at the maximum point \(x_{1}\) the inequality \[0 \geq H\Delta_{f}\psi-2(|\nabla\psi|^{2}/\psi)H+\psi\Delta_{f}H\] \[\geq -H[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]/R^{2}\] \[+\psi(e^{-h}\Sigma-\Sigma_{u})H+\psi[2(\Delta_{f}h)^{2}/m-2\langle \nabla h,\nabla H\rangle]\] \[+\psi(e^{-h}\Sigma-\Sigma_{u}+\mu e^{h}\Sigma_{uu}-2(m-1)k)| \nabla h|^{2}\] \[-2\psi\langle\nabla h,e^{-h}\Sigma_{x}-\mu\Sigma_{xu}\rangle+\psi \mu e^{-h}\Delta_{f}\Sigma^{x}. \tag{3.37}\] Referring now to the above inequality, since we have \(H>0\) and \(\nabla(\psi H)=0\) at \(x_{1}\), it is easily seen that, \[\psi\langle\nabla h,\nabla H\rangle=-H\langle\nabla h,\nabla\psi\rangle\leq H |\nabla h||\nabla\psi|\leq c_{1}\frac{\sqrt{\psi}}{R}H|\nabla h|. \tag{3.38}\] Likewise by an application of the Cauchy-Schwarz inequality we can write \[\langle\nabla h,e^{-h}\Sigma_{x}-\mu\Sigma_{xu}\rangle\leq|\nabla h||e^{-h} \Sigma_{x}-\mu\Sigma_{xu}|. \tag{3.39}\] Therefore by substituting (3.38) and (3.39) back in (3.37), making note of (3.1) and multiplying through by \(\psi\geq 0\), it follows that \[0 \geq -\psi H[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]/R^{2}\] \[+(2\psi^{2}/m)(|\nabla h|^{2}+e^{-h}\Sigma)^{2}-2c_{1}\psi^{3/2}| \nabla h|H/R\] \[+\psi^{2}[e^{-h}\Sigma-\Sigma_{u}+\mu e^{h}\Sigma_{uu}-2(m-1)k]| \nabla h|^{2}\] \[+\psi^{2}H(e^{-h}\Sigma-\Sigma_{u})-2\psi^{2}|e^{-h}\Sigma_{x}- \mu\Sigma_{xu}||\nabla h|\] \[+\psi^{2}\mu e^{-h}\Delta_{f}\Sigma^{x}. \tag{3.40}\] Now in order to obtain the desired bounds out of this it is more efficient to proceed by setting \[y=\psi|\nabla h|^{2},\qquad z=-\psi e^{-h}\Sigma, \tag{3.41}\] In particular note that \(y-z=-\psi\Delta_{f}h\) and \(y-\mu z=\psi H\) by (3.1) and (3.2) respectively. Substituting the above in (3.40) thus gives \[0 \geq -\psi H[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]/R^{2}\] \[+(2/m)[(y-z)^{2}-(mc_{1}/R)y^{1/2}(y-\mu z)-m\mathsf{A}_{\Sigma} y-m\mathsf{B}_{\Sigma}y^{1/2}]\] \[+\psi^{2}H(e^{-h}\Sigma-\Sigma_{u})+\psi^{2}\mu e^{-h}\Delta_{f} \Sigma^{x}, \tag{3.42}\] where \[\mathsf{A}_{\Sigma}=(m-1)k-\inf_{\mathcal{B}_{2R}}\{(e^{-h}\Sigma-\Sigma_{u}+ \mu e^{h}\Sigma_{uu})_{-}/2\}, \tag{3.43}\] \[\mathsf{B}_{\Sigma}=\sup_{\mathcal{B}_{2R}}|e^{-h}\Sigma_{x}-\mu\Sigma_{xu}|. 
\tag{3.44}\] Utilising Lemma 3.4 upon setting \(a=mc_{1}/R,b=m{\sf A}_{\Sigma}\) and \(c=m{\sf B}_{\Sigma}\) it follows [_see_ (3.22)] that \[(y-z)^{2}-mc_{1}y^{1/2}(y-\mu z)/R-m{\sf A}_{\Sigma}y-m{\sf B}_{ \Sigma}y^{1/2}\] \[\geq \frac{1}{\mu^{2}}(y-\mu z)^{2}-\frac{m^{2}c_{1}^{2}\mu^{2}}{8(\mu -1)R^{2}}(y-\mu z)\] \[-\frac{m^{2}\mu^{2}{\sf A}_{\Sigma}^{2}}{4(1-\varepsilon)(\mu-1) ^{2}}-\frac{3}{4}\left[\frac{m^{4}\mu^{2}{\sf B}_{\Sigma}^{4}}{4\varepsilon( \mu-1)^{2}}\right]^{1/3}. \tag{3.45}\] Thus from (3.42) it follows that \[0\geq -\psi H[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]/R^{2}\] \[+\frac{2}{m}\bigg{[}\frac{(\psi H)^{2}}{\mu^{2}}-\frac{m^{2}c_{1 }^{2}\mu^{2}}{8(\mu-1)R^{2}}(\psi H)\] \[-\frac{m^{2}\mu^{2}{\sf A}_{\Sigma}^{2}}{4(1-\varepsilon)(\mu-1) ^{2}}-\frac{3}{4}\left[\frac{m^{4}\mu^{2}{\sf B}_{\Sigma}^{4}}{4\varepsilon( \mu-1)^{2}}\right]^{1/3}\bigg{]}\] \[+\psi^{2}H(e^{-h}\Sigma-\Sigma_{u})+\psi^{2}\mu e^{-h}\Delta_{f} \Sigma^{x}, \tag{3.46}\] or after basic considerations and a rearrangement of terms \[0\geq \ \frac{2}{m\mu^{2}}(\psi H)^{2}\] \[-\left[\frac{1}{R^{2}}[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]\right.\] \[-\inf_{{\cal B}_{2R}}\{(e^{-h}\Sigma-\Sigma_{u})_{-}\}+\frac{mc_ {1}^{2}\mu^{2}}{4(\mu-1)R^{2}}\bigg{]}\psi H\] \[-\left[\frac{m\mu^{2}{\sf A}_{\Sigma}^{2}}{2(1-\varepsilon)(\mu- 1)^{2}}+\frac{3}{2}\left[\frac{m\mu^{2}{\sf B}_{\Sigma}^{4}}{4\varepsilon( \mu-1)^{2}}\right]^{1/3}-\mu\inf_{{\cal B}_{2R}}\{(e^{-h}\Delta_{f}\Sigma^{x}) _{-}\}\right]\!. \tag{3.47}\] Here we have used \(\psi H(e^{-h}\Sigma-\Sigma_{u})_{-}\leq\psi^{2}H[e^{-h}\Sigma-\Sigma_{u}]\) and \(\mu(e^{-h}\Delta_{f}\Sigma^{x})_{-}\leq\psi^{2}\mu e^{-h}\Delta_{f}\Sigma^{x}\). Now upon setting \[{\sf D}= \ [c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]/R^{2}\] \[-\inf_{{\cal B}_{2R}}\{(e^{-h}\Sigma-\Sigma_{u})_{-}\}+[mc_{1}^{2 }\mu^{2}/(4(\mu-1)R^{2})], \tag{3.48}\] and \[{\sf E}= \ m\mu^{2}{\sf A}^{2}/(2(1-\varepsilon)(\mu-1)^{2})\] \[+3/2[m\mu^{2}{\sf B}^{4}/(4\varepsilon(\mu-1)^{2})]^{1/3}-\mu\inf _{{\cal B}_{2R}}\{(e^{-h}\Delta_{f}\Sigma^{x})_{-}\}, \tag{3.49}\] we can write (3.47) as \[0\geq 2(\psi H)^{2}/(m\mu^{2})-{\sf D}(\psi H)-{\sf E}. \tag{3.50}\] As a result it follows from this inequality that \[\psi H \leq(m\mu^{2})/4\left[\mathsf{D}+\sqrt{\mathsf{D}^{2}+(8\mathsf{E})/ (m\mu^{2})}\right]\] \[\leq(m\mu^{2})/4\left[2\mathsf{D}+\sqrt{(8\mathsf{E})/(m\mu^{2})} \right]=(m\mu^{2}/2)\mathsf{D}+\mu\sqrt{m\mathsf{E}/2}. \tag{3.51}\] Since \(\psi\equiv 1\) on \(\mathcal{B}_{R}\) and \(x_{1}\) is a maximum point of \(\psi H\) on \(\mathcal{B}_{2R}\), we have \[\sup_{\mathcal{B}_{R}}H=\sup_{\mathcal{B}_{R}}[\psi H]\leq\sup_{ \mathcal{B}_{2R}}[\psi H]=(\psi H)(x_{1}). \tag{3.52}\] Thus it follows that \[\sup_{\mathcal{B}_{R}}H\leq(m\mu^{2}/2)\mathsf{D}+\mu\sqrt{m\mathsf{E}/2}. \tag{3.53}\] Therefore recalling (3.2), substituting for \(\mathsf{D}\) and \(\mathsf{E}\) from (3.48) and (3.49) above, we can write after multiplying both sides by \(1/\mu\), that for every \(x\in\mathcal{B}_{R}\): \[\mu^{-1}|\nabla h|^{2}+e^{-h}\Sigma(x,e^{h}) \leq m\mu\mathsf{D}/2+(m\mathsf{E}/2)^{1/2}\] \[\leq m\mu[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{1}^{2}]/(2R^{2})\] \[-(m\mu/2)\inf_{\mathcal{B}_{2R}}\{(e^{-h}\Sigma-\Sigma_{u})_{-}\}\] \[+[m^{2}c_{1}^{2}\mu^{3}/(8(\mu-1)R^{2})]\] \[+\sqrt{m}\{[m\mu^{2}\mathsf{A}_{\Sigma}^{2}/(4(1-\varepsilon)( \mu-1)^{2})]\] \[+(3/4)[m\mu^{2}\mathsf{B}_{\Sigma}^{4}/(4\varepsilon(\mu-1)^{2}) ]^{1/3}\] \[-(\mu/2)\inf_{\mathcal{B}_{2R}}\{(e^{-h}\Delta_{f}\Sigma^{x})_{-} \}\}^{1/2}. 
\tag{3.54}\] Finally reverting back to \(u\) upon noting the relation \(h=\log u\) and rearranging terms \[\frac{|\nabla u|^{2}}{\mu u^{2}}+\frac{\Sigma[x,u]}{u} \leq m\mu[c_{2}+(m-1)c_{1}(1+R\sqrt{k})\] \[+2c_{1}^{2}+mc_{1}^{2}\mu^{2}/(4(\mu-1))]/(2R^{2})\] \[-(m\mu/2)\inf_{\mathcal{B}_{2R}}\{(\Sigma/u-\Sigma_{u})_{-}\}\] \[+\sqrt{m}\{[m\mu^{2}\mathsf{A}_{\Sigma}^{2}/(4(1-\epsilon)(\mu-1 )^{2})]\] \[+(3/4)[m\mu^{2}\mathsf{B}_{\Sigma}^{4}/(4\epsilon(\mu-1)^{2})]^{1 /3}\] \[-(\mu/2)\inf_{\mathcal{B}_{2R}}\{([\Delta_{f}\Sigma^{x}]/u)_{-}\} \}^{1/2}, \tag{3.55}\] which is the desired estimate as in (2.1). The proof is thus complete. ## 4. Proof of the elliptic Harnack inequality in Theorem 2.3 Now in order to prove the Harnack inequality we need to integrate the differential Harnack inequality along a geodesic path \(\gamma\) joining the points \(x_{1}\) and \(x_{2}\) inside \(\mathcal{B}_{R}\). Towards this end let us begin by rewriting the local gradient estimate for (1.1) as follows: \[\sup_{\mathcal{B}_{R}}\frac{|\nabla u|^{2}}{u^{2}} \leq\frac{m\mu^{2}}{2R^{2}}\left[[c_{2}+(m-1)c_{1}(1+R\sqrt{k})+2c_{ 1}^{2}]+\frac{mc_{1}^{2}\mu^{2}}{4(\mu-1)}\right]\] \[\quad+\frac{\sqrt{m}\mu}{2}\bigg{[}\frac{m\mu^{2}\mathsf{A}^{2}}{ (1-\varepsilon)(\mu-1)^{2}}+\left[\frac{27m\mu^{2}\mathsf{B}^{4}}{4\varepsilon (\mu-1)^{2}}\right]^{1/3}-2\mu\inf_{\mathcal{B}_{2R}}\left\{\frac{(\Delta_{f} \Sigma^{x})_{-}}{u}\right\}\bigg{]}^{1/2}\] \[\quad+\frac{m\mu^{2}}{2}\sup_{\mathcal{B}_{2R}}\left\{\frac{(u \Sigma_{u}[x,u]-\Sigma[x,u])_{+}}{u}\right\}-\mu\inf_{\mathcal{B}_{R}}\left\{ \frac{(\Sigma[x,u])_{-}}{u}\right\}:=\mathbb{H}. \tag{4.1}\] Here we have denoted the expression on the right-hand side of (4.1) by \(\mathbb{H}\) which is a positive constant. Now integrating the quantity \(|\nabla u|/u\) along a geodesic curve \(\gamma\) in \(\mathcal{B}_{R}\) (with \(\gamma(0)=x_{1}\) and \(\gamma(1)=x_{2}\)) we have \[\log u(x_{2})-\log u(x_{1}) =\int_{0}^{1}\frac{d}{ds}\log u(\gamma(s))\,ds\] \[=\int_{0}^{1}\langle[\nabla u/u](\gamma(s)),\gamma^{\prime}(s) \rangle\,ds\leq\left[\sup_{\mathcal{B}_{R}}\frac{|\nabla u|}{u}\right]\int_{0} ^{1}|\gamma^{\prime}|\,ds\] \[\leq d(x_{1},x_{2})\sqrt{\mathbb{H}}\leq 2R\sqrt{\mathbb{H}}. \tag{4.2}\] Therefore \(\log[u(x_{2})/u(x_{1})]\leq d(x_{1},x_{2})\sqrt{\mathbb{H}}\leq 2R\sqrt{ \mathbb{H}}\) or after exponentiation \[u(x_{2})\leq e^{2R\sqrt{\mathbb{H}}}u(x_{1}) \tag{4.3}\] giving the desired inequality. The remaining assertions are now straightforward consequences of this inequality. ## 5. Proof of the Liouville result in Theorem 2.5 Starting from (2.4) and noting that \(\mathsf{B}_{\Sigma}=0\) (as a result of \(|\Sigma_{x}-\mu u\Sigma_{xu}|\equiv 0\)), \(k=0\) and \(\mathsf{C}_{\Sigma}=0\) (as a result of \(\Delta_{f}\Sigma^{x}\equiv 0\)) we obtain, after rearranging the inequality, \[\frac{|\nabla u|^{2}}{\mu u^{2}}+\frac{\Sigma(u)}{u} \leq\frac{\sqrt{m}}{2}\bigg{[}\frac{m\mu^{2}\mathsf{A}_{\Sigma}^ {2}}{(1-\varepsilon)(\mu-1)^{2}}+\bigg{[}\frac{27m\mu^{2}\mathsf{B}_{\Sigma}^ {4}}{4\varepsilon(\mu-1)^{2}}\bigg{]}^{1/3}+2\mu\mathsf{C}_{\Sigma}\bigg{]}^{1/2}\] \[\quad+\frac{m\mu}{2}\sup_{M}\left\{\frac{(u\Sigma_{u}[u]-\Sigma[ u])_{+}}{u}\right\}\] \[\leq m\mu\bigg{[}\sup_{M}\left\{\frac{(-\Sigma[u]+u\Sigma_{u}[u]- \mu u^{2}\Sigma_{uu}[u])_{+}}{4(\sqrt{1-\varepsilon})(\mu-1)u}\right\}\] \[\quad+\sup_{M}\left\{\frac{(u\Sigma_{u}[u]-\Sigma[u])_{+}}{2u} \right\}\bigg{]}. 
\tag{5.1}\] Next from the imposed assumptions on \(\Sigma\) and its derivatives it is easily seen that \[(-\Sigma[u]+u\Sigma_{u}[u]-\mu u^{2}\Sigma_{uu}[u])_{+}\equiv 0,\qquad(u \Sigma_{u}[u]-\Sigma[u])_{+}\equiv 0. \tag{5.2}\] Hence from (5.1) it follows that \[\frac{|\nabla u|^{2}}{\mu u^{2}}+\frac{\Sigma[u]}{u}\equiv 0, \tag{5.3}\] and so again from the assumptions imposed on \(\Sigma\) that \(|\nabla u|^{2}/u^{2}\equiv 0\). The conclusion now follows at once. ### Authors Statements The authors declare no conflict of interest. They have equal contribution in this research. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. The authors gratefully acknowledge support from EPSRC.
2305.02956
Hybrid quantum learning with data re-uploading on a small-scale superconducting quantum simulator
Supervised quantum learning is an emergent multidisciplinary domain bridging between variational quantum algorithms and classical machine learning. Here, we study experimentally a hybrid classifier model accelerated by a quantum simulator - a linear array of four superconducting transmon artificial atoms - trained to solve multilabel classification and image recognition problems. We train a quantum circuit on simple binary and multi-label tasks, achieving classification accuracy around 95%, and a hybrid model with data re-uploading with accuracy around 90% when recognizing handwritten decimal digits. Finally, we analyze the inference time in experimental conditions and compare the performance of the studied quantum model with known classical solutions.
Aleksei Tolstobrov, Gleb Fedorov, Shtefan Sanduleanu, Shamil Kadyrmetov, Andrei Vasenin, Aleksey Bolgar, Daria Kalacheva, Viktor Lubsanov, Aleksandr Dorogov, Julia Zotova, Peter Shlykov, Aleksei Dmitriev, Konstantin Tikhonov, Oleg V. Astafiev
2023-05-04T16:03:32Z
http://arxiv.org/abs/2305.02956v2
# Hybrid quantum learning with data re-uploading on a small-scale superconducting quantum simulator ###### Abstract Supervised quantum learning is an emergent multidisciplinary domain bridging between variational quantum algorithms and classical machine learning. Here, we study experimentally a hybrid classifier model accelerated by a quantum simulator - a linear array of four superconducting transmon artificial atoms - trained to solve multilabel classification and image recognition problems. We train a quantum circuit on simple binary and multi-label tasks, achieving classification accuracy around 95%, and a hybrid model with data re-uploading with accuracy around 90% when recognizing handwritten decimal digits. Finally, we analyze the inference time in experimental conditions and compare the performance of the studied quantum model with known classical solutions. ## 1 Introduction Over the last years, high attention has been attracted by the idea of using quantum circuits (QCs) as universal approximating models for machine learning (ML) tasks [1, 2, 3, 4], inspired by research on variational quantum algorithms (VQA) and quantum approximate optimization algorithms (QAOA) [5]. While various architectures for quantum circuits have been suggested, including convolutional [6], generative-adversarial [7, 8], recurrent [9] circuits, it is still not clear how to overcome the general trainability issues [10, 11, 12, 13] when such models have high quantum volume; this is an area of active research [14, 15, 16]. Additionally, it is not yet known whether optimizing a quantum circuit can lead to an algorithm that may outperform classical counterparts. However, it is known that even a single-qubit quantum circuit is enough to solve non-trivial classification tasks [17, 18], there is potential in using quantum kernel estimation for support vector machines [19, 20], and it is supposed that QC-based models may have advantages in expressivity and generalization [21, 22, 23, 24, 25]. Also, QC unitarity automatically ensures effective weight normalization [1], which is useful for recurrent models. To date, the idea found a few experimental realizations on various physical platforms, the most prominent being trapped ions and superconducting artificial atoms [26, 3, 27, 28, 29]. To achieve quantum advantage for a certain ML task is to find a QC than will train or perform inference faster, or with less resources, or with higher accuracy than it is currently possible using classical computational devices. For classification problems, this is achievable probably only with a classically intractable \(\mathbf{\theta}\)-parameterized (\(\mathbf{\theta}\in\mathbb{R}^{m}\)) QC used to map a feature vector \(\mathbf{x}\in\mathbb{R}^{n}\) to a higher-dimensional feature Hilbert space vector \(|\Phi(\mathbf{x},\mathbf{\theta})\rangle\), or, more generally, a density matrix \(\rho(\mathbf{x},\mathbf{\theta})\) lying in the space of unit-trace Hermitian operators [2, 3, 4]. Then the prediction is found by performing single- or multi-qubit measurements upon \(\rho(\mathbf{x},\mathbf{\theta})\), optionally using quantum state tomography (QST) [30]. For this strategy to be successful, the classes should be easier to separate in the new feature space than in the original, in similarity to classical dimensionality reduction approaches [31, 32]. Pre-processing of \(\mathbf{x}\) and post-processing of measurement outcomes, for example, by simultaneously-trained classical neural networks, seems to improve model performance. 
When the model combines both classical and quantum mappings, it is called a hybrid deep quantum learning model, or hybrid quantum neural network [33, 30]. We note, however, that using the term "neural network" should probably be avoided for quantum models as it might be ambiguous due to the qualitative differences in their mathematical formulation. Recent studies signify that repeated loading of \(\mathbf{x}\) to the QC significantly improves its expressivity [17, 4]. In this work, we experimentally train a small hybrid model to solve several problems of supervised learning on well-known datasets - bit-string parity (parity), diagnosing breast cancer by biopsy results (cancer), discerning wine cultivars by their physical and chemical parameters (wines), and recognizing handwritten digits (mnist) [34]. As a small-scale prototype of a quantum hardware accelerator used to evaluate the quantum part of the model, we use a linear array of superconducting transmon artificial atoms with nearest-neighbour interactions [35, 36, 27]. We show that four qubits with at most two hundred parameters is enough to reach test accuracies higher than 90% for all studied datasets. ## 2 Learning simple datasets An optical image of the experimental device is shown in Fig. 1(a). The chip hosts eight artificial atoms forming a linear chain with nearest-neighbour interactions [27]. Each transmon has an individual measurement resonator, and two control lines - to change the flux through the SQUID of transmon and to excite it with microwave radiation. In this study, we use only the left half of the chain. The physical parameters of the device are presented in Table 1. ### Training the device The architecture of the four-qubit QC that is run on the device is shown in Figure 1(b). Gaussian driving pulses with controllable amplitude are used for single-qubit operations (\(t_{X,Y}=40\) ns) and smoothed rectangular DC pulses for two-qubit operations (\(t_{\mathrm{iSWAP}}=25\) ns). Each layer of operations has a duration of \(t_{l}=80\) ns, which includes an idling margin to account for variations of propagation delay among the control lines. Total QC execution time is \(t_{\mathrm{QC}}=1460\) ns (incl. 500 ns for readout); however, we have to use a \(t_{\mathrm{rep}}=50\)\(\mu\)s repetition period to allow the system to return to the state of thermal equilibrium with the environment of 20 mK, denoted as \(|\emptyset\rangle\). This repetition period can be significantly reduced, though, to 1-5 \(\mu\)s, by using unconditional reset protocols [37, 38]. Controlled evolution of a real quantum system ends in a statistically mixed state \(\rho(\mathbf{x},\mathbf{\theta})\), having limited resemblance to the QC target state \(|\Phi(\mathbf{x},\mathbf{\theta})\rangle\), and this is supposed to be the main restriction for VQA development on larger NISQ devices [11]. However, while in our case the QC execution \begin{table} \begin{tabular}{l c c c c} \hline \hline Transmon & 1 & 2 & 3 & 4 \\ \hline \(\omega_{ge}/2\pi\), GHz & 5.80 & 4.96 & 6.05 & 4.94 \\ \(T_{1}\), \(\mu\)s & 7.2 & 9.3 & 8.0 & 10.5 \\ \(T_{2}^{*}\), \(\mu\)s & 3.8 & 4.1 & 3.7 & 3.9 \\ \(\omega_{r}/2\pi\), GHz & 7.16 & 7.24 & 7.31 & 7.40 \\ \(t_{X,Y}\), ns & 40 & 40 & 40 & 40 \\ \(t_{\mathrm{iSWAP}}\), ns & 25 & 27 & 27 \\ \hline \hline \end{tabular} \end{table} Table 1: Physical parameters for the four leftmost transmons. 
Idling (“parking”) point frequencies \(\omega_{ge}^{(i)}\) and readout resonator frequencies \(\omega_{r}^{(i)}\) are given, decoherence times are measured when the transmons are in their idling points, near sweet spots. Precise gate durations are also listed. time \(t_{\mathrm{QC}}\) is comparable to the average decoherence time \(T_{2}\approx 4\)\(\mu\)s, we are still able to successfully perform training and inference. The feature vectors \(\mathbf{x}\in\mathbb{R}^{4}\) are presented to the QC by single-qubit \(X\)-rotations of the first layer. The corresponding angles are calculated by applying the inverse tangent function to \(\{x_{i}\}\). For datasets cancer and wines, four most relevant components of the feature vector are chosen. For a certain dataset, we will denote as \(\mathcal{X}\), \(\mathcal{Y}\) the sets of feature vectors and corresponding labels, and as \(\mathcal{T}\), \(\tilde{\mathcal{T}}:\mathcal{T}\cup\tilde{\mathcal{T}}=\mathcal{X}\) the train and test feature subsets, respectively. Additionally, we merge into the first layer four components \(\theta_{1-4}\) of the weight vector \(\boldsymbol{\theta}\in\mathbb{R}^{15}\) by adding them to the respective feature angles [28]. As this operation is done classically, the model, strictly speaking, becomes hybrid classical-quantum. However, we use a more involved pre-processing for image recognition below. The larger part of Hilbert space can be reached by the image \(|\Phi(\mathbf{x},\boldsymbol{\theta})\rangle\), the better performance of the model, and thus the QC must be sufficient to generate fully entangled states. While it is possible to optimize the structure of the circuit along with tuning its parameters \(\boldsymbol{\theta}\)[16], we use a V-shaped sequence of fixed two-qubit operations, interleaved by two layers of single-qubit \(X\) and \(Y\) rotations, as the entangling block. This structure allows us to effectively use the full dimension of the Hilbert space of 4 qubits, has sufficient flexibility and is easy to calibrate. We use only roughly-calibrated quasi-iSWAP gates [39; 40]. Their exact matrix representation does not significantly affect the expressivity of the circuit, which we check in numerical simulations. Fortunately, there is no need to know exactly which final state \(\rho(\mathbf{x}_{i},\boldsymbol{\theta})\) is prepared, contrary to quantum chemistry problems [5]. Furthermore, it seems that any parameterized entangling quantum evolution of a NISQ quantum simulator may turn out to be useful, acting as an unknown, but controllable, unitary gate over the whole register. The last four layers are an arbitrary Euler rotation of the first qubit via an \(Y\)-\(X\)-\(Y\) sequence, and a measurement. The prediction \(g(\mathbf{x}_{i},\boldsymbol{\theta})=\mathrm{Tr}\left[\rho(\mathbf{x}_{i}, \boldsymbol{\theta})\sigma_{z}^{(1)}\right]\) is calculated by running the QC repeatedly and averaging the \(\sigma_{z}^{(1)}\) outcomes. Then it is thresholded at \(g(\mathbf{x}_{i},\boldsymbol{\theta})=0\) to obtain the binary prediction. With such an output, \(k\)-class classification is possible to realize by training \(k\) one-versus-others models with proper optimal parameters \(\{\boldsymbol{\theta}_{1},...,\boldsymbol{\theta}_{k}\}\), or \(k(k-1)/2\) pairwise classifiers [33]. 
defined by \[\mathcal{L}\left[g(\mathbf{x}_{i},\boldsymbol{\theta}),y_{i}\right]= \log_{2}\left(1+\exp[-y_{i}\cdot g(\mathbf{x}_{i},\boldsymbol{ \theta})\cdot\beta]\right) \tag{1}\] \[+\gamma\cdot|\boldsymbol{\theta}|^{2},\] which favours the label \(y_{i}\in\{-1,1\}\) and the prediction \(g(\mathbf{x}_{i},\boldsymbol{\theta})\) to have the same sign, and grows linearly in \(|g(\mathbf{x}_{i},\boldsymbol{\theta})|\) when they have opposite signs. We find that choosing large \(\beta=10\) to strongly penalize sign difference is beneficial to the training process. The term \(\gamma\cdot|\boldsymbol{\theta}|^{2}\) with \(\gamma=0.2\) allows to penalize the model for overfitting. While important for the image recognition task below, it is not necessary for simple datasets. We also check that for the studied datasets, the quadratic cost yields comparable training performance. The full cost is calculated as the expectation value \(\mathbb{E}\left[\mathcal{L}\left[g(\mathbf{x}_{i},\boldsymbol{\theta}),y_{i} \right]\right]\) over a chosen data subset \(\mathcal{S}\), so \(i:\mathbf{x}_{i}\in\mathcal{S}\), with \(\mathcal{S}\) being \(\mathcal{T}\), or \(\tilde{\mathcal{T}}\), or a mini-batch \(\mathcal{B}\) of size \(b\). The \(j\)-th component of its gradient over \(\boldsymbol{\theta}\) can be conveniently computed using the parameter-shift rule [1; 22] which requires only two measurements at \(\theta_{j}\pm\pi/2\). We use the Pennylane library [41] with a custom software wrapper to our experimental setup to perform both the automated differentiation and the optimization. Figure 1(c, d) shows a visualization of the training process for the cancer dataset with 569 \begin{table} \begin{tabular}{l c c c c} \hline Dataset & parity & cancer & wines & mnist \\ \hline \# samples & 16 & 569 & 178 & 1797 \\ \# classes & 2 & 2 & 3 & 10 \\ Accuracy & 1.0 & \(0.95^{*}\) & \(0.94^{*}\) & 0.90 \\ \hline \end{tabular} \end{table} Table 2: Summary of the dataset properties. For parity, due to the low number of samples for a 4-bit task train/test split was equal. For the remaining two datasets the splitting was \(2/1\). \({}^{*}\) cross-validated accuracy, averaged over 6 different random splits Figure 1: (a) – Micrograph of the eight-transmon device used as the quantum AI accelerator (false coloured). The chip is symmetric, one of the two transmission lines (purple) is visible. Readout resonators (red), microwave antennas (yellow) and flux control lines (blue) address each transmon individually. T-shaped shunting capacitors are shown in green. (b) – Structure of the QC. The polar angles of single-qubit \(X\), \(Y\) rotations constitute the parameter vector \(\mathbf{\theta}\) while two-qubit operations are fixed. (c) – Distributions of \(\langle\sigma_{z}^{(1)}\rangle\) for two classes vs. training iteration for cancer dataset. (d) – Cost and accuracy convergence, calculated for both \(\mathcal{T}\) and \(\tilde{\mathcal{T}}\) for comparison. (e) – Measured cost function landscape for \(\tilde{\mathcal{T}}\) around the found minimum in the linear hull of two random orthogonal directions \(\mathbf{\theta}^{\prime}\), \(\mathbf{\theta}^{\prime\prime}\). Accuracies are indicated at several local minima. (f) – A 1D slice of cost function, shown in (e) with a dashed line. (g) – Output of the circuit \(\langle\sigma_{z}^{(1)}\rangle\) showing expected harmonic dependence on \(\mathbf{\theta}\)-components, data for \(\theta_{1}\). samples and 2-to-1 \(\mathcal{T}\)-to-\(\tilde{\mathcal{T}}\) split [42]. 
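For readers who want to experiment with this setup, the following sketch reproduces the structure of the model on an ideal simulator using PennyLane. It is an illustration only: the exact interleaving of the V-shaped entangler, the use of ideal iSWAP gates in place of the roughly calibrated quasi-iSWAPs, and the hyperparameters are our assumptions, not the calibrated pulse sequence run on the device.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev, diff_method="parameter-shift")
def classifier(x, theta):
    # First layer: feature encoding via arctan(x_i), merged with trainable offsets theta_1..theta_4
    for w in range(4):
        qml.RX(np.arctan(x[w]) + theta[w], wires=w)
    # V-shaped entangling block (ideal iSWAPs stand in for the calibrated quasi-iSWAPs)
    for pair in [(0, 1), (1, 2), (2, 3)]:
        qml.ISWAP(wires=pair)
    for w in range(4):                      # layer of X rotations, theta_5..theta_8
        qml.RX(theta[4 + w], wires=w)
    for pair in [(2, 3), (1, 2), (0, 1)]:
        qml.ISWAP(wires=pair)
    for w in range(4):                      # layer of Y rotations, theta_9..theta_12
        qml.RY(theta[8 + w], wires=w)
    # Final Euler Y-X-Y rotation of the first qubit, then measure <sigma_z>
    qml.RY(theta[12], wires=0)
    qml.RX(theta[13], wires=0)
    qml.RY(theta[14], wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(theta, X, Y, beta=10.0, gamma=0.2):
    # Logistic cost (1) with L2 regularization, averaged over a mini-batch; labels Y are +/-1
    preds = np.stack([classifier(x, theta) for x in X])
    return np.mean(np.log2(1.0 + np.exp(-Y * preds * beta))) + gamma * np.sum(theta ** 2)

grad_fn = qml.grad(cost, argnum=0)   # gradients via the parameter-shift rule
```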
Figure 1(c) shows the distribution of the model predictions for \(\mathbf{x}\in\tilde{\mathcal{T}}\) at each training step. Each iteration consists of one gradient evaluation and a Nesterov accelerated [43] SGD step over \(x_{i}\in\mathcal{B}\), \(b=64\). At the beginning of the training, the two classes are indistinguishable while at the end of the training algorithm the distributions of \(g(\mathbf{x},\boldsymbol{\theta})\) for the two classes almost do not overlap. The accuracy of the algorithm on both \(\mathcal{T}\) and \(\tilde{\mathcal{T}}\) steadily increases with the number of iterations, reaching approx. 95% in about 10 iterations. We also do not observe any systematic decline in accuracy if the training is further continued. To correct for the statistical fluctuations of the accuracy estimation due to the particular realization of the sampling for the \(\mathcal{T}\)-to-\(\tilde{\mathcal{T}}\) split, we use the cross-validation method, averaging results over several different splits. Following [12], we also study the behaviour of the cost function calculated over \(\mathbf{x}\in\tilde{\mathcal{T}}\) near the found optimum. Using a known visualization method [44], we plot a 2D slice of the 15D parameter space, a square in the linear hull of two normalized orthogonal random vectors \(\boldsymbol{\theta}^{\prime},\ \boldsymbol{\theta}^{\prime\prime}\) added to the optimal vector \(\boldsymbol{\theta}^{*}\). From Figure 1(e) it can be seen that even for such a small QC, the minimum indeed is not unique which can lead to trapping of the algorithm, and that the cost is non-convex in the original space [44]. We also find that setting \(\boldsymbol{\theta}^{*}=0\) in this experiment yields similar topography with local minima corresponding to above 80% accuracies, so a moderately good solution can be found just by moving along a randomly chosen direction. We also check experimentally that the dependence of \(g(\mathbf{x}_{i},\boldsymbol{\theta})\) on each of the parameters \(\theta_{j}\) is harmonic, according to theory, and show in Figure 1(g) how \(g(\mathbf{x}_{i},\boldsymbol{\theta})\) varies with \(\theta_{1}\), the first and the deepest parameter in the circuit, when the other parameters and the input features \(\mathbf{x}\) are set to zero. As can be seen, due to the inaccuracies in the calibration of the two-qubit gates and non-negligible decoherence, the value of the prediction never reaches "+1" in contrast to what one would expect from the QC structure for \(\theta_{1}=\pi\). ### Performance analysis In Figure 2 we summarize the model training and performance on all three datasets, see also Table 2. In the parity problem, 4-bit sequences should be decided to contain even or odd number of "1"-s. This problem is a simple test which displays the reproducibility of quantum operations and the sensitivity of the model in capturing class change even when a single bit is flipped, which is difficult for classical models [45]. Having only 16 samples, we split the dataset equally and calculate Figure 2: Training process of classifiers for all three simple problems. (a)-(c) – Accuracy vs. the number of iterations. (d)-(f) – Same for the cost function. (g)-(i) – Distributions of \(\langle\sigma_{z}^{(1)}\rangle\) for two classes during training process. For wines dataset accuracies of all 3 ”one-vs-others” classifiers and total accuracy are plotted. In picture (i) the distribution of \(\langle\sigma_{z}^{(1)}\rangle\) for these classifiers is plotted. 
the cost function on the full subset \(\mathcal{T}\equiv\mathcal{B}\), \(b=8\). At the optimal point, the accuracy reaches 100%, as there exists an analytical solution composed of four CNOT gates [46] which can be mimicked by our anzats. At the same time, the cost function does not reach zero due to the imperfections of quantum operations and decoherence. The cancer problem, already briefly presented above, is also binary, but has a significantly larger dataset which allows a better splitting ratio and puts the model under a more stringent test in terms of performance. As it can be seen from Figure 1(b), with \(t_{\text{rep}}=50\)\(\mu\)s and averaging over 1000 repetitions, the measurement of \(\langle\sigma_{z}^{(1)}\rangle\) takes 50 ms time. In our setup, rewriting the pulse sequence waveforms to update the gates in the QC takes comparable time (it could be reduced at least an order of magnitude, though, with better hardware). To find the gradient of the cost function calculated on a single \(x_{i}\), it is necessary to measure \(\langle\sigma_{z}^{(1)}\rangle\) for \(2m+1\) sets of angles \(\boldsymbol{\theta}\in\mathbb{R}^{m}\). Evaluating the gradient for the circuit from Figure 1(b) over \(m\)=15 variables takes \(t_{\text{grad}}=1.55\) s, which with the additional time for rewriting the controlling sequences gives about 3 s. Then, for a batch size \(b=64\), one iteration takes around \(b\cdot t_{\text{grad}}=3\) min. Finally, reaching the accuracy plateau in around 20 iterations, as in Figure 1 (c,d), takes around 1 h. As an elementary test of the model capability to solve multilabel classification problems, we use the three-class wines dataset. Multilabel classification is done by training three one-vs-others binary classifiers aiming to detect each of the cultivars. Then, for a given \(x\in\mathcal{\tilde{T}}\), we choose among the three found \(\boldsymbol{\theta}_{1-3}\) the one delivering the highest value to \(g(\mathbf{x},\bullet)\), and predict the class accordingly. We find that all of three one-vs-others classifiers exhibit similar training behaviour - an accuracy of classification starts from approximately 1/3, reaches value of 90% in 10 iterations and slightly fluctuates further, which is normal for mini-batch learning. As a result, the total classification accuracy is also slightly above 90%; the cross-validation procedure gives a value of 94%. ## 3 Recognition of handwritten digits The feature space dimension \(m=4\) for the datasets considered above is obviously quite small. The situation is different for the image recognition problem: even for a downsampled and cropped mnist picture of size \(8\times 7\) pixels with intensities ranging from 0 to 1, feature space is equipotent to \(\mathbb{R}^{56}\). Choosing a particular way to load information from this space into the quantum state of just four qubits is not a trivial task. We use an approach combining the data re-uploading concept [17] and convolutional neural networks (CNN) [47, 30]. For pre-processing we use normalisation and inverse tangent transformation of the features. We use a modified QC with the structure similar to the one shown in Figure 1(a), given in Figure 3(a). In Figure 3(b), we illustrate how the \(8\times 7\) images are padded with a zero bottom row and then divided in \(3\times 3\) partially overlapping local receptive fields (LRF) moving with stride 2 in both directions [47]. LRFs traverse the image twice, so there are 24 weight kernels in total. 
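Before the convolution-to-angle map is written out explicitly in the next paragraph, the patch geometry can be made concrete with a short sketch. It is illustrative only: the row-major traversal order and the way the two passes are realized are our assumptions; the text above fixes only the zero padding, the \(3\times 3\) window, the stride of 2 and the total of 24 kernels.

```python
import numpy as np

def lrf_angles(img, kernels, biases):
    """Map an 8x7 image to the 24 rotation angles theta_1..theta_24.

    img     : (8, 7) array of pixel intensities in [0, 1]
    kernels : (24, 3, 3) array, one independent weight kernel per receptive field
    biases  : (24,) array of biases
    """
    padded = np.vstack([img, np.zeros((1, img.shape[1]))])   # zero bottom row -> 9x7
    patches = []
    for _ in range(2):                                        # the fields traverse the image twice
        for r in range(0, padded.shape[0] - 2, 2):            # stride 2 vertically   -> 4 positions
            for c in range(0, padded.shape[1] - 2, 2):        # stride 2 horizontally -> 3 positions
                patches.append(padded[r:r + 3, c:c + 3])
    patches = np.stack(patches)                               # (24, 3, 3)
    return biases + np.einsum("kij,kij->k", kernels, patches)

rng = np.random.default_rng(0)
theta = lrf_angles(rng.random((8, 7)), rng.normal(size=(24, 3, 3)), np.zeros(24))
print(theta.shape)                                            # (24,)
```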
Pixels belonging to the \(i\)-th LRF \(\mathcal{F}_{i}\) are convolved with weights \(w_{j}^{(i)}\), and, with addition of a bias \(\beta_{i}\), are converted to angles \(\theta_{1-24}\): \[\theta_{i}=\beta_{i}+\sum_{x_{j}\in\mathcal{F}_{i}}w_{j}^{(i)}x_{j}.\] In contrast to conventional CNN, the kernels for each LRF are independent, which makes the model more flexible. The last three parameters \(\theta_{25-27}\) are independent, so with an addition of a final bias \(\beta_{0}\) to the output of the QC, the total weight dimension is 244. In contrast to the circuit in Figure 1(a), the feature data are now recorded in multiple layers. The information about every pixel is also written several times. The model was tested on a subset of the mnist dataset, see Table 2, by training 10 separate one-vs-others classifiers to be able to distinguish between all 10 digits. Because of the strong (1/9) disproportionality in the quantities for each of the corresponding pairs, we perform dataset balancing by resampling the target digit images with added Gaussian noise. The training process is visualized in Figure 3(d): the total 10-digit accuracy steadily increases from approximately 10% in the beginning (random guessing) to approx. 90% after 100 iterations. The accuracy of individual classifiers varies from 100% for '6 vs. others' to 92% for '8 vs. others', which decreases the full Figure 3: Image recognition for MNIST. (a) – The QC used to process larger feature vectors. (b) – Data encoding into parameters of single-qubit operations, using convolutional kernels. (c) – Confusion matrix to analyze the performance of each classifier. Each intersection of \(i\)-th row and \(j\)-th column shows the number of pictures belonging to the \(i\)-th class and recognized as elements of \(j\)-th class. Non-diagonal elements show misclassifications. (d) – Visualization of training process of the classifier. The dependence of the accuracies on the number of iterations for 10 one-vs-others classifiers and total accuracy are shown. accuracy to 90%. To analyze the errors, we construct a confusion matrix, which is shown in Figure 3(c), and find that most confusion is caused by the "8" classifier, with most misclassifications between "8" and "3", which looks reasonable. Probably, the accuracy could be further improved, however as the training for Figure 3 had taken around 100 h., we did not continue it further. We have also tested the model on the fashion mnist dataset [48]. As for the classical models [48], the classification accuracy for the fashion mnist dataset turned out to be worse than for the mnist dataset. We achieve only 85% accuracy for 4 different types of clothes, while for four digits "0 - 3" we report 98% accuracy. ## 4 Conclusion We experimentally implement a supervised quantum learning algorithm in a chain of superconducting qubits to solve multilabel classification and image recognition problems. We present a suitable gate sequence and a training algorithm, which allow to achieve classification accuracy 90% for the mnist dataset. We note that the presented model does not yet outperform even the simplest classical model, such as linear classifier, which achieves an accuracy of 95% with only 570 trainable parameters. This means, though, that it is possible to obtain 95% accuracy using a QC with only 1 layer and 1 qubit by training 10 one-vs-others classifiers if a linear combination of all features is recorded in the angle of single-qubit operations. 
However, the main work in that case will be performed by a classical computer, so to expect an advantage in quantum machine learning it is necessary to find the right balance between the classical and quantum parts of the model. For example, replacing the last low-dimensional layers of a convolutional network with a QC could be a direction of further study. We also address the issue of the low inference speed that we observe in practice with a real device. Despite the fact that we use state-of-the-art gate durations (tens of ns), and the fact that the superconducting quantum computing platform currently features the fastest known gates (compare with silicon, with hundreds of ns [49], and trapped atom/ion or diamond platforms, with hundreds of \(\mu\)s [50, 51, 52]), the training process is currently orders of magnitude slower than for classical machine learning methods. We note, however, that the training time of the hybrid model presented here could be significantly reduced by implementing an unconditional reset instead of simple waiting [37, 38] (about 50 times faster) and by training only one multilabel classifier using multiplexed readout instead of 10 binary classifiers. Thus, the training of our model for the mnist dataset could be reduced from 100 hours to approx. 12 min., which is more competitive. It is also possible to reduce the total number of layers in the QC by performing several two-qubit operations in parallel. To achieve quantum advantage in the domain of quantum machine learning, it is necessary to realize very fast gates and a supremacy-scale number of qubits. While in this work we do not notice a significant impact of gradient decay on the performance of quantum classifiers, increasing the number of qubits will require an ingenious circuit architecture to cope with that problem. In classical machine learning, a similar problem has been overcome by using skip connections [53] and batch normalization [54], but at the moment it is not known whether any analogs of these techniques could be reasonably implemented in the quantum case, and further research is necessary. ## 5 Acknowledgments The work was partially supported by contract no. RQC-2 dated July 14, 2022 between MIPT and RQC. The authors thank E. Korostylev and A. Strelnikov for valuable technical support. All samples were fabricated in the Shared Facility Center of MIPT.
2310.06986
High order biorthogonal functions in H(Curl)
From the literature, it is known that the choice of basis functions in hp-FEM heavily influences the computational cost required to obtain an approximate solution. Depending on the choice of the reference element, suitable tensor-product-like basis functions of Jacobi polynomials with different weights lead to optimal properties with respect to condition number and sparsity. This paper presents biorthogonal basis functions to the primal basis functions mentioned above. The authors investigate hypercubes and simplices as reference elements, as well as the cases of $H^1$ and H(Curl). The functions can be expressed as sums of tensor products of Jacobi polynomials with at most two summands.
Tim Haubold, Sven Beuchler, Joachim Schöberl
2023-10-10T20:15:13Z
http://arxiv.org/abs/2310.06986v1
# High order biorthogonal functions in \(H(\text{curl})\) ###### Abstract From the literature, it is known that the choice of basis functions in \(hp\)-FEM heavily influences the computational cost required to obtain an approximate solution. Depending on the choice of the reference element, suitable tensor-product-like basis functions of Jacobi polynomials with different weights lead to optimal properties with respect to condition number and sparsity. This paper presents biorthogonal basis functions to the primal basis functions mentioned above. The authors investigate hypercubes and simplices as reference elements, as well as the cases of \(H^{1}\) and \(H(\text{curl})\). The functions can be expressed as sums of tensor products of Jacobi polynomials with at most two summands. Finite elements, Numerical methods for partial differential equations, orthogonal polynomials 65N22, 65N30, 33C45 ## 1 Introduction It is well known that \(hp\) finite element methods (fem) often exhibit exponential convergence rates, depending on the domain and the boundary, see e.g. [28, 26, 23], if the exact solution of the underlying partial differential equation is (locally) sufficiently smooth. One of the most important algorithmic parts of an \(hp\)-fem method is the choice of basis functions. It has been shown in [5, 21] that the choice directly influences the condition number of the local and global matrices. Nodal basis functions, e.g. based on Lagrangian polynomials, exhibit an exponential growth in the condition number. We also mention the related spectral element method [20]. For efficient assembly routines, see [16]. An alternative to nodal basis functions are modal basis functions, e.g. based on Bernstein or classical orthogonal polynomials. The former choice has the advantage that assembly routines in optimal complexity exist and, by their natural connection to NURBS, can easily be visualized, see [1, 3]. On the other hand, the latter choice yields overall better condition numbers. Furthermore, in case of a polygonal domain and piecewise constant material functions, they yield sparse element matrices and can be assembled in optimal complexity as well, see [19]. In this paper, we use basis functions based on integrated orthogonal polynomials. This choice goes back to Szabo and Babuska, see e.g. [28]. For simplices, an analogous choice was introduced in [14, 27, 11]. Due to different variational formulations, one naturally derives the different function spaces \(H^{1}\), \(H(\text{curl})\) and \(H(\text{div})\). These spaces are connected by an exact sequence, the so-called _De-Rham-complex_. For the weak gradient, curl and divergence operator, these complexes read in \(2D\) as either \[\mathbb{R}\overset{id}{\to}H^{1}(\Omega)\overset{\nabla}{\to}H(\text{curl},\Omega)\overset{curl}{\to}L^{2}(\Omega),\] or \[\mathbb{R}\overset{id}{\to}H^{1}(\Omega)\overset{curl}{\to}H(\text{div},\Omega)\overset{div}{\to}L^{2}(\Omega),\] and in \(3D\) as \[\mathbb{R}\overset{id}{\to}H^{1}(\Omega)\overset{\nabla}{\to}H(\text{curl},\Omega)\overset{curl}{\to}H(\text{div},\Omega)\overset{div}{\to}L^{2}(\Omega),\] where \(\Omega\) is an arbitrary bounded, simply connected domain. These complexes and their affiliated discrete spaces are discussed e.g. in [24, 12, 13]. A set of high order basis functions on simplices based on barycentric coordinates was introduced in [18].
The high-order basis functions which we will consider are based on Jacobi polynomials and were systematically introduced e.g. in [32]. This construction principle was also used in [2, 10, 9], where the weights of the chosen Jacobi polynomials are modified in order to optimize the sparsity pattern and the condition number. A related construction was given by Fuentes and coworkers [17]. This ansatz avoids the problem of element orientation and thus can combine different element types, like simplices, hexahedra, and even pyramids. The aim of this paper is to give new high order dual functions for the \(H^{1}\) and \(H(\mathrm{curl})\) basis functions defined in [7] and [10]. Consider an arbitrary basis \(\{\phi_{i}\}_{i}\); then the respective (\(L^{2}\)) dual functions \(\{\psi_{j}\}_{j}\in L^{2}(\Omega)\) are given by the relation \[\langle\phi_{i},\psi_{j}\rangle_{L^{2}}=\delta_{ij},\] where \(\delta_{km}=1\) for \(k=m\) and \(\delta_{km}=0\) else. Dual functions are used in defining interpolation operators, see e.g. [22, Chap. 7], or transfer operators between finite element spaces, see e.g. [30, 31]. Another application is the determination of starting values for time dependent parabolic problems. Consider for example some function \(t(\vec{x})\), which we want to approximate by our basis functions \(\phi_{i}\), i.e. \[t(\vec{x})=\sum_{i=1}^{N}\alpha_{i}\phi_{i},\] where \(N=\dim(\phi).\) This best approximation problem is solved by multiplication with test functions \(v\in L^{2}(\Omega)\) and integration thereof. The following linear system needs to be solved: \[\int_{\Omega}t(\vec{x})v(\vec{x})\mathrm{d}\vec{x}=\sum_{i=1}^{N}\int_{\Omega}\alpha_{i}\phi_{i}(\vec{x})v(\vec{x})\mathrm{d}\vec{x}\quad\forall v\in L^{2}(\Omega).\] The choice \(v=\phi_{i}\) for all \(i\) would lead to a dense or almost dense system. On the other hand, the choice of biorthogonal functions leads to a diagonal system. There are also purely biorthogonal polynomial systems, see e.g. [15]. An algorithmic implementation of the \(H^{1}\) dual functions can be found in the finite element software Ngsolve [25], which uses a projection-based interpolation, see also [12, 13]. In \(2D\) the general problem reads: _Problem 1.1_.: Find \(u_{hp}\in\mathbb{V}_{hp}\) such that \[u_{hp}(\lambda)=u(\lambda)\quad\forall\text{ vertices }\lambda, \tag{1.1}\] \[\int_{E}u_{hp}v=\int_{E}uv\quad\forall v\in\mathcal{P}^{p-2}\text{ or }\mathcal{Q}^{p-2}\quad\forall\text{ edges }E, \tag{1.2}\] \[\int_{Q/T}u_{hp}v=\int_{Q/T}uv\quad\forall v\in\mathcal{P}^{p-3}\text{ or }\mathcal{Q}^{p-3}\quad\forall\text{ triangles/quadrilaterals }T. \tag{1.3}\] A similar approach is valid in \(3D\). Depending on the choice of the primal spaces \(\mathbb{V}_{hp}\), the authors present dual basis functions for \(H^{1}\) and \(H\)(curl) on several elements. Some cases, in particular on hypercubes in all dimensions as well as in \(H^{1}\), are quite simple and should be known in the \(hp\)-fem community. Nevertheless, we decided to present also these cases, since they give the reader an impression of the construction principle for the difficult ones in \(H\)(curl) on simplices. The major novelty of this contribution is the development of closed expressions of dual functions in terms of Jacobi polynomials. In our publications, the primal basis functions are tensor products of Legendre and integrated Legendre polynomials for the hypercube elements. On simplices, we used the functions from [7] and [10]. The outline of this paper is as follows.
Section 2 introduces and summarizes the required properties of Jacobi polynomials. In section 3 the dual basis functions for \(H^{1}\) are stated. The main part of this paper is devoted to section 4, where the biorthogonal functions for \(H\)(curl) are derived. Section 5 summarizes the main results of this paper. ## 2 Preliminaries We start with the definition of the required orthogonal polynomials. For \(n\in\mathbb{N}\), \(\alpha,\beta>-1\), let \[p_{n}^{(\alpha,\beta)}(x)=\frac{1}{2^{n}n!\,(1-x)^{\alpha}(1+x)^{\beta}}\frac {\mathrm{d}^{n}}{\mathrm{d}x^{n}}(1-x)^{\alpha}(1+x)^{\beta}(x^{2}-1)^{n} \tag{1}\] be the \(n\)-Jacobi polynomial with respect to the weight \(\omega(x)=(1-x)^{\alpha}(1+x)^{\beta}\). Moreover, the integrated Jacobi polynomials are given by \[\hat{P}_{n}^{\alpha}(x)=\int_{-1}^{x}p_{n-1}^{(\alpha,0)}(t)\;\mathrm{d}t, \quad\hat{P}_{1}^{\alpha}(x)=1 \tag{2}\] for \(n\geq 0\) and \(\beta=0\). In the special case \(\alpha=\beta=0\), one obtains \[L_{n}(x)=P_{n}^{(0,0)}(x)\quad\text{and}\quad\hat{L}_{n}(x)=\hat{P}_{n}^{0}(x) \tag{3}\] the Legendre and integrated Legendre polynomials, respectively. The Jacobi polynomials form an orthogonal system in the weighted \(L_{2,\omega}\) scalar product \[I_{n,m}^{(\alpha,\beta)}=\int_{-1}^{1}\omega(x)P_{n}^{(\alpha,\beta)}(x)P_{m} ^{(\alpha,\beta)}(x)\;\mathrm{d}x=\delta_{nm}\frac{2^{\alpha+\beta+1}}{(2n+ \alpha+\beta+1}\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{n!\,\Gamma(n+\alpha +\beta+1)}, \tag{4}\] where \(\Gamma(\cdot)\) denotes the Gamma function, see e.g. [4]. In this publication, the cases \(\beta\in\{0,1\}\) are of special interest. The relation (4) simplifies to \[I_{n,m}^{(\alpha,0)}=\delta_{nm}\frac{2^{\alpha+1}}{(2n+\alpha+1)}\quad\text{ and}\quad I_{n,m}^{(\alpha,1)}=\delta_{nm}\frac{2^{\alpha+2}}{(2n+\alpha+2)}\frac{(n+1)}{(n+ \alpha+1)}, \tag{5}\] respectively. Note that the integrated Legendre and Jacobi polynomials can be written as Jacobi polynomials by using the relations, \[\widehat{L}_{i}(x)=\frac{(x^{2}-1)}{2(i-1)}P_{i-2}^{(1,1)}(x)\quad\text{and} \quad\widehat{P}_{i}^{\alpha}(x)=\frac{(1+x)}{n}P_{i-1}^{(\alpha-1,1)}(x), \tag{6}\] see e.g. [29]. ## 3 Dual function in \(H^{1}\) We will start by introducing the \(H^{1}\) dual functions for the quadrilateral and for the triangular case. ### \(H^{1}\) dual functions on the quadrilateral Consider the master element \(\square=(-1,1)^{2}\). The standard interior basis functions are \(u^{\square}_{ij}(x,y)=\widehat{L}_{i}(x)\widehat{L}_{j}(y)\), see [28]. To find the dual functions, we use (6). The dual function \(\hat{b}^{\square}_{kl}(x,y)=b^{\square}_{k}(x)b^{\square}_{l}(y)\) has to satisfy the relation \[\int_{\square}u^{\square}_{ij}(x,y)\hat{b}^{\square}_{kl}(x,y)\,\mathrm{d}x= \int_{-1}^{1}\widehat{L}_{i}(x)b^{\square}_{k}(x)\,\mathrm{d}x\int_{-1}^{1} \widehat{L}_{j}(y)b^{\square}_{l}(y)\,\mathrm{d}y=c_{ijkl}\delta_{i,k}\delta_ {j,l}. \tag{11}\] By inserting the Jacobi polynomials, we directly notice the orthogonality relation (5) for \(\alpha=\beta=1\). Thus, one obtains \[\tilde{c}\int_{-1}^{1}(x^{2}-1)P^{(1,1)}_{i-2}(x)\hat{b}^{\square}_{k}(x)\, \mathrm{d}x\int_{-1}^{1}(y^{2}-1)P^{(1,1)}_{i-2}(y)\hat{b}^{\square}_{l}(y)\, \mathrm{d}y=c_{kl}\delta_{i,k}\delta_{j,l}.\] This motivates the choice \[b^{\square}_{k}(x)=\chi_{k}P^{(1,1)}_{k-2}(x)\text{ and }b^{\square}_{l}(y)= \chi_{l}P^{(1,1)}_{l-2}(y). \tag{12}\] with some factors \(\chi_{l}\). The extension to the three-dimensional case is straightforward. 
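The one-dimensional biorthogonality behind this choice is easy to confirm numerically. The sketch below is an illustration of ours (the normalisation factors \(\chi_{k}\) are simply set to one); it uses the Jacobi representation of the integrated Legendre polynomials from Section 2 and Gauss-Legendre quadrature to check that the resulting matrix is diagonal.

```python
import numpy as np
from scipy.special import eval_jacobi

nodes, weights = np.polynomial.legendre.leggauss(30)   # exact for the polynomial integrands below

def int_legendre(i, x):
    # integrated Legendre polynomial via its Jacobi representation from Section 2
    return (x ** 2 - 1.0) / (2.0 * (i - 1)) * eval_jacobi(i - 2, 1, 1, x)

def dual_factor(k, x):
    # proposed one-dimensional dual factor; the normalisation chi_k is omitted
    return eval_jacobi(k - 2, 1, 1, x)

p = 8
M = np.array([[np.sum(weights * int_legendre(i, nodes) * dual_factor(k, nodes))
               for k in range(2, p + 1)] for i in range(2, p + 1)])
off_diag = np.max(np.abs(M - np.diag(np.diag(M))))
print(off_diag)   # of the order of machine precision: the matrix is diagonal
```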
### \(H^{1}\) dual function on the simplex We now apply the same strategy to the simplicial case. The triangular basis functions, given in [11], are used on the triangle \(\triangle\) with vertices \((-1,\,-1),(1,\,-1),(0,1)\). For the tetrahedron \(\blacktriangle\) with vertices \((-1,\,-1,\,-1)\), \((1,\,-1,\,-1)\), \((0,1,\,-1)\) and \((0,0,1)\), the basis functions of [6] are used. The interior bubbles are given as \[u^{\triangle}_{ij}(x,y) =\widehat{L}_{i}\left(\frac{2x}{1-y}\right)\left(\frac{1-y}{2} \right)^{i}\widehat{P}^{2i}_{j}(y) \tag{13}\] \[u^{\blacktriangle}_{ijk}(x,y,z) =\widehat{L}_{i}\left(\frac{4x}{1-2y-z}\right)\left(\frac{1-2y-z} {4}\right)^{i}\widehat{P}^{2i}_{j}\left(\frac{2y}{1-z}\right)\left(\frac{1-z }{2}\right)^{j}\widehat{P}^{2i+2j}_{k}(z),\] for the triangle and tetrahedron, respectively. Using (6), \(u^{\triangle}_{ij}(x,y)\) is rewritten as \[u^{\triangle}_{ij}(x,y)=c\frac{1}{2}\left(\left(\frac{2x}{1-y}\right)^{2}-1 \right)P^{(1,1)}_{i-2}\left(\frac{2x}{1-y}\right)\left(\frac{1-y}{2}\right)^{ i}\left(\frac{1+y}{2}\right)P^{(2i-1,1)}_{j-1}(y),\] with some known constant \(c\). As before we search for \(\hat{b}^{\triangle}_{kl}(x,y)=b^{\triangle}_{k}\left(\frac{2x}{1-y}\right)b^{ \triangle}_{kl}(y)=b^{\triangle}_{k}\left(\eta\right)b^{\triangle}_{kl}(y)\), where \(\eta=\frac{2x}{1-y}\). Using the Duffy transformation we write down the biorthogonality condition as \[\int_{\triangle}u^{\triangle}_{ij}(x,y)\hat{b}^{\triangle}_{kl}(x,y)\, \mathrm{d}x \tag{14}\] \[\quad=c\int_{-1}^{1}\left(\frac{\eta^{2}-1}{2}\right)P^{(1,1)}_{i- 2}(\eta)b^{\triangle}_{k}(\eta)\mathrm{d}\eta\int_{-1}^{1}\left(\frac{1-y}{2} \right)^{i+1}P^{(2i-1,1)}_{j-1}(y)b^{\triangle}_{kl}(y)\,\mathrm{d}y.\] Again this motivates the choice \[b^{\triangle}_{k}(\eta)=P^{(1,1)}_{k-2}(\eta)\quad\text{and }b^{\triangle}_{kl}(y)= \left(\frac{1-y}{2}\right)^{k-2}P^{(2k-1,1)}_{l-1}(y).\] Normalizing the dual functions means that the system matrix is again the identity matrix. We summarize in the following lemma: **Lemma 3.1** (\(H^{1}\) dual functions on a triangle).: _The interior functions \(u^{\triangle}_{ij}(x,y)\) as in (3.3) and_ \[\hat{b}^{\triangle}_{kl}(x,y)=P^{(1,1)}_{k-2}\left(\frac{2x}{1-y}\right)\left( \frac{1-y}{2}\right)^{k-2}P^{(2k-1,1)}_{l-1}(y)\quad\forall\,k\geq 2,i\geq 1 \tag{3.5}\] _are a biorthogonal system on \(\triangle\), i.e._ \[\int_{\triangle}u^{\triangle}_{ij}(x,y)\hat{b}^{\triangle}_{kl}(x,y)\, \mathrm{d}x\,\mathrm{d}y=c\delta_{ik}\delta_{jl}\] _On the tetrahedron \(\blacktriangle\) the interior functions \(u^{\blacktriangle}_{ijk}\) and_ \[b^{\blacktriangle}_{lnm}(x,y,z)\] \[\qquad=P^{(1,1)}_{l-2}\left(\frac{4x}{1-2y-z}\right)\left(\frac{ 1-2y-z}{4}\right)^{(l-2)}P^{(2i-1,1)}_{n}\left(\frac{2z}{1-y}\right)^{(n-1)} P^{(2i+2j-1,1)}_{m}(z)\] _are biorthogonal, i.e._ \[\int_{\blacktriangle}u^{\blacktriangle}_{ijk}(x,y,z)b^{\blacktriangle}_{ lnm}(x,y,z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=\tilde{c}\delta_{il}\delta_{ jn}\delta_{km}. \tag{3.6}\] Proof.: The biorthogonality follows by inserting (3.5) in (3.4). Analogously, the biorthogonality for the tetrahedron follows by (3.6). ## 4 Dual functions in \(H(\mathrm{curl})\) In a next step, dual functions for \(H(\mathrm{curl})\) will be derived. We apply the following notation a function \(u\) denotes a \(H^{1}\) basis function, \(v^{Q,a}\) a \(H(\mathrm{curl})\) basis function of type \(a=I,II,III\) on a reference element \(Q\). 
Here \((i,j)\) are the indices of the basis functions, while \((k,l)\) are the indices of the dual functions in \(2D\), or \((i,j,k)\), and \((l,m,n)\) respectively in \(3D\). Furthermore \(p\) denotes either the total or the maximal polynomial degree, depending on the reference element. Finding dual functions for \(H(\mathrm{curl})\) functions is more complicated. Not only are the shape functions vectorial, but they also appear in multiple types. Our goal is to find all dual functions which are orthogonal to the corresponding type of \(H(\mathrm{curl})\) and additionally are zero for all other types. ### Quadrilateral basis Recall that the \(H(\mathrm{curl})\) face shape functions on the quadrilateral \(\square=(-1,1)^{2}\) are \[v^{\square,I}_{ij}(x,y) =\nabla\,\left(\widehat{L}_{i}(x)\widehat{L}_{j}(y)\right)= \begin{pmatrix}L_{i-1}(x)\widehat{L}_{j}(y)\\ \widehat{L}_{i}(x)L_{j-1}(y)\end{pmatrix},\] \[v^{\square,II}_{ij}(x,y) =\begin{pmatrix}L_{i-1}(x)\widehat{L}_{j}(y)\\ -\widehat{L}_{i}(x)L_{j-1}(y)\end{pmatrix},\] \[v^{\square,III}_{i}(x,y) =\begin{pmatrix}\widehat{L}_{i}(y)\\ 0\end{pmatrix},\quad v^{\square,III}_{i+p}(x,y)=\begin{pmatrix}0\\ -\widehat{L}_{i}(x)\end{pmatrix}, \tag{4.1}\] for \(2\leq i,j\leq p\), see [2, 8]. After linear combination we see, that we can also define the auxiliary interior functions as \[\bar{v}_{ij}^{\square,I}(x,y) =\nabla(\widehat{L}_{i}(x))\widehat{L}_{j}(y)=\begin{pmatrix}L_{i- 1}(x)\widehat{L}_{j}(y)\\ 0\end{pmatrix}, \text{ for }1\leq i\leq p,2\leq j\leq p, \tag{4.2}\] \[\bar{v}_{ij}^{\square,II}(x,y) =\widehat{L}_{i}(x)\nabla(\widehat{L}_{j}(y))=\begin{pmatrix}0\\ \widehat{L}_{i}(x)L_{j-1}(y)\end{pmatrix}, \text{ for }1\leq j\leq p,2\leq i\leq p. \tag{4.3}\] Note that the functions of type \(III\) are now a special case of the new type \(I\) and \(II\) for \(i=1\) and \(j=1\), respectively. By application of (2.6) we find the biorthogonal functions to the auxiliary functions. **Definition 4.1** (Dual functions on a quadrilateral).: _Let_ \[\bar{b}_{kl}^{\square,I}(x,y) \coloneqq\begin{pmatrix}L_{k-1}(x)P_{l-2}^{(1,1)}(y)\\ 0\end{pmatrix}, \text{ for }1\leq k\leq p,2\leq l\leq p, \tag{4.4}\] \[\bar{b}_{kl}^{\square,II}(x,y) \coloneqq\begin{pmatrix}0\\ P_{k-2}^{(1,1)}(x)L_{l-1}(y)\end{pmatrix}, \text{ for }1\leq l\leq p,2\leq k\leq p.\] **Corollary 4.2**.: _For \(\bar{v}^{\square,I}\) and \(\bar{v}^{\square,II}\) as in (4.2) and (4.3) and the functions \(\bar{b}^{\square,I}\) and \(\bar{b}^{\square,II}\) as in (4.4) are biorthogonal, i.e._ \[\int_{\square}\bar{v}_{ij}^{\square,\omega_{1}}(x,y)\bar{b}_{kl}^{\square, \omega_{2}}(x,y)=c\delta_{ik}\delta_{jl}\delta_{\omega_{1},\omega_{2}}\] Proof.: The proof follows as in Lemma 3.1. We want to apply the dual functions from Corollary 4.2 to (4.1). For this, we state the following lemma: **Lemma 4.3**.: _Let the functions \(\phi_{i}^{t}\) and \(\psi_{j}^{r}\) satisfy_ \[\langle\phi_{i}^{t},\psi_{j}^{r}\rangle=d_{i,t}\delta_{ij}\delta_{tr}\] _with a constant \(d_{i,t}>0\). Moreover, let \(\bar{\phi}_{i}^{t}=\sum_{s=1}^{k}a_{t,s}\phi_{i}^{s}\) and \(\bar{\psi}_{j}^{r}=\sum_{s=1}^{k}b_{r,s}\psi_{j}^{s}\). If \(ADB^{\top}=I\), where \(A=[a_{t,s}]_{t,s=1}^{k}\), \(B=[b_{r,s}]_{r,s=1}^{k}\), and \(D=\operatorname{diag}[d_{i,s}]_{s=1}^{k}\), then_ \[\langle\bar{\phi}_{i}^{t},\bar{\psi}_{j}^{r}\rangle=\delta_{ij}\delta_{rt}.\] Proof.: The proof is elementary. We apply now lemma 4.3 to find the biorthogonal linear combination of (4.4). 
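Before doing so, a brief numerical illustration of Lemma 4.3 may be helpful (this is an added sketch, not part of the original text; the matrices below are arbitrary examples). It shows that once the primal types are mixed by a matrix \(A\), choosing the dual mixing matrix \(B\) from \(ADB^{\top}=I\) restores biorthogonality, which is exactly how the coefficients in the next step are obtained.

```python
# Small numerical illustration (added, not from the paper) of Lemma 4.3:
# after mixing the primal types with A, the duals must be mixed with B so that A D B^T = I.
import numpy as np

d = np.array([3.0, -0.5])                 # pairing constants d_{i,t} for one fixed index i
phi = np.diag(d)                          # columns phi^1, phi^2 with <phi^s, psi^r> = d_s delta_sr
psi = np.eye(2)                           # columns psi^1, psi^2

A = np.array([[1.0, 1.0], [1.0, -1.0]])   # how types I/II are mixed (cf. the quadrilateral case)
D = np.diag(d)
B = np.linalg.inv(A @ D).T                # chosen so that A D B^T = I

phi_bar = phi @ A.T                       # phi_bar^t = sum_s a_{t,s} phi^s
psi_bar = psi @ B.T                       # psi_bar^r = sum_s b_{r,s} psi^s
print(np.round(phi_bar.T @ psi_bar, 12))  # identity matrix: the mixed bases are biorthogonal
```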
For (4.1) it is \(A=\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\) and \(D\) is given by the relations \[\langle v_{ij}^{\square,II},\bar{b}_{kl}^{\square,I}\rangle=\langle v_{ij}^{ \square,I},\bar{b}_{kl}^{\square,I}\rangle=c_{I}\,\delta_{ik}\delta_{jl}\text { and }-\langle v_{ij}^{\square,II},\bar{b}_{kl}^{\square,II}\rangle=\langle v_{ij}^ {\square,I},\bar{b}_{kl}^{\square,II}\rangle\quad=c_{II}\,\delta_{ik}\delta_{jl}\] where \(c_{I}\neq 0\) and \(c_{II}\neq 0\) are constants depending on \(i,j,k,l\). This motivates the choices \[b_{ij}^{\square,II}(x,y)=\frac{\bar{b}_{ij}^{\square,II}(x,y)}{c_{II}}-\frac{ \bar{b}_{\square,ij}^{I}(x,y)}{c_{I}},\quad b_{ij}^{\square,I}(x,y)=\frac{\bar{ b}_{ij}^{\square,I}(x,y)}{c_{II}}+\frac{\bar{b}_{ij}^{\square,II}(x,y)}{c_{I}}.\] Lemma 4 guarantees the orthogonality relations \[\langle v^{\square,I}_{ij},b^{II}_{kl}\rangle=0=\langle v^{\square,I}_{ij},b^{II}_ {kl}\rangle\qquad\forall i,j,k,l\geq 2.\] Orthogonality of \(v^{III}_{i}\) and \(v^{III}_{i+p}\) to \(b^{I}_{kl}\) and \(b^{II}_{kl}\) is trivial. We summarize in the following lemma. **Lemma 4**: _Let \(v^{\square,I}_{ij},v^{\square,II}_{ij},v^{\square,III}_{i}\) and \(v^{\square,III}_{i+p}\) be as in (11), and \(\tilde{b}^{\square,I}_{kl}\) and \(\tilde{b}^{\square,II}_{kl}\) as in (12). Furthermore, let_ \[\alpha_{ij}=\frac{1}{8}i(2i-1)(2j-1). \tag{13}\] _Then the functions_ \[b^{\square,I}_{ij}(x,y) =\alpha_{ij}\,b^{\square,I}_{ij}(x,y)-\alpha_{ji}\,b^{\square,II}_ {ij}(x,y),\] \[b^{\square,II}_{ij}(x,y) =\alpha_{ij}\,b^{\square,I}_{ij}(x,y)+\alpha_{ji}\,b^{\square,II} _{ij}(x,y),\] \[b^{\square,III}_{i}(x,y) =\tilde{b}^{\square,I}_{1I}(x,y),\quad b^{\square,III}_{i+p}(x,y )=\tilde{b}^{\square,II}_{1I}(x,y),\] _are biorthogonal to (11). \({}_{\Box}\)_ Proof: Biorthogonality was already shown above. For the coefficients in (13) apply the usual orthogonality results (5) for \(\alpha=\beta=0\) and \(\alpha=\beta=1\). One obtains \[c=\langle v^{I}_{ij},b^{I}_{ij}\rangle=\int_{-1}^{1}L_{i-1}(x)L_{i-1}(x)\, \mathrm{d}x\int_{-1}^{1}\frac{y^{2}-1}{2(j-1)}P^{(1,1)}_{j-2}(y)P^{(1,1)}_{j-2 }(y)\,\mathrm{d}y=\alpha_{ji}^{-1}.\] The coefficient \(\alpha_{ij}\) is computed analogously. \({}_{\Box}\) ### Triangular case The triangular case is more complicated. Following [10], the basis functions of \(H(\mathrm{curl})\) on the reference triangle \(\triangle\) are given as \[v^{\triangle,I}_{ij}(x,y) =\nabla(u^{\triangle}_{ij}(x,y))=\nabla(f_{i}(x,y))g_{ij}(y)+f_{ i}(x,y)\nabla(g_{ij}(y)) \tag{14}\] \[v^{\triangle,II}_{ij}(x,y) =\nabla(f_{i}(x,y))g_{ij}(y)-f_{i}(x,y)\nabla(g_{ij}(y))\] \[v^{\triangle,III}_{1j}(x,y) =\nabla(f_{1}(x,y))\tilde{P}^{3}_{j}(y),\] where \(f_{i}(x,y)=\widehat{L}_{i}(\frac{2x}{1-y})\left(\frac{1-y}{2}\right)^{i}\) and \(g_{ij}(y)=\widehat{P}^{2i}_{j}(y)\). The gradients of the auxiliary functions \(f_{i}(x,y)\) and \(g_{ij}(y)\) can be calculated as \[\nabla(f_{i}(x,y)) =\left(\frac{1-y}{2}\right)^{(i-1)}\begin{pmatrix}L_{i-1}\left( \frac{2x}{1-y}\right)\\ \frac{1}{2}L_{i-2}\left(\frac{2x}{1-y}\right)\end{pmatrix} \text{for }i\geq 2, \tag{15}\] \[\nabla(g_{ij}(y)) =\begin{pmatrix}0\\ P^{(2i,0)}_{j-1}(y)\end{pmatrix} \text{for }i\geq 2,j\geq 1,\] \[\nabla(f_{1}(x,y)) =\frac{1-y}{4}\begin{pmatrix}1\\ \frac{1}{2}\frac{2x}{(1-y)}\end{pmatrix},\] where we simplified the first gradient as shown in [11]. We follow the ansatz as described for the quadrilateral case. 
First split \(v_{ij}^{\triangle,I/II/III}\) in the functions \[\begin{split}\bar{v}_{ij}^{\triangle,I}(x,y)&=\nabla (f_{i}(x,y))g_{ij}(y)\quad\text{ for }i\geq 1,j\geq 2,\\ \bar{v}_{ij}^{\triangle,II}(x,y)&=f_{i}(x,y) \nabla(g_{ij}(y))\quad\text{ for }i,j\geq 2.\end{split} \tag{22}\] The functions \(\{\bar{v}_{ij}^{\triangle,I}(x,y)\}_{ij}\) and \(\{\bar{v}_{ij}^{\triangle,II}(x,y)\}_{ij}\) are also a basis of the space \(H_{0}(\operatorname{curl})\). Next we derive the biorthogonal vectorial functions for \(\bar{v}_{ij}^{\triangle,I}(x,y)\) and \(\bar{v}_{ij}^{\triangle,II}(x,y)\), and then solve the original problem by linear combination, as in the quadrilateral case. Here the main idea of the construction is that we first find vectorial functions which are orthogonal to either \(\nabla(g_{ij}(x,y))\) or \(\nabla(f_{i}(x,y))\) and then biorthogonalise those to the respective other basis functions. It is clear that we have the following structure of the orthogonal vectors: \[B_{kl}(x,y)=\begin{pmatrix}b_{kl}(x,y)\\ 0\end{pmatrix}\quad\text{and}\quad C_{kl}(x,y)\quad=\begin{pmatrix}c_{1,kl}(x, y)\\ c_{2,kl}(x,y)\end{pmatrix}. \tag{23}\] In the following we use the notation \(\eta=\frac{2x}{1-y}\) and write all functions in dependence of \((\eta,y)\), e.g. write \(a(x,y)\) as \(a(\eta,y)\). The first problem which needs to be solved then reads: **Problem 4.5**: Find polynomials \(B_{kl}(\eta,y)\) such that \[\langle\bar{v}_{ij}^{\triangle,II}(\eta,y),B_{kl}(\eta,y)\rangle=0\text{ and }\langle\bar{v}^{\triangle,I},B_{kl}(\eta,y)\rangle=d_{ijkl}^{(1)}\delta_{ik} \delta_{jl}.\] Since the first component of \(\tilde{v}^{\triangle,II}(\eta,y)\) is zero, \(B_{kl}(\eta,y)\) as in (23) naturally fulfils the first condition. Furthermore, we can assume a tensorial-like structure, i.e. \[B_{kl}(\eta,y)=\begin{pmatrix}b_{k}^{(1)}(\eta)b_{kl}^{(2)}(y)\\ 0\end{pmatrix}.\] Now \(b_{k}^{(1)}(z)\) and \(b_{kl}^{(2)}(y)\) only needs to fulfil the relationship \[\int_{-1}^{1}L_{i-1}(\eta)\ b_{k}^{(1)}(\eta)\mathrm{d}\eta\int_{-1}^{1} \left(\frac{1-y}{2}\right)^{i}\frac{(1+y)}{2j}P_{j-1}^{(2i-1,1)}(y)\ b_{kl}^{( 2)}(y)\,\mathrm{d}y=d_{ijkl}^{(1)}\delta_{ik}\delta_{jl},\] where we applied the Duffy transformation. This motivates the choice \(b_{k}^{(1)}(\eta)=L_{k-1}(\eta)\) and \(b_{kl}^{(2)}(y)=\left(\frac{1-y}{2}\right)^{k-1}P_{l-1}^{(2k-1,1)}(y)\). For the second type of dual functions, we need to solve the following problem: **Problem 4.6**: Find polynomials \(C_{kl}(\eta,y)\) such that, \[\langle\tilde{v}_{ij}^{\triangle,I}(\eta,y),C_{kl}(\eta,y)\rangle=0\text{ and }\langle\tilde{v}_{ij}^{\triangle,II}(\eta,y),C_{kl}(\eta,y)\rangle=d_{ijkl}^{(2 )}\delta_{ik}\delta_{jl}. \tag{24}\] We will need the following auxiliary lemma. **Lemma 4.7**: _For \(1\leq i\leq k\), the relation_ \[\int_{-1}^{1}L_{i}(x)P_{k}^{(1,1)}(x)\,\mathrm{d}x=\begin{cases}\frac{4}{2+k}& \text{ if }k\geq i\text{ and }(k-i)\operatorname{mod}2=0\\ 0&\text{ else}\end{cases}\] holds._ Proof:: A classical result of Jacobi polynomials states that \(P_{n}^{(\alpha,\alpha)}(x)\) is even if \(n\) is even, and it is odd if \(n\) is odd. Thus, the relation is trivial if \(k\) and \(i\) have a different parity. In the following, assume \(i\) and \(k\) have the same parity. Let \(I_{ik}\coloneqq\int_{-1}^{1}L_{i}(x)P_{k}^{(1,1)}(x)\,\mathrm{d}x\). If \(i>k\) it follows that \(I_{ik}\) is zero, due to the orthogonality condition of \(L_{i}(x)\). Now assume \(k\geq i\). 
By partial integration it follows \[I_{ik}=\int_{-1}^{1}L_{i}(x)P_{k}^{(1,1)}(x)\,\mathrm{d}x =\int_{-1}^{1}L_{i}(x)\frac{2}{2+k}\frac{\mathrm{d}}{\mathrm{d}x}L _{k+1}(x)\,\mathrm{d}x\] \[=\frac{2}{2+k}\left[L_{i}(x)L_{k+1}(x)\right]|_{-1}^{1}-\int_{-1 }^{1}\left(\frac{\mathrm{d}}{\mathrm{d}x}L_{i}(x)\right)L_{k+1}(x)\,\mathrm{d}x\] \[=\frac{4}{2+k},\] where the last integral vanishes due to the orthogonality condition of \(L_{k+1}(x)\) and \(\left[L_{i}(x)L_{k+1}(x)\right]|_{-1}^{1}=2\) due to the odd parity. For Problem 4.6, we start with the biorthogonality condition. We again assume a tensorial-like structure, i.e. \(C_{kl}(\eta,y)=c_{kl}^{(3)}(y)\left(c_{k}^{(1)}(\eta)\quad c_{k}^{(2)}(\eta) \right)^{\top}\). The condition is \[\langle\hat{v}_{ij}^{\triangle,II}(\eta,y),C_{kl}(\eta,y)\rangle =\int_{-1}^{1}\widehat{L}_{i}(\eta)c_{k}^{(2)}(\eta)\,\mathrm{d} \eta\int_{-1}^{1}\left(\frac{1-y}{2}\right)^{i+1}P_{j-1}^{(2i,0)}(y)c_{kl}^{( 3)}(y)\,\mathrm{d}y\] \[=d_{ijkl}^{(2)}\delta_{ik}\delta_{jl}.\] This leads to the choice \(c_{k}^{(2)}(\eta)=\kappa P_{k-2}^{(1,1)}(\eta)\) and \(c_{kl}^{(3)}(y)=\left(\frac{1-y}{2}\right)^{k-1}P_{l-1}^{(2k,0)}(y)\), where \(\kappa\) is some constant. Condition \(\langle\hat{v}_{ij}^{\triangle,I},C_{kl}\rangle=0\) is satisfied, if \(\langle\nabla f_{i}(\eta,y),C_{kl}(\eta)\rangle=0\). Since both components of \(\nabla f_{i}\) depend on \(\left(\frac{1-y}{2}\right)^{i-1}\hat{P}_{j}^{2i,0}(y)\), the orthogonality relation reduces to \[\int_{-1}^{1}L_{i-1}(\eta)c_{k}^{(1)}(\eta)\,\mathrm{d}\eta+\int_{-1}^{1} \frac{1}{2}L_{i-2}(\eta)c_{k}^{(2)}(\eta)\,\mathrm{d}\eta=0.\] Figure 1: Biorthogonal sparsity pattern for \(p=6\) Due to lemma 4.7 this condition is fulfilled if \(c_{k}^{(1)}(\eta)=(2+k-1)P_{k-1}^{(1,1)}(\eta)\) and \(c_{k}^{(2)}(\eta)=-2(2+k-2)P_{k-2}^{(1,1)}(\eta)\). Now we have found biorthogonal functions for \(\bar{v}_{ij}^{\triangle,I}\) and \(\bar{v}_{ij}^{\triangle,II}\), which results in a diagonal matrix which can be seen in Figure 0(a). **Definition 4.8** (Dual functions on a triangle).: _For \(k\geq 2,l\geq 1\) define_ \[\begin{split} B_{kl}(x,y)&\coloneqq\left(L_{k-1} \left(\frac{2x}{1-y}\right)\left(\frac{1-y}{2}\right)^{k-1}P_{l-1}^{2k-1,1}(y) \right),\\ & C_{kl}(x,y)&\coloneqq\left(\begin{array}{c}(2+k-1 )P_{k-1}^{(1,1)}\left(\frac{2x}{1-y}\right)\left(\frac{1-y}{2}\right)^{k-1}P_ {l-1}^{(2k,0)}(y)\\ -2(2+k-2)P_{k-2}^{(1,1)}\left(\frac{2x}{1-y}\right)\left(\frac{1-y}{2}\right) ^{k-1}P_{l-1}^{(2k,0)}(y)\end{array}\right).\end{split} \tag{4.11}\] With this choice we have proven the following corollary. **Corollary 4.9**.: _Let \(\bar{v}_{ij}^{\triangle,I},\bar{v}_{ij}^{\triangle,II}\) be defined by (4.8) and the functions \(B_{kl}\) and \(C_{kl}\) be defined as in (4.11). Then these functions are biorthogonal._ Obviously those are not biorthogonal to the basis \(v_{ij}^{\triangle,I},v_{ij}^{\triangle,II}\) and \(v_{1j}^{\triangle,III}\), which can be seen in Figure 0(b). Thus, we derive the biorthogonal functions by linear combination, as in the quadrilateral case. **Lemma 4.10**.: _Let \(\alpha_{1}=\frac{1}{8}(2i-1)(2j+2i-1)(j+2i-1)\), \(\alpha_{2}=\frac{1}{16}(2i-1)(2j+2i-1)\) and \(\alpha_{3}=\frac{1}{16}(2j+2)(j+2)\). 
Then, for \(2\leq i,k\leq p,1\leq j,l\leq p\), and \(i+j,k+l\leq p\), the functions \(v_{ij}^{\triangle,I},v_{ij}^{\triangle,II}\) and \(v_{1j}^{\triangle,III}\) as in (4.6) are biorthogonal to_ \[b_{kl}^{\triangle,I}(x,y) =-\frac{1}{2}(\alpha_{1}B_{kl}(x,y)+\alpha_{2}C_{kl}(x,y)),\] \[b_{kl}^{\triangle,II}(x,y) =\frac{1}{2}(\alpha_{1}B_{kl}(x,y)-\alpha_{2}C_{kl}(x,y)),\] \[b_{1l}^{\triangle,III}(x,y) =\alpha_{3}B_{1l}(x,y),\] _where \(B_{kl}\) and \(C_{kl}\) are given in (4.11)._ Proof.: Since we have shown biorthogonality of \(\bar{v}_{ij}^{\triangle,I},\bar{v}_{ij}^{\triangle,II}\) to \(B_{kl},C_{kl}\) in Corollary 4.9, we get \[\langle v_{ij}^{\triangle,I},B_{kl}\rangle=\langle\bar{v}_{ij}^{ \triangle,I},B_{kl}\rangle+\langle\bar{v}_{ij}^{\triangle,II},B_{kl}\rangle= \langle\bar{v}_{ij}^{\triangle,I},B_{kl}\rangle=c\delta_{ik}\delta_{jl}\] \[\langle v_{ij}^{\triangle,I},C_{kl}\rangle=\langle\bar{v}_{ij}^{ \triangle,I},C_{kl}\rangle+\langle\bar{v}_{ij}^{\triangle,II},C_{kl}\rangle= \langle\bar{v}_{ij}^{\triangle,II},C_{kl}\rangle=\bar{c}\delta_{ik}\delta_{jl}.\] As in the quadrilateral case, lemma 4.3 with the matrix \(A=\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\) is applied. For the diagonal matrix \(D\) in this lemma, we have to compute \(\langle\bar{v}_{ij}^{\triangle,I},B_{ij}\rangle\) and \(\langle\bar{v}_{ij}^{\triangle,II},C_{ij}\rangle\) for \((i,j)=(k,l)\). Using (2.5) with \(\alpha=\beta=0\) and \((\alpha,\beta)=(2i-1,0)\) one obtains \[\langle\bar{v}_{ij}^{\triangle,I},B_{ij}\rangle =\int_{-1}^{1}(L_{i-1}(\eta))^{2}\,\mathrm{d}\eta\int_{-1}^{1} \left(\frac{1-y}{2}\right)^{2i-1}\frac{(1+y)}{j}(P_{j-1}^{(2i-1,1)}(y))^{2}\, \mathrm{d}y\] \[=\frac{8}{(2i-1)(2j+2i-1)(j+2i-1)}.\] This gives \(\alpha_{1}=\frac{1}{8}(2i-1)(2j-2i-1)(j+2i-1)\). Analogously (2.5) with \(\alpha=\beta=1\) and \((\alpha,\beta)=(2i,0)\) gives \[\langle\tilde{v}_{ij}^{\triangle,II},C_{ij}\rangle =-2i\int_{-1}^{1}\left(\frac{\eta^{2}-1}{2(i-1)}\right)(P_{i-2}^{ (1,1)}(\eta))^{2}\,\mathrm{d}\eta\int_{-1}^{1}\left(\frac{1-y}{2}\right)^{2i}P _{j-1}^{(2i,0)}(y)\,\mathrm{d}y\] \[=\frac{16}{(2i-1)(2j+2i-1)},\] implies \(\alpha_{2}=\frac{1}{16}(2i-1)(2j+2i-1).\) The only thing remaining to show is that \(B_{kl},C_{kl}\) are orthogonal to \(v_{1,j}^{\triangle,III}\) and that \(b_{1I}^{\triangle,III}\) is orthogonal to \(\tilde{v}_{ij}^{\triangle,I}\) and \(\tilde{v}_{ij}^{\triangle,II}.\) Indeed, \[\langle v_{1j}^{\triangle,III},B_{kl}\rangle=\int_{-1}^{1}1\cdot L_{k-1}(\eta )\,\mathrm{d}\eta\int_{-1}^{1}\frac{1}{2}\left(\frac{1-y}{2}\right)^{k}P_{l-1 }^{(2k-1,1)}(y)=0,\] due to the orthogonality of \(L_{k-1}(\eta)\) for all \(k\geq 2.\) For the orthogonality of \(\langle v_{1j}^{\triangle,III},C_{kl}\rangle.\) we can apply Lemma 4.2. On the other hand, orthogonality of \(b_{1I}^{\triangle,III}\) follows from Corollary 4.2. Since all basis functions are properly scaled, the next corollary follows directly. Let \(i,k\geq 2\) and \(j,l\geq 1.\) Furthermore let \(v_{ij}^{\triangle,I},v_{ij}^{\triangle,II},v_{1j}^{\triangle,III}\) be the basis of the \(H(\operatorname{curl})\) interior functions and let \(b_{kl}^{\triangle,I},b_{kl}^{\triangle,II},b_{1I}^{\triangle,III}\) be the corresponding normalized dual face functions, then holds for the entries of the element Matrix \(G\) that \[G_{ij,kl}=\langle v_{ij}^{\triangle,T_{1}}(x,y),b_{kl}^{\triangle,T_{2}}(x,y) \rangle=\delta_{i,k}\delta_{j,l}\delta_{T_{1},T_{2}}.\] ### Tetrahedral case Our ansatz is the same as in the triangular case. 
Firstly, we split the basis functions in a simpler basis, then find the orthogonal basis to this simpler basis, and lastly build the right dual basis by linear combination. Recall the basis of \(H(\operatorname{curl})\) on the tetrahedral \(\blacktriangle\) is given by \[v_{ijk}^{I} =\nabla(f_{i}g_{ij}h_{ijk})=\nabla(f_{i})g_{ij}h_{ijk}+f_{i} \nabla(g_{ij})h_{ijk}+f_{i}g_{ij}\nabla(h_{ijk}), \tag{4.12}\] \[v_{ijk}^{II} =\nabla(f_{i})g_{ij}h_{ijk}-f_{i}\nabla(g_{ij})h_{ijk}+f_{i}g_{ij }\nabla(h_{ijk}),\] \[v_{ijk}^{III} =\nabla(f_{i})g_{ij}h_{ijk}+f_{i}\nabla(g_{ij})h_{ijk}-f_{i}g_{ij }\nabla(h_{ijk}),\] \[v_{1jk}^{IV} =v_{[1,2]}^{\mathcal{N}_{0}}g_{ij}h_{ijk},\] for \(2\leq i,1\leq j,k,\) and \(i+j+k\leq p,\) where \(f_{i}(x,y,z)=\widehat{L}_{i}\left(\frac{4x}{1-2y-z}\right)\left(\frac{1-2y-z}{ 4}\right)^{i}\), \(g_{ij}(y,z)=\widehat{P}_{j}^{2i}\left(\frac{2y}{1-z}\right)\left(\frac{1-z}{2} \right)^{j},\) and \(h_{ijk}(z)=\widehat{P}_{k}^{2i+2j}(z).\) Here \(v_{[1,2]}^{\mathcal{N}_{0}}\) is the lowest order Nedelec function of first kind, based on the edge from vertex \(1\) to \(2\). With the substi tutions \(\eta=\frac{4x}{1-2y-z}\) and \(\chi=\frac{2y}{1-z}\), the gradients of the auxiliary functions are \[\nabla f_{i} =\begin{pmatrix}L_{i-1}(\eta)\\ \frac{1}{2}L_{i-2}(\eta)\\ \frac{1}{4}L_{i-2}(\eta)\end{pmatrix}\,\left(\frac{1-\chi}{2}\right)^{i-1}\, \left(\frac{1-z}{2}\right)^{i-1},\] \[\nabla g_{ij} =\begin{pmatrix}0\\ P_{j-1}^{(2i,0)}(\chi)\\ \frac{\chi}{2}P_{j-1}^{(2i,0)}(\chi)-\frac{i}{2}\widehat{P}_{j}^{2i}(\chi) \end{pmatrix}\,\left(\frac{1-z}{2}\right)^{j-1}\text{ and }\] \[\nabla h_{ijk} =\begin{pmatrix}0\\ 0\\ P_{k-1}^{(2i+2j,0)}(z)\end{pmatrix},\] with the simplifications as in [11, 7]. We now derive three biorthogonal vectors with respect to \((\nabla f_{i})\,g_{ij}\,h_{ijk},f_{i}\,(\nabla g_{ij})\,h_{ijk}\) and \(f_{i}\,g_{ij}\,(\nabla h_{ijk})\). Similar to the triangular case, we have the following conditions on the dual functions \(B_{ijk},C_{ijk}\) and \(D_{ijk}\) : _Problem 4.12_.: Find \(B_{lmn},C_{lmn},D_{lmn}\) such that \[f_{i}\,(\nabla g_{ij})\,h_{ijk}\,\perp\,B_{lmn}\,\perp\,f_{i}\,g _{ij}\,(\nabla h_{ijk})\text{ and }\langle B_{lmn},(\nabla f_{i})\,g_{ij}\,h_{ijk}\rangle =r_{ijklmn}\delta_{il}\delta_{jm}\delta_{kn},\] \[(\nabla f_{i})\,g_{ij}\,h_{ijk}\,\perp\,C_{lmn}\,\perp\,f_{i}\,g _{ij}\,(\nabla h_{ijk})\text{ and }\langle C_{lmn},f_{i}\,(\nabla g_{ij})\,h_{ijk}\rangle =s_{ijklmn}\delta_{il}\delta_{jm}\delta_{kn},\] \[(\nabla f_{i})\,g_{ij}\,h_{ijk}\,\perp\,D_{lmn}\,\perp\,f_{i}\,( \nabla g_{ij})\,h_{ijk}\text{ and }\langle D_{lmn},f_{i}\,g_{ij}\,(\nabla h_{ijk})\rangle =t_{ijklmn}\delta_{il}\delta_{jm}\delta_{kn}.\] Since the basis vectors build a lower triangular system, we know that we need to find an upper triangular system. Thus, \[B_{lmn}=\begin{pmatrix}b_{lmn}(\eta,\chi,z)\\ 0\\ 0\end{pmatrix},\quad C_{lmn}=\begin{pmatrix}c_{lmn}^{(1)}(\eta,\chi,z)\\ c_{lmn}^{(2)}(\eta,\chi,z)\\ 0\end{pmatrix},\quad\text{and}\quad D_{lmn}=\begin{pmatrix}d_{lmn}^{(1)}(\eta, \chi,z)\\ d_{lmn}^{(2)}(\eta,\chi,z)\\ d_{lmn}^{(3)}(\eta,\chi,z)\end{pmatrix}.\] The construction of \(B_{lmn}\) is trivial, and the construction of \(C_{lmn}\) follows from the \(2D\) case. 
Thus, \[B_{lmn} =\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}\,L_{l-1}(\eta)\left(\frac{1-\chi}{2}\right)^{l-1}P_{m-1}^{(2l-1, 1)}(\chi)\left(\frac{1-z}{2}\right)^{l+m-2}P_{n-1}^{(2l+2m-1,1)}(z)\text{ and }\] \[C_{lmn} =\begin{pmatrix}(2+l-1)P_{l-1}^{(1,1)}(\eta)\\ -2(2+l-2)P_{l-2}^{(1,1)}(\eta)\\ 0\end{pmatrix}\,\left(\frac{1-\chi}{2}\right)^{l-1}P_{m-1}^{(2l,0)}(\chi)\left( \frac{1-z}{2}\right)^{l+m-2}P_{n-1}^{(2l+2m-1,1)}(z),\] where the exponents of \(\left(\frac{1-\chi}{2}\right)\) and \(\left(\frac{1-z}{2}\right)\) are determined with respect to the functional determinant of the Duffy trick, i.e. \(\left(\frac{1-\chi}{2}\right)\,\left(\frac{1-z}{2}\right)^{2}.\) To derive \(D_{lmn}\), we go step by step. It follows immediately that \[d_{lmn}^{(3)}=P_{l-2}^{(1,1)}(\eta)\left(\frac{1-\chi}{2}\right)^{l-2}P_{m-1}^ {(2l-1,1)}(\chi)\left(\frac{1-z}{2}\right)^{l+m-2}P_{n-1}^{(2l+2m,0)}(z), \tag{4.13}\] due to \(\langle D_{lmn},f_{i}\,g_{ij}\,(\nabla h_{ijk})\rangle=s_{ijklmn}\delta_{il}\delta_ {jm}\delta_{kn}\). Next we derive \(d^{(2)}_{lmn}\) by demanding \[0=\langle f_{i}(\nabla g_{ij})h_{ijk},D_{lmn}\rangle. \tag{4.14}\] An rather obvious choice for \(d^{(2)}_{lmn}\) is \(d^{(2)}_{lmn}=P^{(1,1)}_{l-2}(\eta)\,\bar{d}^{(2)}(\chi)\left(\frac{1-z}{2} \right)^{l+m-2}P^{(2l+2m,0)}_{n-1}(z)\). This choice reduces the condition (4.14) to \[\begin{split} 0=&\int_{-1}^{1}\left(\frac{1-\chi}{2} \right)^{i+1}P^{(2i,0)}_{j-1}(\chi)\,\bar{d}^{(2)}(\chi)\\ &+\left(\frac{1-\chi}{2}\right)^{2i-1}\left(\frac{\chi}{2}P^{(2i,0)}_{j-1}(\chi)-\frac{j}{2}\widehat{P}^{2i}_{j}(\chi)\right)P^{(2i-1,1)}_{m -1}(\chi)\,\mathrm{d}\chi,\end{split} \tag{4.15}\] since the condition (4.14) is trivial for \(i\neq l\) and \(k\neq n\) with this choice of \(d^{(2)}_{ijk}\). On the right part of the integral, the combination \(\widehat{P}^{2i}_{j}(\chi)P^{(2i-1,1)}_{m-1}(\chi)\) already fulfils the orthogonality relation. On the other hand, for the product of the two different Jacobi polynomials, namely \(P^{(2i,0)}_{j-1}(\chi)\) and \(P^{(2i-1,1)}_{m-1}(\chi)\), we can't apply standard orthogonality results. We eliminate this mixed part by linear combination. Therefore, we choose \[\bar{d}^{(2)}=-\frac{\chi}{2}\left(\frac{1-\chi}{2}\right)^{l-2}P^{(2l-1,1)}_ {m-1}(\chi)+\hat{d}^{(2)},\] such that the mixed products cancel each other out. Those linear combinations result in the further reduced condition \[0=\int_{-1}^{1}\left(\frac{1-\chi}{2}\right)^{i+1}P^{(2i,0)}_{j-1}(\chi)\, \hat{d}^{(2)}(\chi)-\frac{j}{2}\left(\frac{1-\chi}{2}\right)^{2i-1}\widehat{P }^{2i}_{j}(\chi)P^{(2i-1,1)}_{m-1}(\chi)\,\mathrm{d}\chi. \tag{4.16}\] The last part of the integral in (4.16) only appears if \(m=j\). We can achieve the same for the first part of the integral, if we choose \(\hat{d}^{(2)}=c\left(\frac{1-\chi}{2}\right)^{l-1}P^{(2l,0)}_{m-1}(\chi)\). Since both instances in (4.16) are integrals over Jacobi polynomials with matching indices, order and weights, we can determine the constant \(c\) directly. 
It holds that \[\int_{-1}^{1}\left(\frac{1-\chi}{2}\right)^{2i}\left(P^{(2i,0)}_{ j-1}(\chi)\right)^{2}\,\mathrm{d}\chi =\frac{1}{2j+2i-1}\] \[\int_{-1}^{1}\left(\frac{1-\chi}{2}\right)^{2i-1}\left(\frac{1+ \chi}{2}\right)\left(P^{(2i-1,1)}_{j-1}(\chi)\right)^{2}\,\mathrm{d}\chi =\frac{j}{(2j+2i-1)(2i+j-1)}.\] Collecting everything \[d^{(2)}_{lmn}=P^{(1,1)}_{l-2}(\eta)\left(\frac{1-\chi}{2}\right)^{l-2}Q_{m,2}( \chi)\left(\frac{1-z}{2}\right)^{l+m-2}P^{(2l+2m,0)}_{n-1}(z)\] with the polynomial \[Q_{m,2}(\chi)=-\frac{\chi}{2}P^{(2l-1,1)}_{m-1}(\chi)+\frac{m}{2l+m-1}\frac{1- \chi}{2}P^{(2l,0)}_{m-1}(\chi) \tag{4.17}\] of degree \(m\). Now we need to determine \(d^{(1)}_{lmn}\), by \(0=\langle(\nabla f_{l})g_{ij}h_{ijk},D_{lmn}\rangle\). Inserting \(d^{(2)}_{lmn}\) and \(d^{(3)}_{lmn}\) yields the following condition, after some simplification \[0= \int_{(-1,1)^{3}}L_{i-1}(\eta)\left(\frac{1-\chi}{2}\right)^{i-1} \widehat{P}^{2i}_{j}(\chi)\left(\frac{1-z}{2}\right)^{i+j+1}\widehat{P}^{2i+2 j}_{k}(z)d^{(1)}_{lmn}(\eta,\chi,z)\,\mathrm{d}\eta\,\mathrm{d}\chi\,\mathrm{d}z\] \[+\int_{(-1,1)^{3}}L_{i-2}(\eta)P^{(1,1)}_{l-2}(\eta)\left(\frac{ 1-\chi}{2}\right)^{i+l-1}\widehat{P}^{2i}_{j}(\chi)\left[P^{(2l-1,1)}_{m-1}( \chi)+\frac{m}{2l+m-1}P^{(2l,0)}_{m-1}(\chi)\right]\] \[\left(\frac{1-z}{2}\right)^{i+j+l+m-1}\widehat{P}^{2i+2j}_{k}(z) P^{(2l+2m-1,1)}_{n-1}(z)\,\mathrm{d}z\,\mathrm{d}\chi\,\mathrm{d}\eta.\] If we choose \(d^{(1)}_{lmn}(\eta,\chi,z)=\frac{2+l-1}{2(2+l-2)}P^{(1,1)}_{l-1}(\eta)\hat{d} ^{(1)}(\chi)\left(\frac{1-z}{2}\right)^{l+m-2}P^{(2l+2m,0)}_{n-1}(z)\), we can factor all terms depending on \(\eta\) and \(z\) out. Thus, we only need to determine \(\hat{d}^{(1)}(\chi)\), by the condition \[0= \int_{-1}^{1}\left(\frac{1-\chi}{2}\right)^{i-1}\widehat{P}^{2i}_ {j}(\chi)\,\hat{d}^{(1)}(\chi)\] \[+\left(\frac{1-\chi}{2}\right)^{i+l-1}\widehat{P}^{2i}_{j}(\chi) \left[P^{(2l-1,1)}_{m-1}(\chi)+\frac{m}{2l+m-1}P^{(2l,0)}_{m-1}(\chi)\right]\, \mathrm{d}\chi.\] It is obvious, that we can choose \[d^{(1)}_{lmn}= \frac{-(2+l-1)}{2(2+l-2)}P^{(1,1)}_{l-1}(\eta)\left(\frac{1-\chi }{2}\right)^{l-1}Q_{m-1,1}(\chi)\left(\frac{1-z}{2}\right)^{l+m-2}P^{(2l+2m,0 )}_{n-1}(z)\] with the \(m-1\) degree polynomial \[Q_{m-1,1}(\chi)=P^{(2l-1,1)}_{m-1}(\chi)+\frac{m}{2l+m-1}P^{(2l,0)}_{m-1}(\chi). \tag{4.18}\] We summarize in the following definition. **Definition 4.13**: (Dual basis on the tetrahedron). _Let \(Q_{m-1,1}\) and \(Q_{m,2}\) be defined by (4.18) and (4.17), respectively. Let \(l\geq 2,m,n\geq 1\) then define_ \[\bar{b}^{\boldsymbol{\Delta},II}_{lmn}\coloneqq\begin{pmatrix}(l+1 )P^{(1,1)}_{l-1}(\eta)\\ -2lP^{(1,1)}_{l-2}(\eta)\\ 0\end{pmatrix}\left(\frac{1-\chi}{2}\right)^{l-1}P^{(2l,0)}_{m-1}(\chi)\left( \frac{1-z}{2}\right)^{l+m-2}P^{(2l+2m-1,1)}_{n-1}(z),\] \[\bar{b}^{\boldsymbol{\Delta},III}_{lmn}\coloneqq\begin{pmatrix}- \frac{(l+1)}{2l}P^{(1,1)}_{l-1}(\eta)\left(\frac{1-\chi}{2}\right)^{l-1}Q_{m- 1,1}(\chi)\\ P^{(1,1)}_{l-2}(\eta)\left(\frac{1-\chi}{2}\right)^{l-2}Q_{m,2}(\chi)\\ P^{(1,1)}_{l-2}(\eta)\left(\frac{1-\chi}{2}\right)^{l-2}P^{(2l-1,1)}_{m-1}(\chi) \end{pmatrix}\left(\frac{1-z}{2}\right)^{l+m-2}P^{(2l+2m,0)}_{n-1}(z), \tag{4.19}\] _where \(\eta=\frac{4\chi}{1-2y-z}\), \(\chi=\frac{1-2y-z}{4}\)._ Thus, the following lemma has been shown. **Lemma 4.14**.: _Let \(f_{i},g_{ij}\) and \(h_{ijk}\) be defined as in (4.12). 
Then, the functions_ \[\bar{v}_{ijk}^{\mathbf{\Delta},I}=(\nabla f_{i})g_{ij}h_{ijk},\quad\bar{v}_{ijk}^{ \mathbf{\Delta},II}=f_{i}(\nabla g_{ij})h_{ijk},\quad\text{and}\quad\bar{v}_{ijk}^{ \mathbf{\Delta},III}=f_{i}g_{ij}(\nabla h_{ijk})\] _are biorthogonal to \(\bar{b}_{lmn}^{\mathbf{\Delta},I},\bar{b}_{lmn}^{\mathbf{\Delta},II}\) and \(\bar{b}_{lmn}^{\mathbf{\Delta},III}\) defined by (4.19), i.e._ \[\langle\bar{v}_{ijk}^{\mathbf{\Delta},\omega_{1}},\bar{b}_{lmn}^{\mathbf{\Delta}, \omega_{2}}\rangle=c_{ijk}^{\omega_{1}}\,\delta_{il}\delta_{jm}\delta_{kn} \delta_{\omega_{1},\omega_{2}},\quad\omega_{1},\omega_{2}\in\{I,II,III\}.\] As before, we transfer this to the original interior basis functions. **Lemma 4.15**.: _Let \(\bar{b}_{lmn}^{\mathbf{\Delta},I},\bar{b}_{lmn}^{\mathbf{\Delta},I}\) and \(\bar{b}_{lmn}^{\mathbf{\Delta},I}\) be defined by (4.19). For \(i\geq 2,j\geq 1,k\geq 1\) the functions \(v_{ijk}^{\mathbf{\Delta},I},v_{ijk}^{\mathbf{\Delta},II},v_{ijk}^{\mathbf{\Delta},III}\) and \(v_{1jk}^{\mathbf{\Delta},IV}\) are biorthogonal to_ \[b_{lmn}^{\mathbf{\Delta},I} =\frac{1}{2}\alpha_{lmn}^{(2)}\bar{b}_{lmn}^{\mathbf{\Delta},II}+ \frac{1}{2}\alpha_{lmn}^{(3)}\bar{b}_{lmn}^{\mathbf{\Delta},III},\] \[b_{lmn}^{\mathbf{\Delta},II} =\frac{1}{2}\alpha_{lmn}^{(1)}\bar{b}_{lmn}^{\mathbf{\Delta},I}-\frac {1}{2}\alpha_{lmn}^{(2)}\bar{b}_{lmn}^{\mathbf{\Delta},II},\] \[b_{lmn}^{\mathbf{\Delta},III} =\frac{1}{2}\alpha_{lmn}^{(1)}\bar{b}_{lmn}^{\mathbf{\Delta},I}-\frac {1}{2}\alpha_{lmn}^{(3)}\bar{b}_{lmn}^{\mathbf{\Delta},III},\] \[b_{1mn}^{\mathbf{\Delta},IV} =\alpha_{lmn}^{(4)}\bar{b}_{1mn}^{\mathbf{\Delta},I},\] _where_ \[\alpha_{lmn}^{(1)} =\frac{1}{2^{7}}(2l-1)(2m+2l-1)(m+2l-1)(2n+2l+2m-1)(n+2l+2m-1)\] \[\alpha_{lmn}^{(2)} =\frac{1}{2^{6}}(2l-1)(2m+2l-1)(2n+2l+2m-1)(n+2l+2m-1)\] \[\alpha_{lmn}^{(3)} =\frac{-1}{2^{5}}l(2l-1)(2m+2l-1)(m+2l-1)(2n+2l+2m-1)\] \[\alpha_{lmn}^{(4)} =\frac{1}{2^{5}}(2m+2)(m+2)(n+2m+2)(2n+2m+2)\] Proof.: Again, we apply Lemma 4.3. Now, \(A=\begin{bmatrix}1&1&1\\ 1&-1&1\\ 1&1&-1\end{bmatrix}\). from which its inverse is easily be computed as \(A^{-1}=\frac{1}{2}\begin{bmatrix}0&1&1\\ 1&-1&0\\ 1&0&-1\end{bmatrix}\). It remains to compute the diagonal entries. The coefficients \(c_{ijk}^{\omega}\) can be computed analogously to the triangular case be using the exact values of the integrals over Jacobi polynomials. Finally, one obtains \[\langle\bar{v}_{ijk}^{I},\bar{b}_{ijk}^{I}\rangle =\frac{2^{7}}{(2i-1)}\frac{1}{(2j+2i-1)(j+2i-1)(2k+2i+2j-1)(k+2i+ 2j-1)},\] \[\langle\bar{v}_{ijk}^{II},\bar{b}_{ijk}^{II}\rangle =\frac{2^{6}}{(2i-1)}\frac{1}{(2j+2i-1)(2k+2i+2j-1)(k+2i+2j-1)},\] \[\langle\bar{v}_{ijk}^{III},\bar{b}_{ijk}^{III}\rangle =\frac{2^{6}}{(2i-1)}\frac{1}{i(2i+2j-1)(j+2i-1)(2k+2i+2j-1)},\] by using the orthogonality relations of the Jacobi polynomials (5). It remains to show, that to \[v^{IV}_{1jk}=\begin{pmatrix}\frac{1}{\eta}\\ \frac{\eta}{4}\end{pmatrix}\left(\frac{1-\chi}{2}\right)\widehat{P}^{1}_{j}( \chi)\left(\frac{1-z}{2}\right)^{j+1}\widehat{P}^{2j+3}_{k}(z)\] the dual shape functions are naturally orthogonal. For \(B_{ijk}\) orthogonality follows since the first component of \(v^{IV}_{1jk}\) is independent of \(\eta=\frac{4\chi}{1-2y-z}\). 
For \(C_{ijk}\) we apply the relations \[\int_{-1}^{1}(i+1)P^{(1,1)}_{i-1}(x)\,\mathrm{d}x =2\int_{-1}^{1}\frac{\mathrm{d}}{\mathrm{d}x}L_{i}(x)\,\mathrm{d}x =2\left[L_{i}(x)\right]_{-1}^{1}=2(1-(-1)^{i})\quad\text{and}\] \[\int_{-1}^{1}\frac{x}{2}(2i)P^{(1,1)}_{i-2}(x)\,\mathrm{d}x =2\int_{-1}^{1}x\frac{\mathrm{d}}{\mathrm{d}x}L_{i-1}(x)\,\mathrm{ d}x\] \[=2\left[xL_{i-1}(x)\right]_{-1}^{1}-2\underbrace{\int_{-1}^{1}L_{i-1}(x)\,\mathrm{d}x}_{=0}=2(1-(-1)^{i})\] to see that the scalar product \(\langle v^{IV}_{1jk},C_{lmn}\rangle=0\) for all \(j,k,l,m,n\). For \(D_{lmn}\) the same relations are applied, thus \(\langle v^{IV}_{1jk},D_{lmn}\rangle=0\) for all \(j,k,l,m,n\). On the other hand, the dual function to \(v^{IV}_{1jk}\) is easily found to be \(B_{1jk}=\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}P^{(2,1)}_{j-1}(y)\left(\frac{1-z}{2}\right)^{j-1}P^{(2j+2,1)}_{k -1}(z)\). It is obviously orthogonal to \(\nabla g_{ij}\) and \(\nabla h_{ijk}\), whose first components vanish; furthermore, it is orthogonal to \(\nabla f_{i}\), since it is independent of \(\eta\) and \(L_{i-1}(\eta)\) integrates to zero for \(i\geq 2\). We conclude with some remarks. _Remark 4.16_.: It is usually possible to modify the indices of the integrated Jacobi polynomials in order to change the sparsity pattern and condition number of the element matrices. In the context of polynomial dual functions, however, the indices \((2i)\) and \((2i+2j)\) are minimal; otherwise the dual functions become rational with singularities for low polynomial degrees of the interior \(H(\mathrm{curl})\) shape functions. _Remark 4.17_.: The coefficients \(\alpha^{(1)}_{lmn},\alpha^{(2)}_{lmn}\) and \(\alpha^{(3)}_{lmn}\) can be significantly reduced by dividing each by \[(2l+2m-1)(2l-1)(2n+2l+2m-1).\] In this case, the element matrix corresponding to the biorthogonal system is no longer the identity and needs to be computed by numerical quadrature or similar methods. ## 5 Conclusion We summarize the main contribution of this paper in a more abstract notation. Let \(\mathbb{V}_{0}\) be the space \(H^{1}_{0}\) or \(H_{0}(\mathrm{curl})\) on an element \(\Omega\) and \(\mathbb{V}_{hp,0}=\mathbb{V}_{hp}\cap\mathbb{V}_{0}\). For given element-based families of \(hp\)-FEM basis functions \(\phi_{i}\in\mathbb{V}_{hp,0}\), we have developed biorthogonal test functions \(\psi_{j}\in W_{hp}\). This allows us to represent the \(L^{2}\)-like projection-based interpolation operator \(\mathcal{P}:\mathbb{V}_{0}(\Omega)\to\mathbb{V}_{hp,0}\), defined by \[\int_{\Omega}(\mathcal{P}u)(\vec{x})\;v_{hp}(\vec{x})\;\mathrm{d}\vec{x}=\int_{\Omega}u(\vec{x})v_{hp}(\vec{x})\;\mathrm{d}\vec{x}\quad\forall v_{hp} \in W_{hp},\] explicitly as \[(\mathcal{P}u)(\vec{x})=\sum_{i}\phi_{i}(\vec{x})g_{i}\quad\text{with}\quad g_{i}=\int_{ \Omega}u(\vec{x})\psi_{i}(\vec{x})\;\mathrm{d}\vec{x}.\]
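As a concrete illustration of this projection (an added sketch, not part of the paper; it is reduced to one dimension on \((-1,1)\) and assumes NumPy/SciPy), the script below biorthonormalizes the pair \(\widehat{L}_{i}\), \(P^{(1,1)}_{i-2}\) and recovers the coefficients of a function in the primal span directly from the dual moments \(g_{i}\), i.e. without assembling or inverting a mass matrix.

```python
# Illustrative sketch (added, not from the paper): the projection-based interpolation
# from the conclusion, reduced to 1D.  Primal basis: integrated Legendre bubbles;
# duals: scaled Jacobi P^{(1,1)}_{i-2}, so each coefficient is a single inner product.
import numpy as np
from scipy.special import eval_jacobi

nodes, weights = np.polynomial.legendre.leggauss(40)

def phi(i, x):            # primal basis function, i >= 2
    return (x**2 - 1.0) / (2.0 * (i - 1)) * eval_jacobi(i - 2, 1, 1, x)

def psi_unscaled(i, x):   # dual direction: P^{(1,1)}_{i-2}
    return eval_jacobi(i - 2, 1, 1, x)

def inner(f, g):
    return np.sum(weights * f(nodes) * g(nodes))

p = 7
idx = range(2, p + 1)
# scale the duals so that <phi_i, psi_i> = 1 (biorthonormalization)
scale = {i: 1.0 / inner(lambda x: phi(i, x), lambda x: psi_unscaled(i, x)) for i in idx}

# take u in the span of the phi_i with known coefficients and recover them
coeffs = {i: np.sin(i) for i in idx}                       # arbitrary test coefficients
u = lambda x: sum(c * phi(i, x) for i, c in coeffs.items())

g = {i: scale[i] * inner(u, lambda x: psi_unscaled(i, x)) for i in idx}
print(max(abs(g[i] - coeffs[i]) for i in idx))             # ~1e-15: coefficients recovered
```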
2303.00217
Improved Quantum Query Complexity on Easier Inputs
Quantum span program algorithms for function evaluation sometimes have reduced query complexity when promised that the input has a certain structure. We design a modified span program algorithm to show these improvements persist even without a promise ahead of time, and we extend this approach to the more general problem of state conversion. As an application, we prove exponential and superpolynomial quantum advantages in average query complexity for several search problems, generalizing Montanaro's Search with Advice [Montanaro, TQC 2010].
Noel T. Anderson, Jay-U Chung, Shelby Kimmel, Da-Yeon Koh, Xiaohan Ye
2023-03-01T03:40:37Z
http://arxiv.org/abs/2303.00217v3
# Improved Quantum Query Complexity on Easier Inputs ###### Abstract Quantum span program algorithms for function evaluation sometimes have reduced query complexity when promised that the input has a certain structure. We design a modified span program algorithm to show these improvements persist even without a promise ahead of time, and we extend this approach to the more general problem of state conversion. As an application, we prove exponential and superpolynomial quantum advantages in average query complexity for several search problems, generalizing Montanaro's Search with Advice [Montanaro, TQC 2010]. ## 1 Introduction Quantum algorithms often perform better when given a promise on the input. For example, if we know that there are \(M\) marked items out of \(N\), or no marked items at all, then Grover's search can be run in time and query complexity \(O(\sqrt{N/M})\), rather than \(O(\sqrt{N})\), the worst case complexity with a single marked item [1]. In the case of Grover's algorithm, a series of results [9, 10, 11] removed the promise; if there are \(M\) marked items, there is a quantum search algorithm that runs in \(O(\sqrt{N/M})\) complexity, even without knowing the number of marked items ahead of time. Most relevant for our work, several of these algorithms involve iteratively running Grover's search with exponentially growing runtimes [9, 10] until a marked item is found. Grover's algorithm was one of the first quantum query algorithms discovered [17]. Since that time, span programs and the dual of the general adversary bound were developed, providing frameworks for creating optimal query algorithms for function decision problems [25, 26] and nearly optimal algorithms for state conversion problems, in which the goal is to generate a quantum state based on an oracle and an input state [22]. Moreover, these frameworks are also useful in practice [3, 5, 4, 7, 12, 15]. For some span program algorithms, analogous to multiple marked items in Grover's search, there are features which, if promised to exist, allow for improvement over the worst case query complexity. For example, a span program algorithm for deciding \(st\)-connectivity uses \(O\left(n^{3/2}\right)\) queries on an \(n\)-vertex graph. However, if promised that the shortest path, if it exists, has length at most \(k\), then the problem can be solved with \(O(\sqrt{k}n)\) queries [5]. Our contribution is to remove the requirement of the promise; we improve the query complexity of generic span program and state conversion algorithms in the case that some speed-up inducing property (such as multiple marked items or a short path) is present, even without knowing about the structure in advance. One might expect this is trivial: surely if an algorithm produces a correct result with fewer queries when promised a property is present, then it should also produce a correct result with fewer queries without the promise if the property still holds? While this is true and these algorithms always output a result, even if run with fewer queries, the problem is that they don't produce a flag of completion, and their output cannot always be easily verified. Without a flag of completion or a promise of structure, it is impossible to be confident that the result is correct. 
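The "exponentially growing runtime" strategy mentioned in the introduction is easy to simulate classically. The sketch below is our illustration (it is not taken from this paper or the cited works, and the schedule constants are arbitrary): the outcome of \(j\) Grover iterations is modeled by the standard success probability \(\sin^{2}((2j+1)\theta)\) with \(\theta=\arcsin\sqrt{M/N}\), and the simulation shows that the expected number of queries tracks \(\sqrt{N/M}\) even though \(M\) is never given to the schedule.

```python
# Monte Carlo sketch (added illustration, not from the paper): simulate a search
# schedule with exponentially growing iteration counts.  No quantum state is
# simulated; only the success probability sin^2((2j+1)*theta) after j iterations
# is used.  Growth factor and other constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def expected_queries(N, M, trials=2000, growth=6 / 5):
    theta = np.arcsin(np.sqrt(M / N))
    totals = []
    for _ in range(trials):
        queries, m = 0, 1.0
        while True:
            j = rng.integers(0, int(np.ceil(m)))        # random iteration count < m
            queries += j + 1                             # j Grover iterations + 1 check query
            if rng.random() < np.sin((2 * j + 1) * theta) ** 2:
                break                                    # marked item found and verified
            m = min(growth * m, np.sqrt(N))              # grow the schedule, capped
        totals.append(queries)
    return np.mean(totals)

N = 2**14
for M in (1, 4, 16, 64):
    print(f"M={M:3d}:  E[queries] ~ {expected_queries(N, M):8.1f},"
          f"  sqrt(N/M) = {np.sqrt(N / M):7.1f}")
```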
Span program and state conversion algorithms differ from Grover's algorithm in their lack of a flag; in Grover's algorithm one can use a single query to test whether the output is a marked item, thus flagging that the output of the algorithm is correct, and that the algorithm has run for a sufficiently long time. We note that when span program algorithms previously have claimed an improvement with structure, they always included a promise, or they give the disclaimer that running the algorithm will be incorrect with high probability if the promise is not known ahead of time to be satisfied, e.g. Ref. [12, App. C.3]. We use an approach that is similar to the iterative modifications to Grover's algorithm; we run subroutines for exponentially increasing times, and we have novel ways to flag when the computation should halt. In the case of bounded error, our algorithms match the asymptotic performance of existing algorithms on the hardest inputs. On easier inputs, they on average match the asymptotic performance, up to log factors, of existing algorithms when those existing algorithms additionally have an optimal promise. Because our algorithms use fewer queries on easier inputs without needing to know they are easier inputs, they provide the possibility of improved average-case query complexity when there is a distribution of easier and harder inputs. In this direction, we generalize a result by Montanaro that showed a super-exponential quantum advantage in average query complexity for the problem of searching for a single marked item under a certain distribution [24]. In particular, we provide a framework for proving similar advantages using quantum algorithms based on classical decision trees, opening up the potential for a broader range of applications than the approach used by Montanaro. We apply this technique to prove an exponential and superpolynomial quantum advantage in average query complexity for searching for multiple items and searching for the first occurring marked items, respectively. Where prior work showed improvements for span program algorithms with a promise, our results immediately provide an analogous improvement without the promise: * For undirected \(st\)-connectivity described above, our algorithm determines whether there is a path from \(s\) to \(t\) in an \(n\)-vertex graph with \(\tilde{O}(\sqrt{k}n)\) queries if there is a path of length \(k\), and if there is no path, the algorithm uses \(\tilde{O}(\sqrt{nc})\) queries, where \(c\) is the size of the smallest cut between \(s\) and \(t\). In either case, \(k\) and \(c\) need not be known ahead of time. * For an \(n\)-vertex undirected graph, we can determine if it is connected in \(\tilde{O}(n\sqrt{R})\) queries, where \(R\) is the average effective resistance, or not connected in \(\tilde{O}(\sqrt{n^{3}/\kappa})\) queries, where \(\kappa\) is the number of components. These query complexities hold without knowing \(R\) or \(\kappa\) ahead of time. See Ref. [20] for the promise version of this problem. * For cycle detection on an \(n\)-vertex undirected graph, whose promise version was analyzed in Ref. [15], if the circuit rank is \(C\), then our algorithm will detect a cycle in \(\tilde{O}(\sqrt{n^{3}/C})\) queries, while if there is no cycle and at most \(\mu\) edges, the algorithm will decide there is no cycle in \(\tilde{O}(\mu\sqrt{n})\) queries. This holds without knowing \(C\) or \(\mu\) ahead of time. 
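To make the interleaving idea concrete, here is a purely schematic sketch (our addition; the testers are simulated stand-ins, not the span-program subroutines constructed in this paper). Because the budgets grow geometrically, the total cost is dominated by the first round in which the true side certifies, so the cost adapts to the unknown difficulty of the input.

```python
# Schematic sketch (added illustration, with simulated subroutines): interleave two
# one-sided tests with exponentially increasing query budgets.  `flag_one` may certify
# f(x)=1, `flag_zero` may certify f(x)=0; below the threshold they simply report
# "don't know yet" and never issue a false certificate.
import random

def make_one_sided_tester(certifies_value, true_value, threshold, err=0.01):
    """Simulated stand-in for a one-sided subroutine with an input-dependent threshold."""
    def tester(budget):
        if true_value == certifies_value and budget >= threshold:
            return random.random() > err      # certifies, up to a small one-sided error
        return False                          # no certificate yet
    return tester

def interleaved_search(flag_one, flag_zero, start_budget=1):
    budget, total = start_budget, 0
    while True:
        total += budget
        if flag_one(budget):
            return 1, total
        total += budget
        if flag_zero(budget):
            return 0, total
        budget *= 2                           # exponentially increasing query budget

# An "easy" 1-input (small threshold) vs a "hard" 1-input (large threshold):
for threshold in (10, 10_000):
    f1 = make_one_sided_tester(1, true_value=1, threshold=threshold)
    f0 = make_one_sided_tester(0, true_value=1, threshold=threshold)
    print(interleaved_search(f1, f0))         # total cost stays within O(threshold)
```

In the actual algorithms, the role of `threshold` is played by the input-dependent witness sizes, and the one-sided guarantees come from the modified span program measurements described above.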
To achieve our results for decision problems, we modify the original span program function evaluation algorithm to create two one-sided error subroutines. In the original span program algorithm, the final measurement tells you with high probability whether \(f(x)=1\) or \(f(x)=0\). In one of our subroutines, the final measurement certifies that with high probability \(f(x)=1\) providing our flag of completion, or it signals that more queries are needed to determine whether \(f(x)=1\). The other behaves similarly for \(f(x)=0\). By interleaving these two subroutines with exponentially increasing queries, we achieve our desired performance. The problem is more challenging for state conversion, as the standard version of that algorithm does not involve any measurements, and so there is nothing to naturally use as a flag of completion. We thus design a novel probing routine that iteratively tests exponentially increasing query complexities until a sufficient level is reached, before then running an algorithm similar to the original state conversion algorithm. While we analyze query complexity, the algorithms we create have average time complexity on input \(x\) that scales like \(O(T_{U}\mathbb{E}[Q_{x}])\), where \(\mathbb{E}[Q_{x}]\) is the average query complexity on input \(x\), and \(T_{U}\) is the time complexity of implementing an input-independent unitary. Since the existing worst-case span program and state conversion algorithms have time complexities that scale as \(O(\max_{x}T_{U}\mathbb{E}[Q_{x}])\), our algorithms also improve in average time complexity relative to the original algorithms on easier inputs. For certain problems, like \(st\)-connectivity [6] and search [14], it is known that \(T_{U}=\tilde{O}(1)\), meaning that the query complexities of our algorithms for these problems match the time complexity up to log factors. ### Directions for Future Work Ambainis and de Wolf show that while there is no quantum query advantage for the problem of majority in the worst case, on average there is a quadratic quantum advantage [2]. However, their quantum algorithm uses a technique that is specific to the problem of majority, and it is not clear how it might extend to other problems. On the other hand, since our approach is based on span programs, a generic optimal framework, it may provide opportunities of proving similar results for more varied problems. In the original state conversion algorithm, to achieve an error of \(\varepsilon\) in the output state (by some metric), the query complexity scales as \(O\left(\varepsilon^{-2}\right)\)[22]. In our result, the query complexity scales as \(O\left(\varepsilon^{-5}\right)\). While this does not matter for applications like discrete function evaluation, as considered in Section 4.2, in cases where accuracy must scale with the input size, this error term could overwhelm any advantage from our approach, and so it would be beneficial to improve this error scaling. Ito and Jeffery [19] give an algorithm to estimate the positive witness size (a measure of how easy an instance is) with fewer queries on easier inputs. While there are similarities between our approaches, neither result seems to directly imply the other. Better understanding the relationship between these strategies could lead to improved algorithms for determining properties of input structure for both span programs and state conversion problems. 
Our work can be contrasted with the work of Belovs and Yolcu [8], which also has a notion of reduced query complexity on easier inputs. Their work focuses on the "Las Vegas query complexity," which is related to the amount of the state that a controlled version of the oracle acts on over the course of the algorithm, and which is an input-dependent quantity. They show the "Monte Carlo query complexity," what we call the query complexity, scales as the Las Vegas query complexity of the worst-case input. We suspect that using techniques similar to those in our work, it would be possible to modify their algorithm to obtain an algorithm with input-dependent average "Monte Carlo query complexity" that is roughly the same as the "Las Vegas query complexity" for that input, without knowing anything about the input ahead of time. Preliminaries **Basic Notation:** For \(n>2\), let \([n]\) represent \(\{1,2,\ldots,n\}\), while for \(n=2\), \([n]=\{0,1\}\). We use \(\log\) to denote base \(2\) logarithm. For set builder notation like \(\{r_{z}:z\in Z\}\) we will frequently use \(\{r_{z}\}_{z\in Z}=\{r_{z}\}\), where we drop the subscript outside the curly brackets if clear from context. We denote a linear operator from the space \(V\) to the space \(U\) as \(\mathcal{L}(V,U)\). We use \(I\) for the identity operator. (It will be clear from context which space \(I\) acts on.) Given a projection \(\Pi\), its complement is \(\overline{\Pi}=I-\Pi.\) For a matrix \(M\), by \(M_{xy}\) or \((M)_{xy}\), we denote the element in the \(x^{\rm th}\) row and \(y^{\rm th}\) column of \(M\). By \(\hat{O}\), we denote big-O notation that ignores log factors. The \(l_{2}\)-norm of a vector \(|v\rangle\) is denoted by \(\||v\rangle\|\). For any unitary \(U\), let \(P_{\Theta}(U)\) be the projection onto the eigenvectors of \(U\) with phase at most \(\Theta\). That is, \(P_{\Theta}(U)\) is the projection onto \(\mathrm{span}\{|u\rangle:U|u\rangle=e^{i\theta}|u\rangle\) with \(|\theta|\leq\Theta\}\). For a function \(f:D\to[m]\), we define \(f^{-1}(b)=\{x\in D:f(x)=b\}\). ### Quantum Algorithmic Building Blocks We consider quantum query algorithms, in which one can access a unitary \(O_{x}\), called the oracle, which encodes a string \(x\in X\) for \(X\subseteq[q]^{n}\), \(q\geq 2\). The oracle acts on the Hilbert space \(\mathbb{C}^{n}\otimes\mathbb{C}^{q}\) as \(O_{x}|i\rangle|b\rangle=|i\rangle|x_{i}+b\ \mathrm{mod}\ q\rangle\), where \(x_{i}\in[q]\) is the \(i^{\rm th}\) element of \(x\). Given \(O_{x}\) for \(x\in X\), we would like to perform a computation that depends on \(x\). The query complexity is the minimum number of uses of the oracle required such that for all \(x\in X\), the computation is successful with some desired probability of success. We denote by \(\mathbb{E}[Q_{x}]\) the average number of queries used by the algorithm on input \(x\). Given a probability distribution \(\{p_{x}\}_{x\in X}\) over the elements of \(X\), then \(\sum_{x\in X}p_{x}\mathbb{E}[Q_{x}]\) is the average quantum query complexity of performing the computation with respect to \(\{p_{x}\}\). Several of our key algorithmic subroutines use a parallelized version of phase estimation [23], in which for a unitary \(U\), a precision \(\Theta>0\), and an accuracy \(\epsilon>0\), a circuit \(D(U)\) implements \(O(\log\frac{1}{\epsilon})\) copies of the phase estimation circuit on \(U\), each to precision \(O(\Theta)\), that all measure the phase of a single state on the same input register. 
If \(U\) acts on a Hilbert Space \(\mathcal{H}\), then \(D(U)\) acts on the space \(\mathcal{H}_{A}\otimes((\mathbb{C}^{2})^{\otimes b})_{B}\) for \(b=O\left(\log\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\), where we have used \(A\) to label the register that stores the input state, and \(B\) to label the registers that store the results of the parallel phase estimations. The circuit \(D(U)\) can be used for Phase Checking: applying \(D(U)\) to \(|\psi\rangle_{A}|0\rangle_{B}\) and then measuring register \(B\) in the standard basis; the probability of outcome \(|0\rangle_{B}\) provides information on whether \(|\psi\rangle\) is close to an eigenvector of \(U\) that has eigenphase close to \(0\) (in particular, with eigenphase within \(\Theta\) of \(0\)). To characterize this probability, we define \(\Pi_{0}(U)\) to be the orthogonal projection onto the subspace of \(\mathcal{H}_{A}\otimes((\mathbb{C}^{2})^{\otimes b})_{B}\) that \(D(U)\) maps to states with \(|0\rangle_{B}\) in the \(B\) register. That is, \(\Pi_{0}(U)=D(U)^{\dagger}\left(I_{A}\otimes|0\rangle\!\langle 0|_{B}\right)D(U).\) (Since \(\Pi_{0}(U)\) depends on the choice of \(\Theta\) and \(\epsilon\) used in \(D(U)\), those values must be specified, if not clear from context, when discussing \(\Pi_{0}(U)\).) We now summarize prior results for Phase Checking in Lemma 1: **Lemma 1** (Phase Checking [21, 13, 23]).: _Let \(U\) be a unitary on a Hilbert Space \(\mathcal{H}\), and let \(\Theta,\epsilon>0\). We call \(\Theta\) the precision and \(\epsilon\) the accuracy. Then there is a circuit \(D(U)\) that acts on the space \(\mathcal{H}_{A}\otimes((\mathbb{C}^{2})^{\otimes b})_{B}\) for \(b=O\left(\log\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\), and that uses \(O\left(\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\) calls to control-\(U\). Then for any state \(|\psi\rangle\in\mathcal{H}\)_ * \(\|P_{0}(U)|\psi\rangle\|^{2}\leq\|\Pi_{0}(U)\left(|\psi\rangle_{A}|0\rangle_{B }\right)\|^{2}\leq\|P_{\Theta}(U)|\psi\rangle\|^{2}+\epsilon,\) _and_ * \(\|\Pi_{0}(U)\left(\overline{P}_{\Theta}(U)|\psi\rangle\right)_{A}|0\rangle_{B }\|^{2}\leq\epsilon\)_._ We also consider implementing \(D(U)\) as described above, applying a \(-1\) phase to the \(A\) register if the \(B\) register is _not_ in the state \(|0\rangle_{B}\), and then implementing \(D(U)^{\dagger}\). We call this circuit Phase Reflection1 and denote it as \(R(U).\) Note that \(R(U)=\Pi_{0}(U)-\overline{\Pi}_{0}(U)\), where \(R(U)\) and \(\Pi_{0}(U)\) have the same implicit precision \(\Theta\) and accuracy \(\epsilon\). The following lemma summarizes prior results on relevant properties of Phase Reflection. Footnote 1: In Ref. [22], this procedure is referred to as “Phase Detection,” but since no measurement is made, and rather only a reflection is applied, we thought renaming this protocol as “Phase Reflection” would be more descriptive and easier to distinguish from “Phase Checking.” We apologize for any confusion this may cause when comparing to prior work. **Lemma 2** (Phase Reflection [23, 22]).: _Let \(U\) be a unitary on a Hilbert Space \(\mathcal{H}\), and let \(\Theta,\epsilon>0\). We call \(\Theta\) the precision and \(\epsilon\) the accuracy. 
Then there is a circuit \(R(U)\) that acts on the space \(\mathcal{H}_{A}\otimes((\mathbb{C}^{2})^{\otimes b})_{B}\) for \(b=O\left(\log\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\), and that uses \(O\left(\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\) calls to control-\(U\) and control-\(U^{\dagger}\), such that for any state \(|\psi\rangle\in\mathcal{H}\)_ * \(R(U)(P_{0}(U)|\psi\rangle)|0\rangle_{B}=(P_{0}(U)|\psi\rangle)_{A}|0\rangle_{B}\)_, and_ * \(\|(R(U)+I)(\overline{P}_{\Theta}(U)|\psi\rangle)_{A}|0\rangle_{B}\|<\epsilon\)_._ We will use Iterative Quantum Amplitude Estimation [16], which is like standard quantum amplitude estimation [10], but with exponentially better success probability: **Lemma 3** (Iterative Quantum Amplitude Estimation [16]).: _Let \(\delta>0\) and \(\mathcal{A}\) be a quantum circuit such that on a state \(|\psi\rangle\), \(\mathcal{A}|\psi\rangle=\alpha_{0}|0\rangle|\psi_{0}\rangle+\alpha_{1}|1\rangle |\psi_{1}\rangle\). Then there is an algorithm that estimates \(|\alpha_{0}|^{2}\) to additive error \(\delta\) with success probability at least \(1-p\) using \(O\left(\frac{1}{\delta}\log\left(\frac{1}{p}\log\frac{1}{\delta}\right)\right)\)calls to \(\mathcal{A}\) and \(\mathcal{A}^{\dagger}\)._ A key lemma in span program and state conversion algorithms is the effective spectral gap lemma: **Lemma 4** (Effective spectral gap lemma, [22]).: _Let \(\Pi\) and \(\Lambda\) be projections, and let \(U=(2\Pi-I)(2\Lambda-I)\) be the unitary that is the product of their associated reflections. If \(\Lambda|w\rangle=0\), then \(\|P_{\Theta}(U)\Pi|w\rangle\|\leq\frac{\Theta}{2}\|w\rangle\|\)._ ### Span Programs Span programs are a tool for designing quantum query algorithms for decision problems. **Definition 5** (Span Program).: _A span program is a tuple \(\mathcal{P}=(H,V,\tau,A)\) on \([q]^{n}\) where_ 1. \(H\) _is a direct sum of finite-dimensional inner product spaces:_ \(H=H_{1}\oplus H_{2}\cdots H_{n}\oplus H_{\text{true}}\oplus H_{\text{false}},\) _and for_ \(j\in[n]\) _and_ \(a\in[q]\)_, we have_ \(H_{j,a}\subseteq H_{j}\)_, such that_ \(\sum_{a\in[q]}H_{j,a}=H_{j}\)_._ 2. \(V\) _is a vector space_ 3. \(\tau\in V\) _is a target vector, and_ 4. \(A\in\mathcal{L}(H,V)\)_._ _Given a string \(x\in[q]^{n}\), we use \(H(x)\) to denote the subspace \(H_{1,x_{1}}\oplus\cdots\oplus H_{n,x_{n}}\oplus H_{\text{true}}\), and we denote by \(\Pi_{H(x)}\) the orthogonal projection onto the space \(H(x)\)._ We use Definition 5 for span programs because it applies to both binary and non-binary inputs (\(q\geq 2\)). The definitions in Refs. [5, 12] only apply to binary inputs (\(q=2\)). **Definition 6** (Positive and Negative Witness).: _Given a span program \(\mathcal{P}=(H,V,\tau,A)\) on \([q]^{n}\) and \(x\in[q]^{n}\), then \(|w\rangle\in H(x)\) is a positive witness for \(x\) in \(\mathcal{P}\) if \(A|w\rangle=\tau\). If a positive witness exists for \(x\), we define the positive witness size of \(x\) in \(\mathcal{P}\) as_ \[w_{+}(\mathcal{P},x)=w_{+}(x)\coloneqq\min\left\{\||w\rangle\|^{2}:|w\rangle \in H(x)\text{ and }A|w\rangle=\tau\right\}. \tag{1}\] _Then \(|w\rangle\in H(x)\) is an optimal positive witness for \(x\) if \(\||w\rangle\|^{2}=w_{+}(P,x)\) and \(A|w\rangle=\tau\)._ _We say \(\omega\in\mathcal{L}(V,\mathbb{R})\) is a negative witness for \(x\) in \(P\) if \(\omega\tau=1\) and \(\omega A\Pi_{H(x)}=0\). 
If a negative witness exists for \(x\), we define the negative witness size of \(x\) in \(P\) as_ \[w_{-}(P,x)=w_{-}(x)\coloneqq\min\left\{\|\omega A\|^{2}:\omega\in L(V, \mathbb{R}),\omega A\Pi_{H(x)}=0,\text{ and }\omega\tau=1\right\}. \tag{2}\] _Then \(\omega\) is an optimal negative witness for \(x\) if \(\|\omega A\|^{2}=w_{-}(P,x)\), \(\omega A\Pi_{H(x)}=0,\) and \(\omega\tau=1\)._ Each \(x\in[q]^{n}\) has a positive or negative witness (but not both). We say that a span program \(\mathcal{P}\) decides the function \(f:X\subseteq[q]^{n}\to\{0,1\}\) if each \(x\in f^{-1}(1)\) has a positive witness in \(\mathcal{P}\), and each \(x\in f^{-1}(0)\) has a negative witness in \(\mathcal{P}\). Then we denote the maximum positive and negative witness of \(\mathcal{P}\) on \(f\) as \[W_{+}(\mathcal{P},f)=W_{+}\coloneqq\max_{x\in f^{-1}(1)}w_{+}(\mathcal{P},x), \qquad W_{-}(\mathcal{P},f)=W_{-}\coloneqq\max_{x\in f^{-1}(0)}w_{-}( \mathcal{P},x). \tag{3}\] Given a span program that decides a function, one can use it to design an algorithm that evaluates that function with query complexity that depends on \(W_{+}(\mathcal{P},f)\) and \(W_{-}(\mathcal{P},f)\): **Theorem 7** ([25, 19]).: _For \(X\subseteq[q]^{n}\) and \(f:X\to\{0,1\}\), let \(\mathcal{P}\) be a span program that decides \(f\). Then there is a quantum algorithm that for any \(x\in X\), evaluates \(f(x)\) with bounded error, and uses \(O\left(\sqrt{W_{+}(\mathcal{P},f)W_{-}(\mathcal{P},f)}\right)\) queries to the oracle \(O_{x}\)._ Not only can any span program that decides a function \(f\) be used to create a quantum query algorithm that decides \(f\), but there is always a span program that creates an algorithm with asymptotically optimal query complexity [25, 26]. Thus when designing quantum query algorithms for function decision problems, it is sufficient to consider only span programs. Given a function \(f:X\to\{0,1\}\), we denote the negation of the \(f\) as \(f^{\neg}\), where \(\forall x\in X,f^{\neg}(x)=\neg f(x)\). We use a transformation that takes a span program \(\mathcal{P}\) that decides a function \(f:X\to\{0,1\}\) and creates a span program \(\mathcal{P}^{\dagger}\) that decides \(f^{\neg}\), while preserving witness sizes for each input \(x\). While such a transformation is known for Boolean span programs [25], in Lemma 8 we show it exists for the span programs of Definition 5. The proof is in Appendix A. **Lemma 8**.: _Given a span program \(\mathcal{P}=(H,V,\tau,A)\) on \([q]^{n}\) that decides a function \(f:X\to\{0,1\}\) for \(X\subseteq[q]^{n}\), there is a span program \(\mathcal{P}^{\dagger}=(H^{\prime},V^{\prime},\tau^{\prime},A^{\prime})\) that decides \(f^{\neg}\) such that \(\forall x\in f^{-1}(1),w_{+}(\mathcal{P},x)=w_{-}(\mathcal{P}^{\dagger},x)\) and \(\forall x\in f^{-1}(0),w_{-}(\mathcal{P},x)=w_{+}(\mathcal{P}^{\dagger},x).\)_ ### State Conversion In the state conversion problem, for \(X\subseteq[q]^{n}\), we are given descriptions of sets of pure states \(\{|\rho_{x}\rangle\}_{x\in X}\) and \(\{|\sigma_{x}\rangle\}_{x\in X}\). Then given access to an oracle for \(x\), and the quantum state \(|\rho_{x}\rangle\), the goal is to create a state \(|\sigma^{\prime}_{x}\rangle\) such that \(\||\sigma^{\prime}_{x}\rangle-|\sigma_{x}\rangle|0\rangle\|\leq\varepsilon\). We call \(\varepsilon\) the error of the state conversion procedure. 
Let \(\rho\) and \(\sigma\) be the Gram matrices of the sets \(\{|\rho_{x}\rangle\}\) and \(\{|\sigma_{x}\rangle\}\), respectively, so \(\rho\) and \(\sigma\) are matrices whose rows and columns are indexed by the elements of \(X\) such that \(\rho_{xy}=\langle\rho_{x}|\rho_{y}\rangle\), and \(\sigma_{xy}=\langle\sigma_{x}|\sigma_{y}\rangle\). We now define the analogue of a span program for the problem of state conversion, which we call a _converting vector set_: **Definition 9** (Converting vector set).: _Let \(\mathscr{P}=\left(\{|v_{xj}\rangle\},\{|u_{xj}\rangle\}\right)_{x\in X,j\in[n]}\), where \(\forall x\in X,j\in[n],|v_{xj}\rangle,|u_{xj}\rangle\in\mathbb{C}^{d}\) for some \(d\in\mathbb{N}\). Then we say \(\mathscr{P}\) converts \(\rho\) to \(\sigma\) if it satisfies_ \[\forall x,y\in X,\quad(\rho-\sigma)_{xy}=\sum_{j\in[n]:x_{j}\neq y_{j}}\langle u _{xj}|v_{yj}\rangle. \tag{4}\] _We call such a \(\mathscr{P}\) a converting vector set from \(\rho\) to \(\sigma\)._ Then the query complexity of state conversion is characterized as follows: **Theorem 10** ([22]).: _Given \(X\in[q]^{n}\) and a converting vector set \(\mathscr{P}=\left(\{|v_{xj}\rangle\},\{|u_{xj}\rangle\}\right)_{x\in X,j\in[n]}\) from \(\rho\) to \(\sigma\), then there is quantum algorithm that on every input \(x\in X\) converts \(|\rho_{x}\rangle\) to \(|\sigma_{x}\rangle\) with error \(\varepsilon\) and has query complexity_ \[O\left(\max\left\{\max_{x\in X}\sum_{j}\||v_{xj}\rangle\|^{2},\max_{y\in X} \sum_{j}\||u_{yj}\rangle\|^{2}\right\}\frac{\log(1/\varepsilon)}{\varepsilon^ {2}}\right). \tag{5}\] Analogous to witness sizes in span programs, we define a notion of witness sizes for converting vector sets: **Definition 11** (Converting vector set witness sizes).: _Given a converting vector set \(\mathscr{P}=\left(\{|u_{xj}\rangle\},\{|v_{xj}\rangle\}\right)\), we define the witness sizes of \(\mathscr{P}\) as_ \[w_{+}(\mathscr{P},x) \coloneqq\sum_{j}\||v_{xj}\rangle\|^{2} \text{positive witness size for $x$ in $\mathscr{P}$}\] \[w_{-}(\mathscr{P},x) \coloneqq\sum_{j}\||u_{xj}\rangle\|^{2} \text{negative witness size for $x$ in $\mathscr{P}$}\] \[W_{+}(\mathscr{P}) =W_{+}\coloneqq\max_{x\in X}w_{+}(\mathscr{P},x) \text{maximum positive witness size of $\mathscr{P}$}\] \[W_{-}(\mathscr{P}) =W_{-}\coloneqq\max_{x\in X}w_{-}(\mathscr{P},x) \text{maximum negative witness size of $\mathscr{P}$} \tag{6}\] By scaling the converting vector sets, we obtain the following two results: a rephrasing of Theorem 10 in terms of witness sizes, and a transformation that exchanges positive and negative witness sizes. Both proofs can be found in Appendix A. **Corollary 12**.: _Let \(\mathscr{P}\) be a converting vector set from \(\rho\) to \(\sigma\) with maximum positive and negative witness sizes \(W_{+}\) and \(W_{-}\). 
Then there is quantum algorithm that on every input \(x\in X\) converts \(|\rho_{x}\rangle\) to \(|\sigma_{x}\rangle\) with error \(\varepsilon\) and uses \(O\left(\sqrt{W_{+}W_{-}}\frac{\log(1/\varepsilon)}{\varepsilon^{2}}\right)\) queries to \(O_{x}\)._ **Lemma 13**.: _If \(\mathscr{P}\) converts \(\{|\rho_{x}\rangle\}_{x\in X}\) to \(\{|\sigma_{x}\rangle\}_{x\in X}\), then there is a complementary converting vector set \(\mathscr{P}^{\dagger}\) that also converts \(\rho\) to \(\sigma\), such that for all \(x\in X\) and for all \(j\in[n]\), we have \(w_{+}(\mathscr{P},x)=w_{-}(\mathscr{P}^{\dagger},x)\), and \(w_{-}(\mathscr{P},x)=w_{+}(\mathscr{P}^{\dagger},x)\); the complement exchanges the values of the positive and negative witness sizes._ Function Decision Our main result for function decision (deciding if \(f(x)=0\) or \(f(x)=1\)) is the following: **Theorem 14**.: _For \(X\subseteq[q]^{n}\), let \(\mathcal{P}\) be a span program that decides \(f:X\to\{0,1\}\). Then there is a quantum algorithm such that for any \(x\in X\) and \(\delta>0\)_ 1. _The algorithm returns_ \(f(x)\) _with probability_ \(1-\delta\)_._ 2. _On input_ \(x\)_, if_ \(f(x)=1\) _the algorithm uses_ \(O\left(\sqrt{w_{+}(x)W_{-}}\log\left(\frac{W_{+}}{w_{+}(x)\delta}\right)\right)\) _queries on average, and if_ \(f(x)=0\) _it uses_ \(O\left(\sqrt{w_{-}(x)W_{+}}\log\left(\frac{W_{-}}{w_{-}(x)\delta}\right)\right)\) _queries on average._ 3. _The worst-case (not average) query complexity is_ \(O\left(\sqrt{W_{+}W_{-}}\log(1/\delta)\right)\)_._ Comparing Theorem 14 to Theorem 7 (which assumes constant error \(\delta\)), we see that in the worst case, with an input \(x\) where \(w_{+}(x)=W_{+}\) or \(w_{-}(x)=W_{-}\), the average and worst-case performance of our algorithm is the same as the standard span program algorithm. However, when we have an instance \(x\) with a smaller witness size, then our algorithm has improved average query complexity, without having to know about the witness size ahead of time. We can also compare the query complexity our algorithm, which does not require a promise, to the original span program algorithm when that algorithm is additionally given a promise. If the original span program algorithm is promised that, if \(f(x)=1\), then \(w_{+}(x)=O(\mathsf{w})\), then the bounded error query complexity of the original algorithm on this input would be \(O(\sqrt{\mathsf{w}W_{-}})\) by Theorem 7. On the other hand, without needing to know ahead of time that \(w_{+}(x)=O(\mathsf{w})\), our algorithm would use \(\tilde{O}(\sqrt{\mathsf{w}W_{-}})\) queries on this input on average, and in fact would do better than this if \(w_{+}(x)=o(\mathsf{w})\). A key routine in our algorithm is to apply Phase Checking to a unitary \(U(\mathcal{P},x,\alpha)\), which we describe now. We follow notation similar to that in [5]. In particular, for a span program \(\mathcal{P}=(H,V,\tau,A)\) on \([q]^{n}\), let \(\tilde{H}=H\oplus\mathrm{span}\{|\hat{0}\rangle\}\), and \(\tilde{H}(x)=H(x)\oplus\mathrm{span}\{|\hat{0}\rangle\}\), where \(|\hat{0}\rangle\) is orthogonal to \(H\) and \(V\). Then we define \(\tilde{A}^{\alpha}\in\mathcal{L}(\tilde{H},V)\) as \[\tilde{A}^{\alpha}=\frac{1}{\alpha}|\tau\rangle\!\langle\hat{0}|+A. 
\tag{7}\] Let \(\Lambda^{\alpha}\in\mathcal{L}(\tilde{H},\tilde{H})\) be the orthogonal projection onto the kernel of \(\tilde{A}^{\alpha}\), and let \(\Pi_{x}\in\mathcal{L}(\tilde{H},\tilde{H})\) be the projection onto \(\tilde{H}(x).\) Finally, let \[U(\mathcal{P},x,\alpha)=(2\Pi_{x}-I)(2\Lambda^{\alpha}-I). \tag{8}\] Note that \(2\Pi_{x}-I\) can be implemented with two applications of \(O_{x}\)[19, Lemma 3.1], and \(2\Lambda^{\alpha}-I\) can be implemented without any applications of \(O_{x}\). Queries are only made in our algorithm when we apply \(U(\mathcal{P},x,\alpha)\). To analyze the query complexity, we will track the number of applications of \(U(\mathcal{P},x,\alpha)\). The time complexity will also scale with the number of applications of \(U(\mathcal{P},x,\alpha).\) We denote the time required to implement \(U(\mathcal{P},x,\alpha)\) by \(T_{U}\), which is an input independent quantity. Since our query complexity analysis counts the number of applications of \(U(\mathcal{P},x,\alpha)\), and the runtime scales with the number of applications of \(U(\mathcal{P},x,\alpha)\), to bound the average time complexity of our algorithms, simply determine \(T_{U}\) and multiply this by the query complexity. The following lemma gives us guarantees about the results of Phase Checking of \(U(\mathcal{P},x,\alpha)\) applied to the state \(|\hat{0}\rangle\): **Lemma 15**.: _Let the span program \(\mathcal{P}\) decide the function \(f\), and let \(C\geq 2\). Then for Phase Checking with unitary \(U(\mathcal{P},x,\alpha)\) on the state \(|\hat{0}\rangle_{A}|0\rangle_{B}\) with error \(\epsilon\) and precision \(\Theta=\sqrt{\frac{\epsilon}{\alpha^{2}W_{-}}}\),_ 1. _If_ \(f(x)=1\)_, and_ \(\alpha^{2}\geq Cw_{+}(\mathcal{P},x)\)_, then the probability of measuring the_ \(B\) _register to be in the state_ \(|0\rangle_{B}\) _is at least_ \(1-1/C\)_._ 2. _If_ \(f(x)=0\) _and_ \(\alpha^{2}\geq 1/W_{-}(\mathcal{P},f)\)_, then the probability of measuring the_ \(B\) _register in the state_ \(|0\rangle_{B}\) _is at most_ \(\frac{3}{2}\epsilon\)_._ Note that if \(f(x)=1\) and \(\alpha^{2}<Cw(\mathcal{P},x)\), Lemma 15 makes no claims about the output. To prove Lemma 15, we use techniques from the Boolean function decision algorithm of Belovs and Reichardt [5, Section 5.2] and Cade et al. [12, Section C.2] and the dual adversary algorithm of Reichardt [26, Algorithm 1]. Our approach differs from these previous algorithms in the addition of a parameter that controls the precision of our phase estimation. This approach has not (to the best of our knowledge)2 been applied to the non-Boolean span program formulation of Definition 5, so while not surprising that it works in this setting, our analysis in Appendix B may be of independent interest for other applications. Footnote 2: Jeffery and Ito [19] also design a function decision algorithm for non-Boolean span programs, but it has a few differences from our approach and from that of Refs. [5, 12]; for example, the initial state of Jeffery and Ito’s algorithm might require significant time to prepare, while our initial state can be prepared in \(O(1)\) time. We use Alg. 1 to prove Theorem 14. **High Level Idea of Alg. 1** The algorithm makes use of a test that, when successful, tells us that \(f(x)=1\). However, the test is one-sided, in that failing the test does not mean that \(f(x)=0\), but instead is inconclusive. We repeatedly run this test for both functions \(f\) and \(f^{\neg}\) while increasing the queries used at each round. 
If we see an inconclusive result for both \(f\) and \(f^{\neg}\) at an intermediate round, we can conclude neither \(f(x)=1\) nor \(f(x)=0\), so we repeat the subroutine with larger queries. Once we reach a critical round that depends on \(w_{+}(x)\) (if \(f(x)=1\)) or on \(w_{-}(x)\) (if \(f(x)=0\)), the probability of an inconclusive result becomes unlikely from that critical round onward. We stop iterating when the test returns a conclusive result, or when we have passed the critical round for all \(x\). While it is unlikely that we get an inconclusive result at the final round, we return 1 if this happens. More specifically, we use Phase Checking to perform our one-sided test; we iteratively run Phase Checking on \(U(P,x,\alpha_{+})\) in Line 5 to check if \(f(x)=1\), and on \(U(P^{\dagger},x,\alpha_{-})\) in Line 8 to check if \(f(x)=0\), increasing the parameters \(\alpha_{+}\) and \(\alpha_{-}\) by a factor of 2 at each round. At some round, which we label \(i^{*}\), \(\alpha_{+}^{2}\) becomes at least \(3w_{+}(P,x)\) or \(\alpha_{-}^{2}\) becomes at least \(3w_{-}(P,x)=3w_{+}(P^{\dagger},x)\) (by Lemma 13), depending on whether \(f(x)=1\) or \(f(x)=0\), respectively. Using Lemma 15 Item 1, from round \(i^{*}\) onward we have a high probability of measuring the \(B\) register to be in the state \(|0\rangle_{B}\) at Line 5 if \(f(x)=1\) or at Line 8 if \(f(x)=0\), causing the algorithm to terminate and output the correct result. Item 2 of Lemma 15 ensures that at all rounds we have a low probability of outputting the incorrect result. We don't need to know \(i^{*}\) ahead of time; the behavior of the algorithm will change on its own, giving us a smaller average query complexity for instances with smaller witness size, the easier instances. The number of queries used by Phase Checking increases by a factor of 2 at each round of the **for** loop. If \(N_{i}\), the number of repetitions of Phase Checking at round \(i\), were a constant \(N\), then using a geometric series, we would find that the query complexity would be asymptotically equal to the queries used by Phase Checking in the round at which the algorithm terminates, times \(N\). At round \(i^{*}\), the round at which termination is most likely, the query complexity of Phase Checking is \(O\left(\sqrt{w_{+}(P,x)W_{-}(P,f)}\right)\) or \(O\left(\sqrt{w_{-}(P,x)W_{+}(P,f)}\right)\) (depending on if \(f(x)=1\) or \(f(x)=0\)) by Lemma 1. We show the probability of continuing to additional rounds after \(i^{*}\) is exponentially decreasing with each extra round, so we find an average query complexity of \(O\left(N\sqrt{w_{+}(P,x)W_{-}(P,f)}\right)\) or \(O\left(N\sqrt{w_{-}(P,x)W_{+}(P,f)}\right)\) on input \(x\). Since there can be \(T=\left\lceil\log\left(\sqrt{W_{+}W_{-}}\right)\right\rceil\) rounds in Alg. 1 the worst case, this suggests that each round should have a probability of error bounded by \(O\left(T^{-1}\right)\), which we can accomplish through repetition and majority voting, but which requires \(N=\Omega(\log T)\), adding an extra log factor to our query complexity. To mitigate this effect, we modify the number of repetitions (given by \(N_{i}\) in Alg. 1) over the course of the algorithm so that we have a lower probability of error (more repetitions) at earlier rounds, and a higher probability (fewer repetitions) at later rounds. This requires additional queries at the earlier rounds, but since these rounds are cheaper to begin with, we can spend some extra queries to reduce our error. 
As a result, instead of a log factor that depends only on \(T\), we end up with a log factor that also decreases with increasing witness size, so when \(w_{+}(x)=W_{+}\) or \(w_{-}(x)=W_{-}\), our average query complexity is at most \(O\left(\sqrt{W_{+}W_{-}}\log(1/\delta)\right)\) without any additional log factors. Proof of Theorem 14.: We analyze Alg. 1. We first prove that the total success probability is at least \(1-\delta\). Consider the case that \(f(x)=0\). Let \(i^{*}=\left\lceil\log\sqrt{3w_{-}(\mathcal{P},x)W_{+}(\mathcal{P},f)}\right\rceil\), which is the round at which we will show our probability of exiting the **for** loop becomes large. The total number of possible iterations is \(T=\left\lceil\log\sqrt{3W_{+}W_{-}}\right\rceil\), which is at least \(i^{*}\). Let \(N=\left\lceil T+\log(3/\delta)\right\rceil+1\), so \(N_{i}=18(N-i)\). Let \(Pr(cont\ i)\) denote the probability of continuing to the next round of the **for** loop at round \(i\), conditioned on reaching round \(i\), let \(Pr(err\ i)\) be the probability of returning the wrong answer at round \(i\), conditioned on reaching round \(i\), and let \(Pr(final)\) be the probability of reaching the end of the **for** loop without terminating. (Since we return 1 if we reach the end of the **for** loop without terminating, this event produces an error when \(f(x)=0\).) The total probability of error is then \[Pr(error)=\sum_{i=0}^{T}\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)\cdot Pr(err\ i)+ Pr(final). \tag{9}\] We will use the probability tree diagram in Fig. 0(a) to help us analyze events and probabilities. Since \(f(x)=0\), \(Pr(err\ i)\) is the probability of returning 1, which depends on the probability of measuring \(|0\rangle_{B}\) in Phase Checking of \(U(\mathcal{P},x,\alpha_{+})\) in Line 5 of Alg. 1. Since \(\alpha_{+}^{2}W_{-}\geq 1\), we can use Item 2 of Lemma 15, to find that there is at most a \(\frac{3}{2}\epsilon=1/3\) probability of measuring \(|0\rangle_{B}\) at each repetition of Phase Checking. Using Hoeffding's inequality [18], the probability of measuring outcome \(|0\rangle\) at least \(N_{i}/2\) times and returning 1 in Line 6 of Alg. 1 is at most \(a_{i}\coloneqq e^{-N_{i}/18}\). Therefore, \[Pr(err\ i)\leq a_{i} \tag{10}\] which holds for all \(i\) but in particular, gives us a bound on the first left branching of Fig. 0(a), corresponding to outputting a 1 when \(i\geq i^{*}\). When \(i<i^{*}\), we trivially bound the probability of continuing to the next round: \[Pr(cont\ i)\leq 1. \tag{11}\] When \(i\geq i^{*}\), we continue to the next round when we do not return 1 in Line 6 of Alg. 1 and then do not return 0 in Line 9, corresponding to the two right branchings of the diagram in Fig. 0(a). We upper bound the probability of the first event (first right branch in Fig. 0(a)) by 1. To bound the probability of the second event, consider Phase Checking of \(U(\mathcal{P}^{\dagger},x,\alpha_{-})\) in Line 8. Since \(i\geq i^{*}\), we have \(\alpha_{-}^{2}\geq 3w_{-}(\mathcal{P},x)=3w_{+}(\mathcal{P}^{\dagger},x)\) by Lemma 8. Also since \(W_{+}(\mathcal{P},f)=W_{-}(\mathcal{P}^{\dagger},f)\) by Lemma 8, we have \(\alpha^{2}\geq 1/W_{-}(\mathcal{P}^{\dagger},f)\). Thus, as we are performing Phase Checking with precision \(\sqrt{\epsilon/(\alpha^{2}W_{+}(\mathcal{P},f))}=\sqrt{\epsilon/(\alpha^{2}W_ {-}(\mathcal{P}^{\dagger},f))}\), we can use Item 1 of Lemma 15 with \(C=3\) to conclude that the probability of measuring \(|0\rangle_{B}\) at a single repetition of Line 8 is at least \(2/3\). 
Using Hoeffding's inequality [18], the probability of measuring \(|0\rangle_{B}\) more than \(N_{i}/2\) times, and therefore returning 0, is at least \(1-e^{-N_{i}/18}\). Thus the probability of not returning 0 in Line 9 is at most \[1-(1-e^{-N_{i}/18})=a_{i}. \tag{12}\] Therefore when \(i\geq i^{*}\), using the product rule, the probability of following both right branchings of Fig. 0(a) and continuing to the next iteration of the **for** loop is \[Pr(cont\ i)\leq 1\cdot a_{i}=a_{i}. \tag{13}\] Finally, if we ever reach the end of the **for** loop without terminating, our algorithm returns 1, which is the wrong answer. This happens with probability \[Pr(final)=\prod_{i=0}^{T}Pr(cont\ i)\leq\prod_{i=i^{*}}^{T}a_{i}, \tag{14}\] using Eq. (11) for \(i<i^{*}\) and Eq. (13) for \(i\geq i^{*}\). Now we calculate the total probability of error. Plugging in Eq. (10), Eq. (11), Eq. (13), and Eq. (14) into Eq. (9), and splitting the first term of Eq. (9) into two parts to account for the different behavior of the algorithm before and after round \(i^{*}\), we get: \[Pr(error)\leq\sum_{i=0}^{i^{*}}a_{i}+\sum_{i=i^{*}+1}^{T}\prod_{j=i^{*}}^{i}a_ {j}+\prod_{i=i^{*}}^{T}a_{i}. \tag{15}\] Since \(N_{i}=18(N-i)\), we have \(a_{i}=e^{-(N-i)}\), which means the first sum in Eq. (15) is a geometric series, and is bounded by: \[\sum_{i=0}^{i^{*}}a_{i}\leq\frac{1}{e-1}\cdot e^{-N+(i^{*}+1)}<\delta/2, \tag{16}\] where the final inequality arises from our choice of \(N=\lceil T+\log(3/\delta)\rceil+1\). Combining the second and third terms of Eq. (15), and upper bounding their \(a_{j}\)'s and \(a_{i}\)'s by \(a_{T}\), we get another geometric series that sums to less than \(\delta/2\): \[\sum_{i=1}^{T-i^{*}+1}(a_{T})^{i}<\frac{e^{-(N-T)}}{1-e^{-(N-T)}}<\delta/2. \tag{17}\] Thus, \(Pr(error)<\delta\), and our success probability is at least \(1-\delta\). Now we analyze the probability of error for \(f(x)=1\) and set \(i^{*}=\left\lceil\log\sqrt{3w_{+}(\mathcal{P},x)W_{-}(\mathcal{P},f)}\right\rceil\). Then nearly identical analyses as in the \(f(x)=0\) case (and using Lemma 8 to relate witness sizes of \(\mathcal{P}\) and \(\mathcal{P}^{\dagger}\)) provide the bounds on probabilities of relevant events, corresponding to branchings in Fig. 0(b). By following the first right branching and then the next left branching in Fig. 0(b), we see the probability of error at round \(i\) for \(i\geq i^{*}\) is \[Pr(err\ i)\leq a_{i}^{2}\leq a_{i}, \tag{18}\] since by our choice of parameters, \(a_{i}\) is always less than \(1\). Following the two right branchings in Fig. 0(b), the probability of continuing when \(i\geq i^{*}\) is \[Pr(cont\ j)\leq a_{i}\cdot 1=a_{i}. \tag{19}\] Thus the rest of the analysis is the same, and so we find that for \(f(x)=0\) or \(f(x)=1\), the probability of success is at least \(1-\delta\). Figure 1: Probability tree diagrams for a round of the **for** loop in Alg. 1 when \(i\geq i^{*}\), and \(f(x)=0\) (Fig. 0(a)), and \(f(x)=1\) (Fig. 0(b)). By our choice of parameters, \(a_{i}\) is small (it is always less than \(1/4\)), and decreases exponentially with increasing \(i\). Now we calculate the average query complexity on input \(x\), \(\mathbb{E}[Q_{x}]\), given by \[\mathbb{E}[Q_{x}]=\sum_{i=0}^{T}\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)\cdot(1- Pr(cont\ i))\cdot Q(i). 
\tag{20}\] Here, \(Q(i)\) is the number of queries used by the algorithm up to and including round \(i\), and \(\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)\cdot(1-Pr(cont\ i))\) is the probability that we terminate at round \(i\). The only time we make queries is in the Phase Checking subroutine. By Lemma 1, the number of queries required to run a single repetition of Phase Checking in the \(i^{\text{th}}\) round is \(O\left(\frac{1}{\Theta}\log\left(\frac{1}{\epsilon}\right)\right)=O(\alpha_{\pm}\sqrt{W_{\mp}})=O(2^{i})\), since \(\epsilon=\Theta(1)\). Taking into account the \(N_{i}\) repetitions of Phase Checking in the \(i^{\text{th}}\) round, we find \[Q(i)=\sum_{j=0}^{i}O\left(2^{j}N_{j}\right). \tag{21}\] Now setting \(i^{*}\) to be \(\left\lceil\log\sqrt{3w_{+}(\mathcal{P},x)W_{-}(\mathcal{P},f)}\right\rceil\) or \(\left\lceil\log\sqrt{3w_{-}(\mathcal{P},x)W_{+}(\mathcal{P},f)}\right\rceil\) depending on whether \(f(x)=1\) or \(0\), respectively, we can use our bounds on event probabilities from our error analysis to bound the relevant probabilities for average query complexity. When \(i<i^{*}\), we use the trivial bound \(Pr(cont\ i)\leq 1\). When \(i\geq i^{*}\), we use Eqs. (13) and (19) and our choice of \(a_{i}\) to conclude that \(Pr(cont\ i)\leq a_{i}\leq 1/4\). For all \(i\), we use that \((1-Pr(cont\ i))\leq 1.\) Splitting up the sum in Eq. (20) into 2 terms, for \(i\leq i^{*}\) and \(i>i^{*}\), and using these bounds on \(Pr(cont\ i)\) along with Eq. (21), we have \[\mathbb{E}[Q_{x}]\leq\sum_{i=0}^{i^{*}}\sum_{j=0}^{i}O\left(2^{j}N_{j}\right)+\sum_{i=i^{*}+1}^{T}\left(\frac{1}{4}\right)^{i-i^{*}}\sum_{j=0}^{i}O\left(2^{j}N_{j}\right). \tag{22}\] We use the following inequalities to simplify Eq. (22), \[\sum_{j=0}^{i}2^{j}(N-j)\leq 2^{i+1}(N-i+1),\quad\text{and} \tag{23}\] \[\sum_{i=i^{*}+1}^{T}2^{-i}(T-i)\leq 2^{-i^{*}}(T-i^{*}) \tag{24}\] and finally find that \[\mathbb{E}[Q_{x}]=O\left(2^{i^{*}}(N-i^{*}+1)\right). \tag{25}\] By our choice of \(i^{*}\), \(T\), and \(N\), on input \(x\) when \(f(x)=1\), the total average query complexity is \[\mathbb{E}[Q_{x}]=O\left(\sqrt{w_{+}(\mathcal{P},x)W_{-}(\mathcal{P},f)}\log\left(\frac{W_{+}(\mathcal{P},f)}{w_{+}(\mathcal{P},x)\delta}\right)\right), \tag{26}\] and when \(f(x)=0\), the total average query complexity is \[\mathbb{E}[Q_{x}]=O\left(\sqrt{w_{-}(\mathcal{P},x)W_{+}(\mathcal{P},f)}\log\left(\frac{W_{-}(\mathcal{P},f)}{w_{-}(\mathcal{P},x)\delta}\right)\right), \tag{27}\] and the worst case query complexity is \[\sum_{j=0}^{T}O\left(2^{j}N_{j}\right)=O\left(\sqrt{W_{+}W_{-}}\log(1/\delta)\right), \tag{28}\] where we have again used Eq. (23).

### Application to st-connectivity

As an example application of our algorithm, we analyze the query complexity of \(st\)-connectivity on an \(n\)-vertex graph. There is a span program \(P\) such that for inputs \(x\) where there is a path from \(s\) to \(t\), \(w_{+}(P,x)=R_{s,t}(x)\), where \(R_{s,t}(x)\) is the effective resistance from \(s\) to \(t\) on the subgraph induced by \(x\), and for inputs \(x\) where there is not a path from \(s\) to \(t\), \(w_{-}(P,x)=C_{s,t}(x)\), where \(C_{s,t}(x)\) is the effective capacitance between \(s\) and \(t\) [5, 20].
In an \(n\)-vertex graph, the effective resistance is less than \(n\), and the effective capacitance is less than \(n^{2}\), so by Theorem 14, we can determine with bounded error that there is a path on input \(x\) with \(\tilde{O}(\sqrt{R_{s,t}(x)n^{2}})\) average queries, or that there is not a path with \(\tilde{O}(\sqrt{C_{s,t}(x)n})\) average queries. In the worst case, when \(R_{s,t}(x)=n\) or \(C_{s,t}(x)=n^{2}\), we recover the worst-case query complexity of \(O(n^{3/2})\) of the original span program algorithm. The effective resistance is at most the length of the shortest path between the two vertices, and the effective capacitance is at most the size of the smallest cut between the two vertices. Thus our algorithm determines whether or not there is a path from \(s\) to \(t\) with \(\tilde{O}(\sqrt{k}n)\) queries on average if there is a path of length \(k\), and if there is no path, the algorithm uses \(\tilde{O}(\sqrt{cn})\) queries on average, where \(c\) is the size of the smallest cut between \(s\) and \(t\). Importantly, one does not need to know bounds on \(k\) or \(c\) ahead of time to achieve this query complexity. The analysis of the other examples listed in Section 1 is similar.

## 4 State Conversion Algorithm

Our main result for state conversion is the following:

**Theorem 16**.: _Let \(\mathscr{P}\) be a converting vector set from \(\{|\rho_{x}\rangle\}_{x\in X}\) to \(\{|\sigma_{x}\rangle\}_{x\in X}\). Then there is a quantum algorithm such that for any \(x\in X\), any failure probability \(\delta\leq 1/3\), and any error \(\varepsilon>0\),_

1. _With probability \(1-\delta\), on input \(x\) the algorithm converts \(|\rho_{x}\rangle\) to \(|\sigma_{x}\rangle\) with error \(\varepsilon\)._
2. _On input \(x\), if \(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})\leq w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})\), the average query complexity is_ \[O\left(\frac{\sqrt{w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})}}{\varepsilon^{5}}\log\left(\frac{1}{\varepsilon}\right)\log\left(\frac{W_{+}(\mathscr{P})}{w_{+}(\mathscr{P},x)\delta}\log\left(\frac{1}{\varepsilon}\right)\right)\right).\] (29) _If \(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})\geq w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})\), the average query complexity is_ \[O\left(\frac{\sqrt{w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})}}{\varepsilon^{5}}\log\left(\frac{1}{\varepsilon}\right)\log\left(\frac{W_{-}(\mathscr{P})}{w_{-}(\mathscr{P},x)\delta}\log\left(\frac{1}{\varepsilon}\right)\right)\right).\] (30)

Comparing Theorem 16 with Corollary 12, and considering the case of \(\varepsilon,\delta=\Omega(1)\), we see that in the worst case, when we have an input \(x\) where \(w_{+}(\mathscr{P},x)=W_{+}(\mathscr{P})\) or \(w_{-}(\mathscr{P},x)=W_{-}(\mathscr{P})\), the average query complexity of our algorithm is asymptotically the same as that of the standard state conversion algorithm. However, when we have an instance \(x\) with a smaller value of \(w_{\pm}(\mathscr{P},x)\), our algorithm has improved query complexity, without knowing anything about the input witness size ahead of time. Our algorithm has worse scaling in \(\varepsilon\) than Corollary 12, so it will be most useful when \(\varepsilon\) can be constant. One could also do a hybrid approach: initially run our algorithm and then switch to that of Corollary 12. The problem of state conversion is a more general problem than function decision, and it can be used to solve the function decision problem.
However, because of the worse scaling with \(\varepsilon\) in Theorem 16, we considered function decision separately (see Section 3). We use Alg. 2 to prove Theorem 16. We now describe a key unitary, \(\mathcal{U}(\mathcal{P},x,\alpha,\hat{\varepsilon})\), that appears in the algorithm. In the following, we use most of the notation conventions of Ref. [22]. Let \(\{|\mu_{i}\rangle\}_{i\in[q]}\) and \(\{|\nu_{i}\rangle\}_{i\in[q]}\) be unit vectors in \(\mathbb{C}^{q}\) as defined in [22, Fact 2.4], such that \[\langle\mu_{i}|\nu_{j}\rangle=\frac{k}{2(k-1)}(1-\delta_{i,j}). \tag{31}\] For \(X\subseteq[q]^{n}\), let \(\mathscr{P}=(\{|u_{xj}\rangle\},\{|v_{xj}\rangle\})\) where \(\forall x\in X,j\in[n],|v_{xj}\rangle,|u_{xj}\rangle\in\mathbb{C}^{m}\) be a converting vector set from \(\rho\) to \(\sigma\). For all \(x\in X\), the states \(|\rho_{x}\rangle\) and \(|\sigma_{x}\rangle\) are in the Hilbert space \(\mathcal{H}\). Then for all \(x\in X\), define \(|t_{x\pm}\rangle,|\psi_{x,\alpha,\hat{\varepsilon}}\rangle\in(\mathbb{C}^{2} \otimes\mathcal{H})\oplus(\mathbb{C}^{n}\otimes\mathbb{C}^{q}\otimes\mathbb{C} ^{m})\) as \[|t_{x\pm}\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle|\rho_{x}\rangle\pm|1\rangle |\sigma_{x}\rangle\right),\quad\text{ and }\quad|\psi_{x,\alpha,\hat{\varepsilon}}\rangle=\sqrt{\frac{\hat{ \varepsilon}}{\alpha}}|t_{x-}\rangle-\sum_{j\in[n]}|j\rangle|\mu_{x_{j}}\rangle| u_{xj}\rangle, \tag{32}\] where \(\alpha\) is analogous to the parameter \(\alpha\) in Eq. (7). We will choose \(\hat{\varepsilon}\) to achieve a desired accuracy of \(\varepsilon\) in our state conversion procedure. Set \(\Lambda^{\alpha,\hat{\varepsilon}}\) to equal the projection onto the orthogonal complement of the span of the vectors \(\{|\psi_{x,\alpha,\hat{\varepsilon}}\rangle\}_{x\in X}\), and set \(\Pi_{x}=I-\sum_{j\in[n]}|j\rangle\!\langle j|\otimes|\mu_{x_{j}}\rangle\! \langle\mu_{x_{j}}|\otimes I_{\mathbb{C}^{n}}\). Finally, we set \(\mathcal{U}(\mathcal{P},x,\alpha,\hat{\varepsilon})=(2\Pi_{x}-I)(2\Lambda^{ \alpha,\hat{\varepsilon}}-I)\). The reflection \(2\Pi_{x}-I\) can be implemented with two applications of \(O_{x}\)[22], and the reflection \((2\Lambda^{\alpha,\hat{\varepsilon}}-I)\) is independent of \(x\) and so requires no queries. As with function decision, the time and query complexity of the algorithm is dominated by the number of applications of \(\mathcal{U}(\mathcal{P},x,\alpha,\hat{\varepsilon})\). If \(T_{U}\) is the time required to implement \(\mathcal{U}(\mathcal{P},x,\alpha,\hat{\varepsilon})\), then the time complexity of our algorithm is simply the query complexity times \(T_{U}\). 
```
Input : Converting vector set \(\mathscr{P}\) from \(\rho\) to \(\sigma\), failure probability \(\delta<1/3\), error \(\varepsilon\), oracle \(O_{x}\), initial state \(|\rho_{x}\rangle\)
Output: \(|\tilde{\sigma}_{x}\rangle\) such that \(\||\tilde{\sigma}_{x}\rangle-|1\rangle|\sigma_{x}\rangle|0\rangle\|\leq\varepsilon\)
/* Probing Stage */
1   \(\hat{\varepsilon}\leftarrow\varepsilon^{2}/36\)
2   \(T\leftarrow\lceil\log\left(W_{+}(\mathscr{P})W_{-}(\mathscr{P})\right)\rceil\)
3   for \(i=0\) to \(T\) do
4       \(\delta_{i}\leftarrow 2^{\log\delta-T+i-1}\)
5       for \(\mathscr{P}^{\prime}\in\{\mathscr{P},\mathscr{P}^{\dagger}\}\) do
6           \(\alpha\leftarrow 2^{i}/W_{-}(\mathscr{P}^{\prime})\)
7           \(\mathcal{A}\leftarrow D(\mathcal{U}(\mathscr{P}^{\prime},x,\alpha,\hat{\varepsilon}))\) (Lemma 1) to precision \(\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}(\mathscr{P}^{\prime})}\) and accuracy \(\hat{\varepsilon}^{2}\)
8           \(\hat{a}\leftarrow\) Amplitude Estimation (Lemma 3) of the probability of outcome \(|0\rangle_{B}\) in register \(B\) when \(\mathcal{A}\) acts on \((|0\rangle|\rho_{x}\rangle)_{A}|0\rangle_{B}\), to additive error \(\hat{\varepsilon}/4\) with probability of failure \(\delta_{i}\)
9           if \(\hat{a}-1/2>-\frac{11}{4}\hat{\varepsilon}\) then Continue to State Conversion Stage
/* State Conversion Stage */
10  Apply \(R(\mathcal{U}(\mathscr{P}^{\prime},x,\alpha,\hat{\varepsilon}))\) (Lemma 2) with precision \(\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}(\mathscr{P}^{\prime})}\) and accuracy \(\hat{\varepsilon}^{2}\) to \((|0\rangle|\rho_{x}\rangle)_{A}|0\rangle_{B}\) and return the result
```
**Algorithm 2**

**High level idea of Alg.
2**: when we apply Phase Reflection of \(\mathcal{U}(\mathscr{P}^{\prime},x,\alpha,\hat{\varepsilon})\) (for \(\mathscr{P}^{\prime}\in\{\mathscr{P},\mathscr{P}^{\dagger}\}\)) in Line 10 to \((|0\rangle|\rho_{x}\rangle)_{A}|0\rangle_{B}=\frac{1}{\sqrt{2}}(|t_{x+}\rangle_{A}| 0\rangle_{B}+|t_{x-}\rangle_{A}|0\rangle_{B})\), we want \(\frac{1}{\sqrt{2}}|t_{x+}\rangle_{A}|0\rangle_{B}\) to pick up a \(+1\) phase and \(\frac{1}{\sqrt{2}}|t_{x-}\rangle_{A}|0\rangle_{B}\) to pick up a \(-1\) phase. (Note that in this case, half of the amplitude of the state is picking up a \(+1\) phase, and half is picking up a \(-1\) phase.) If this were to happen perfectly, we would have the desired state \((|1\rangle|\sigma_{x}\rangle)_{A}|0\rangle_{B}=\frac{1}{\sqrt{2}}(|t_{x+}\rangle_ {A}|0\rangle_{B}-|t_{x-}\rangle_{A}|0\rangle_{B})\). We show that if \(\alpha\) is larger than a critical value that depends on the witness size of the input \(x\), then in Line 10, we will mostly pick up the desired phase. However, we don't know ahead of time how large \(\alpha\) should be. To determine this, we implement the Probing Stage (Lines 1-9), which uses Amplitude Estimation of a Phase Checking subroutine to test exponentially increasing values of \(\alpha\). We use the following two Lemmas (Lemma 17 and Lemma 18) to analyze Alg. 2 and prove Theorem 16: **Lemma 17**.: _For a converting vector set \(\mathscr{P}\) that coverts \(\rho\) to \(\sigma\), and Phase Checking of \(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon})\) done with accuracy \(\hat{\varepsilon}^{2}\) and precision \(\Theta=\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}(\mathscr{P})}\), then_ 1. _If_ \(\alpha\geq w_{+}(\mathscr{P},x)\)_, then_ \(\|\Pi_{0}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))|t_{x+}\rangle|0 \rangle\|^{2}>1-\hat{\varepsilon}\)_._ 2. _If_ \(\alpha\geq 1/W_{-}(\mathscr{P})\)_, then_ \(\|P_{\Theta}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))|t_{x-} \rangle\|^{2}\leq\frac{\hat{\varepsilon}^{2}}{2}\) _and_ \(\|\Pi_{0}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))|t_{x-}\rangle| 0\rangle\|\leq\hat{\varepsilon}(1+1/\sqrt{2})\)_._ Lemma 17 Item 2 ensures that the \(|t_{x-}\rangle\) part of the state mostly picks up a \(-1\) phase when we apply Phase Reflection regardless of the value of \(\alpha\), and Lemma 17 Item 1 ensures that when \(\alpha\) is large enough, the \(|t_{x+}\rangle\) part of the state mostly picks up a \(+1\) phase. Lemma 17 plays a similar role in state conversion to Lemma 15 in function decision. It shows us that the behavior of the algorithm changes at some point when \(\alpha\) is large enough, without our having to know \(\alpha\) ahead of time (Item 1) but it also is used to show that we don't terminate early when we shouldn't, leading to an incorrect outcome (Item 2). Proof of Lemma 17.: Throughout this proof, \(P_{\Theta}\) and \(\Pi_{0}\) are shorthand for \(P_{\Theta}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))\) and \(\Pi_{0}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))\), with Phase Checking done to precision \(\Theta=\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}(\mathscr{P})}\) and accuracy \(\hat{\varepsilon}^{2}\). _Part 1:_ We first prove that \(\|P_{0}|t_{x+}\rangle\|^{2}\geq 1-\hat{\varepsilon}\), which by Lemma 1 gives us a bound on \(\|\Pi_{0}|t_{x+}\rangle\|^{2}\). 
Following [22, Claim 4.4], we consider the state \[|\varphi\rangle=|t_{x+}\rangle+\frac{\sqrt{\hat{\varepsilon}}}{2\sqrt{\alpha }}\frac{2(k-1)}{k}\sum_{j\in[n]}|j\rangle|\nu_{x_{j}}\rangle|v_{xj}\rangle, \tag{33}\] where \(|\nu_{x_{j}}\rangle\) is from Eq. (31). Note that for \(|\psi_{y,\alpha,\hat{\varepsilon}}\rangle\) for all \(y\in X\), because \(\langle t_{y-}|t_{x+}\rangle=\frac{1}{2}\left(\langle\rho_{y}|\rho_{x}\rangle -\langle\sigma_{y}|\sigma_{x}\rangle\right)\), and also \(\sum_{j:x_{j}\neq y_{j}}\langle u_{yj}|v_{xj}\rangle=\langle\rho_{y}|\rho_{x} \rangle-\langle\sigma_{y}|\sigma_{x}\rangle\) from the constraints of Eq. (4), we have \[\langle\psi_{y,\alpha,\hat{\varepsilon}}|\varphi\rangle=\frac{\sqrt{\hat{ \varepsilon}}}{\sqrt{\alpha}}\langle t_{y-}|t_{x+}\rangle-\frac{\sqrt{\hat{ \varepsilon}}}{2\sqrt{\alpha}}\sum_{j:x_{j}\neq y_{j}}\langle u_{yj}|v_{xj} \rangle=0. \tag{34}\] Because \(|\varphi\rangle\) is orthogonal to all of the \(|\psi_{y,\alpha,\hat{\varepsilon}}\rangle\), we have \(\Lambda^{\alpha,\hat{\varepsilon}}|\varphi\rangle=|\varphi\rangle\). Also, \(\Pi_{x}|\varphi\rangle=|\varphi\rangle\) since \(\Pi_{x}|t_{x+}\rangle=|t_{x+}\rangle\) and \(\langle\mu_{x_{j}}|\nu_{x_{j}}\rangle=0\) for every \(j\). Thus \(P_{0}|\varphi\rangle=|\varphi\rangle\). Note \[\langle\varphi|\varphi\rangle=1+\frac{\hat{\varepsilon}}{4\alpha}\frac{4(k-1)^{ 2}}{k^{2}}w_{+}(\mathscr{P},x)<1+\hat{\varepsilon}, \tag{35}\] because of our assumption that \(\alpha\geq w_{+}(\mathscr{P},x)\). Also, \(\langle t_{x+}|\varphi\rangle=1\), so \[\|P_{0}|t_{x+}\rangle\|^{2}\geq\||\varphi\rangle\langle\varphi||t_{x+}\rangle \|^{2}/\||\varphi\rangle\|^{4}\geq\frac{1}{1+\hat{\varepsilon}}>1-\hat{ \varepsilon}. \tag{36}\] Then by Lemma 1, we have \(\|P_{0}|t_{x+}\rangle\|^{2}\leq\|\Pi_{0}|t_{x+}\rangle|0\rangle\|^{2}\), so \[\|\Pi_{0}|t_{x+}\rangle|0\rangle\|^{2}>1-\hat{\varepsilon}. \tag{37}\] _Part 2:_ Let \(|w\rangle=\sqrt{\frac{\alpha}{\hat{\varepsilon}}}|\psi_{x,\alpha,\hat{ \varepsilon}}\rangle\), so \(\Lambda^{\alpha,\hat{\varepsilon}}|w\rangle=0\) and \(\Pi_{x}|w\rangle=|t_{x-}\rangle\). Applying Lemma 4, we have \[\|P_{\Theta}|t_{x-}\rangle\|^{2}=\|P_{\Theta}\Pi_{x}|w\rangle\|^{2}\leq\frac{ \Theta^{2}}{4}\||w\rangle\|^{2}. \tag{38}\] Now \[\||w\rangle\|^{2}=\frac{\alpha}{\hat{\varepsilon}}\left(\frac{\hat{ \varepsilon}}{\alpha}\langle t_{x-}|t_{x-}\rangle+\Sigma_{j}\langle u_{xj}|u_{ xj}\rangle\right)\leq 1+\frac{\alpha W_{-}}{\hat{\varepsilon}}. \tag{39}\] Combining Eqs. (38) and (39), and setting \(\Theta=\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}}\), we have that \[\|P_{\Theta}|t_{x-}\rangle\|^{2}\leq\frac{\hat{\varepsilon}^{3}}{4\alpha W_{- }}\left(1+\frac{\alpha W_{-}}{\hat{\varepsilon}}\right)\leq\frac{\hat{ \varepsilon}^{2}}{2}, \tag{40}\] where we have used our assumption from the statement of the lemma that \(\alpha\geq 1/W_{-}\). We next bound \(\|\Pi_{0}|t_{x-}\rangle)|0\rangle\|. Inserting the identity operator \(I=P_{\Theta}+\overline{P}_{\Theta}\) for \(\Theta=\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}}\), we have \[\|\Pi_{0}|t_{x-}\rangle|0\rangle\| =\left\|\Pi_{0}\left(\left(P_{\Theta}+\overline{P}_{\Theta} \right)|t_{x-}\rangle\right)|0\rangle\right\| \tag{41}\] \[\leq\|P_{\Theta}|t_{x-}\rangle\|+\|\Pi_{0}\left(\overline{P}_{ \Theta}|t_{x-}\rangle\right)|0\rangle\|\] (42) \[\leq\frac{\hat{\varepsilon}}{\sqrt{2}}+\hat{\varepsilon}. \tag{43}\] where the second line comes from the triangle inequality and the fact that a projector acting on a vector cannot increase its norm. 
The first term in the final line comes from Eq. (40), and the second term comes Lemma 1. The following lemma, Lemma 18, tells us that when we break out of the Probing Stage due to a successful Amplitude Estimation in Line 8, we will convert \(|\rho_{x}\rangle\) to \(|\sigma_{x}\rangle\) with appropriate error in the State Conversion Stage in Line 10, regardless of the value of \(\alpha\) (Item 1 in Lemma 18). However, Lemma 18 also tells us that once \(\alpha\geq w_{+}(\mathscr{P},x)\), then if Amplitude Estimation does not fail, we will exit the Probing Stage (Item 2 in Lemma 18). Together Item 1 and Item 2 ensure that once \(\alpha\) is large enough, the algorithm will be very likely to terminate and correctly produce the output state, but before \(\alpha\) is large enough, if there is some additional structure in the converting vector set that causes our Probing Stage to end early (when \(\alpha<w_{+}(\mathscr{P},x)\)), we will still have a successful result. **Lemma 18**.: _For a converting vector set \(\mathscr{P}\) that converts \(\rho\) to \(\sigma\), and Phase Checking and Phase Reflection of \(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon})\) done with accuracy \(\hat{\varepsilon}^{2}\) and precision \(\Theta=\hat{\varepsilon}^{3/2}/\sqrt{\alpha W_{-}(\mathscr{P})}\) for \(\hat{\varepsilon}\leq 1/9\), and \(\alpha\geq 1/W_{-}(\mathscr{P}),\)_ 1. _If_ \[\|\Pi_{0}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))|0\rangle|\rho_ {x}\rangle_{A}|0\rangle_{B}\|^{2}>\frac{1}{2}-3\hat{\varepsilon},\] (44) _then_ \[\|R(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))(|0\rangle|\rho_{x} \rangle)_{A}|0\rangle_{B}-(|1\rangle|\sigma_{x}\rangle)_{A}|0\rangle_{B}\| \leq 6\sqrt{\hat{\varepsilon}}.\] (45) _._ 2. _If_ \(\alpha\geq w_{+}(\mathscr{P},x)\) _then_ \(\|\Pi_{0}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))(|0\rangle|\rho_{x }\rangle)_{A}|0\rangle_{B}\|^{2}>\frac{1}{2}-\frac{5\hat{\varepsilon}}{2}\)_._ Proof.: For the rest of the proof, we will use \(P_{\Theta}\), \(\Pi_{0}\), and \(R\) as shorthand for \(P_{\Theta}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))\), \(\Pi_{0}(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))\), and \(R(\mathcal{U}(\mathscr{P},x,\alpha,\hat{\varepsilon}))\) respectively. _Part 1:_ We have \[\|R|0\rangle|\rho_{x}\rangle|0\rangle-|1\rangle|\sigma_{x} \rangle|0\rangle\|=\frac{1}{\sqrt{2}}\|R(|t_{x+}\rangle+|t_{x-}\rangle)|0 \rangle-(|t_{x+}\rangle-|t_{x-}\rangle)|0\rangle\|\] \[\leq\frac{1}{\sqrt{2}}\|(R-I)|t_{x+}\rangle|0\rangle\|+\frac{1}{ \sqrt{2}}\|(R+I)|t_{x-}\rangle|0\rangle\|. \tag{46}\] In the first term of Eq. (46), we can replace \(R\) with \(\Pi_{0}-\overline{\Pi}_{0}\) (as described above Lemma 2), and we can insert \(I=\Pi_{0}+\overline{\Pi}_{0}\) to get \[\frac{1}{\sqrt{2}}\|(R-I)|t_{x+}\rangle|0\rangle\|=\frac{1}{\sqrt{2}}\|((\Pi_{ 0}-\overline{\Pi}_{0})-I)(\Pi_{0}+\overline{\Pi}_{0})|t_{x+}\rangle|0\rangle \|=\frac{2}{\sqrt{2}}\|\overline{\Pi}_{0}|t_{x+}\rangle|0\rangle\|, \tag{47}\] where we have used the fact that \(\Pi_{0}\) and \(\overline{\Pi}_{0}\) are orthogonal. To bound \(\frac{2}{\sqrt{2}}\|\overline{\Pi}_{0}|t_{x+}\rangle|0\rangle\|\), we start from our assumption that \(\|\Pi_{0}(|0\rangle|\rho_{x}\rangle)_{A}|0\rangle_{B}\|^{2}\geq 1/2-3\hat{\varepsilon}\). 
Writing \(|0\rangle|\rho_{x}\rangle\) in terms of \(|t_{x+}\rangle\) and \(|t_{x-}\rangle\), and using the triangle inequality, we have \[\frac{1}{2}-3\hat{\varepsilon}\leq\frac{1}{2}\|\Pi_{0}(|t_{x+}\rangle+|t_{x-}\rangle)|0\rangle\|^{2}\leq\frac{1}{2}\left(\|\Pi_{0}|t_{x+}\rangle|0\rangle\|+\|\Pi_{0}|t_{x-}\rangle|0\rangle\|\right)^{2}. \tag{48}\] From Lemma 17 Item 2, we have \(\|\Pi_{0}|t_{x-}\rangle|0\rangle\|<2\hat{\varepsilon}\), so plugging into Eq. (48), we have \[1-6\hat{\varepsilon}\leq\left(\|\Pi_{0}|t_{x+}\rangle|0\rangle\|+2\hat{\varepsilon}\right)^{2}. \tag{49}\] Rearranging, we find: \[(\sqrt{1-6\hat{\varepsilon}}-2\hat{\varepsilon})^{2}\leq\|\Pi_{0}|t_{x+}\rangle|0\rangle\|^{2}. \tag{50}\] Since \(\|\Pi_{0}|t_{x+}\rangle|0\rangle\|^{2}+\|\overline{\Pi}_{0}|t_{x+}\rangle|0\rangle\|^{2}=1\), we have \[\|\overline{\Pi}_{0}|t_{x+}\rangle|0\rangle\|^{2}\leq 1-(\sqrt{1-6\hat{\varepsilon}}-2\hat{\varepsilon})^{2}<10\hat{\varepsilon}. \tag{51}\] Plugging back into Eq. (47), we find \[\frac{1}{\sqrt{2}}\|(R-I)|t_{x+}\rangle|0\rangle\|=\frac{2}{\sqrt{2}}\|\overline{\Pi}_{0}|t_{x+}\rangle|0\rangle\|\leq 2\sqrt{5\hat{\varepsilon}}. \tag{52}\] In the second term of Eq. (46), we again replace \(R\) with \(\Pi_{0}-\overline{\Pi}_{0}\) and replace \(I\) with \(\Pi_{0}+\overline{\Pi}_{0}\) to get \[\frac{1}{\sqrt{2}}\|(R+I)|t_{x-}\rangle|0\rangle\|=\frac{1}{\sqrt{2}}\|(\Pi_{0}-\overline{\Pi}_{0}+\Pi_{0}+\overline{\Pi}_{0})|t_{x-}\rangle|0\rangle\|=\frac{2}{\sqrt{2}}\|\Pi_{0}|t_{x-}\rangle|0\rangle\|\leq\hat{\varepsilon}(1+1/\sqrt{2}) \tag{53}\] by Lemma 17 Item 2, where we have used our assumption that \(\alpha\geq 1/W_{-}(\mathscr{P})\). Combining Eqs. (52) and (53), plugging into Eq. (46), and using that \(\hat{\varepsilon}\leq 1/9\), we have \[\|R|0\rangle|\rho_{x}\rangle|0\rangle-|1\rangle|\sigma_{x}\rangle|0\rangle\|\leq 2\sqrt{5\hat{\varepsilon}}+\hat{\varepsilon}(1+1/\sqrt{2})<6\sqrt{\hat{\varepsilon}}. \tag{54}\] _Part 2:_ We analyze \(\|\Pi_{0}|0\rangle|\rho_{x}\rangle|0\rangle\|\). Using the triangle inequality, we have \[\|\Pi_{0}|0\rangle|\rho_{x}\rangle|0\rangle\|\geq\frac{1}{\sqrt{2}}\left(\|\Pi_{0}|t_{x+}\rangle|0\rangle\|-\|\Pi_{0}|t_{x-}\rangle|0\rangle\|\right). \tag{55}\] The first term we bound using Lemma 17 Item 1 and the second with Lemma 17 Item 2 to give us \[\|\Pi_{0}|0\rangle|\rho_{x}\rangle|0\rangle\|\geq\frac{1}{\sqrt{2}}\left(\sqrt{1-\hat{\varepsilon}}-\hat{\varepsilon}\left(1+\frac{1}{\sqrt{2}}\right)\right) \tag{56}\] Squaring both sides and using a series expansion, we find \[\|\Pi_{0}|0\rangle|\rho_{x}\rangle|0\rangle\|^{2}>\frac{1}{2}-\left(\frac{3}{2}+\frac{1}{\sqrt{2}}\right)\hat{\varepsilon}>\frac{1}{2}-\frac{5}{2}\hat{\varepsilon}. \tag{57}\] With Lemma 17 and Lemma 18, we can now proceed to the proof of Theorem 16: Proof of Theorem 16.: We analyze Alg. 2. Note that the \(l_{2}\) norm between two quantum states is at most \(2\), so we may assume \(\varepsilon\leq 2\) and hence \(\hat{\varepsilon}\leq 1/9\). First we show that the probability of returning a state \(|\tilde{\sigma}_{x}\rangle\) such that \(\||\tilde{\sigma}_{x}\rangle-|1\rangle|\sigma_{x}\rangle|0\rangle\|\geq\varepsilon\) is at most \(\delta\). We first analyze the case that \(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})\leq w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})=w_{+}(\mathscr{P}^{\dagger},x)W_{-}(\mathscr{P}^{\dagger})\). Let \(i^{*}=\lceil\log(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P}))\rceil\) and \(T=\lceil\log W_{+}(\mathscr{P})W_{-}(\mathscr{P})\rceil\).
Notice that once \(i\geq i^{*}\), we have \(\alpha\geq w_{+}(\mathscr{P},x)\), so by Lemma 18 Item 2 we have \(\|\Pi_{0}(|0\rangle|\rho_{x}\rangle)_{A}|0\rangle_{B}\|^{2}\geq\frac{1}{2}- \frac{5\hat{\varepsilon}}{2}\). Thus when we do Amplitude Estimation in Line 8 of Alg. 2 to additive error \(\hat{\varepsilon}/4\), with probability \(1-\delta_{i}\) we will find the probability of outcome \(|0\rangle_{B}\) will be at least \(1/2-11\hat{\varepsilon}/4\), causing us to continue to the State Conversion Stage. Furthermore, by combining Lemma 18 Item 1 and Item 2, our algorithm is guaranteed to output the target state within error \(6\sqrt{\hat{\varepsilon}}=\varepsilon\) regardless of an error in Amplitude Estimation. Therefore, the algorithm can only return a wrong state before round \(i^{*}\), and only if Amplitude Estimation fails. Thus, we calculate the probability of error as: \[Pr(error)=\sum_{i=0}^{i^{*}-1}\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)\cdot Pr (err\ i) \tag{58}\] where \(Pr(cont\ i)\) is the probability of continuing to the next round of the **for** loop at round \(i\), and \(Pr(err\ i)\) is the probability of a failure of Amplitude Estimation at round \(i\), both conditioned on reaching round \(i\). We upper bound \(Pr(cont\ j)\) by \(1\) and \(Pr(err\ i)\) by \(2\delta_{i}\), as \(\delta_{i}\) is the probability of Amplitude Estimation failure in Line 8, and we do two rounds (for \(\mathscr{P}\) and \(\mathscr{P}^{\dagger}\)). This then gives us: \[Pr(error) \leq\sum_{i=0}^{i^{*}-1}2\delta_{i}\] \[=\sum_{i=0}^{i^{*}-1}2\delta\cdot 2^{-T+i-1}\] \[\leq\delta, \tag{59}\] where we have used our choice of \(\delta_{i}\) and that \(i^{*}\leq T\). Thus the probability of error is bounded by \(\delta\). Now we analyze the average query complexity. Let \(Q(i)\) be the query complexity of the algorithm when it exits the Probing Stage at round \(i\). Then the average query complexity on input \(x\) is \[\mathbb{E}[Q_{x}]=\sum_{i=0}^{T}\left(\prod_{j=0}^{i-1}Pr(cont\ j) \right)(1-Pr(cont\ i))\,Q(i), \tag{60}\] where \(\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)(1-Pr(cont\ i))\) is the probability of terminating at round \(i\). At the \(i\)th round of the Probing Stage, we implement Phase Checking with precision \(\hat{\varepsilon}^{3/2}/\sqrt{2^{i}}\) and accuracy \(\hat{\varepsilon}^{2}\), which uses \(O\left(\frac{\sqrt{2^{i}}}{\hat{\varepsilon}^{3/2}}\log\left(\frac{1}{\hat{ \varepsilon}}\right)\right)\) queries for a single iteration. By Lemma 3, we use \(O\left(\frac{1}{\hat{\varepsilon}}\log\left(\frac{1}{\hat{\varepsilon}}\log \left(\frac{1}{\hat{\varepsilon}}\right)\right)\right)\) applications of Phase Checking inside the Iterative Amplitude Estimation subroutine to reach success probability of at least \(1-\delta_{i}\) with error \(\hat{\varepsilon}/4\). Therefore, \[Q(i) =\left(\sum_{j=0}^{i}O\left(\frac{\sqrt{2^{j}}}{\hat{\varepsilon}^ {5/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\log\left(\frac{1}{\delta_{j }}\log\left(\frac{1}{\hat{\varepsilon}}\right)\right)\right)\right)+O\left( \frac{\sqrt{2^{i}}}{\hat{\varepsilon}^{3/2}}\log\left(\frac{1}{\hat{ \varepsilon}}\right)\right)\] \[=\left(\sum_{j=0}^{i}O\left(\frac{\sqrt{2^{j}}}{\hat{\varepsilon} ^{5/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\log\left(\frac{1}{\hat{ \delta}_{j}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\right)\right)\right), \tag{61}\] where the second term in the first line is the query complexity of the State Conversion Stage, so the complexity is dominated by the Probing Stage. 
We divide the analysis into two parts: \(i\leq i^{*}\) and \(i^{*}<i\leq T\). When \(i\leq i^{*}\), we use the trivial bound \(\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)(1-Pr(cont\ i))\leq 1\). Thus, the contribution to the average query complexity from rounds \(i\) with \(i\leq i^{*}\) is at most: \[\sum_{i=0}^{i^{*}}Q(i) \leq\sum_{i=0}^{i^{*}}\sum_{j=0}^{i}O\left(\frac{\sqrt{2^{j}}}{ \hat{\varepsilon}^{5/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\log \left(\frac{1}{\hat{\delta}_{j}}\log\left(\frac{1}{\hat{\varepsilon}}\right) \right)\right) \tag{62}\] \[=\hat{\varepsilon}^{-5/2}\log\left(\frac{1}{\hat{\varepsilon}} \right)\sum_{i=0}^{i^{*}}\sum_{j=0}^{i}O\left(\sqrt{2^{j}}\left(T-j+\log\left( \frac{1}{\hat{\delta}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\right)+1 \right)\right)\] \[=O\left(\frac{\sqrt{w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})}}{ \hat{\varepsilon}^{5/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\log\left( \frac{W_{+}(\mathscr{P})}{w_{+}(\mathscr{P},x)\delta}\log\left(\frac{1}{\hat{ \varepsilon}}\right)\right)\right)\] where we have used the following inequality twice: \[\sum_{j=0}^{i}\sqrt{2^{j}}(\log W-j)\ \leq\ 4\sqrt{2^{i}}(\log W-i+3). \tag{63}\] For \(i\) from \(i^{*}+1\) to \(T\), as discussed below Eq. (57), Amplitude Estimation in Line 8 should produce an estimate that triggers breaking out of the Probing Stage at Line 9. Thus the probability of continuing to the next iteration depends on Amplitude Estimation failing, which happens with probability \(\delta_{i}\leq\delta\leq\frac{1}{2\sqrt{2}}\), since \(\delta\leq 1/3\). Using \((1-Pr(cont\ i))\leq 1\), we thus have \[\left(\prod_{j=0}^{i-1}Pr(cont\ j)\right)(1-Pr(cont\ i))\leq\left( \frac{1}{2\sqrt{2}}\right)^{i-i^{*}}. \tag{64}\] The contribution to the average query complexity for rounds after \(i^{*}\) is therefore \[\sum_{i=i^{*}+1}^{T}\left(\frac{1}{2\sqrt{2}}\right)^{i-i^{*}}Q(i) =\sum_{i=i^{*}+1}^{T}\left(\frac{1}{2\sqrt{2}}\right)^{i-i^{*}}\sum _{j=0}^{i}O\left(\frac{\sqrt{2j}}{\hat{\varepsilon}^{5/2}}\log\left(\frac{1}{ \hat{\varepsilon}}\right)\log\left(\frac{1}{\hat{\delta}_{j}}\log\left(\frac{1} {\hat{\varepsilon}}\right)\right)\right)\] \[=O\left(\frac{\sqrt{w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})}}{\hat {\varepsilon}^{5/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\log\left(\frac {W_{+}(\mathscr{P})}{w_{+}(\mathscr{P},x)\delta}\log\left(\frac{1}{\hat{ \varepsilon}}\right)\right)\right) \tag{65}\] where we have used Eq. (63) and Eq. (24). Combining Eqs. (62) and (65) and replacing \(\hat{\varepsilon}\) with \(\varepsilon\), the average query complexity of the algorithm on input \(x\) is \[\mathbb{E}[Q_{x}]=O\left(\frac{\sqrt{w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})}} {\varepsilon^{5}}\log\left(\frac{1}{\varepsilon}\right)\log\left(\frac{W_{+}( \mathscr{P})}{w_{+}(\mathscr{P},x)\delta}\log\left(\frac{1}{\varepsilon} \right)\right)\right). \tag{66}\] When \(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})\geq w_{-}(\mathscr{P},x)W_{+}(\mathscr{ P})=w_{+}(\mathscr{P}^{\dagger},x)W_{-}(\mathscr{P}^{\dagger})\), using the same analysis but with \(\mathscr{P}^{\dagger}\) and applying Lemma 8, we find \[\mathbb{E}[Q_{x}]=O\left(\frac{\sqrt{w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})}} {\varepsilon^{5}}\log\left(\frac{1}{\varepsilon}\right)\log\left(\frac{W_{-}( \mathscr{P})}{w_{-}(\mathscr{P},x)\delta}\log\left(\frac{1}{\varepsilon} \right)\right)\right). 
\tag{67}\]

### Function Evaluation with Fast Verification

The state conversion algorithm can be used to evaluate a discrete function \(f:X\to[m]\) for \(X\subseteq[q]^{n}\) on input \(x\) by converting from \(|\rho_{x}\rangle=|0\rangle\) to \(|\sigma_{x}\rangle=|f(x)\rangle\) and then measuring in the standard basis to learn \(f(x)\). When the correctness of \(f(x)\) can be verified with an additional constant number of queries, we can modify our state conversion algorithm to remove the Probing Stage, and instead use the correctness verification of the output state as a test of whether the algorithm is complete. In this case, we can remove a log factor from the complexity:

**Theorem 19**.: _For a function \(f:X\to[m]\), such that \(f(x)\) can be verified without error using at most a constant number of additional queries to \(O_{x}\), given a converting vector set \(\mathscr{P}\) from \(\rho=\{|0\rangle\}_{x\in X}\) to \(\sigma=\{|f(x)\rangle\}_{x\in X}\) and \(\delta<2^{-1/2}\), then there is a quantum algorithm that correctly evaluates \(f\) with probability at least \(1-\delta\) and uses_ \[O\left(\frac{\sqrt{\min\{w_{+}(\mathscr{P},x)W_{-}(\mathscr{P}),w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})\}}}{\delta^{3/2}}\log\left(\frac{1}{\delta}\right)\right) \tag{68}\] _average queries on input \(x\)._

While removing a log factor might seem inconsequential, it yields an exponential quantum advantage in the next section for some applications, as opposed to only a superpolynomial advantage.

Proof of Theorem 19.: We analyze Alg. 3, which is similar to Alg. 2, but with the Probing Stage replaced by a post-State Conversion verification procedure. We first analyze the case that \(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})\leq w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})=w_{+}(\mathscr{P}^{\dagger},x)W_{-}(\mathscr{P}^{\dagger})\). From Lemma 18, we have that when \(\alpha\geq w_{+}(\mathscr{P},x)\), then the output state \(|\psi\rangle\) of Line 7 of Alg. 3 satisfies \[\||\psi\rangle-|1\rangle|f(x)\rangle|0\rangle\|\leq 6\sqrt{\hat{\varepsilon}}=\sqrt{\delta}. \tag{69}\] This gives us \[1-\frac{\delta}{2}<\operatorname{Re}\left(\langle\psi|\left(|1\rangle|f(x)\rangle|0\rangle\right)\right)\leq\left|\langle\psi|\left(|1\rangle|f(x)\rangle|0\rangle\right)\right|\,. \tag{70}\] Taking the square of both sides and using that \(1-\delta\leq\left(1-\frac{\delta}{2}\right)^{2}\), we have \[1-\delta<\left|\langle\psi|\left(|1\rangle|f(x)\rangle|0\rangle\right)\right|^{2}\,. \tag{71}\] Since \(|1\rangle|f(x)\rangle|0\rangle\) is a standard basis state, if we measure \(|\psi\rangle\) in the standard basis, this implies that the probability that we measure the second register to be \(|f(x)\rangle\) is at least \(1-\delta\). Once we measure \(f(x)\), we can verify it with certainty using a constant number of additional queries. Thus, our success probability is at least \(1-\delta\) at a single round when \(\alpha\geq w_{+}\) (which we only reach if we haven't already correctly evaluated \(f(x)\)), and so our overall probability of success must be at least \(1-\delta\) (from Line 1 of Alg. 3). This is because further rounds (if they happen) will only increase our probability of success. To calculate the average query complexity, we note that the \(i^{\text{th}}\) round uses \[O\left(\frac{2^{i/2}}{\hat{\varepsilon}^{3/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)+1\right) \tag{72}\] queries, where the \(1\) is from the verification step, which we henceforward absorb into the big-Oh notation.
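The control flow just described — drop the Probing Stage, run State Conversion with an exponentially growing guess for \(w_{+}(\mathscr{P},x)\), and stop as soon as the measured candidate verifies — can be summarised by the classical skeleton below. It is only a sketch: `state_convert_and_measure` and `verify` are placeholders for the quantum subroutines of Alg. 3 and the assumed zero-error verifier, the relation \(\hat{\varepsilon}=\delta/36\) is read off from Eq. (69), and the initial guess and cutoff `T` are our own choices for the illustration.

```python
import math

def evaluate_with_verification(delta, W_plus, state_convert_and_measure, verify):
    """Classical skeleton of the guess-and-verify loop sketched in the text.

    state_convert_and_measure(alpha, eps_hat) -- placeholder for running State
        Conversion with guess alpha and measuring a candidate value; its query
        cost in round i is given by Eq. (72).
    verify(candidate) -- placeholder for the assumed zero-error verifier, which
        uses a constant number of extra queries and never accepts a wrong value.
    """
    eps_hat = delta / 36                         # so that 6*sqrt(eps_hat) = sqrt(delta), Eq. (69)
    T = math.ceil(math.log2(W_plus / eps_hat))   # our choice: the last guess reaches W_+
    alpha = eps_hat                              # our choice of initial guess
    for _ in range(T + 1):
        candidate = state_convert_and_measure(alpha, eps_hat)
        if verify(candidate):                    # succeeds w.p. >= 1 - delta once alpha >= w_+
            return candidate
        alpha *= 2                               # double the guess and try again
    return None                                  # reached only with probability < delta
```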
We make a worst case assumption that the probability of measuring outcome \(|f(x)\rangle\) in a round when \(\alpha<w_{+}(\mathscr{P},x)\), or equivalently, at a round \(i\) when \(i<i^{*}\), is \(0\), so these rounds contribute \[\sum_{i=0}^{i^{*}-1}O\left(\frac{2^{i/2}}{\hat{\varepsilon}^{3/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\right)=O\left(\frac{2^{i^{*}/2}}{\hat{\varepsilon}^{3/2}}\log\left(\frac{1}{\hat{\varepsilon}}\right)\right)=O\left(\frac{2^{i^{*}/2}}{\delta^{3/2}}\log\left(\frac{1}{\delta}\right)\right) \tag{73}\] queries to the average query complexity, where in the last equality, we've replaced \(\hat{\varepsilon}\) with \(\delta\). At each additional round \(i\) for \(i\geq i^{*}\), we have a \(1-\delta\) probability of successfully returning \(f(x)\), conditioned on reaching that round, and \(\delta\) probability of continuing to the next round. This gives us an average query complexity on input \(x\) of \[\sum_{i=i^{*}}^{T}\left(1-\delta\right)\delta^{\,i-i^{*}}\,O\left(\frac{2^{i/2}}{\delta^{3/2}}\log\left(\frac{1}{\delta}\right)\right)=O\left(\frac{2^{i^{*}/2}}{\delta^{3/2}}\log\left(\frac{1}{\delta}\right)\sum_{i=i^{*}}^{T}\left(\sqrt{2}\,\delta\right)^{i-i^{*}}\right). \tag{74}\] By our assumption that \(\delta<2^{-1/2}\), the summation is bounded by a constant. Thus the average query complexity on input \(x\) is \[\mathbb{E}[Q_{x}]=O\left(\frac{2^{i^{*}/2}}{\delta^{3/2}}\log\left(\frac{1}{\delta}\right)\right)=O\left(\frac{\sqrt{w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})}}{\delta^{3/2}}\log\left(\frac{1}{\delta}\right)\right). \tag{75}\] When \(w_{+}(\mathscr{P},x)W_{-}(\mathscr{P})>w_{-}(\mathscr{P},x)W_{+}(\mathscr{P})=w_{+}(\mathscr{P}^{\dagger},x)W_{-}(\mathscr{P}^{\dagger})\), we get the same expression except with \(\mathscr{P}\) replaced by \(\mathscr{P}^{\dagger}\), which by Lemma 13 gives us the claimed query complexity.

### Quantum Advantages for Decision Trees with Advice

Montanaro showed that when searching for a single marked item, if there is a power law distribution on the location of the item, then a quantum algorithm can achieve a (super)exponential speed-up in average query complexity over the best classical algorithm [24]. He called this "searching with advice," as in order to achieve the best separations between quantum and classical performance, the algorithm had to know an ordering of the inputs such that the probability of finding the marked item was non-increasing, the "advice." In this section, we generalize Montanaro's result to decision tree algorithms, and use this generalization to prove a superpolynomial and exponential speed-up for several additional search problems. We use a decision tree construction similar to that of Beigi and Taghavi [3]. A classical, deterministic query algorithm that evaluates \(f:X\rightarrow[m]\) for \(X\subseteq[q]^{n}\) is given access to an oracle \(O_{x}\), for \(x\in X\), and uses a single query to learn \(x_{i}\), the \(i^{\text{th}}\) bit of \(x\). We can describe the sequence of queries this classical algorithm makes as a directed tree \(\mathcal{T}\), a decision tree, with vertex set \(V(\mathcal{T})\) and directed edge set \(E(\mathcal{T})\). Each non-leaf vertex \(v\) of \(V(\mathcal{T})\) is associated with an index \(J(v)\in[n]\), which is the index of \(x\) that is queried when the algorithm reaches that vertex. The algorithm follows the edge labeled by \(x_{J(v)}\) (the query result) from \(v\) to another vertex in \(V(\mathcal{T})\).
Each leaf is labelled by an element of \([m]\), which is the value that the algorithm outputs if it reaches that leaf. Let \(path(\mathcal{T},x)\) be the sequence of edges in \(E(\mathcal{T})\) that are followed on input \(x\) when queries are made starting from the root of \(\mathcal{T}\). We say that \(\mathcal{T}\) decides \(f:X\rightarrow[m]\) if the leaf node on \(path(\mathcal{T},x)\) is labelled by \(f(x)\), for all \(x\in X\). For a non-leaf vertex \(v\in\mathcal{T}\), each edge \((v,v^{\prime})\in E(\mathcal{T})\) is labeled by a subset of \([q]\), that we denote \(Q(v,v^{\prime})\). Then if a vertex \(v\) is visited in \(path(\mathcal{T},x)\), the algorithm chooses to follow the edge \((v,v^{\prime})\) to vertex \(v^{\prime}\), if \(x_{J(v)}\in Q(v,v^{\prime}).\) We require that \(\{Q(v,v^{\prime})\}\) for all edges \((v,v^{\prime})\) leaving vertex \(v\) form a partition of \([q]\), so that there is always exactly one edge that the algorithm can choose to follow based on the result of the query at vertex \(v\). To create a quantum algorithm from such a decision tree \(\mathcal{T}\), we label each edge \(e\in E(\mathcal{T})\) with a weight \(r(e)\in\mathbb{R}^{+}\) and a color \(c(e)\in\{red,black\}\), such that all edges coming out of a vertex \(v\) with the same color have the same weight. There must be exactly one edge leaving each non-leaf vertex that is black, and the rest must be red. We denote by \(r(v,black)\) the weight of the black edge leaving \(v\), and \(r(v,red)\) the weight of any red edge(s) leaving \(v\). If there are no red edges leaving \(v\), we set \(r(v,red)=\infty.\) (In Ref. [3], the red and black weights are the same throughout the entire tree, instead of being allowed to depend on \(v\).) Using these weights we design a converting vector set to decide \(f\): **Lemma 20**.: _Given a decision tree \(\mathcal{T}\) that decides a function \(f:X\rightarrow[m]\), for \(X\in[q]^{n}\), with weights \(r(e)\in\mathbb{R}^{+}\) for each edge \(e\in E(\mathcal{T})\), then there is a converting vector set \(\mathscr{P}\) that on input \(x\in X\) converts \(|\rho_{x}\rangle=|0\rangle\) to \(|\sigma_{x}\rangle=|f(x)\rangle\) such that_ \[w_{+}(\mathscr{P},x) \leq\sum_{e\in path(\mathcal{T},x)}4r(e),\] \[w_{-}(\mathscr{P},x) \leq\sum_{(v,u)\in path(\mathcal{T},x)}\frac{4}{r(v,red)}+\sum_ {\begin{subarray}{c}(v,u)\in path(\mathcal{T},x)\\ c(v,u)=red\end{subarray}}\frac{4}{r(v,black)}. \tag{76}\] Proof.: In this proof \(|u_{xj}\rangle\) and \(|v_{yj}\rangle\), with double subscripts, refer to converting vector sets, and \(u\), \(v\) with single or no subscripts refer to vertices. We use essentially the same construction as in Ref. [3], but with a slightly different analysis because of our generalization to weights that can change throughout the tree. We will make use of the unit vectors \(\{|\tilde{\mu}_{i,d}\rangle\}_{i\in[d]}\) and \(\{|\tilde{\nu}_{i,d}\rangle\}_{i\in[d]}\), defined in [3], which are scaled versions of the vectors in Eq. (31), which have the properties that \(\forall i\in[d],\left\|\tilde{\mu}_{i,d}\right\rangle\right\|^{2},\left\| \tilde{\nu}_{i,d}\right\rangle\right\|^{2}\leq 2\) and \(\langle\tilde{\mu}_{i,d}|\tilde{\nu}_{j,d}\rangle=1-\delta_{i,j}\). First note that we can assume that on any input \(x\in X\), for any index \(j\), there is at most a single vertex on in \(path(\mathcal{T},x)\) at which \(j\) is queried. Otherwise the tree would query the same index twice, which would be a non-optimal tree. 
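Before giving the construction, it may help to see one concrete family of vectors with the two properties just quoted (squared norms at most \(2\) and \(\langle\tilde{\mu}_{i,d}|\tilde{\nu}_{j,d}\rangle=1-\delta_{i,j}\)); the choice below is a standard one and is not claimed to be the exact scaling used in Eq. (31) or in Ref. [3].

```python
import numpy as np

def mu(i, d):
    """|mu_i> = |0> + |i> in R^(d+1); coordinate 0 is an extra direction."""
    v = np.zeros(d + 1)
    v[0], v[i] = 1.0, 1.0
    return v

def nu(j, d):
    """|nu_j> = |0> - |j> in R^(d+1)."""
    v = np.zeros(d + 1)
    v[0], v[j] = 1.0, -1.0
    return v

d = 6
for i in range(1, d + 1):
    assert np.isclose(mu(i, d) @ mu(i, d), 2.0)   # squared norm is 2, so at most 2
    assert np.isclose(nu(i, d) @ nu(i, d), 2.0)
    for j in range(1, d + 1):
        assert np.isclose(mu(i, d) @ nu(j, d), 1.0 - (i == j))  # <mu_i|nu_j> = 1 - delta_ij
print("checked the overlap and norm properties for d =", d)
```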
The we define a converting vector set on \(\mathbb{C}^{|V(\mathcal{T})|}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{|V( \mathcal{T})|}\otimes\mathbb{C}^{m}\) as follows \[|v_{xj}\rangle=\begin{cases}\sqrt{r(v,u)}|v\rangle|c(v,u)\rangle|\tilde{\mu}_ {u,|V(\mathcal{T})|}\rangle|\tilde{\mu}_{f(x),m}\rangle&\text{ if }(v,u)\in path( \mathcal{T},x),\text{ and }J(v)=j\\ 0&\text{ otherwise}\end{cases} \tag{77}\] and \[|u_{xj}\rangle=\begin{cases}\frac{1}{\sqrt{r(v,red)}}|v\rangle|red\rangle| \tilde{\nu}_{u,|V(\mathcal{T})|}\rangle|\tilde{\nu}_{f(x)}\rangle&\text{ if }(v,u)\in path(\mathcal{T},x),J(v)=j,c(v,u)=black\\ |v\rangle\left(\frac{|red\rangle}{\sqrt{r(v,red)}}+\frac{|black\rangle}{\sqrt{ r(v,black)}}\right)|\tilde{\nu}_{u,|V(\mathcal{T})|}\rangle|\tilde{\nu}_{f(x)} \rangle&\text{ if }(v,u)\in path(\mathcal{T},x),J(v)=j,c(v,u)=red\\ 0&\text{ otherwise}\end{cases} \tag{78}\] From Definition 9, Eq. (4), we want that \[\forall x,y\in X,\quad(\rho-\sigma)_{xy}=\sum_{j\in[n]:x_{j}\neq y_{j}}\langle u _{xj}|v_{yj}\rangle. \tag{79}\] For evaluating a discrete function \(f\), we have \(|\rho_{x}\rangle=|0\rangle\), and \(|\sigma_{x}\rangle=|f(x)\rangle\), so \((\rho-\sigma)_{xy}=1-\delta_{f(x),f(y)}\). Now if \(f(x)=f(y)\), then because \(\langle\tilde{\mu}_{f(x),m}|\tilde{\nu}_{f(y),m}\rangle=0\), we have \(\forall j\in[n]\)\(\langle u_{xj}|v_{yj}\rangle=0\), so \(\sum_{j\in[n]:x_{j}\neq y_{j}}\langle u_{xj}|v_{yj}\rangle=0\), as desired. If \(f(x)\neq f(y)\), then there must be some vertex in the decision tree at which \(path(\mathcal{T},x)\) and \(path(\mathcal{T},y)\) diverge. Let's call this vertex \(v^{*}\), and assume that \(J(v^{*})\), the index of the input queried at vertex \(v^{*}\), is \(j^{*}\). Let \((v^{*},u_{1})\) be the edge on \(path(\mathcal{T},x)\) and \((v^{*},u_{2})\) be the edge on \(path(\mathcal{T},y)\). This means that \(x_{j^{*}}\in Q(v^{*},u_{1})\), and \(y_{j^{*}}\in Q(v^{*},u_{2})\) and since \(Q(v^{*},u_{1})\) and \(Q(v^{*},u_{2})\) are part of a partition, this implies \(x_{j^{*}}\neq y_{j^{*}}.\) Then if \(c(v^{*},u_{1})=black\), we must have \(c(v^{*},u_{2})=red\), while if \(c(v^{*},u_{1})=red\), we can have \(c(v^{*},u_{2})\in\{black,red\}\). In either case, we see from Eqs. (77) and (78) that \[\langle u_{xj^{*}}|v_{yj^{*}}\rangle=1. \tag{80}\] Now for all other \(j\in[n]\) with \(j\neq j^{*}\), we have \(\langle u_{xj}|v_{yj}\rangle=0\), which we can prove by looking at the following cases: * There is no vertex in \(\mathcal{T}\) where \(j\) is queried for \(x\) or \(y\), which results in \(|v_{xj}\rangle=0\) or \(|u_{xj}\rangle=0\), and so \(\langle u_{xj}|v_{yj}\rangle=0\). * The index \(j\) is queried for both \(x\) and \(y\) before their paths in \(\mathcal{T}\) diverge, at a vertex \(v\) where the paths for both \(x\) and \(y\) then travel to a vertex \(u\), in which case, \(\langle u_{xj}|v_{yj}\rangle=0\), since \(\langle\tilde{\mu}_{u,|V(\mathcal{T})|}|\tilde{\nu}_{u,|V(\mathcal{T})|} \rangle=0\). * The index \(j\) is queried for both \(x\) and \(y\) after their paths in \(\mathcal{T}\) diverge, at a vertex \(v_{1}\) for \(x\) and a vertex \(v_{2}\) for \(y\), in which case \(\langle u_{xj}|v_{yj}\rangle=0\), since \(\langle v_{1}|v_{2}\rangle=0\). Putting this all together for \(j=j^{*}\) and \(j\neq j^{*}\), we have \[\sum_{j\in[n]:x_{j}\neq y_{j}}\langle u_{xj}|v_{yj}\rangle=1. \tag{81}\] Now to calculate the positive and negative witness sizes. 
For the positive witness size, we have \[w_{+}(\mathscr{P},x) =\sum_{j\in[n]}\||v_{xj}\rangle\|^{2}\] \[=\sum_{(v,u)\in path(\mathcal{T},x)}\left\|\sqrt{r(v,u)}|v \rangle|c(v,u)\rangle|\tilde{\mu}_{u,|V(\mathcal{T})|}\rangle|\tilde{\mu}_{f( x),m}\rangle\right\|^{2}\] \[\leq\sum_{e\in path(\mathcal{T},x)}4r(e), \tag{82}\] where in the second line, we have used Eq. (77) and that \(\mathcal{T}\) will query each index at most once, according to the vertices that are encountered in \(path(\mathcal{T},x)\), and in the final line, we have used that \(\||\tilde{\mu}_{u,|V(\mathcal{T})|}\rangle\|^{2},\||\tilde{\mu}_{f(x),m}\rangle \|^{2}\leq 2\). For the negative witness size, note that again for input \(x\), \(\mathcal{T}\) will query each index of \(x\) at most once, according to the vertices that are encountered in \(path(\mathcal{T})\). Then if index \(j\) of \(x\) is queried at vertex \(v\), where \((v,u)\in path(\mathcal{T},x)\), and \(c(v,u)=black\) then \[\||u_{xj}\rangle\|^{2}\leq 4\frac{1}{r(v,red)} \tag{83}\] while if \(c(v,u)=red\), then \[\||u_{xj}\rangle\|^{2}\leq 4\left(\frac{1}{r(v,red)}+\frac{1}{r(v,black)}\right). \tag{84}\] Thus \[w_{-}(\mathscr{P},x) =\sum_{j\in[n]}\||u_{xj}\rangle\|^{2}\] \[=\sum_{\begin{subarray}{c}(v,u)\in path(\mathcal{T},x)\\ c(v,u)=black\end{subarray}}\frac{4}{r(v,red)}+\sum_{\begin{subarray}{c}(v,u) \in path(\mathcal{T},x)\\ c(v,u)=red\end{subarray}}\left(\frac{4}{r(v,red)}+\frac{4}{r(v,black)}\right)\] \[=\sum_{(v,u)\in path(\mathcal{T},x)}\frac{4}{r(v,red)}+\sum_{ \begin{subarray}{c}(v,u)\in path(\mathcal{T},x)\\ c(v,u)=red\end{subarray}}\frac{4}{r(v,black)}. \tag{85}\] In Theorem 21, we use Lemma 20 to derive average quantum and classical query separations based on classical decision trees. **Theorem 21**.: _If \(\mathcal{T}\) is a decision tree that decides \(f:X\rightarrow[m]\) for \(X\subseteq[q]^{n}\), with optimal average classical query complexity for the distribution \(\{p_{x}\}_{x\in X}\), and \(\mathcal{T}\) has a coloring such that there are at most \(G\) red edges on any path from the root to a leaf, then the average quantum query complexity of deciding \(f(x)\) with bounded error is_ \[O\left(\sum_{x\in X}p_{x}\sqrt{G|path(\mathcal{T},x)|\log^{3}(n)}\right). \tag{86}\] _If it is possible to verify a potential output \(\hat{y}\) as correctly being \(f(x)\) using constant queries, then the average quantum query complexity of deciding \(f(x)\) with bounded error is_ \[O\left(\sum_{x\in X}p_{x}\left(\sqrt{G|path(\mathcal{T},x)|\log(n)}\right) \right). \tag{87}\] _The average classical query complexity of deciding \(f(x)\) with bounded error is_ \[\sum_{x\in X}p_{x}|path(\mathcal{T},x)|. \tag{88}\] Proof.: The average classical query complexity comes from the fact that on input \(x\), which occurs with probability \(p_{x}\), the algorithm uses \(|path(\mathcal{T},x)|\) queries, since each edge on the path of the decision tree corresponds to a single additional query. By assumption, \(\mathcal{T}\) is optimal for the distribution \(\{p_{x}\}\), giving the complexity as in Eq. (88). For the quantum algorithm, we will assign weights to each edge in \(\mathcal{T}\), and then use Lemma 20 to create and analyze a state conversion algorithm. Then we will then apply Theorem 16 and Theorem 19 to achieve better complexity on easier inputs. For each black edge \(e\) in \(\mathcal{T}\) let \(r(e)=G\). For each red edge \(e\), let \(r(e)=l(e)\), where \(l(e)\) is the number of edges on the path in \(\mathcal{T}\) from the root to \(e\), including \(e\). 
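As a sanity check on this choice of weights, the following sketch evaluates the upper bounds of Lemma 20 for a single input path under the weighting \(r(e)=G\) for black edges and \(r(e)=l(e)\) for red edges, matching the estimates in Eqs. (89)–(90) below; the path encoding is ours and purely illustrative.

```python
import math

def witness_bounds(colors, G):
    """Evaluate the upper bounds of Lemma 20 along one root-to-leaf path,
    under the weighting r(black edge) = G and r(red edge e) = l(e).
    `colors` lists the colors of the edges on path(T, x) in order from the root,
    so the edge at position d (1-indexed) has l(e) = d."""
    assert colors.count("red") <= G
    w_plus = 0.0   # sum over e in path of 4 * r(e)
    w_minus = 0.0  # sum of 4 / r(v, red) over the path, plus 4 / r(v, black) over red edges
    for depth, colour in enumerate(colors, start=1):
        w_plus += 4 * (G if colour == "black" else depth)
        w_minus += 4 / depth          # every edge leaving this vertex has l(e) = depth
        if colour == "red":
            w_minus += 4 / G          # r(v, black) = G
    return w_plus, w_minus

# Example: a path of length 1000 with three red edges (so G = 3 suffices).
colors = ["red" if d in (3, 40, 700) else "black" for d in range(1, 1001)]
wp, wm = witness_bounds(colors, G=3)
print(f"w_+ bound = {wp:.0f}   (<= 8*G*|path| = {8 * 3 * 1000})")
print(f"w_- bound = {wm:.2f}  (<= 4*(ln|path| + 1) + 4 = {4 * (math.log(1000) + 1) + 4:.2f})")
```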
Let \(\mathscr{P}\) be the converting vector set from Lemma 20 that converts \(|\rho_{x}\rangle=|0\rangle\) to \(|\sigma_{x}\rangle=|f(x)\rangle\), based on \(\mathcal{T}\). We first analyze \(w_{+}(\mathscr{P},x)\). By Lemma 20, \[w_{+}(\mathscr{P},x) \leq\sum_{\text{black }e\in path(\mathcal{T},x)}4r(e)+\sum_{\text{red }e\in path(\mathcal{T},x)}4r(e)\] \[=\sum_{\text{black }e\in path(\mathcal{T},x)}4G+\sum_{\text{red }e\in path(\mathcal{T},x)}4l(e)\] \[\leq 4G|path(\mathcal{T},x)|+\sum_{\text{red }e\in path(\mathcal{T},x)}4|path(\mathcal{T},x)|\] \[= O\left(G|path(\mathcal{T},x)|\right), \tag{89}\] where in the second-to-last line, we've used that the total number of black or red edges in the path is \(|path(\mathcal{T},x)|\). In the last line, we've used that the number of red edges on any path is at most \(G\). This implies that \(W_{+}(\mathscr{P})=O(nG)\). Now to analyze \(w_{-}(\mathscr{P},x)\). From Lemma 20, \[w_{-}(\mathscr{P},x) \leq\sum_{(v,u)\in path(\mathcal{T},x)}\frac{4}{r(v,red)}+\sum_{\begin{subarray}{c}(v,u)\in path(\mathcal{T},x)\\ c(v,u)=red\end{subarray}}\frac{4}{r(v,black)}\] \[=\sum_{(v,u)\in path(\mathcal{T},x)}\frac{4}{l(v,u)}+\sum_{\begin{subarray}{c}(v,u)\in path(\mathcal{T},x)\\ c(v,u)=red\end{subarray}}\frac{4}{G}\] \[\leq\sum_{i=1}^{|path(\mathcal{T},x)|}\frac{4}{i}+4\] \[=O(\log(n)). \tag{90}\] Now applying Theorem 16 with \(\varepsilon,\delta=\Theta(1)\) and \(W_{+}(\mathscr{P})=O(nG)\) gives us a bounded error algorithm with an average query complexity of \(O\left(\sqrt{G|path(\mathcal{T},x)|\log^{3}(n)}\right)\) on input \(x\). On average over \(x\in X\), we obtain an average query complexity of \(O\left(\sum_{x\in X}p_{x}\sqrt{G|path(\mathcal{T},x)|\log^{3}(n)}\right).\) When there is a way to verify \(f(x)\) using a constant number of queries, we can apply Theorem 19 with \(\delta=\Theta(1)\) to give us a bounded error algorithm with an average query complexity of \(O\left(\sqrt{G|path(\mathcal{T},x)|\log(n)}\right)\) on input \(x\). On average over \(x\in X\), we obtain an average query complexity of \(O\left(\sum_{x\in X}p_{x}\left(\sqrt{G|path(\mathcal{T},x)|\log(n)}\right)\right).\)

We now use Theorem 21 to show an average quantum advantage for two problems related to searching: searching for \(r\) marked items in a list and searching for the first \(r\) marked items in a list:

**Theorem 22**.: _For the problem of finding \(r\) bits with value \(1\) in an \(n\)-bit string, there are distributions for which there are exponential (when \(r=O(1)\)) and superpolynomial (when \(r=O(\operatorname{polylog}(n))\)) advantages in average quantum query complexity over average classical query complexity. For the problem of finding the first \(r\) \(1\)-valued bits in an \(n\)-bit string, there is a distribution for which there is a superpolynomial (when \(r=O(\operatorname{polylog}(n))\)) advantage in average quantum query complexity over average classical query complexity._

The proof is based on a classical decision tree \(\mathcal{T}\) that checks the \(n\) bits of the string in order until \(r\) \(1\)-valued bits are found. The tree for \(r=2\) is shown in Fig. 2. Each time a \(1\)-valued bit is found, the edge that the algorithm traverses is colored red. Then \(G(\mathcal{T})=r\), so Theorem 21 tells us the average query complexity will be small when the \(r\) items occur early in the list, resulting in a short path for that input. We combine this idea with a particular power-law distribution that Montanaro also uses [24].
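As a rough numerical illustration of how such a distribution behaves (the precise analysis is given below), the following sketch takes \(p_{i,n}\propto i^{k}\) with the representative exponent \(k=-1.75\) and compares the two moments that drive the classical and quantum bounds; the exponent and the values of \(n\) are arbitrary choices made only for this illustration.

```python
import math

def power_law(n, k):
    """Normalised distribution with p_i proportional to i**k on {1, ..., n}."""
    weights = [i ** k for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

k = -1.75  # any exponent with -2 < k < -3/2 shows the same behaviour
for n in (10**3, 10**4, 10**5):
    p = power_law(n, k)
    classical_moment = sum(pi * i for i, pi in enumerate(p, start=1))               # grows like n^(k+2)
    quantum_moment = sum(pi * math.sqrt(i) for i, pi in enumerate(p, start=1))      # stays bounded
    print(f"n = {n:>6}:  sum p_i*i = {classical_moment:8.2f},  sum p_i*sqrt(i) = {quantum_moment:.3f}")
```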
This power-law distribution is tailored to allow a quantum algorithm, which has at most a quadratic advantage on any particular input, but which only uses constant queries on the easiest inputs, to achieve an exponential/superpolynomial advantage overall on average. Proof of Theorem 22.: We first analyze the case of finding any \(r\)\(1\)-valued bits. Let \(f_{r}:X\to Y\), for \(X\subset\{0,1\}^{n}\) and \(Y\subseteq[n]^{r}\), such that \(x\in X\) iff \(x\) contains exactly \(r\) bits with value \(1\), and where \(f_{r}(x)=\{i_{1},i_{2},\ldots,i_{r}\}\) are the indices of the \(r\) bits of \(x\) that have value \(1\). We assume the distribution on inputs \(X\) is such that if the values of the first \(d\geq 0\) bits of \(x\) are known, the probability of finding a \(1\)-valued bit among the remaining \(n-d\) bits is non-increasing with increasing index. The probability of finding the \(r^{\text{th}}\)\(1\)-valued bit at position \(i\) is \(p_{i}\), and is non-increasing in \(i\). Because the probability of finding \(1\)'s is non-increasing with increasing bit position, even conditioned on knowing some of the the initial bit values, the optimal strategy for a classical algorithm is to query the bits of the string in order until \(r\) bits with value \(1\) are found, at which point the algorithm returns the location of the bits with value \(1\). This algorithm corresponds to a decision tree \(\mathcal{T}\) where \(|path(\mathcal{T},x)|=i\), where \(i\) is the position of the \(r^{th}\)\(1\). Then the average classical query complexity is \(\sum_{i=r}^{n}p_{i}i\). We label an edge \((v,v^{\prime})\) of this tree as red whenever \(1\in Q(v,v^{\prime})\); that is, edges that are traversed when a \(1\)-valued bit is found are colored red. Then \(G(\mathcal{T})=r\). Figure 2: The classical decision tree used to design a quantum algorithm for finding two bits with value \(1\), or finding the first two bits with value \(1\). Each vertex is labelled by its name \((v_{i})\) for some \(i\), and \(J(v_{i})\), which is the bit of the input that is queried if the algorithm reaches that vertex of the tree. Each edge \((v_{i},v_{j})\) is labelled by \(Q(v_{i},v_{j})\), which is the set in curly brackets alongside each edge. The algorithm follows the edge \((v_{i},v_{j})\) from vertex \(v_{i}\) if the value of the query made at vertex \(v_{i}\) is contained in \(Q(v_{i},v_{j})\). Each edge is also labelled by its weight, \(r(e)\), and is also colored red or black (and red edges are additionally rendered with dot-dashes.) Black edges all have weight \(G(\mathcal{T})\), which in this case is \(2\). Each red edge has a weight that is equal to the number of edges on the path from the root \(v_{1}\) to that edge, inclusive. The vertex \(v_{1}\) is the root, and each leaf (denoted as a rectangular vertex) is labelled by the output of the algorithm on that input. For this problem, one can verify whether an output of the algorithm is correct using an additional \(r\) queries; if the output contains \(r\) indices, query those \(r\) indices to ensure there is a \(1\) at each position, in which case, one knows with certainty that the output is correct. If the tested output does not contain \(r\) indices, or if there is not a \(1\) at one of the indices, one knows with certainty that the output is incorrect. 
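The verification step just described is classical and straightforward; a minimal sketch (the oracle is modelled as index access to the hidden string, and the function name is ours):

```python
def verify_candidate(candidate, query, r):
    """Accept iff the candidate output lists r indices that all hold a 1.
    Uses at most r oracle queries and never accepts an incorrect output."""
    return len(candidate) == r and all(query(i) == 1 for i in candidate)

x = [0, 0, 1, 0, 1, 0]                                  # a hidden string, for illustration
print(verify_candidate([2, 4], lambda i: x[i], r=2))    # True: both positions hold a 1
print(verify_candidate([1, 4], lambda i: x[i], r=2))    # False: position 1 holds a 0
```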
Then by Theorem 21, the average quantum query complexity is \[O\left(\sum_{i=r}^{n}p_{i}\left(\sqrt{ir\log^{3}(n)}\right)\right)\qquad\text{if }r=\omega(1) \tag{91}\] \[O\left(\sum_{i=r}^{n}p_{i}\left(\sqrt{ir\log(n)}\right)\right)\qquad\text{if }r=O(1) \tag{92}\] For the distribution \(p_{i,n}\propto i^{k}\), with \(\sum_{i=1}^{n}p_{i,n}=1\), and \(-2<k<-3/2\), Montanaro shows the following [24, Prop. 2.5]: \[\sum_{i=1}^{n}p_{i,n}i=\Omega(n^{k+2}),\qquad\sum_{i=1}^{n}p_{i,n}\sqrt{i}=O(1). \tag{94}\] Now suppose we have the distribution \(p_{i}=p_{i-r+1,n-r+1}\). Then \[O\left(\sum_{i=r}^{n}p_{i}\sqrt{i}\right)=O\left(\sum_{j=1}^{n-r+1}p_{j,n-r+1}\sqrt{j+r}\right)\leq O\left(\sum_{j=1}^{n-r+1}p_{j,n-r+1}(\sqrt{j}+\sqrt{r})\right)=O(\sqrt{r}) \tag{95}\] where in the last equality, we've used Eq. (94) and the fact that \(\sum_{i=1}^{n-r+1}p_{i,n-r+1}=1\). Thus the average quantum query complexity is \[O\left(r\log^{3/2}(n)\right)\qquad\text{if }r=\omega(1) \tag{96}\] \[O\left(\log^{1/2}(n)\right)\qquad\text{if }r=O(1) \tag{97}\] We do a similar analysis for the classical query complexity: \[\sum_{i=r}^{n}p_{i}i=\sum_{j=1}^{n-r+1}p_{j,n-r+1}(j+r-1)=\Omega((n-r)^{k+2}+r), \tag{98}\] where we have again used Eq. (94) and the fact that \(\sum_{i=1}^{n-r+1}p_{i,n-r+1}=1\). Thus, for \(r=\operatorname{polylog}n\), we find a superpolynomial improvement: the average quantum query complexity is \(O(\operatorname{polylog}n)\) while the average classical query complexity is \(\Omega(n^{k+2})\), for \(-2<k<-3/2\). For \(r=O(1)\), we have an exponential improvement, as the average quantum query complexity is \(O\left(\log^{1/2}(n)\right)\) compared to the classical \(\Omega(n^{k+2})\), for \(-2<k<-3/2\).

Now we consider the case of finding the first \(r\) bits with value \(1\). Let \(f_{r}^{\prime}:\{0,1\}^{n}\rightarrow[n]^{r}\) where \(f_{r}^{\prime}(x)=\{i_{1},\ldots,i_{r}\}\) where \(i_{k}\) is the \(k^{\text{th}}\) smallest index such that \(x_{i_{k}}=1\). We assume the distribution on inputs \(\{0,1\}^{n}\) is such that the probability of finding the \(r^{\text{th}}\) \(1\)-valued bit at position \(i\) is \(p_{i-r+1,n-r+1}\) and the optimal classical algorithm is to query the bits in order and return the indices where \(1\)'s were found. This algorithm corresponds to a decision tree \(\mathcal{T}\) where \(\left|path(\mathcal{T},x)\right|=i\), where \(i\) is the position of the \(r^{\text{th}}\) \(1\) in the string. Then the proof proceeds exactly as in the case of finding any \(r\) elements, except in this case we cannot verify using a constant number of queries whether the output is correct. Thus we have that the average classical query complexity is \(\Omega(n^{k+2})\) and the average quantum query complexity is \(O\left(\operatorname{polylog}(n)\right).\)

## 5 Acknowledgments

We thank Stacey Jeffery for valuable discussions, especially for her preliminary notes on span program negation, and several past referees for insightful suggestions. This research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0327. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
2302.08562
Local dualisable objects in local algebra
We discuss dualisable objects in minimal subcategories of compactly generated tensor triangulated categories, paying special attention to the derived category of a commutative noetherian ring. A cohomological criterion for detecting these local dualisable objects is established. Generalisations to other related contexts are discussed.
Dave Benson, Srikanth B. Iyengar, Henning Krause, Julia Pevtsova
2023-02-16T20:12:25Z
http://arxiv.org/abs/2302.08562v1
# Local dualisable objects ###### Abstract. We discuss dualisable objects in minimal subcategories of compactly generated tensor triangulated categories, paying special attention to the derived category of a commutative noetherian ring. A cohomological criterion for detecting these local dualisable objects is established. Generalisations to other related contexts are discussed. Key words and phrases:Balmer spectrum, compact object, derived category, dualisable object, reflexive object, tensor triangulated category 2020 Mathematics Subject Classification: 13D09 (primary); 18G80, 14F08 (secondary) ## 1. Introduction Let \(\mathcal{T}\) be a rigidly compactly generated tensor triangulated category; in this work we consider only the symmetric tensor categories. A central problem is to classify the localising tensor ideals in \(\mathcal{T}\). Consider the lattice, with respect to inclusion, of such subcategories. In many contexts, its structure is determined by the minimal elements in the lattice. Often these minimal elements are parameterised by some topological space; for instance, the Balmer spectrum, or the spectrum of some commutative ring acting on \(\mathcal{T}\), in the sense of [5]. We are interested in the structure of a minimal subcategory, say \(\mathcal{S}\). Minimality implies that there are no proper localising tensor ideals in \(\mathcal{S}\). In particular, there are no proper thick tensor ideals in the subcategory of compact objects in \(\mathcal{S}\). Typically however, there are dualisable objects which are not compact. Thus there is a collection of thick tensor ideals in the subcategory of dualisable objects in \(\mathcal{S}\), and one can get a handle on them by computing its spectrum. This is what is done in this work for \(\mathbf{D}(A)\), the derived category of a commutative noetherian ring \(A\). In that case the minimal localising subcategories are the subcategories, \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\), of the derived category consisting of the \(\mathfrak{p}\)-local and \(\mathfrak{p}\)-torsion \(A\)-complexes, where \(\mathfrak{p}\) is a prime ideal in \(A\). Our main result, which reappears as Theorem 4.1, characterises dualisable objects in these categories. **Theorem**.: _For each \(X\) in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) the following conditions are equivalent._ 1. \(X\) _is dualisable in_ \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\)_;_ 2. \(\operatorname{rank}_{k(\mathfrak{p})}H(k(\mathfrak{p})\otimes_{A}^{\mathbf{ L}}X)\) _is finite;_ 3. \(X\) _is in_ \(\operatorname{Thick}(\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}}))\)_._ In the statement \(k(\mathfrak{p})\) is the residue field of the local ring \(A_{\mathfrak{p}}\) and \(\mathbf{R}\Gamma_{\mathfrak{p}}(-)\) is the local cohomology functor with respect to \(\mathfrak{p}\); see Section 3. In contrast, an object \(X\) in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) is compact precisely when \(\operatorname{length}_{A_{\mathfrak{p}}}H(X)\) is finite. It follows from the characterisation above that the dualisable complexes in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) form a thick subcategory; they are always closed under tensor products. Given this, and the equivalence of complete modules and torsion modules, established by Dwyer and Greenlees [15], we deduce that the spectrum of the dualisable objects identifies with the Zariski spectrum of the completion of the local ring \(A_{\mathfrak{p}}\) at its maximal ideal; see Corollary 4.7. 
This suggests viewing the passage from compact objects to dualisable objects in any compactly generated tensor triangulated category as a completion process. Similar considerations imply that when the ring \(A_{\mathfrak{p}}\) is regular, the category of dualisable objects in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) has a strong generator; see Corollary 4.10 and the discussion surrounding it. Here is an outline of the contents of this manuscript: Section 2 collects some well-known, though not well-recorded, results and remarks on notions of smallness and dualisablity in general compactly generated tensor triangulated categories. Sections 3 and 4 are about the derived category of a commutative noetherian ring, culminating in the characterisation of local dualisable objects and some corollaries. Section 5 contains a discussion of dualisable objects in other contexts, including the stable homotopy category. In fact, the formulation of the theorem above and our work reported here and in [7] is inspired by work of Hovey and Strickland [19] on dualisable objects in the \(K(n)\)-local stable homotopy category. _Acknowledgements._ Part of this work was done during the Trimester Program "Spectral Methods in Algebra, Geometry, and Topology" at the Hausdorff Institute in Bonn. It is a pleasure to thank HIM for hospitality and for funding by the Deutsche Forschungsgemeinschaft under Excellence Strategy EXC-2047/1-390685813. Our work also benefitted from participation in the Abel Symposium on "Triangulated categories in representation theory and beyond", held in Alesund, in June 2022. We thank the Niels Henrik Abel Memorial Fund for invitation to this event. During the preparation of this work, SBI was partly supported by NSF grants DMS-1700985 and DMS-2001368. JP was partly supported by NSF grants DMS-1901854 and DMS-2200832, and a Brian and Tiffinie Pang faculty fellowship. ## 2. Dualisability Though the focus of our work is on the derived category of a commutative ring, we begin by recalling various notions of dualisability in general tensor triangulated categories. Our basic references for this material are [18, Appendix A.2] and [21, Chapter III]. While much of the discussion is valid for symmetric monoidal categories, our examples are equipped with a compatible structure of a triangulated category so we work in that context. Let us fix a compactly generated tensor triangulated category \((\mathcal{T},\otimes,\mathds{1})\), with symmetric tensor product \(\otimes\) and unit \(\mathds{1}\); the latter need not be compact. As usual, \(\mathcal{T}^{c}\) denotes the full subcategory of compact objects in \(\mathcal{T}\). Brown representability yields _function objects_\(\mathcal{H}\!\!\mathit{om}(X,Y)\) satisfying an adjunction isomorphism \[\operatorname{Hom}_{\mathcal{T}}(X\otimes Y,Z)\cong\operatorname{Hom}_{ \mathcal{T}}(X,\mathcal{H}\!\!\mathit{om}(Y,Z))\quad\text{for all $X,Y,Z$ in $\mathcal{T}$.}\] The construction implies that the functor \(\mathcal{H}\!\!\mathit{om}(Y,-)\) on \(\mathcal{T}\) is exact; we will assume that the functor \(\mathcal{H}\!\!\mathit{om}(-,Z)\) is also exact. 
The adjunction isomorphism above yields natural isomorphisms \[\mathcal{H}\!\!\mathit{om}(X\otimes Y,Z)\cong\mathcal{H}\!\!\mathit{om}(X, \mathcal{H}\!\!\mathit{om}(Y,Z))\,.\] The counit of the adjunction above plays a role in the sequel: \[\varepsilon\colon\,\mathcal{H}\!\mathit{om}(X,Y)\otimes X\longrightarrow Y\,.\] We will need the symmetric braiding in \(\mathcal{T}\) that we denote: \[\gamma\colon X\otimes Y\stackrel{{\sim}}{{\longrightarrow}}Y \otimes X\,.\] One has also a natural map \[\nu\colon\,\mathcal{H}\!\mathit{om}(X,Y)\otimes Z\longrightarrow\mathcal{H} \!\mathit{om}(X,Y\otimes Z)\,, \tag{2.1}\] obtained as the adjoint to the composition of maps \[\mathcal{H}\!\mathit{om}(X,Y)\otimes Z\otimes X\stackrel{{ 1\otimes \gamma}}{{\longrightarrow}}\mathcal{H}\!\mathit{om}(X,Y)\otimes X\otimes Z \stackrel{{\varepsilon\otimes 1}}{{\longrightarrow}}Y\otimes Z\,.\] The _Spanier-Whitehead dual_ of an object \(X\) is \[D^{\mathrm{sw}}X:=\mathcal{H}\!\mathit{om}(X,\mathds{1})\,.\] The assignment \(X\mapsto D^{\mathrm{sw}}X\) is a contravariant functor \(\mathcal{T}\to\mathcal{T}\). An object \(X\) in \(\mathcal{T}\) is said to be _dualisable_ if for all \(Y\) in \(\mathcal{T}\) the natural map \[D^{\mathrm{sw}}X\otimes Y\longrightarrow\mathcal{H}\!\mathit{om}(X,Y)\,,\] obtained from (2.1) by setting \(Z=\mathds{1}\), is an isomorphism. We denote by \(\mathcal{T}^{\mathrm{d}}\) the full subcategory of dualisable objects in \(\mathcal{T}\). The adjoint of the composite \(X\otimes D^{\mathrm{sw}}X\stackrel{{\gamma}}{{\to}}D^{\mathrm{ sw}}X\otimes X\stackrel{{\varepsilon}}{{\to}}\mathds{1}\) of the braiding with the counit \(\varepsilon\) gives the natural double duality map \[\rho\colon X\longrightarrow D^{\mathrm{sw}}D^{\mathrm{sw}}(X)\,.\] We say \(X\) is _reflexive_ if this map is an isomorphism. Dualisable objects are reflexive--this is part of the result below--but not conversely; see 3.3. An object \(X\) in \(\mathcal{T}\) is said to be _functionally compact_ if for all set-indexed collections of objects \(\{Y_{i}\}\) the following natural map is an isomorphism: \[\bigoplus_{i}\mathcal{H}\!\mathit{om}(X,Y_{i})\longrightarrow\mathcal{H}\! \mathit{om}(X,\bigoplus_{i}Y_{i}).\] Observe that replacing the function object with Hom defines compactness. The result below collects some useful observations concerning these notions; we give the proofs because some of the arguments are rather delicate, and not easy to find in the literature. We also invite the reader to verify these statements directly for the derived category of a commutative noetherian ring. **Proposition 2.2**.: _Let \(X\) be an object in \(\mathcal{T}\). The following statements hold._ 1. _The object_ \(X\) _is dualisable if and only if there is a map_ \(\eta\colon\mathds{1}\to X\otimes D^{\mathrm{sw}}X\) _making the following diagram commute_ _The vertical map on the left is the adjoint to the isomorphism_ \(\mathds{1}\otimes X\stackrel{{\sim}}{{\to}}X\)_._ 2. _If_ \(X\) _is dualisable so_ \(D^{\mathrm{sw}}X\) _and_ \(\rho\colon X\to D^{\mathrm{sw}}D^{\mathrm{sw}}X\) _is an isomorphism._ 3. _If either_ \(X\) _or_ \(Z\) _is dualisable, then the map (_2.1_) is an isomorphism._ 4. _If_ \(X\) _is dualisable and_ \(C\in\mathcal{T}\) _is compact, then_ \(C\otimes X\) _is compact._ _._ 5. _If_ \(X\) _is dualisable it is functionally compact; the converse holds if_ \(\mathcal{T}\) _is generated by a set of dualisable objects._ 6. 
_If_ \(X\) _is functionally compact and_ \(\mathds{1}\) _is compact, then_ \(X\) _is compact._ Proof.: (1) When \(X\) is dualisable, the map \(D^{\mathrm{sw}}X\otimes X\to\mathcal{H}om(X,X)\) is an isomorphism, and we can use its inverse to get a map \(\eta\colon\mathds{1}\to X\otimes D^{\mathrm{sw}}X\), and this fits into the commutative diagram as desired. Conversely, given such an \(\eta\) a diagram chase shows that the composite \[\mathcal{H}om(X,Y)\xrightarrow{\sim}\mathcal{H}om(X,Y)\otimes \mathds{1}\xrightarrow{\ {}^{1\otimes\eta}}\mathcal{H}om(X,Y)\otimes X\otimes D^{ \mathrm{sw}}X\\ \xrightarrow{\ {}^{\varepsilon\otimes 1}}Y\otimes D^{\mathrm{sw}}X \xrightarrow{\ {}^{\gamma}}D^{\mathrm{sw}}X\otimes Y\] is the inverse of the map \(D^{\mathrm{sw}}X\otimes Y\to\mathcal{H}om(X,Y)\). (2) Given \(\eta\colon\mathds{1}\to X\otimes D^{\mathrm{sw}}X\) as in (1), the composite \[\mathds{1}\xrightarrow{\eta}X\otimes D^{\mathrm{sw}}X\xrightarrow{\ {}^{\rho\otimes 1}}D^{\mathrm{sw}}D^{\mathrm{sw}}X\otimes D^{ \mathrm{sw}}X\xrightarrow{\ {}^{\gamma}}D^{\mathrm{sw}}X\otimes D^{\mathrm{sw}}D^{ \mathrm{sw}}X\] plays the role of \(\eta\) for \(D^{\mathrm{sw}}X\), so again using (1), \(D^{\mathrm{sw}}X\) is dualisable. A diagram chase shows that an inverse for \(\rho\colon X\to D^{\mathrm{sw}}D^{\mathrm{sw}}X\) is given by the composite \[D^{\mathrm{sw}}D^{\mathrm{sw}}X\xrightarrow{\ {}^{\sim}} \mathds{1}\otimes D^{\mathrm{sw}}D^{\mathrm{sw}}X\xrightarrow{\ {}^{\eta\otimes 1}}X\otimes D^{\mathrm{sw}}X\otimes D^{ \mathrm{sw}}D^{\mathrm{sw}}X\\ \xrightarrow{\ {}^{1\otimes\gamma}}X\otimes D^{\mathrm{sw}}D^{ \mathrm{sw}}X\otimes D^{\mathrm{sw}}X\xrightarrow{\ {}^{1\otimes\varepsilon}}X\otimes\mathds{1}\xrightarrow{\ {}^{\sim}}X\,.\] (3) If \(X\) is dualisable, then an inverse for \(\nu\) is given by the composite \[\mathcal{H}om(X,Y\otimes Z)\xrightarrow{\ {}^{\sim}} \mathcal{H}om(X,Y\otimes Z)\otimes\mathds{1}\xrightarrow{\ {}^{1\otimes\eta}}\mathcal{H}om(X,Y\otimes Z)\otimes X\otimes D^{\mathrm{sw}}X\] \[\xrightarrow{\ {}^{\varepsilon\otimes 1}}Y\otimes Z\otimes D^{ \mathrm{sw}}X\xrightarrow{\ {}^{\gamma}}D^{\mathrm{sw}}X\otimes Y\otimes Z\xrightarrow{\ {}^{\nu\otimes 1}}\mathcal{H}om(X,Y)\otimes Z\,.\] If \(Z\) is dualisable then using (2) we have a commutative diagram The vertical maps are all isomorphisms, as is the bottom horizontal map, and therefore so is the top horizontal map. (4) This follows from the isomorphisms of functors \[\mathrm{Hom}_{\mathcal{T}}(C\otimes X,-)\cong\mathrm{Hom}_{\mathcal{T}}(C, \mathcal{H}om(X,-))\cong\mathrm{Hom}_{\mathcal{T}}(C,D^{\mathrm{sw}}X\otimes -)\,.\] (5) For any set of objects \(\{Y_{i}\}\) there is a commutative diagram When \(X\) is dualisable, the two vertical maps are isomorphisms and hence so is the lower horizontal map, and hence \(X\) is functionally compact. Conversely, suppose \(X\) is functionally compact. Then the lower horizontal map in the diagram above is an isomorphism, and so it follows that the collection of objects \(Y\) for which the map \(D^{\mathrm{sw}}X\otimes Y\to\mathcal{H}om(X,Y)\) is an isomorphism form a localising subcategory of \(\mathcal{T}\). By (3), it contains the dualisable objects, so when \(\mathcal{T}\) is generated by such objects, we deduce that \(X\) is dualisable. (6) Apply \(\mathrm{Hom}_{\mathcal{T}}(\mathds{1},-)\) to the isomorphism defining functional compactness. We collect some sundry consequences of the preceding result, for later use. 
_Remark 2.3_.: Let \((\mathcal{T},\otimes,\mathds{1})\) be a compactly generated tensor triangulated category. Proposition 2.2 implies that when \(\mathds{1}\) is compact any dualisable object is compact. The inclusion \(\mathcal{T}^{\mathrm{d}}\subseteq\mathcal{T}^{\mathrm{c}}\) may be strict; see 3.3. The subcategory \(\mathcal{T}^{\mathrm{d}}\) is thick, and closed under tensor products, function objects, and hence also under Spanier-Whitehead duality. On the other hand, the compact objects in \(\mathcal{T}\) form a thick subcategory, but may not be closed under tensor products or Spanier-Whitehead duality; see 3.3. Thus when compact objects and dualisable objects coincide, \(\mathcal{T}^{\mathrm{c}}\) is a tensor triangulated subcategory of \(\mathcal{T}\), with unit \(\mathds{1}\) and the same function object. The condition that \(\mathcal{T}^{\mathrm{c}}=\mathcal{T}^{\mathrm{d}}\) is equivalent to \(\mathcal{T}\) having a set of generators that are both compact and dualisable. Hovey, Palmieri, and Strickland [18] call such a category a _unital algebraic stable homotopy category_; Balmer and Favi [3] use the term _rigidly compactly generated category_. ## 3. Commutative noetherian rings Next we describe the compactly generated tensor triangulated categories that are the focus of this work. Throughout \(A\) is a commutative noetherian ring. We write \(\mathbf{D}(A)\) for the derived category of \(A\) and \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\) for the subcategory consisting of \(A\)-complexes \(M\) such that the \(A\)-module \(H(M):=\bigoplus_{i}H^{i}(M)\) is finitely generated. **3.1**.: The derived category of \(A\) is a compactly generated triangulated category, with compact objects the perfect complexes, namely, those that are isomorphic in \(\mathbf{D}(A)\) to bounded complexes of finitely generated projective \(A\)-modules; equivalently, the objects in \(\mathrm{Thick}(A)\). See, for instance, [22, SS9.2]. One has that \[\mathrm{Thick}(A)\subseteq\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\,;\] equality holds if and only if \(A\) is regular, that is to say, for each \(\mathfrak{p}\in\mathrm{Spec}\,A\), the local ring \(A_{\mathfrak{p}}\) is regular. This is just a reinterpretation of the classical characterisation, due to Auslander, Buchsbaum, and Serre [11, Theorem 2.2.7], of regular local rings as the local rings of finite global dimension, along with the observation, due to Bass and Murthy that, for objects in \(\mathbf{D}(R)\), finite projective dimension can be tested locally; see [1, Theorem 4.1]. The derived tensor product, \(-\otimes_{A}^{\mathbf{L}}-\) endows \(\mathbf{D}(A)\) with a structure of a tensor triangulated category with unit \(A\) and function object \(\mathbf{R}\!\operatorname{Hom}_{A}(-,-)\). The unit \(A\) generates \(\mathbf{D}(A)\), and is compact and dualisable, so compact objects and dualisable objects coincide. As to the reflexive objects in \(\mathbf{D}(A)\): For an object \(X\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) the natural map \(X\to D^{\mathrm{sw}}D^{\mathrm{sw}}X\) is an isomorphism if and only if \(X\) has finite Gorenstein dimension [13, Theorem 2.4.7]. Such an \(X\) is not necessarily compact. Indeed, when \(A\) is Gorenstein any \(X\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) has finite Gorenstein dimension, but \(\operatorname{Thick}(A)=\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) if and only if \(A\) is regular. ### Local cohomology and localisation Fix a prime ideal \(\mathfrak{p}\) in \(A\). 
An \(A\)-complex \(X\) in \(\mathbf{D}(A)\) is \(\mathfrak{p}\)_-local_ if the natural map \(X\to X_{\mathfrak{p}}\) is an isomorphism in \(\mathbf{D}(A)\). Since localisation is an exact functor, this conditions is equivalently to the condition that the map \(H(X)\to H(X)_{\mathfrak{p}}\) of \(A\)-modules is bijective. An \(A\)-complex \(X\) is \(\mathfrak{p}\)_-torsion_ if \(X_{\mathfrak{q}}\cong 0\) in \(\mathbf{D}(A)\) for each \(\mathfrak{q}\not\supseteq\mathfrak{p}\). Once again, it is clear that \(X\) is \(\mathfrak{p}\)-torsion if and only \(H(X)\) is \(\mathfrak{p}\)-torsion; equivalently, each \(A\)-module \(H^{i}(X)\) is \(\mathfrak{p}\)-torsion. An \(A\)-module is \(\mathfrak{p}\)-torsion precisely when, for each \(x\in M\) there exists an integer \(s\geq 0\) such that \(\mathfrak{p}^{s}\cdot x=0\); this explains the terminology. It is straightforward to check that the class of \(\mathfrak{p}\)-torsion \(A\)-complexes is a localising subcategory of \(\mathbf{D}(A)\). Its inclusion into \(\mathbf{D}(A)\) admits a right adjoint, \(\mathbf{R}\Gamma_{\mathfrak{p}}(-)\), the classical local cohomology functor with respect to the (Zariski) closed subset of \(\operatorname{Spec}A\) defined by \(\mathfrak{p}\); see [11, SS3.5], and also [4, SS9]. We are interested in the class of \(\mathfrak{p}\)-local \(\mathfrak{p}\)-torsion objects, namely, the subcategory \[\Gamma_{\mathfrak{p}}\mathbf{D}(A):=\{X\in\mathbf{D}(A)\mid\mathbf{R}\Gamma_ {\mathfrak{p}}(X_{\mathfrak{p}})\cong X\}. \tag{3.2}\] This is a localising tensor ideal in \(\mathbf{D}(A)\), and even minimal, in that the only localising subcategory properly contained in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) is \(0\), by [23, Theorem 2.8]. Said otherwise, \(\mathbf{D}(A)\) is _stratified_ by the \(A\) action on \(\mathbf{D}(A)\), in the sense of [5]. This has the consequence that localising subcategories of \(\mathbf{D}(A)\) are in bijection with the subsets of \(\operatorname{Spec}(A)\). One can thus view the categories \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) as the building blocks of the triangulated category \(\mathbf{D}(A)\). And so it is of interest to investigate the objects in it. This is what we do in Section 4. To wrap up this section, we give an example of a compactly generated tensor triangulated category where the unit is compact, so dualisable objects are compact, but not every compact object is dualisable. It also has the feature that the tensor product of compact objects is not always compact. **3.3**.: Let \(A\) be a commutative noetherian ring and \(\mathbf{K}(\operatorname{Proj}A)\) the homotopy category of complexes of projective \(A\)-modules. This is a compactly generated triangulated category, with a triangle equivalence \[\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)^{\mathrm{op}}\xrightarrow{ \sim}\mathbf{K}(\operatorname{Proj}A)^{c}\] given by the assignment \(M\mapsto(\mathbf{p}M)^{*}\), where \(\mathbf{p}M\) is a projective resolution of \(M\) and \((-)^{*}:=\operatorname{Hom}_{A}(-,A)\); see [20]. We endow \(\mathbf{K}(\operatorname{Proj}A)\) with a structure of a tensor triangulated category with tensor product the usual tensor product over \(A\). The unit for this tensor product is \(A\). By Brown representability, the inclusion \(\mathbf{K}(\operatorname{Proj}A)\to\mathbf{K}(\operatorname{Mod}A)\) has a right adjoint \(\mathbf{q}\colon\mathbf{K}(\operatorname{Mod}A)\to\mathbf{K}(\operatorname {Proj}A)\). It is easy to verify that \(\mathbf{q}\) preserves function objects. 
Thus \[\mathcal{H}\!\operatorname{\mathit{om}}(X,Y)\cong\mathbf{q}\operatorname{Hom} _{A}(X,Y)\qquad(X,Y\in\mathbf{K}(\operatorname{Proj}A))\,.\] Evidently \(A\) is compact, so dualisable objects in \(\mathbf{K}(\operatorname{Proj}A)\) are compact. We claim that the subcategory of dualisable objects in \(\mathbf{K}(\operatorname{Proj}A)\) is precisely \(\operatorname{Thick}(A)\), the bounded complexes of finitely generated projective modules. Indeed, fix a dualisable object; since it is compact we can assume it is of the form \((\mathbf{p}M)^{*}\), for some \(M\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\). Moreover since \(A\) is noetherian, we can assume \(\mathbf{p}M\) consists of finitely generated projective \(A\)-modules, and that \((\mathbf{p}M)^{i}=0\) for \(i\gg 0\). Then the Spanier-Whitehead dual of \((\mathbf{p}M)^{*}\) is \[\mathcal{H}\!\mathit{om}((\mathbf{p}M)^{*},A)\cong\mathbf{q}\operatorname{ Hom}_{A}((\mathbf{p}M)^{*},A)\cong\mathbf{q}(\mathbf{p}M)\cong\mathbf{p}M\] where the second isomorphism holds because of the structure of \(\mathbf{p}M\) and the last one holds because \(\mathbf{p}M\) is already in \(\mathbf{K}(\operatorname{Proj}A)\). In particular \(\mathbf{p}M\) is also dualisable, being the Spanier-Whitehead dual of a dualisable object. But then it is also compact. Observe that \(\mathbf{p}M\) is in \(\operatorname{Loc}(A)\), so compactness implies that it is in \(\operatorname{Thick}(A)\). It remains to observe that then so is \((\mathbf{p}M)^{*}\). Suppose now that \(A\) is _singular_; this condition is equivalent to the existence of finitely generated \(A\)-modules of infinite projective dimension. Then for any \(M\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) of infinite projective dimension the complex \((\mathbf{p}M)^{*}\) is compact in \(\mathbf{K}(\operatorname{Proj}A)\) but it is not dualisable. Moreover the Spanier-Whitehead dual of the compact object \((\mathbf{p}M)^{*}\) is \(\mathbf{p}M\) and this will not be compact, by the argument above. For \(M,N\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) the natural map is an isomorphism: \[(\mathbf{p}M)^{*}\otimes_{A}(\mathbf{p}N)^{*}\stackrel{{\sim}}{{ \longrightarrow}}\operatorname{Hom}_{A}(\mathbf{p}M,(\mathbf{p}N)^{*})\,.\] In particular the cohomology of the object on the left is \(\operatorname{Ext}_{A}(M,\mathbf{R}\operatorname{Hom}_{A}(N,A))\). When \(A\) is singular and Gorenstein the cohomology of any compact object in \(\mathbf{K}(\operatorname{Proj}A)\) is bounded. However one can find \(M,N\) such that the cohomology of \((\mathbf{p}M)^{*}\otimes_{A}(\mathbf{p}N)^{*}\) is not bounded, so the tensor product will not be compact. ## 4. Local dualisable objects in \(\mathbf{D}(A)\) Let \(A\) be a commutative noetherian ring and \(\mathbf{D}(A)\) the derived category of \(A\)-modules, with the usual structure of a tensor triangulated category; see 3.1. As noted there, the dualisable objects and compact objects in \(\mathbf{D}(A)\) coincide, and are precisely the perfect complexes in \(\mathbf{D}(A)\). In this section we focus on the dualisable objects in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\), the category of \(\mathfrak{p}\)-local and \(\mathfrak{p}\)-torsion objects in \(\mathbf{D}(A)\), for \(\mathfrak{p}\) a prime ideal in \(A\); see (3.2). Fix a prime ideal \(\mathfrak{p}\). 
It is straightforward to verify that when \(X\) and \(Y\) are \(\mathfrak{p}\)-local and \(\mathfrak{p}\)-torsion, so is \(X\otimes_{A}^{\mathbf{L}}Y\); that is to say, the triangulated category \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) inherits a tensor product from \(\mathbf{D}(A)\). With this tensor product \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) is tensor triangulated, with unit \(\mathbf{R}\Gamma_{\mathfrak{p}}A_{\mathfrak{p}}\), and function object \[\mathcal{H}\!\mathit{om}(X,Y):=\mathbf{R}\Gamma_{\mathfrak{p}}\,\mathbf{R} \operatorname{Hom}_{A}(X,Y)\,.\] The thick subcategory of compact objects in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) has a simple structure, in that it is minimal. The unit \(\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}})\) is compact only when \(\mathfrak{p}\) is a minimal prime ideal in \(A\). So, typically, there are more dualisable than compact objects in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\). Here is a characterisation of the dualisable objects in this category, in terms of their cohomology. We write \(k(\mathfrak{p})\) for \(A_{\mathfrak{p}}/\mathfrak{p}A_{\mathfrak{p}}\), the residue field of the local ring \(A_{\mathfrak{p}}\) and \(\Sigma\) for the suspension, or shift, functor in a triangulated category. **Theorem 4.1**.: _Let \(A\) be a commutative noetherian ring and \(\mathfrak{p}\) a prime ideal in \(A\). For each \(\mathfrak{p}\)-local and \(\mathfrak{p}\)-torsion \(A\)-complex \(X\) the following conditions are equivalent._ 1. \(X\) _is dualisable in_ \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\)_._ 2. \(\operatorname{rank}_{k(\mathfrak{p})}H(k(\mathfrak{p})\otimes_{A}^{\mathbf{L}}X)\) _is finite._ 3. \(X\) _is in_ \(\operatorname{Thick}(\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}}))\)_._ _If moreover \(\operatorname{rank}_{k(\mathfrak{p})}H(k(\mathfrak{p})\otimes_{A}^{\mathbf{L} }X)=1\), then \(X\cong\Sigma^{s}\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}})\) for some integer \(s\)._ As will be clear from the proof, the implications (1)\(\Rightarrow\)(2) and (3)\(\Rightarrow\)(1) are elementary to verify. The implication (2)\(\Rightarrow\)(3) is the non-trivial one, and its proof takes most of the work in this section; it makes critical use of derived completions. There is a simpler proof when the ring \(A_{\mathfrak{p}}\) has finite global dimension; see [7]. ### Derived completions Given an ideal \(I\) in \(A\) and an \(A\)-module \(M\), the _\(I\)-adic completion_ of \(M\) is the inverse limit \[\varLambda^{I}M:=\lim_{n}(\cdots\twoheadrightarrow M/I^{n+1}M\twoheadrightarrow M/I^{n}M\twoheadrightarrow\cdots\twoheadrightarrow M/IM)\,,\] where the surjections are the natural ones. The canonical maps \(M\to M/I^{n}M\) induce a map \(M\to\varLambda^{I}M\); when this is bijective we say \(M\) is _classically \(I\)-complete_. Given an \(A\)-complex \(M\) we write \(\mathbf{L}\varLambda^{I}M\) for the left derived functor of the completion; see [16]. This comes equipped with a morphism \(M\to\mathbf{L}\varLambda^{I}M\) in \(\mathbf{D}(A)\), and the complex \(M\) is said to be _\(I\)-complete_ when this map is a quasi-isomorphism. A complex \(M\) is \(I\)-complete if and only if \(H^{i}(M)\) is \(I\)-complete for each \(i\). A caveat: classically complete \(A\)-modules are complete, but the converse does not hold; see [16, Example 1.4] and also [9, Example 2.4]. When \(M\) is an \(A\)-module, there is natural surjective map \(H^{0}(\mathbf{L}\varLambda^{I}M)\to\varLambda^{I}M\). 
This is an isomorphism when \(M\) is a finitely generated, and then \(H^{i}(\mathbf{L}\varLambda^{I}M)=0\) for \(i\geq 1\), that is to say, there is an isomorphism \(\mathbf{L}\varLambda^{I}M\cong\varLambda^{I}M\) in \(\mathbf{D}(A)\) for any finitely generated \(A\)-module \(M\). In particular, \(\mathbf{L}\varLambda^{I}A\cong\varLambda^{I}A\); this observation is used implicitly in the sequel. The derived local cohomology functor \(\mathbf{R}\Gamma_{I}\) and the derived \(I\)-adic completion functor \(\mathbf{L}\varLambda^{I}\) form an adjoint pair: (4.2) This is the Greenlees-May duality. It restrict to an equivalence between the \(I\)-torsion and \(I\)-complete complexes, and so one has natural isomorphisms \[\mathbf{R}\Gamma_{I}M\cong\mathbf{R}\Gamma_{I}\mathbf{L}\varLambda^{I}M \qquad\text{and}\qquad\mathbf{L}\varLambda^{I}\mathbf{R}\Gamma_{I}M\cong \mathbf{L}\varLambda^{I}M\,. \tag{4.3}\] For a proof of these results, and for a different perspective on completions, as a localisation, see [15], and also [14, Tag091N]. The result below is a crucial step in the proof of Theorem 4.1. **Proposition 4.4**.: _Let \(A\) be a local ring with maximal ideal \(\mathfrak{m}\) and residue field \(k\), and let \(\widehat{A}\) be the \(\mathfrak{m}\)-adic completion of \(A\). The following statements hold for any object \(X\in\mathbf{D}(A)\) that is \(\mathfrak{m}\)-complete._ 1. _If_ \(H(k\otimes_{A}^{\mathbf{L}}X)\) _is bounded, then the natural map_ \[X\longrightarrow k\otimes_{A}^{\mathbf{L}}X\] _induced by the surjection_ \(A\to k\)_, is nonzero in homology._ 2. _If_ \(\operatorname{rank}_{k}H(k\otimes_{A}^{\mathbf{L}}X)\) _is finite, then_ \(X\) _is in_ \(\operatorname{Thick}(\widehat{A})\)_._ 3. _If_ \(H(k\otimes_{A}^{\mathbf{L}}X)\cong\Sigma^{s}k\) _for some integer_ \(s\)_, then_ \(X\cong\Sigma^{s}\widehat{A}\)_._ Proof.: (1) Since \(X\) is \(\mathfrak{m}\)-complete so is \(H^{n}(X)\) for each \(n\). Thus, if \(\mathfrak{m}\cdot H^{n}(X)=H^{n}(X)\), then \(H^{n}(X)=0\); see [26, 1.4], and also [14, Tag09b]. Given this observation, the hypothesis that \(H(k\otimes_{A}^{\mathbf{L}}X)\) is bounded implies \(H(X)\) is bounded; this can be checked via a standard devissage argument using \(H(K\otimes_{R}M)\), where \(K\) is the Koszul complex of \(R\). Set \(i=\inf\{n\mid H^{n}(X)\neq 0\}\). Then the composed map \[H^{i}(X)\longrightarrow H^{i}(k\otimes_{A}^{\mathbf{L}}X)\cong k\otimes_{A}H ^{i}(X)\cong H^{i}(X)/\mathfrak{m}H^{i}(X)\,,\] where the first isomorphism holds because the tensor product is right exact, is the obvious surjection and the target is nonzero. This justifies the claim. (2) We verify this by an induction on the integer \(r:=\operatorname{rank}_{k}H(k\otimes_{A}^{\mathbf{L}}X)\). The base case is \(r=0\). Then \(k\otimes_{A}^{\mathbf{L}}X=0\) in \(\mathbf{D}(A)\), that is to say, \(\mathfrak{m}\) is not in \(\operatorname{supp}_{A}X\). Thus \(\mathbf{R}\Gamma_{\mathfrak{m}}X\cong 0\). It remains to note that \[X\cong\mathbf{L}\Lambda^{\mathfrak{m}}X\cong\mathbf{L}\Lambda^{\mathfrak{m}} \mathbf{R}\Gamma_{\mathfrak{m}}X\cong 0\,,\] where the second isomorphism is from (4.3). Suppose \(r\geq 1\). Since \(\operatorname{Hom}_{\mathbf{D}(A)}(\Sigma^{i}A,-)\cong H^{-i}(-)\), part (1) is equivalent to the existence of a map \(\Sigma^{s}A\to X\) in \(\mathbf{D}(A)\) such that the induced map \(\Sigma^{s}k\to H(k\otimes_{A}^{\mathbf{L}}X)\) is nonzero. 
Since \(X\) is \(\mathfrak{m}\)-complete, the map \(\Sigma^{s}A\to X\) factors through \(\Sigma^{s}\widehat{A}\to X\) and this fits into an exact triangle \[\Sigma^{s}\widehat{A}\longrightarrow X\longrightarrow Y\longrightarrow\Sigma^{s+1}\widehat{A}\longrightarrow\,.\] Evidently \(\operatorname{rank}_{k}H(k\otimes_{A}^{\mathbf{L}}Y)=r-1\), so the induction hypothesis yields that \(Y\) is in \(\operatorname{Thick}(\widehat{A})\), and hence so is \(X\). (3) When \(r=1\), the argument above yields that \(\Sigma^{s}\widehat{A}\cong X\), as desired. ### A derived Morita equivalence Let \(A\) be a local ring with maximal ideal \(\mathfrak{m}\). It helps to consider another adjoint pair. The map \(A\to\mathbf{R}\mathrm{Hom}_{A}(\mathbf{R}\Gamma_{\mathfrak{m}}A,\mathbf{R}\Gamma_{\mathfrak{m}}A)\) induces, by (4.3), a quasi-isomorphism \[\widehat{A}\longrightarrow\mathbf{R}\mathrm{Hom}_{A}(\mathbf{R}\Gamma_{\mathfrak{m}}A,\mathbf{R}\Gamma_{\mathfrak{m}}A) \tag{4.5}\] so derived Morita theory yields an adjoint pair of functors \[\mathbf{R}\Gamma_{\mathfrak{m}}A\otimes_{A}^{\mathbf{L}}-\colon\mathbf{D}(\widehat{A})\longrightarrow\mathbf{D}(A)\qquad\text{and}\qquad\mathbf{R}\mathrm{Hom}_{A}(\mathbf{R}\Gamma_{\mathfrak{m}}A,-)\colon\mathbf{D}(A)\longrightarrow\mathbf{D}(\widehat{A})\,.\] The functors introduced above give alternative descriptions of the category we are interested in, namely, the thick subcategory generated by \(\mathbf{R}\Gamma_{\mathfrak{m}}A\). **Lemma 4.6**.: _The adjoint pairs above restrict to triangle equivalences_ \[\operatorname{Thick}_{\widehat{A}}(\widehat{A})\;\simeq\;\operatorname{Thick}_{A}(\mathbf{R}\Gamma_{\mathfrak{m}}A)\,,\] _with \(\mathbf{R}\Gamma_{\mathfrak{m}}A\otimes_{A}^{\mathbf{L}}-\) and \(\mathbf{R}\mathrm{Hom}_{A}(\mathbf{R}\Gamma_{\mathfrak{m}}A,-)\) as mutually quasi-inverse functors._ It is easily verified that this is induced by the restriction functor \(\mathbf{D}(\widehat{A})\to\mathbf{D}(A)\) arising from the natural map \(A\to\widehat{A}\) of rings. Proof of Theorem 4.1.: We may assume \((A,\mathfrak{m},k)\) is a local ring and \(\mathfrak{p}=\mathfrak{m}\), so that \(k(\mathfrak{p})=k\). Thus \(X\) is an \(\mathfrak{m}\)-torsion \(A\)-complex. We recall that \(\mathbf{D}(A)\) is a tensor triangulated category, generated by its unit \(A\), and so compact objects and dualisable objects in \(\mathbf{D}(A)\) coincide. This fact will be used throughout the proof. (1)\(\Rightarrow\)(2): Let \(K\) be the Koszul complex on a generating set for the ideal \(\mathfrak{m}\). As \(X\) is dualisable the \(A\)-complex \(K\otimes_{A}X\) is compact, by Proposition 2.2, and so in \(\operatorname{Thick}(A)\). Hence the \(k\)-vector space \(H(k\otimes_{A}^{\mathbf{L}}(K\otimes_{A}X))\) has finite rank.
Since \(k\) is a field there are isomorphisms \[H(k\otimes_{A}^{\mathbf{L}}(K\otimes_{A}X)) \cong H((k\otimes_{A}K)\otimes_{k}(k\otimes_{A}^{\mathbf{L}}X))\] \[\cong H(k\otimes_{A}K)\otimes_{k}H(k\otimes_{A}^{\mathbf{L}}X)\,.\] Observe that \(H(k\otimes_{A}K)\) is nonzero. As the rank of \(H(k\otimes_{A}^{\mathbf{L}}(K\otimes_{A}X))\) is finite, so is that of \(H(k\otimes_{A}^{\mathbf{L}}X)\). (2)\(\Rightarrow\)(3): Since \(k\) is \(\mathfrak{m}\)-torsion, the natural map below is an isomorphism: \[k\otimes_{A}^{\mathbf{L}}X\longrightarrow k\otimes_{A}^{\mathbf{L}}\mathbf{L}\Lambda^{\mathfrak{m}}X\,.\] The hypothesis and Proposition 4.4 imply that \(\mathbf{L}\Lambda^{\mathfrak{m}}X\) is in \(\operatorname{Thick}(\widehat{A})\). Lemma 4.6 then yields that \(\mathbf{R}\Gamma_{\mathfrak{m}}X\) is in \(\operatorname{Thick}(\mathbf{R}\Gamma_{\mathfrak{m}}A)\). It remains to recall that \(X\) is \(\mathfrak{m}\)-torsion. (3)\(\Rightarrow\)(1): As \(\mathbf{R}\Gamma_{\mathfrak{m}}A\) is the unit of \(\mathbf{R}\Gamma_{\mathfrak{m}}\mathbf{D}(A)\), it is dualisable. It remains to note that the dualisable objects form a thick subcategory. The last part of the theorem follows from Proposition 4.4(3). ### Balmer spectrum Set \(\mathcal{T}:=\mathbf{D}(A)\) and fix a prime \(\mathfrak{p}\) in \(\operatorname{Spec}A\). The full subcategory \((\Gamma_{\mathfrak{p}}\mathcal{T})^{\mathrm{d}}\) of dualisable objects in \(\Gamma_{\mathfrak{p}}\mathcal{T}\) is an essentially small tensor triangulated category, with unit \(\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}})\). The unit generates \((\Gamma_{\mathfrak{p}}\mathcal{T})^{\mathrm{d}}\), in the sense of thick subcategories, so its thick subcategories are tensor ideal; this follows from Theorem 4.1. We are interested in the lattice of thick subcategories of \((\Gamma_{\mathfrak{p}}\mathcal{T})^{\mathrm{d}}\), captured in the Balmer spectrum introduced in [2]. Given Theorem 4.1 and Lemma 4.6, one can describe the underlying topological space easily. **Corollary 4.7**.: _One has a homeomorphism \(\operatorname{Spec}\left(\Gamma_{\mathfrak{p}}\mathcal{T}\right)^{\mathrm{d}}\cong\operatorname{Spec}(\widehat{A}_{\mathfrak{p}})\)._ Proof.: We can again assume \(A\) is local with maximal ideal \(\mathfrak{p}\). Given Theorem 4.1, the equivalence of categories in Lemma 4.6 yields a homeomorphism \[\operatorname{Spec}\left(\Gamma_{\mathfrak{p}}\mathcal{T}\right)^{\mathrm{d}}\cong\operatorname{Spec}\operatorname{Thick}_{\widehat{A}}(\widehat{A})\,.\] It remains to recall the classification of the thick subcategories of perfect complexes over a commutative noetherian ring, due to Hopkins [17] and Neeman [23], interpreted in terms of the Balmer spectrum [2, Theorem 5.5]. _Remark 4.8_.: The (Zariski) spectrum of \(\widehat{A}_{\mathfrak{p}}\) can be wildly different from that of \(A_{\mathfrak{p}}\), though they have the same Krull dimension. We offer a few remarks to convey this point. Suppose \(A\) is local and \(\mathfrak{p}=\mathfrak{m}\), the maximal ideal of \(A\). The completion map \(A\to\widehat{A}\) induces a map \[\operatorname{Spec}\widehat{A}\longrightarrow\operatorname{Spec}A\,.\] This map is surjective as \(A\to\widehat{A}\) is faithfully flat. Moreover \(\dim A=\dim\widehat{A}\). Since \(\mathfrak{m}\widehat{A}\) is the maximal ideal of \(\widehat{A}\), there is a single point lying over the closed point of \(\operatorname{Spec}A\), namely, the closed point of \(\operatorname{Spec}\widehat{A}\).
This shows that the Krull dimension of the fibres of the completion map is at most \(\dim A-1\). The fibres over other non-closed points can be highly non-trivial. This is so even over the generic points of \(\operatorname{Spec}A\). It is easy to construct local domains such that the generic formal fibre has more than one point. Here is one example: Consider the local ring \[A\mathrel{\mathop{:}}=\frac{\mathbb{Q}[x,y]_{(x,y)}}{(x^{2}-y^{2}(y+1))}\,.\] Since \(x^{2}-y^{2}(y+1)\) is irreducible in the ring \(\mathbb{Q}[x,y]_{(x,y)}\), the ring \(A\) is a domain. However that polynomial factors in the \((x,y)\)-adic completion \(\mathbb{Q}[\![x,y]\!]\), for \(1+y\) is a square there, so that \[x^{2}-y^{2}(y+1)=\big(x-y\sqrt{1+y}\big)\big(x+y\sqrt{1+y}\big)\,;\] thus the completion of \(A\) is not a domain. Here is a more drastic scenario: Given any pair of integers \(d,t\) with \(0<t<d-2\), Rotthaus [24] constructs a noetherian local domain \(A\) of Krull dimension \(d\) such that the formal fibre over the generic point of \(A\) has Krull dimension \(t\). ### Reflexive objects With \(A\) and \(\mathfrak{p}\) as before, the Spanier-Whitehead dual of an object \(X\) in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) is \[DX:=\mathbf{R}\Gamma_{\mathfrak{p}}\,\mathbf{R}\mathrm{Hom}_{A}(X,\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}}))\,.\] Recall, from Section 2, that \(X\) is _reflexive_ if the natural map \(X\to D(DX)\) is an isomorphism.
In the preceding result, the condition that \(H(X)\) is bounded is required: When
\(A\) is any local ring, \(X:=\bigoplus_{i}\Sigma^{i}A\) is reflexive but not dualisable, for it is not compact. ### Strong generation Let us return to the general framework of a compactly generated tensor triangulated category \((\mathcal{T},\otimes,\mathds{1})\). We are interested in the property that \(\mathcal{T}^{\mathrm{c}}\), the thick subcategory consisting of compact objects, has a _strong generator_, in the sense of Bondal and Van den Bergh [10]. Roughly speaking, an object \(G\in\mathcal{T}^{\mathrm{c}}\) is a strong generator if there exists an integer \(d\) such that every compact object in \(\mathcal{T}\) can be built out of \(G\) using direct sums, retracts, and at most \(d\) extensions. This might be viewed as a regularity condition, for when \(A\) is a commutative noetherian ring the category of perfect \(A\)-complexes \(\mathbf{D}(A)^{\mathrm{c}}\) has a strong generator if and only if the global dimension of \(A\) is finite; see [25, Proposition 7.2.5]. A question that arises is this: If \(\mathcal{T}^{\mathrm{c}}\) has a strong generator, does each category of local dualisable objects also have a strong generator? The motivation comes from the following result in commutative algebra; we recall that \(A_{\mathfrak{p}}\) is regular precisely when the subcategory of compact objects in \(\mathbf{D}(A_{\mathfrak{p}})\) has a strong generator. **Corollary 4.10**.: _Let \(A\) be a commutative noetherian ring and \(\mathfrak{p}\) a prime ideal in \(A\). When \(A_{\mathfrak{p}}\) is regular, \(\mathbf{R}\Gamma_{\mathfrak{p}}(A_{\mathfrak{p}})\) is a strong generator for the subcategory of dualisable objects among the \(\mathfrak{p}\)-local \(\mathfrak{p}\)-torsion \(A\)-complexes._ Proof.: We pass to the localisation at \(\mathfrak{p}\) and assume \(A\) is a regular local ring, and hence of finite global dimension. Then \(\widehat{A}\), the completion of \(A\) at its maximal ideal also has finite global dimension; see [11, Proposition 2.2.2]. Thus \(\widehat{A}\) is a strong generator for \(\operatorname{Thick}_{\widehat{A}}(\widehat{A})\). It remains to recall that this category is triangle equivalent to the category of dualisable objects in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\), by Theorem 4.1 and Lemma 4.6. ## 5. Other contexts In this section we discuss other examples of compactly generated tensor triangulated categories for which we have some information on the local dualisable objects. ### Noetherian schemes Let \(\mathbb{X}\) be a separated noetherian scheme and \(\mathcal{T}\) the derived category of quasi-coherent sheaves on \(\mathbb{X}\), viewed as a tensor triangulated category in the usual way. For each \(x\in\mathbb{X}\) one can consider the dualisable objects in the subcategory \(\Gamma_{x}\mathcal{T}\subseteq\mathcal{T}\) consisting of objects supported on \(\{x\}\). This category is described by Theorem 4.1, for by standard arguments it is the same as the dualisable objects in \(\Gamma_{\mathfrak{m}}\mathbf{D}(\mathcal{O}_{\mathbb{X},x})\), where \(\mathcal{O}_{\mathbb{X},x}\) is the local ring at \(x\) and \(\mathfrak{m}\) is its maximal ideal. Thus Corollary 4.7 and Remark 4.8 yield the following result. **Corollary 5.1**.: _The Balmer spectrum of \((\Gamma_{x}\mathcal{T})^{\mathrm{d}}\) is homeomorphic to \(\operatorname{Spec}\widehat{\mathcal{O}}_{\mathbb{X},x}\). _ ### Modular representations of finite groups Let \(k\) be a field of positive characteristic and \(G\) a finite group whose order is divisible by the characteristic of \(k\). 
We write \(\operatorname{StMod}kG\) for the stable category of \(kG\)-modules, and \(\operatorname{stmod}kG\) for its full subcategory of finite dimensional modules. Then \(\operatorname{StMod}kG\) is compactly generated, with compact objects \(\operatorname{stmod}kG\), and tensor product over \(k\), with diagonal \(G\)-action, gives it the structure of a tensor triangulated category. The unit is \(k\) with trivial action and the function object is \(\operatorname{Hom}_{k}(-,-)\), again with the diagonal \(G\)-action. Moreover compact objects in \(\operatorname{StMod}kG\) are easily seen to be dualisable and hence one has an equality \(\left(\operatorname{StMod}kG\right)^{\mathrm{c}}=\left(\operatorname{StMod}kG\right)^{\mathrm{d}}\). The group cohomology ring \(H^{*}(G,k)\) is a finitely generated \(k\)-algebra. As in the case of the derived category of a commutative noetherian ring, one considers the subcategory \(\Gamma_{\mathfrak{p}}(\operatorname{StMod}kG)\) of the (big) stable module category consisting of \(\mathfrak{p}\)-local and \(\mathfrak{p}\)-torsion modules. These are the minimal localising tensor ideals of \(\operatorname{StMod}kG\), and so the lattice of localising tensor ideals in the stable module category is parameterised by subsets of \(\operatorname{Proj}H^{*}(G,k)\), the homogeneous prime ideals in \(H^{*}(G,k)\) not containing the maximal ideal \(H^{\geqslant 1}(G,k)\). These results are proved in [6]; see also [8]. In [7] we prove the following analogue of Theorem 4.1; the case when \(\mathfrak{p}\) is a closed point is also treated in the work of Carlson [12]. **Theorem 5.2**.: _Fix \(\mathfrak{p}\) in \(\operatorname{Proj}H^{*}(G,k)\). For each \(kG\)-module \(X\) in \(\Gamma_{\mathfrak{p}}(\operatorname{StMod}kG)\) the following conditions are equivalent:_ 1. \(X\) _is dualisable in_ \(\Gamma_{\mathfrak{p}}(\operatorname{StMod}kG)\)_;_ 2. _The_ \(H^{*}(G,k)_{\mathfrak{p}}\)_-module_ \(H^{*}(G,C\otimes_{k}X)_{\mathfrak{p}}\) _is artinian for each finite dimensional_ \(kG\)_-module_ \(C\)_;_ 3. \(X\) _is in_ \(\operatorname{Thick}(\Gamma_{\mathfrak{p}}k)\)_._ Compare condition (2) above with the corresponding condition in Theorem 4.1. It is not hard to prove that the latter implies that the \(A_{\mathfrak{p}}\)-module \(H(C\otimes^{\mathbf{L}}_{A}X)\) is artinian for each compact object (that is to say, perfect complex) \(C\) in \(\mathbf{D}(A)\); see [7]. But condition Theorem 4.1(2) is strictly stronger, for the residue field is not a compact object in \(\Gamma_{\mathfrak{p}}\mathbf{D}(A)\) unless the local ring \(A_{\mathfrak{p}}\) has finite global dimension. This suggests that there is a broader framework than that covered by Theorem 4.1 wherein one can get a handle on dualisable objects. ### The stable homotopy category The last example we consider is the stable homotopy category of spectra. This is a rather more involved context than the ones discussed earlier, so the discussion is more telegraphic than before; we refer readers to [19] for details. Akin to the derived category of a commutative ring, the stable homotopy category is determined by its localisations at various prime numbers. Fix a prime number \(p\), a positive integer \(n\), and let \(\mathcal{S}\) be the homotopy category of \(p\)-local spectra. This is a tensor triangulated category with tensor identity the \(p\)-local sphere \(S\). Let \(K(n)\) be the Morava \(K\)-theory of level \(n\) at the prime \(p\), and \(\mathcal{K}\) the category of \(K(n)\)-local spectra.
By [19, Theorem 7.5], this is a minimal localising subcategory of \(\mathcal{S}\). Let \(\hat{L}\colon\mathcal{S}\to\mathcal{K}\) be the localisation functor. **Theorem 5.3**.: _Fix \(X\) in \(\mathcal{K}\), and consider the following conditions:_ 1. \(X\) _is dualisable in_ \(\mathcal{K}\)_;_ 2. \(K(n)_{*}(X)\) _is finite;_ 3. \(X\) _is in_ \(\operatorname{Thick}(\widehat{L}S)\)_._ _Then (1) and (2) are equivalent and are implied by (3). _ Condition (3) is strictly stronger than (1) and (2): Hopkins constructed a \(K(n)\)-local spectrum \(Y\) in the case \(n=1\) that is dualisable but not finitely built from \(\widehat{L}S\). Set \(E\mathrel{\mathop{:}}=\widehat{E(1)}\) and \(T\mathrel{\mathop{:}}=\psi^{a}-1\in E^{0}(E)\), where \(\psi^{a}\) is the Adams psi-operation with \(a\) a topological generator for \(1+p\mathbb{Z}_{p}\). Provided \(p\) is odd, \(Y\) is the cofibre of the map \(T^{2}-p\colon E\to E\). The spectrum \(Y\) is dualisable but is not in the thick subcategory generated by the Picard group of invertible objects in \(\mathcal{K}\), and hence not in \(\operatorname{Thick}(\widehat{L}S)\). For details, see [19, Section 15.1].
2310.07440
Distance Weighted Trans Network for Image Completion
The challenge of image generation has been effectively modeled as a problem of structure priors or transformation. However, existing models have unsatisfactory performance in understanding the global input image structures because of particular inherent features (for example, local inductive prior). Recent studies have shown that self-attention is an efficient modeling technique for image completion problems. In this paper, we propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components. In our model, we leverage the strengths of both Convolutional Neural Networks (CNNs) and DWT blocks to enhance the image completion process. Specifically, CNNs are used to augment the local texture information of coarse priors and DWT blocks are used to recover certain coarse textures and coherent visual structures. Unlike current approaches that generally use CNNs to create feature maps, we use the DWT to encode global dependencies and compute distance-based weighted feature maps, which substantially minimizes the problem of visual ambiguities. Meanwhile, to better produce repeated textures, we introduce Residual Fast Fourier Convolution (Res-FFC) blocks to combine the encoder's skip features with the coarse features provided by our generator. Furthermore, a simple yet effective technique is proposed to normalize the non-zero values of convolutions, and fine-tune the network layers for regularization of the gradient norms to provide an efficient training stabiliser. Extensive quantitative and qualitative experiments on three challenging datasets demonstrate the superiority of our proposed model compared to existing approaches.
Pourya Shamsolmoali, Masoumeh Zareapoor, Huiyu Zhou, Xuelong Li, Yue Lu
2023-10-11T12:46:11Z
http://arxiv.org/abs/2310.07440v2
# Distance Weighted Trans Network for Image Completion

###### Abstract

The challenge of image generation has been effectively modeled as a problem of structure priors or transformation. However, existing models have unsatisfactory performance in understanding the global input image structures because of particular inherent features (for example, local inductive prior). Recent studies have shown that self-attention is an efficient modeling technique for image completion problems. In this paper, we propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components. In our model, we leverage the strengths of both Convolutional Neural Networks (CNNs) and DWT blocks to enhance the image completion process. Specifically, CNNs are used to augment the local texture information of coarse priors and DWT blocks are used to recover certain coarse textures and coherent visual structures. Unlike current approaches that generally use CNNs to create feature maps, we use the DWT to encode global dependencies and compute distance-based weighted feature maps, which substantially minimizes the problem of visual ambiguities. Meanwhile, to better produce repeated textures, we introduce Residual Fast Fourier Convolution (Res-FFC) blocks to combine the encoder's skip features with the coarse features provided by our generator. Furthermore, a simple yet effective technique is proposed to normalize the non-zero values of convolutions, and fine-tune the network layers for regularization of the gradient norms to provide an efficient training stabiliser. Extensive quantitative and qualitative experiments on three challenging datasets demonstrate the superiority of our proposed model compared to existing approaches. Generative network, attention network, image completion.

## 1 Introduction

Image completion (inpainting) is a primary challenge in image processing. It involves filling damaged or missing regions of an image with visually realistic and semantically meaningful content. It has an extensive variety of practical applications, including image editing, object removal, and image restoration, making it a vital area of research in computer vision. In the past few years, generative neural networks have received considerable interest because of their capacity to learn complicated and high-dimensional distributions for inpainting [1]. For this task, models like Generative Adversarial Networks (GANs) [2] or VAEs [3] have shown promising results. A GAN is made up of generator-discriminator networks set up in a zero-sum form [4]. On the other hand, VAEs [5] are probabilistic models that use an encoder-decoder architecture to create a lower-dimensional representation of features from which new data samples can be made. These models can adapt different neural networks to leverage the characteristics of the data. For instance, the weight-sharing mechanism of CNNs makes them the preferred method for various image processing tasks, but for sequential data, attention-based networks have recently become the preferred designs. The attention mechanism, in particular, has recently shown solid performance on a range of tasks, including object detection [6] and image reconstruction [7]. Preliminary studies demonstrated deep learning's efficacy in image completion with semantic guidance. Conventional approaches such as [8] use patch-based image matching for filling missing regions.
However, deep learning-based methods with semantic guidance outperform them, producing more realistic and coherent completions. Autoencoder-based approaches are important for image completion [9, 10]. However, direct end-to-end methods give unsatisfactory results when dealing with missing parts of images. To address this issue, two-stage methods [11, 12] are used to learn missing structures and textures incrementally, but the outputs of these approaches have inconsistent appearances. Indeed, a key limitation of CNNs is that they learn the structural and textural features of images separately. These components interact to create the image's content, and existing methods struggle to generate visually pleasing results without understanding their coherence. Instead of relying solely on past knowledge, we adopt a neural network with a strong focus on understanding the connections between image components. This is where the attention mechanism becomes valuable, as it naturally captures many-to-many interactions and is highly effective in identifying unsupervised correlations in image distributions. By incorporating the attention mechanism, we can enhance the model's ability to create more coherent and realistic image completions. Rather than adding attention modules to CNNs, the Transformer [13] offers another suitable framework for handling non-local modeling, in which attention is the main component of each block. To solve the issue of inpainting, some models [3, 14] use transformer-based architectures. However, because of the complexity problem, previous studies only used transformers to infer low-resolution predictions for further processing, leading to a coarse image structure that degraded the final image quality, particularly in large missing regions. LaMa [15] is an image completion technique that uses Fast Fourier Convolution [16] to address the lack of a large receptive field. In the past, researchers faced challenges with global self-attention [8, 17] due to its computational complexity and limitations in effectively recovering repeated structures. LaMa has demonstrated better performance in this regard. However, when the missing regions become larger and extend beyond object boundaries, LaMa may encounter difficulties and produce faded structures. This paper fills the gap in the literature by implementing a framework with an effective attention-based design. More specifically, we propose Distance-based Weighted Transformer Network (DWTNet), which considerably improves the quality of generated images. Considering that image regions are not equally important in image inpainting tasks, the DWT calculates the weights for image tokens using the k-nearest neighbor (KNN) algorithm. This can effectively reduce the visual deviations that occur in image inpainting problems. Furthermore, to minimize the intensive computation in ViTs [18], we adopt the concept of sparse attention [19] in the DWTNet. In addition, to optimize the reuse of high-frequency features and improve the generation of repeating textures, we introduce a Res-FFC module. This module combines the generated coarse features and skip features from the encoder. By doing so, the Res-FFC module enhances DWTNet's ability to produce more realistic and visually coherent textures. We designed our architecture for the task of image completion, demonstrating baseline performance on three benchmark datasets.
Our key contributions are:
* For the recovery of coherent image structures with a tendency toward high-level interactions between pixels in an image, a new generative model with distance-based weighted feature maps is proposed. This method is designed to fill the missing regions by taking into account all of the available contexts.
* A module is introduced that aggregates distance-based weighted feature maps, which are more discriminative. We propose a distance-based weighted transformer to encode and calculate the weights for image tokens and improve global context inference. Moreover, to enhance the reuse of features, we built a Res-FFC unit to integrate coarse features from the generator with the encoder's skip features.
* For better convergence and stabilizing training, a norm-regularization method is introduced by retaining the same variance of the weighted CNN gradients. This method does not require singular value decomposition to process the non-zero values of CNNs. Moreover, we performed a set of experiments on three datasets, which indicate that our architecture can generate high-fidelity images and outperforms current state-of-the-art inpainting approaches.

The rest of this paper is organised as follows: We present related studies in Section 2. In Section 3, we introduce DWTNet. The experimental results and ablation study are discussed in Section 4, and Section 5 concludes the paper.

## 2 Related Work

### _Image Completion_

When dealing with large areas of missing pixels, the majority of conventional image completion methods, such as patch-based techniques [8], are unable to produce realistic images. In [7] and [20], variations of the LaMa [15] architecture are proposed that use additional varieties of masks and new loss functions to better capture different forms of missing information. Indeed, incorporating more damaged images during the training phase can improve the model's robustness with respect to different masks. To reduce visual distortions induced by standard convolutions, in [8], partial convolutions and gated convolutions are introduced. Some works are also focused on the fusion of local and global knowledge. [9] makes use of feature equalization to integrate local and global features. Another successful approach is CoModGAN [21], which improves the generation quality of images by adding a stochastic noise vector to the encoded representation. However, since CoModGAN lacks attention-related structures to expand the receptive field, the input image textures cannot be effectively reused. These CNN-based approaches can create appropriate contents for masked areas, but they cannot guarantee that the semantic content of the inpainted images is consistent. Due to the impressive performance of the transformer design in various tasks, several transformer-based approaches have recently been developed. For instance, the first transformer-based image inpainting approach is proposed in [3] to get the image prior and transfer it to a CNN. In addition, [10] proposes a bidirectional and autoregressive transformer for incorporating the image prior. Although these approaches enhance performance, they have limitations due to the large-scale downsampling and generation. In [22, 23], the authors develop a transformer using edge auxiliaries to acquire the prior and transmit it, with masked positional encoding, to a base network. The transformer-based algorithms outperform CNN-based methods in terms of both quality and diversity.
However, because of structural limitations such as downsampling of the input image and quantization of the transformer input, they cause substantial information loss. On the other hand, diffusion models have recently gained significant attention due to their ability to generate high-quality images [24]. In [25], an inpainting method based on a denoising diffusion probabilistic model is proposed. This model provides a probabilistic formulation to generate missing pixels in an image by iteratively denoising corrupted samples. To reduce the computation overhead of diffusion models, [26] proposes an efficient image restoration model.

### _Transformers_

Transformers have been successfully used in a wide variety of vision tasks, including object detection [6], image synthesis [10], and image completion [3], due to their capacity to model long-range relationships. In particular, the autoregressive inference process can generate high-quality images when used for image synthesis [3]. In TFill [14], an attention-aware layer is proposed to better leverage distantly related high-frequency features, resulting in improved appearance consistency between visible and reconstructed regions. While such generation methods can produce accurate results, they involve the optimization of extra hyperparameters (such as beam size), and there is no theoretical assurance of learning the real data distribution. Choosing the appropriate level of regularization during training is more important than finding the right distribution. Beam search will only give results with sufficient diversity if the model is adequately regularized. To avoid depending on heuristics, we use an attention-based VAE to directly approximate the image distribution instead of relying on the generation process, which lacks theoretical guarantees. VAEs with attention modules are a relatively new approach in machine learning and computer vision. The objective is to learn the data distribution more precisely than self-supervised methods [3, 27] do. Although Transformers have shown excellent performance in supervised learning tasks, their adoption in generative models has been relatively limited. However, in [28], a novel approach combines a GAN with a Transformer to effectively compose foreground objects into backgrounds, resulting in natural-looking images. In [29] and [30], to learn the actual data distribution, a transformer-based latent variable technique is used in a conditional VAE for text generation. In this work, we propose a generative model with DWT that induces a bias toward high-level understanding and computes the weights for image tokens to improve global context inference.

## 3 Distance-based Weighted Transformer Network (DWTNet)

In this section, we will go through the details of our DWTNet. DWTNet is a type of VAE architecture for image completion with our proposed DWT and Res-FFC layers, which serve as the primary parts of our model that parameterize the encoder and decoder. To begin, we will analyse the structure of DWTNet. This is followed by a discussion of the model, regularization, and training. In our model, the ground truth image is denoted by \(x\), while the corrupted image is represented by \(x_{\text{m}}\). m is a binary matrix where \(0\) denotes the missing region and \(1\) denotes the observed region. This process is inherently stochastic: given the masked image \(x_{\text{m}}\), there exists a conditional distribution \(p(x|x_{\text{m}})\).
By obtaining prior \(z\) based on \(x\) and \(x_{\text{m}}\), \(p(x|x_{\text{m}})\) can be expressed as \[\begin{array}{l}p(x|x_{\text{m}})=p(x|x_{\text{m}})\cdot p(z|x,x_{\text{m}})\\ =p(z,x|x_{\text{m}})\\ =p(z|x_{\text{m}})\cdot p(x|z,x_{\text{m}}),\end{array} \tag{1}\] where the first equality uses the fact that the prior \(z\) is derived deterministically from \(x\) and \(x_{\text{m}}\), so that \(p(z|x,x_{\text{m}})=1\). In the self-attention encoder, we extract feature vectors from the masked image \(x_{\text{m}}\). These feature vectors are then used as input to the DWT blocks, which estimate the tokens of latent vectors for the masked regions m. To reconstruct the inpainted image, the retrieved latent vectors are given to the self-attention decoder as quantized vectors. Our model performs sampling from the underlying distribution of appearance priors, \(p(z|x_{\text{m}})\), rather than sampling from \(p(x|x_{\text{m}})\). These reconstructed appearance priors provide a wider range of information for global structure and coarse textures because the DWTs are able to produce extremely high-quality representations. As illustrated in Fig. 1, our DWTNet framework consists of downsampling residual blocks, DWT blocks, upsampling residual blocks, and Res-FFC units. In the encoder network, downsampling residual blocks are used to extract tokens, and then our DWT blocks at different resolutions (with various token counts) represent long-term relationships and compute distance-based weighted feature maps. To increase the spatial resolution to the input size in the decoder layer, upsampling residual block-based reconstruction is used. To incorporate feature-to-feature context between the encoder and decoder layers, DWT blocks are used to leverage distant spatial context and effectively reduce visual deviations. Fig. 1: Overview of our proposed DWTNet model. An attention-based model is proposed to parameterize the network. This biases the model toward learning correlations between input components and exploiting relationships between encoder and decoder features. Our model contains (b) Down/Up Resnet, (c) DWT, and (d) Res-FFC layers that use a Spectral Transform (S. Trans). Furthermore, to capture more global context information, we use the Res-FFC layers to merge generated features in the decoder with the encoder's skip features.

### _DWTNet Architecture_

#### 3.1.1 Self-attention Encoder

The downsampling Resnet blocks of the self-attention encoder take a corrupted image and produce feature maps, which are used to generate tokens. The Resnet blocks contain convolution layers to change the input dimension and downscale the resolution. These Resnet blocks are used for two reasons. (1) To use local inductive priors for better representation and optimization in early visual processing. In addition, they supply positional information for our DWT; as demonstrated in [31], \(3\times 3\) convolutions can supply adequate positional information in place of the positional embedding in ViTs. (2) The Resnet blocks are designed for quick downsampling, reducing memory usage and computational complexity. This allows the model to efficiently process large images without a significant increase in computational requirements. In comparison to ViT's linear projection [32], this design has been found to be more effective in handling image inpainting tasks.

#### 3.1.2 DWT

As transformers are computationally intensive and lack certain inductive biases found in CNNs (e.g., translation invariance and locality [32]), we address this by using Resnet blocks to extract local features and introducing DWT as shown in Fig. 1 (c) to compute distance-based weighted feature maps.
Given a feature map \(\mathbf{f}\in\mathbb{R}^{C\times H\times W}\), first we split the feature map into patches \(\mathbf{f_{p}}\in\mathbb{R}^{N\times P^{2}\cdot C}\), and down-sample the patches, converting them into vectors \(\mathbf{x_{p}}\in\mathbb{R}^{N\times C}\), where \((H,W)\) denotes the feature map's resolution, \(C\) represents the number of channels, \(P\) denotes the down-sampling rate, and \(N=HW/P^{2}\) is the total number of tokens. Then, \(\mathbf{x_{p}}\) is used as the input token embeddings. A distance-based weighting module is applied before the multi-head attention to calculate the weights for the input token embeddings \(\mathbf{x_{p}}\) by the KNN algorithm. Given a set of token embeddings \(\mathbf{x_{p}}\), the distance density \(\tau_{i}\) of each token embedding \(\mathbf{x}_{i}\) is computed by exploiting its \(k\)-nearest neighbors: \[\tau_{i}=\text{exp}(-\frac{1}{k}\sum_{\mathbf{x_{j}}\in\text{KNN}(\mathbf{x_{i}})}\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_{2}^{2}), \tag{2}\] in which \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) correspond to the token embeddings, while \(\text{KNN}(\mathbf{x}_{i})\) represents the \(k\)-nearest neighbors of \(\mathbf{x}_{i}\). Consequently, we can calculate the weight for each token embedding \(\mathbf{x}_{i}\) by \(w_{i}=\frac{1-\tau_{i}}{\sum_{j=1}^{N}(1-\tau_{j})}\), where \(N\) is the total number of token embeddings, and \(w_{i}\) represents the potential value of each token embedding. In the DWT we remove Layer Normalization (LN) so that the distance-based weighting module performs more effectively. The DWT consists of multi-head self-attention (MSA) and a multi-layer perceptron (MLP). For the MSA module, we use the distance-based weights in the computation of the self-attention mechanism, which significantly reduces visual discrepancies in image completion tasks. Given the token embeddings \(\mathbf{x_{p}}\) as the input sequence of the transformer encoder, they are projected to the query (\(Q\)), key (\(K\)), and value (\(V\)) of the \(h\)-th attention head as follows, \[q_{h}=\mathbf{x_{p}}W_{Q}^{h},\ \ \ k_{h}=\mathbf{x_{p}}W_{K}^{h},\ \ \ v_{h}=\mathbf{x_{p}}W_{V}^{h}, \tag{3}\] \[A_{h}=\mathrm{softmax}\Big(\frac{q_{h}k_{h}^{T}}{\sqrt{d_{k}}}\Big)\big((\lambda_{c}I_{v}+W_{c})v_{h}\big), \tag{4}\] in which \(W_{Q,K,V}\) indicate linear projection matrices for \(Q\), \(K\), and \(V\), respectively, and the distance-based weights are applied through the diagonal matrix \(W_{c}\), whose main diagonal elements are the distance-based weights \(w_{i}\). A scaled dot-product operation followed by softmax is used to compute the attention map (\(A_{h}\)), which determines how much the tokens attend to one another, while \(d_{k}\) is the channel number of the token embeddings. \(I_{v}\) is an identity matrix with the same rank as \(v_{h}\). \(\lambda_{c}\) is the scaling factor, which was set to \(0.5\) in our experiments. Therefore, our DWT is formulated as follows: \[\mathbf{z_{p}}=\mathrm{MSA}(\mathbf{x_{p}})+\mathbf{x_{p}},\ \ \ \mathbf{x_{p}}=\text{DSL}(\mathbf{f_{p}}), \tag{5}\] \[\mathbf{z^{\prime}_{p}}=\text{MLP}(\mathbf{z_{p}})+\mathbf{z_{p}}, \tag{6}\] \[\mathbf{f}_{\text{out}}=\text{USL}(\mathbf{z^{\prime}_{p}})+\mathbf{f_{p}}. \tag{7}\] DSL denotes average pooling used to down-sample the feature maps. To address the rank collapse issue, we employ an MLP consisting of two linear layers and a Gaussian error linear unit activation function. Additionally, the up-sampling layer (USL) is implemented using a depth-wise separable transposed convolution.
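To make Eqs. (2)-(4) concrete, the following is a minimal PyTorch sketch of the distance-based weighting and of a single weighted attention head. It is an illustration rather than the authors' released code: the function names, the use of `torch.cdist`/`topk` for the \(k\)-nearest neighbours, and the single-head formulation are our own simplifications, with \(\lambda_{c}=0.5\) as stated above.

```python
import torch
import torch.nn.functional as F

def distance_weights(x, k=8):
    # x: token embeddings of shape (N, C); implements Eq. (2) and the normalisation of w_i
    d2 = torch.cdist(x, x).pow(2)                      # pairwise squared distances, (N, N)
    knn = d2.topk(k + 1, largest=False).values[:, 1:]  # k nearest neighbours, excluding self
    tau = torch.exp(-knn.mean(dim=1))                  # distance density tau_i
    return (1.0 - tau) / (1.0 - tau).sum()             # weights w_i, shape (N,)

def weighted_attention(x, Wq, Wk, Wv, lambda_c=0.5, k=8):
    # Single-head version of Eqs. (3)-(4): the value vectors are rescaled by
    # (lambda_c * I + W_c), where W_c is diagonal with the weights w_i on its diagonal.
    q, kk, v = x @ Wq, x @ Wk, x @ Wv                  # each of shape (N, d_k)
    d_k = q.shape[-1]
    attn = F.softmax(q @ kk.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    w = distance_weights(x, k)                         # (N,)
    v = (lambda_c + w).unsqueeze(-1) * v               # per-token diagonal scaling of the values
    return attn @ v                                    # (N, d_k)
```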
Finally, the DWT reshapes the output sequence \(\mathbf{f}_{\text{out}}\) into a distance-based weighted feature map \(\mathbf{f}_{d}\in\mathbb{R}^{C\times H\times W}\) and feeds it to the next block. To optimize the DWT, we use a strategy similar to that used in [33] and adopt the masked language (MaL) model. To be more precise, in the discretized input, we label the mask token indexes as \(\Pi{=\{\pi^{1},\pi^{2},...,\pi^{J}\}}\), in which \(J\) represents the total number of masked tokens. \(X^{\Pi}\) denotes the masked tokens of \(X\), and \(X^{-\Pi}\) the unmasked tokens. MaL aims to reduce the negative log-likelihood of \(X^{\Pi}\) given all of the visible areas, which is written as: \[\ell_{ML}=\underset{X}{\mathbb{E}}\Big[\frac{1}{J}\sum_{j=1}^{J}-\log\ p(x^{\pi^{j}}|X^{-\Pi},\theta)\Big], \tag{8}\] in which \(\theta\) represents the parameters of the transformer. The incorporation of MaL with self-attention ensures that our DWT can collect all contextual information in order to estimate the probability distribution of missing regions.

#### 3.1.3 Self-attention Decoder

The self-attention decoder \(p_{\theta}(x|z)\) is similar to the encoder; its keys and queries are generated by the encoder. In experiments, we found that decoding performance improves when the encoder output is directly sent to the first decoder layer. In our model, the autoregressive decoder \(p_{\theta}(x|z)=\prod_{i=1}^{\mathcal{N}}p_{\theta}(x_{i}|x_{i-1},z)\) is used, in which \(\mathcal{N}\) is the maximum number of sample points from a prior (such as a Binomial distribution). In this model, two prior distributions are considered: 1) the standard multivariate normal distribution with zero mean; 2) a multivariate normal distribution whose parameters use a diagonal covariance matrix to better express the prior distribution. While autoregressive decoders make object sampling possible with different numbers of components, the prior is modeled by \(p(z,t)=p(z|t)p(t)\), in which \(t\) is the set of sampling points. During training, we learn \(p(t)\) by counting the occurrences of each sequence length, and we observe that aggregating the latent representations \(z\) over all components in an image improves perceptual quality. In general, this approach is equivalent to parameterizing the posterior distribution using the encoder's output aggregated along the dimension of the latent coordinates. In order to do this, we follow [33], where the first element in the decoder input is the last hidden state of the encoder for the first token, which serves as a representation of the complete sequence. VAEs generally encounter a challenge called "posterior collapse" [34]. In this case, the information encoded in the latent representation is neglected by the decoder, which instead emphasizes the modes of the data distribution. To address this issue and ensure that the encoder's posterior distribution accurately reflects the prior distribution, we introduce a balancing parameter \(\beta\). In the training section, we will provide further details and explanations about how \(\beta\) is utilized to achieve this objective.

#### 3.1.4 Res-FFC

To enhance the generation of intricate textures and meaningful semantic structures in the holes of the image, we introduce Res-FFC, as depicted in Fig. 1 (d), which includes an FFC layer. The FFC layer is powered by the channel-wise Fast Fourier Transform (FFT) [16], providing a large image-wide receptive field for more effective and efficient image completion.
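The MaL objective in Eq. (8) above reduces to a cross-entropy over the masked token positions only. A minimal sketch, assuming the transformer returns per-token logits over a discrete token vocabulary (the function and variable names below are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def mal_loss(logits, targets, masked_idx):
    """Eq. (8): mean negative log-likelihood over the masked positions only.

    logits:     (N, V) per-token scores over a vocabulary of size V
    targets:    (N,)   ground-truth token ids
    masked_idx: (J,)   indices of the masked tokens (the set Pi)
    """
    masked_logits = logits[masked_idx]     # (J, V)
    masked_targets = targets[masked_idx]   # (J,)
    # cross_entropy averages -log p(x^{pi_j} | X^{-Pi}) over the J masked tokens
    return F.cross_entropy(masked_logits, masked_targets)
```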
The FFC splits channels into two different branches: I) Local: This branch uses general convolutions to obtain spatial information. II) Global: The global branch utilises a Spectral Transform (S. Trans) to analyze the global structure. The results of these branches are then combined. The Spectral Transform layer (Fig. 1 (d)) contains two Fourier Units (FU) to obtain both semi-global and global features. The left FU focuses on the global context, while the right FU takes one-fourth of the channels and pays more attention to the semi-global image information. Skip connections are used in Res-FFC between encoder and decoder layers that have the same resolution scale. Res-FFC takes the features upsampled from the previous layer in the decoder (the created textures from the preceding layers) as well as the encoded skip features \(\mathbf{f}_{\text{skip}}\) (the already-existing image textures) to create the global repeating textural features. This process allows our model to use the prior coarse-level repeated textures and improve them further at the finer level.

### _Network Regularization_

We deploy a norm-preserving convolution methodology that effectively maintains the norms of vectors using singular value regularization. Notably, this approach avoids the necessity of matrix decomposition. Specifically, in a convolution layer characterized by parameters such as input channels (\(c\)), output channels (\(d\)), and kernel size (\(k\)), the gradient can be expressed as \(\nabla_{u}=\hat{W}\nabla_{v}\), where \(u\in\mathbb{R}^{c}\) is a vector of dimension \(c\) and \(\nabla_{u}\) signifies the gradient of the input. Similarly, \(v\in\mathbb{R}^{d}\) denotes a \(d\)-dimensional vector, while \(\nabla_{v}\) stands for the gradient of the convolutional output. The matrix \(\hat{W}\) is of dimensions \(c\times d\) and has an important role in the backpropagation process within the convolution layer. The gradients are defined as: \[\begin{array}{l}\nabla_{u}=\sum_{i=1}^{c}\psi_{i}m_{i}\langle\nabla_{v},n_{i}\rangle,\\ \nabla_{v}=\sum_{i=1}^{d}n_{i}\langle\nabla_{v},n_{i}\rangle,\end{array} \tag{9}\] here, \(\psi_{i}\) corresponds to a singular value of \(\hat{W}\), while \(m_{i}\) and \(n_{i}\) represent the respective left and right singular vectors, and the expected gradient norms are computed as follows: \[\begin{array}{l}\mathbb{E}[\|\nabla_{u}\|_{2}^{2}]=\sum_{i=1}^{c}\psi_{i}^{2}\,\mathbb{E}\big[|\langle\nabla_{v},n_{i}\rangle|^{2}\big],\\ \mathbb{E}[\|\nabla_{v}\|_{2}^{2}]=\sum_{i=1}^{d}\mathbb{E}\big[|\langle\nabla_{v},n_{i}\rangle|^{2}\big],\end{array} \tag{10}\] where the singular vectors are orthonormal: \(\langle m_{i},m_{j}\rangle=\langle n_{i},n_{j}\rangle=1\) for \(i=j\), and \(0\) otherwise. Consequently, in order to maintain gradient norm consistency, we enforce \(\mathbb{E}[\|\nabla_{u}\|_{2}^{2}]=\mathbb{E}[\|\nabla_{v}\|_{2}^{2}]\) by assigning the non-zero singular values \(\psi\): \[\psi^{2}=\frac{\sum_{i=1}^{d}\mathbb{E}\big[|\langle\nabla_{v},n_{i}\rangle|^{2}\big]}{\sum_{i:\,\psi_{i}\neq 0}\mathbb{E}\big[|\langle\nabla_{v},n_{i}\rangle|^{2}\big]} \tag{11}\] where the sum in the denominator runs over the indices \(i\) with \(\psi_{i}\neq 0\). The ratio in Eq. (11) compares the overall gradient \(\nabla_{v}\) with its component outside the null-space (the kernel) of the matrix \(\hat{W}\). This ratio can be approximated as \(d/\min(d,c)\).
Following this premise, approximately \(\min(d,c)/d\) of the gradient \(\nabla_{v}\) will reside within the \(\min(d,c)\)-dimensional subspace. Consequently, for the purpose of norm regularization, we adjust the singular values to \(\sqrt{d/\min(d,c)}\). Nevertheless, a direct implementation is computationally intensive, as it involves a matrix square root, and a poor implementation could disrupt the training process. In order to address this concern, we draw inspiration from [35] and adopt an iterative algorithm for matrix square root computation through matrix multiplication. This approach ensures efficient model training, as the iterations only involve matrix multiplication.

### _DWTNet Training_

In our model, the unmasked pixels are used to recover the corresponding pixels of the input image \(x_{\text{m}}\), while the latent vectors \(\mathcal{V}\in\mathbb{R}^{k\times c}\), which encode the feature vector \(\hat{f}\) extracted from an image, are used to recover both the masked and unmasked pixels; here \(k\) and \(c\) represent the total number of latent vectors and the dimension of the feature vectors, respectively. This allows the decoder to learn to reconstruct the image \(x_{r}\) from the input \(x_{\text{m}}\); thus we can write the loss as: \[\ell_{DWTNet}=\underbrace{\ell_{R}(x_{\text{m}},x_{r})}_{1}+\beta\underbrace{\|\hat{\nabla}[\mathcal{V}]\|_{2}^{2}}_{2}, \tag{12}\] in which \(\hat{\nabla}[.]\) is a stop-gradient operation that stops gradients from flowing into its argument, and \(\beta=0.25\). The second term of Eq. (12) is the commitment loss [36] that transfers gradient information from the decoder to the encoder. The first term indicates the reconstruction loss \(\ell_{R}\)(..), which determines the dissimilarity between the corrupted and reconstructed images. It is composed of five components: the \(\ell_{1}\) distance between the two images' pixel values (\(\ell_{pixel}\)) and gradients (\(\ell_{G}\)), the adversarial loss (\(\ell_{A}\)), the perceptual loss (\(\ell_{P}\)), and the style loss (\(\ell_{S}\)) between the two images. We were inspired by [11] for the designs of the last three losses. Following is a detailed description of the losses listed above. \[\ell_{pixel}=\mathcal{M}(|x_{\text{m}}\ominus x_{r}|), \tag{13}\] \[\ell_{G}=\mathcal{M}(|\nabla[x_{\text{m}}]\ominus\nabla[x_{r}]|), \tag{14}\] in which \(\mathcal{M}(.)\) denotes a mean-value operation and \(\nabla[.]\) is the function that calculates the image gradient. The adversarial loss \(\ell_{A}\) is calculated using a discriminator network \(\mathcal{D}_{A}(.)\): \[\ell_{A}=-\mathcal{M}(\text{log}[1\ominus\mathcal{D}_{A}(x_{r})])-\mathcal{M}(\text{log}[\mathcal{D}_{A}(x_{\text{m}})]), \tag{15}\] \(\log[.]\) represents the element-wise logarithm procedure. The network architecture of the discriminator is identical to that described in [11]. On the basis of the activation maps from VGG-16, the perceptual loss \(\ell_{P}\) and style loss \(\ell_{S}\) are computed. \[\ell_{P}=\sum_{l}^{L_{P}}\mathcal{M}(|\rho_{l}(x_{\text{m}})\ominus\rho_{l}(x_{r})|), \tag{16}\] \[\ell_{S}=\sum_{l}^{L_{S}}\mathcal{M}(|\mathcal{G}(\rho_{l}(x_{\text{m}}))\ominus\mathcal{G}(\rho_{l}(x_{r}))|), \tag{17}\] in which \(\rho_{l}(.)\) denotes different layers in VGG-16, and \(\mathcal{G}(.)\) represents the function that returns the Gram matrix of its argument.
For \(\ell_{P}\) and \(\ell_{S}\), \(L_{P}\) is set to {lrelu1-1, lrelu2-1,..., lrelu5-1}, and \(L_{S}\) is set to {lrelu2-2, lrelu3-4, lrelu4-4, lrelu5-2}, respectively. \(\ell_{R}\) is therefore the weighted sum of the losses listed above. ## 4 Experiments Three datasets--CelebA-HQ [37], Places2 [38], and ImageNet [39]--are used to evaluate our model, and for each dataset the standard training, testing, and validation splits are followed. For ImageNet, only 1K images from the test split are randomly selected for evaluation, the same strategy as used in ICT [3]. Additionally, following other image inpainting methods, we use five evaluation metrics for evaluation: U-IDS [40], P-IDS [21], FID [41], SSIM [42], and PSNR. These metrics are designed to reflect how humans evaluate the quality of images. ### _Implementation Details_ Our model is implemented in PyTorch and we use eight RTX 3080 GPUs to train the model for approximately 170 hours with a batch size of 20 for 1M iterations. For the implementation, we set the loss weights to \(\alpha_{g}\) = 5, \(\alpha_{a}=\alpha_{p}\) = 0.1, \(\alpha_{s}\) = 250. In all networks, spectral normalization is applied. Moreover, orthogonal initialization is used to initialize the networks, and the networks are trained at a fixed learning rate of 1e-4 using Adam with \(\beta_{1}\) = 0 and \(\beta_{2}\) = 0.9. We also use Adam to optimize the transformer with a fixed learning rate of 3e-4. To ensure a fair comparison with earlier inpainting approaches, all images for training and testing are \(256\times 256\) in size, with regular or irregular missing regions in random locations. The details of our network architecture are shown in Table I. ### _Performance Evaluation_ We compare the performance of our model with state-of-the-art inpainting models: DFv2 [8], EC [11], MED [9], DSI [12], TFill [14], CoMod-GAN [21], and LaMa [15], using the provided pre-trained weights. **Qualitative Comparisons:** Qualitative comparison results for center-hole inpainting on CelebA, Places2, and ImageNet are shown in Figs. 2, 3, and 4. In comparison to other methods, MED tends to produce implausible structures, and DSI generates textures with unnatural artifacts. Moreover, in the DSI model, the contribution of structural information to texture synthesis is limited. In contrast, our method proves to be more effective than TFill and LaMa in understanding the global context and preserving realistic textures, especially when dealing with challenging images from datasets like Places2 and ImageNet. We believe that the strength of our model can be attributed to two main factors: 1) The DWT modules that play a crucial role in capturing long-range dependencies and understanding global context in the input images. 2) The Res-FFC modules that facilitate the optimal reuse of high-frequency features, enhancing the generation of repeating textures. These components help preserve essential information from the input images, leading to the generation of more realistic and meaningful completions. 
\begin{table} \begin{tabular}{l|l} \hline \hline Encoder & Decoder \\ \hline RGB image \(x\in\mathbb{R}^{256\times 256\times 3}\) & \(z\in\mathbb{R}^{8\times 8\times 4\cdot ch}\) \\ \hline Conv \(3\times 3\) & \begin{tabular}{l} Up Resnet \(8\times 8\times 4\cdot ch\) \\ Up Resnet \(16\times 16\times 4\cdot ch\) \\ \end{tabular} \\ Down Resnet \(128\times 128\times 1\cdot ch\) & \begin{tabular}{l} Up Resnet \(8\times 8\times 4\cdot ch\) \\ Up Resnet \(16\times 16\times 4\cdot ch\) \\ \end{tabular} \\ Down Resnet \(32\times 32\times 4\cdot ch\) & \begin{tabular}{l} DWT block \(4\times 4\) - ResFFC \(\times 2\) \\ Up Resnet \(32\times 32\times 4\cdot ch\) + Res-FFC \\ \end{tabular} \\ Down Resnet \(16\times 16\times 4\cdot ch\) & \begin{tabular}{l} Up Resnet \(32\times 32\times 4\cdot ch\) + Res-FFC \\ Up Resnet \(64\times 64\times 2\cdot ch\) \\ Up Resnet \(128\times 128\times 1\cdot ch\) + Res-FFC \\ \end{tabular} \\ \end{tabular} \end{table} TABLE I: Details of our network architectures, in which \(ch\) stands for base channel width. We apply LRelu (0.1), Conv \(3\times 3\), and Tanh at all dimensions for the output layer. Fig. 3: Qualitative comparison on Places2. Our model is successful at reducing blur and artifacts produced by inconsistencies in structure and texture within and around missing regions. Fig. 2: Qualitative comparison on the CelebA dataset. The face images produced by our model are more realistic and have more characteristic facial features as compared to other baselines. **Quantitative Comparisons:** We compare our DWTNet with several baseline methods. For a fair comparison, we test the models on the same masks. On all three datasets, as reported in Table II, DWTNet achieves higher or equivalent results. In this experiment, all test samples with \(128\times 128\) center masks are used as the basis for comparison. By using the DWT and Res-FFC networks, our DWTNet learns to generate texture details in the output image without any distortions or blurriness. In particular, without the DWT module, our model does not achieve satisfactory performance on images with complex textures. For example, without the DWT module on Places2, our model has a FID of 8.87. On the other hand, without the Res-FFC, DWTNet struggles with texture reconstruction (FID of 17.63 on ImageNet). As demonstrated in Figs. 3 and 4, MED and DSI are not good at reconstructing images with large missing regions; however, our DWTNet produces more realistic images with fewer artifacts as compared to other approaches. Fig. 5 shows some visual examples of how our model produces more realistic textures without adding extra artifacts. Fig. 5 further demonstrates that, despite having high PSNR values, TFill and LaMa are unable to produce visually plausible images when there is a large missing region, which points to the limited capacity of their generative networks. DWTNet excels in capturing and generating complex periodic structures. Moreover, it achieves these capabilities with fewer trainable parameters and lower inference cost than competing methods. ### _Norm Regularization_ To evaluate the impact of our regularization method on performance, the following experiment is conducted. The regularization is performed on a small network with four convolution layers (similar to the encoder). In this experiment, the odd layers have \(3\times 3\) convolutions, and the even layers have \(1\times 1\) convolutions. Fig. 
6(A) shows the change in the gradient-norm ratio for various input (\(c\)) and output (\(d\)) channel sizes at the \(20^{th}\) iteration on CelebA, with and without our norm-regularization method. The results averaged over \(5\) runs show that our method improves norm-preserving capability, as it pushes the ratio of gradient norms toward \(1\), which results in better performance and faster convergence. In addition, the FID measure in Fig. 7 demonstrates that, in comparison to the baselines, our model with norm-regularization converges faster. For example, our model improves the FID by 76 and 93 on CelebA and Places2 after 40k iterations. In Fig. 6(B), the ratios of the network with/without norm-regularization are evaluated during the training. In this experiment, \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{CelebA} & \multicolumn{4}{c|}{Places2} & \multicolumn{4}{c|}{ImageNet} & \multicolumn{2}{c}{Complexity} \\ \cline{2-15} & P-IDS & U-IDS & FID & SSIM & P-IDS & U-IDS & FID & SSIM & P-IDS & U-IDS & FID & SSIM & Params (M) & Inf. (s) \\ \hline DFv2 & 5.25 & 13.46 & 9.52 & 0.846 & 5.18 & 13.18 & 17.83 & 0.752 & 2.24 & 7.95 & 23.42 & 0.685 & 4 & 0.3 \\ EC & 6.08 & 15.79 & 8.16 & 0.859 & 5.62 & 14.45 & 17.27 & 0.774 & 2.68 & 7.81 & 23.76 & 0.694 & 22 & 1 \\ MED & 8.76 & 22.17 & 7.45 & 0.873 & 8.33 & 20.14 & 13.65 & 0.792 & 4.12 & 10.63 & 18.84 & 0.726 & 18 & 0.2 \\ DSI & 10.84 & 23.55 & 7.12 & 0.911 & 10.24 & 21.53 & 12.93 & 0.829 & 4.73 & 11.57 & 17.25 & 0.775 & 40 & 7 \\ TFill & 10.91 & 24.29 & 5.31 & 0.917 & 10.31 & 21.94 & 9.02 & 0.840 & 5.89 & 16.76 & 11.81 & 0.791 & 69 & 4 \\ CoMod-GAN & 11.26 & 24.83 & 5.03 & 0.921 & 10.79 & 22.66 & 8.70 & 0.847 & 6.75 & 17.42 & 11.26 & 0.805 & 109 & 5 \\ LaMa & 11.57 & 25.12 & 4.87 & 0.925 & 11.64 & 23.51 & 8.56 & 0.851 & 6.92 & 17.74 & 10.61 & 0.810 & 51 & 3 \\ \hline DWTNet w/o Res-FFC & 10.43 & 22.98 & 7.24 & 0.894 & 9.78 & 20.42 & 13.07 & 0.817 & 4.38 & 11.25 & 17.63 & 0.761 & - & - \\ DWTNet w/o DWT & 11.02 & 24.53 & 5.12 & 0.919 & 10.86 & 22.73 & 8.96 & 0.848 & 6.64 & 17.15 & 11.49 & 0.792 & - & - \\ DWTNet (ours) & **12.08** & **26.34** & **4.51** & **0.939** & **12.14** & **23.98** & **8.04** & **0.868** & **7.35** & **18.26** & **10.12** & **0.824** & 42 & 2 \\ \hline \hline \end{tabular} \end{table} TABLE II: Quantitative comparisons on CelebA, Places2, and ImageNet datasets on center masked images. Results are based on P-IDS (\(\uparrow\)), U-IDS (\(\uparrow\)), FID (\(\downarrow\)), and SSIM (\(\uparrow\)). DWTNet w/o Res-FFC indicates our model that uses two convolution layers to aggregate \(f\) and f\({}_{\text{aug}}\); DWTNet w/o DWT indicates our model trained with the standard ViT. Among the baselines, LaMa and CoModGAN are the closest competitors to ours. However, both of them are considerably more complex than ours, while also exhibiting inferior performance. This comparison highlights our method’s superior efficiency in using trainable parameters and achieving faster inference (Inf.) speeds per image. Fig. 4: Qualitative comparison on the ImageNet dataset. Our model outperforms existing methods in terms of retaining both structures and textures. Fig. 5: Visual results comparison of our model with other models on CelebA with PSNR values. A higher PSNR implies less distortion. the encoder consists of four layers: two layers contain pooling operations that alter the dimensions, and the other two are without pooling. 
The network is trained for 100 iterations. We emphasize the significance of initialization in constructing a norm-preserving network. Although the model without norm-regularization initially maintains norm preservation, the gradient-norm ratio gradually increases and deviates from the desired value of 1 as the parameters are updated over time. This observation underscores the importance of proper initialization and norm regularization for stable and effective training. ### _Human Perceptual and Ablation Study_ We conduct a user study to obtain a more precise evaluation of subjective image quality compared to DSI, TFill, and LaMa. We randomly select 30 masked images from the CelebA and Places2 test sets. For each image, we generate two reconstructed outputs: one using DWTNet and the other using one of the baselines. Participants in the study are presented with both reconstructed images simultaneously and asked to choose the one that appears most photorealistic and visually natural. We gather results from 27 participants and calculate the preference ratios for each approach based on the data provided in Table III(A). For CelebA and Places2, our method has a 61.8% and 68.4% likelihood of being selected, respectively. We also perform ablation studies on our model to analyze which elements of our proposed architecture most significantly influence the overall performance. In order to conduct this investigation, we train the models using 100K training images from Places2. For testing, we randomly select 10K validation images. In Table III(B), the quantitative comparison is reported. **Transformer-Convolution Framework.** We investigate whether a transformer with multi-head self-attention is effective for filling large missing regions. The inpainted images lose some quality when the transformer blocks are replaced with convolution blocks (Table III(B), type "B"), as seen in the performance reduction on all the metrics. Additionally, we demonstrate a few examples in Fig. 8. In comparison with the convolution network, our model uses distant context to reconstruct visual structure, demonstrating the effectiveness of long-range interactions. Fig. 6: Proposed norm-regularization method evaluation. (A) The gradient norm ratio for different input and output channels with and without using the proposed regularization method. (B) The layers without pooling are indicated by solid lines, while the layers with pooling are indicated by dashed lines. Fig. 7: The FID score for CelebA and Places2 generated for our model with and without norm-regularization, LaMa, and MED as a function of iterations. **DWT Block.** We developed a new transformer block in our framework because the standard design is prone to unstable optimization. As shown in Table III(B), our full model (type "A") outperforms type "C" with the original transformer [13], improving performance by 0.83 on FID. As seen in Fig. 8, our model generates visually more pleasing images, allowing for high-quality image completion. **Autoregressive Decoder.** We observed that autoregressive modeling is a key component of learning the data distribution and plays a significant role in producing high-quality images. In Table III(B), type "D" demonstrates the effect of the autoregressive decoder as a data synthesizer on the performance of our model. **Res-FFC.** We evaluate the effect of this module in two different architectures. 
(1) In our architecture, we combine the encoder and decoder features without using the Res-FFC blocks (two convolution layers are used to aggregate the features). (2) We connect the encoder's skip features to the generator features through the Res-FFC blocks. The quantitative comparison in Table II demonstrates the significance of the Res-FFC blocks (they improve the model performance on CelebA by 2.36 FID). Moreover, to evaluate the impact of the local and global branches of Res-FFC, we perform a comparison in Table III, types "E" and "F". In these studies we analyze our model's behavior when each branch is disabled. Our ablation study confirms that both the local and global branches play crucial roles in improving our model performance. ## 5 Conclusion In this paper, we introduced DWTNet for image completion. To enhance the quality of the reconstructed images, we proposed DWT with an autoregressive VAE to calculate the weights for image tokens and encode global dependencies. Additionally, we designed Res-FFC by integrating coarse features from the generator with the encoder's skip features, which helps the model to generalize on unseen images. Indeed, the Fourier convolutions of FFC provide a high receptive field, which results in better perceptual quality. Furthermore, to stabilize training, a norm-preserving method is proposed to improve performance. Without using singular value decomposition, we introduced a simple regularization method to determine the nonzero singular values of the convolution operator. We experimentally show our model has significant potential for content generation, because of its ability to approximate soft relationships between image contents. DWTNet surpassed the performance of other baselines on several benchmarks. Our architecture has been shown to be superior via extensive quantitative and qualitative comparisons. Although DWTNet achieves better results than current state-of-the-art approaches on images that have been damaged with regular and irregular masks, it still struggles when analyzing objects with different shapes or complex textures. For instance, DWTNet outperforms competing approaches on the ImageNet dataset; however, there is still room for further improvement in the quality of the generated images. Consequently, we will work on a more advanced Transformer to fully comprehend the semantic content of an image. Fig. 8: Ablation study examples. Type A represents our full model, whereas B and C are simplified versions that use convolutions in place of transformers and the original transformer, respectively. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Type & P-IDS\(\uparrow\) & U-IDS\(\uparrow\) & FID\(\downarrow\) & SSIM\(\uparrow\) \\ \hline A - Full architecture & **12.14** & **23.98** & **8.04** & **0.868** \\ \hline B - CNNs. & 10.22 & 22.11 & 9.59 & 0.833 \\ C - Transformer & 10.86 & 22.73 & 8.96 & 0.848 \\ D - Non-autoregressive & 10.13 & 21.94 & 9.53 & 0.831 \\ E - Res-FFC w global branch & 11.36 & 22.91 & 8.95 & 0.852 \\ F - Res-FFC w local branch & 11.48 & 23.04 & 8.83 & 0.854 \\ \hline \multicolumn{5}{c}{Places2} \\ \multicolumn{5}{c}{(B)} \\ \hline \hline \end{tabular} \end{table} TABLE III: Ablation study. Table (A) shows the results of a study on human perception. Table (B) shows different configurations of our model on Places2. Type “A”: indicates our full architecture. Type “B”: replacing the transformer with CNNs. Type “C”: replacing our DWT blocks with the standard transformer. 
Type “D”: indicates the use of our model without an autoregressive decoder. Types “E” and “F”: indicate the performance of our model with Res-FFC while the local and global branches are disabled, respectively.
2303.07578
VANI: Very-lightweight Accent-controllable TTS for Native and Non-native speakers with Identity Preservation
We introduce VANI, a very lightweight multi-lingual accent controllable speech synthesis system. Our model builds upon disentanglement strategies proposed in RADMMM and supports explicit control of accent, language, speaker and fine-grained $F_0$ and energy features for speech synthesis. We utilize the Indic languages dataset, released for LIMMITS 2023 as part of ICASSP Signal Processing Grand Challenge, to synthesize speech in 3 different languages. Our model supports transferring the language of a speaker while retaining their voice and the native accent of the target language. We utilize the large-parameter RADMMM model for Track $1$ and lightweight VANI model for Track $2$ and $3$ of the competition.
Rohan Badlani, Akshit Arora, Subhankar Ghosh, Rafael Valle, Kevin J. Shih, João Felipe Santos, Boris Ginsburg, Bryan Catanzaro
2023-03-14T01:55:41Z
http://arxiv.org/abs/2303.07578v1
VANI: Very-Lightweight Accent-Controllable TTS for Native and Non-Native Speakers with Identity Preservation ###### Abstract We introduce VANI, a very lightweight multi-lingual accent controllable speech synthesis system. Our model builds upon disentanglement strategies proposed in RADMMM[1] and supports explicit control of accent, language, speaker and fine-grained \(F_{0}\) and energy features for speech synthesis. We utilize the Indic languages dataset, released for LIMMITS 2023 as part of ICASSP Signal Processing Grand Challenge, to synthesize speech in 3 different languages. Our model supports transferring the language of a speaker while retaining their voice and the native accent of the target language. We utilize the large-parameter RADMMM model for Track \(1\) and lightweight VANI model for Track \(2\) and \(3\) of the competition. Rohan Badlani & Akshit Arora & Subhankar Ghosh & Rafael Valle & Kevin J. Shih & João Felipe Santos & Boris Ginsburg & Bryan Catanzaro NVIDIA ## 1 Abstract We introduce VANI, a very lightweight multi-lingual accent controllable speech synthesis system. Our model builds upon disentanglement strategies proposed in RADMMM[1] and supports explicit control of accent, language, speaker and fine-grained \(F_{0}\) and energy features for speech synthesis. We utilize the Indic languages dataset, released for LIMMITS 2023 as part of ICASSP Signal Processing Grand Challenge, to synthesize speech in 3 different languages. Our model supports transferring the language of a speaker while retaining their voice and the native accent of the target language. We utilize the large-parameter RADMMM model for Track \(1\) and lightweight VANI model for Track \(2\) and \(3\) of the competition. ## 2 Introduction There has been incredible progress in the quality of text-to-speech (TTS) models. However, most TTS models do not disentangle attributes of interest. Our goal is to create a multi-lingual TTS system that can synthesize speech in any target language (with the target language's native accent) for any speaker seen by the model. The main challenge is disentanglement of attributes like speaker, accent and language such that the model can synthesize speech for any desired combination of these attributes without any bi-lingual data. ## 3 Method ### Dataset and Preprocessing We utilize the Hindi, Telugu, and Marathi dataset released as part of the LIMMITS challenge. We remove empty audio files and clips with duplicate transcripts. We run the files through an Automatic Speech Recognition (ASR) model and generate transcripts. We select the top \(8000\) datapoints per speaker with the least character error rate (CER) between ground truth and generated transcripts. This results in the dataset used for Track \(2\). For Tracks \(1\) and \(3\), we identify audio clips with maximal overlap in characters across speakers within a language1. We trim the leading and trailing silences and normalize audio volume. Footnote 1: Dataset and Model Parameter Details: [https://bit.ly/classp_vani](https://bit.ly/classp_vani) ### Spectrogram Synthesis Model Our goal is to develop a model for multilingual synthesis in the languages of interest with the ability to perform cross-lingual synthesis for a speaker of interest. Our dataset comprises each speaker speaking only one language, and hence there are correlations between text, language, accent and speaker within the dataset. Recent work on RADMMM [1] tackles this problem by proposing several disentanglement approaches. We utilize RADMMM as the base model for track \(1\). 
For tracks \(2\) and \(3\), we use the proposed lightweight VANI model. As in RADMMM, we use deterministic attribute predictors to predict fine-grained features given text, accent and speaker. We bring the text pre-processing, shared alphabet set, and the accent-conditioned alignment learning mechanism proposed in RADMMM into our setup. This supports code-switching by default. We consider language to be _implicit in the phoneme sequence_ whereas the information captured by accent should explain the fine-grained differences between _how phonemes are pronounced in different languages_. ### Track1: Large-parameter, Small-data setup As described in Sec 3.1, our dataset is limited to \(5\) hours per speaker. Since our dataset is very limited, we apply the formant scaling augmentation suggested in RADMMM[1] with the goal of disentangling speaker \(S\) and accent \(A\) attributes. We apply constant formant scales of \(0.875\) and \(1.1\) to each speech sample to obtain 2 augmented samples and treat those samples as belonging to 2 new speakers. This helps reduce correlation between speaker, text and accent by having those variables shared across multiple speakers, and provides more training data. Our model synthesizes mels (\(X\in\mathbb{R}^{C_{mel}\times F}\)) using encoded text (\(\Phi\in\mathbb{R}^{C_{txt}\times T}\)), accent (\(A\in\mathbb{R}^{D_{accent}}\)), speaker (\(S\in\mathbb{R}^{D_{speaker}}\)), fundamental frequency (\(F_{0}\in\mathbb{R}^{1\times F}\)) and energy (\(\xi\in\mathbb{R}^{1\times F}\)) as conditioning variables, where \(F\) is the number of mel frames, \(T\) is the text length, and energy is the per-frame average mel energy. Although we believe attribute predictors can be generative models, we use deterministic predictors where \(F_{0}^{h}\), \(\mathcal{E}^{h}\) and \(\Lambda^{h}\) are predicted pitch, energy, and durations conditioned on text \(\Phi\), accent \(A\) and speaker \(S\): \[P_{\mathit{vani}}(X)=P_{\mathit{mel}}(X|\Phi,\Lambda^{h},A,S,F_{0}^{h},\mathcal{E}^{h}) \tag{1}\] ### Track2: Small-parameter, Large-data setup Since our goal is to have a very lightweight model (\(<5\) million parameters), we replace the RADMMM mel-decoder with an autoregressive architecture. Our architecture is very similar to Flowtron[2] with 2 steps of flow (one forward and one backward). Each flow step uses 3 LSTM layers and is conditioned on text, accent, speaker, \(F_{0}\) and \(\xi\). ### Track3: Small-parameter, Small-data setup We utilize the model from Track \(2\) and the data and augmentation strategy from Track \(1\) as the model and data for Track \(3\). ### Vocoder We use HiFiGAN2 for Track \(1\) and Waveglow3 for Tracks 2 and 3 to convert mel-spectrograms to waveforms. Footnote 2: NeMo implementation of HiFi-GAN; github.com/NVIDIA/NeMo Footnote 3: Waveglow checkpoints: github.com/bloodraven66/ICASSP_LIMIITS23 ## 4 Results and Analysis In this section, we evaluate the performance of the models in terms of content quality and speaker identity retention. ### Character Error Rate (CER): We calculate CER between the transcripts generated from synthesized speech and ground truth (GT) transcripts. Models with lower CER are better in terms of content quality. ### Speaker Embedding Cosine Similarity: We use Titanet[3] to get speaker embeddings and compare the cosine similarity of a synthesized sample against GT samples of the same speaker. Higher scores show better identity retention. 
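As a concrete illustration of these two metrics, a minimal sketch is given below. It assumes transcripts are plain strings and that speaker embeddings have already been extracted (e.g., by a pretrained speaker-verification model); it is not the challenge's official scoring code.

```python
import numpy as np

def character_error_rate(reference: str, hypothesis: str) -> float:
    # Levenshtein edit distance between character sequences, normalized by reference length
    r, h = list(reference), list(hypothesis)
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,         # deletion
                          d[i, j - 1] + 1,         # insertion
                          d[i - 1, j - 1] + cost)  # substitution
    return d[len(r), len(h)] / max(len(r), 1)

def speaker_cosine_similarity(emb_synth: np.ndarray, emb_gt: np.ndarray) -> float:
    # cosine similarity between two precomputed speaker-embedding vectors
    return float(np.dot(emb_synth, emb_gt) /
                 (np.linalg.norm(emb_synth) * np.linalg.norm(emb_gt) + 1e-8))
```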
### Evaluation Task Definition Table 1 compares the Track1 model (RADMMM) against Track2 (VANI with nonparallel dataset - VANI-NP) and Track3 (VANI with limited parallel dataset - VANI-P) on mono-lingual resynthesis of speakers on 10 prompts in their own language (resynthesis task). Table 2 compares the models in the three tracks where speech was synthesized in a speaker's voice on 50 prompts outside of their own language (**transfer** task). ### Analysis We observe that even with the limited dataset, the large-parameter RADMMM model outperforms the small-parameter VANI model. We notice that Track2 with a larger dataset retains identity and content quality better than Track3 with limited data. However, all tracks do reasonably well on maintaining identity. We observe that on transfer, we're able to achieve a decent CER comparable to the resynthesis case, indicating our model preserves content when transferring the language of the speaker. The identity retention in the transfer task is worse than in resynthesis, as expected, but doesn't degrade much in VANI as compared to RADMMM, demonstrating the importance of the disentanglement strategies. We observe a similar trend across tracks with human evaluation metrics (Table 3). ## 5 Conclusion We utilize strategies proposed in RADMMM [1] to disentangle speaker, accent and text for high-quality multilingual speech synthesis. We also present VANI, a lightweight multilingual autoregressive TTS model. We utilize several data pre-processing and augmentation strategies to preserve speaker identity in cross-lingual speech synthesis. Our model(s) can synthesize speech with the proper native accent of the target language for any seen speaker without relying on bi-lingual data.
2301.11563
Exponential tail bounds and Large Deviation Principle for Heavy-Tailed U-Statistics
We study deviation of U-statistics when samples have a heavy-tailed distribution, so that the kernel of the U-statistic does not have bounded exponential moments at any positive point. We obtain an exponential upper bound for the tail of the U-statistics which clearly denotes two regions of tail decay: the first is a Gaussian decay and the second behaves like the tail of the kernel. For several common U-statistics, we also show the upper bound has the right rate of decay as well as sharp constants by obtaining rough logarithmic limits, which in turn can be used to develop LDP for U-statistics. In contrast to usual LDP results in the literature, the processes we consider in this work have LDP speed slower than their sample size $n$.
Milad Bakhshizadeh
2023-01-27T06:55:34Z
http://arxiv.org/abs/2301.11563v1
# Exponential tail bounds and Large Deviation Principle for Heavy-Tailed U-Statistics+ ###### Abstract We study deviation of U-statistics when samples have a heavy-tailed distribution, so that the kernel of the U-statistic does not have bounded exponential moments at any positive point. We obtain an exponential upper bound for the tail of the U-statistics which clearly denotes two regions of tail decay: the first is a Gaussian decay and the second behaves like the tail of the kernel. For several common U-statistics, we also show the upper bound has the right rate of decay as well as sharp constants by obtaining rough logarithmic limits, which in turn can be used to develop LDP for U-statistics. In contrast to usual LDP results in the literature, the processes we consider in this work have LDP speed slower than their sample size \(n\). ## 1 Introduction Suppose \(X_{1},X_{2},\ldots,X_{n}\stackrel{{ iid}}{{\sim}}\mu\) where \(\mu\) is a probability measure supported on \(\mathbb{R}\). We consider the following U-statistic of order \(m\) with a kernel function \(h(\cdot):\mathbb{R}^{m}\rightarrow\mathbb{R}\) which is symmetric in its arguments: \[U_{n}\triangleq\frac{1}{\binom{n}{m}}\sum_{1\leq i_{1}<\ldots<i_{m}\leq n}h(X_{i_{1}},\ldots,X_{i_{m}}).\] We study the decay of \(\mathbb{P}\left(\left|U_{n}-\mathbb{E}\left[U_{n}\right]\right|>t\right)\) as a function of \(t,n\), and \(m\), in a setup which is rarely addressed in the literature and is characterized by a couple of key assumptions: a heavy-tailed distribution for \(h(X_{1},...,X_{m})\), and large values for \(t\). We explain the role of each assumption in more detail below. When \(h(X_{1},...,X_{m})\triangleq h\) has a heavy-tailed distribution, its Moment Generating Function (MGF) is not bounded at any positive point. This breaks many concentration results and violates Cramer's condition, which is required for common results that determine large deviation behavior. While deviation of U-statistics has been studied extensively in the light-tailed regime [1, 9, 8, 15, 16, 23, 22], the same for heavy-tailed U-statistics is relatively under-explored. In this paper, we aim to focus on the heavy-tailed regime. Moreover, when kernel \(h\) is assumed to have a heavy tail, for fixed values of \(n\) and \(m\), \(\mathbb{P}\left(\left|U_{n}-\mathbb{E}\left[U_{n}\right]\right|>t\right)\) has two different behaviors: 1) for small values of \(t\) it decays like a Gaussian tail, and 2) for \(t\) larger than the zone of normal convergence, i.e. the large deviation zone, it decays more slowly than a normal distribution. In contrast to the first region, which has been studied extensively in the literature (see [16, 15] and the references therein), there is little documented information about the tail behavior in the second region. Several setups have been developed to study the behavior of the tail. We mention some of them below to discuss the setups that the results of the current paper belong to. 1. Finite sample concentration inequalities that give upper bounds for \(\mathbb{P}\left(\left|U_{n}-\mathbb{E}\left[U_{n}\right]\right|>t\right)\) for all values of n. 2. Asymptotic distribution studies that find suitable scaling \(a_{n}\) and limiting distribution \(D\) for which \(a_{n}\big{|}U_{n}-\mathbb{E}\left[U_{n}\right]\big{|}\overset{d}{\to}D\). \(a_{n}=c\sqrt{n}\), and \(D=\mathcal{N}(0,1)\) is the well-known CLT that holds for non-degenerate U-statistics. 3. 
Berry-Esseen type inequalities that seek a uniform upper bound for all \(t\in\mathcal{C}\), where \(\mathcal{C}\subseteq\mathbb{R}\). 4. Large deviation studies that look for a convergence speed \(b_{n}\) and rate function \(f(t)\) for which we have \(\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(\left|U_{n}-\mathbb{E}\left[U_{n}\right]\right|>t\right)}{b_{n}}=f(t),\;f(t)>0\) for large \(t\). In this work, we try to shed some light on the undocumented facts about the deviation of U-statistics in setups 1 and 4 listed above, i.e. the concentration inequality and the large deviation setups. In particular, we develop a general concentration inequality that holds for all U-statistics with finite second moments. Moreover, we characterize a class of kernels for which that concentration inequality is sharp enough to yield the large deviation limit. The byproduct of such sharpness analysis is obtaining the exact convergence speed \(b_{n}\) and a closed form for the rate function in terms of the tail of the kernel \(h\). For heavy-tailed kernels, which are considered in this work, we usually obtain \(b_{n}\ll n\), e.g. \(b_{n}=n^{\alpha},\;\alpha<1\). This distinguishes our results from several previous works that tried to determine tail behavior in the asymptotic Gaussian regime, i.e. \(b_{n}=n\). ### Related works Large deviations of U-statistics have been studied in several previous works under a variety of assumptions. Hoeffding [12] formally introduced U-statistics and showed that, with the right scaling, they converge to a Normal distribution. Follow-up studies offered bounds for the deviation of U-statistics from their Gaussian limits (see [16, 15] and the references therein). Gine et al. [11] offers upper bounds for moments of U-statistics. There is a relatively straightforward connection between moment inequalities and exponential bounds for the tail (see [21]). Utilizing such a connection, [11] obtains exponential tail bounds. However, these bounds do not show Gaussian decay when the deviation amount \(t\) is within the Gaussian zone. Therefore, the authors also offer improvements for their inequalities in the case of completely degenerate kernels of order \(m=2\). Recently, Chakraborty and Kuchibhotla [5] considered degenerate kernels of order 2 and samples from heavy-tailed subWeibull distributions and obtained exponential tail bounds for such U-statistics. However, the specific form they used for the U-statistics seems restrictive. They define \[U_{n}=\sum_{i\neq j}\phi(Y_{i})w_{i,j}(X_{i},X_{j})\psi(Y_{j}),\] and impose a uniform boundedness assumption on \(w_{i,j}\). Hence, the kernel can be unbounded only through the product function \(\phi(Y_{i})\psi(Y_{j})\). It is not clear to us if the results of [5] can recover exponential bounds for general non-degenerate kernels like the ones named in Lemma 2.19 of the current work. One property of U-statistics of order \(m\geq 2\) that makes them more challenging than the case \(m=1\), i.e. iid sums, is that they contain dependent terms in the sum. There are a couple of ways to handle such dependencies: decoupling and martingale inequalities. Each direction points to a path that is relatively different from the other. Both directions have been explored to develop tail bounds for U-statistics. de la Pena [7] provides a decoupling tool that helps one to get rid of dependencies of summands in the U-statistics. This tool has been utilized by several later works to obtain exponential concentration bounds, e.g. [17, 4, 5]. 
While the decoupling technique has proved very powerful in handling dependencies, there is an extra factor of 8 in the upper bound it offers, which usually makes the exponential bound it provides off by a constant factor in its exponent. Another approach to obtain exponential inequalities for U-statistics is to leverage inequalities for (sub/super) martingales such as those in [10]. Houdre and Reynaud-Bouret [13] and some of their references pursue this direction. Using de la Pena [7] decoupling and the Talagrand [20] inequality is common in this approach. Nevertheless, in this approach, the statements quickly become algebraically complicated and several constants given by the sup or expectation of partial sums get involved. As a result, many bounds obtained by martingale inequalities are restricted to order 2 or degenerate kernels. Moreover, constants are usually not sharp, for a reason similar to the one discussed for the decoupling technique above. In addition to tail bounds, several works have been devoted to showing U-statistics satisfy a Large Deviation Principle (LDP). It is common for works in this direction to assume the kernel has finite exponential moments, hence they exclude the distributions we are focusing on in the current manuscript. Dasgupta [6] tried to show that the large deviation behavior of U-statistics is determined by their conditional expectation given one summand. At first glance this sounds a reasonable claim since the asymptotic normality of U-statistics is tightly related to such conditional expectations [12]. However, this claim was later disproved [3, 22]. Nikitin and Ponikarov [18] study large deviations of U-statistics with bounded non-degenerate kernels. It also claims that no large deviation results were known at the time the work was published. Arcones [1] develops a decomposition technique for the case of a two-variable kernel with finite MGF that enables one to prove an LDP. Nevertheless, extension of this technique to higher order kernels is not straightforward. Eichelsbacher and Lowe [9] consider both U-statistics and V-statistics, which include terms with repeated samples in the sum, given by a kernel function with bounded MGF. In this case, they show that satisfying an LDP is equivalent for U-statistics and the corresponding V-statistics, with the same rate function. Lemma 3.1 from [9], which allows one to bound the MGF of a U-statistic, is one of the most important tools we use in the current work. Eichelsbacher [8] provides sufficient conditions under which a U-statistic would satisfy an LDP under a weak Cramer condition. It utilizes a representation of a bi-variable kernel as infinite sums of separable functions in order to reduce the problem of the large deviation for U-statistics to the one for sums of iid variables. Then, it applies the contraction theorem to prove an LDP for the U-statistics. The current work focuses on heavy-tailed distributions, which are mostly excluded from the large deviation studies above. The exponential tail bound given in Section 2.1 includes all kernels with finite variances regardless of their order, boundedness, or degeneracy. The upper bound we offer is simple enough to reveal both Gaussian and heavy-tail decay regions immediately. All parameters in the statement of the upper bound have known limits when \(n\to\infty\). Moreover, we offer ready-to-use tools to handle them for fixed sample size \(n\) in Section 2.1.1. 
In addition, to the best of our knowledge, the rough logarithmic limits, with speeds slower than \(n\), obtained in Section 2.3 have not been documented in earlier works. ### Some Applications U-statistics appear in several statistical analyses including point estimation, such as population variance [12], hypothesis testing [19], and statistical mechanics [14]. Since a U-statistic is a self-averaging function, it is quite natural to ask how fast it concentrates around its mean. In particular, for the asymptotic analysis of hypothesis testing, the speed and the rate function for the large deviation of such statistics are important. For instance, obtaining the values of the Bahadur efficiency and Bahadur exact slope requires having an LDP for the test function [19]. Moreover, the large deviation principle and exponential concentrations for U-statistics play substantial roles in the study of macroscopic limits of interacting particle systems and in general statistical mechanics [17]. Our result enables one to extend such theories when samples are drawn from a heavy-tailed distribution. ## 2 Main results The main result of this paper is two-fold. First, in Section 2.1 we develop a general upper bound for the tail probability of U-statistics whose kernels have bounded variances. Second, in Section 2.3, we show that for several kernel functions this upper bound is tight enough to determine the large deviation behavior of the U-statistics. If a centered kernel \(h:\mathbb{R}^{m}\rightarrow\mathbb{R}\) satisfies the criteria discussed in Section 2.3 and \(U_{n}\) denotes the U-statistic corresponding to \(h\) with \(n\) samples, we show in Lemma 2.14 that \(\mathbb{P}\left(U_{n}>t\right)\) and \(\mathbb{P}\left(h>\frac{n}{m}t\right)\) have the same asymptotic behavior in a logarithmic sense, i.e. \[\lim_{n\rightarrow\infty}\frac{\log\mathbb{P}\left(U_{n}>t\right)}{\log\mathbb{P}\left(h>\frac{n}{m}t\right)}=1.\] Therefore, both the convergence speed and the rate function for the large deviation of \(U_{n}\) are determined by the tail of kernel \(h\). In addition, the lower bound developed in Section 2.2 reveals the event that is responsible for large deviations of \(U_{n}\). Indeed, one of the \(X_{1},...,X_{n}\) gets large enough to make all the summands it is contributing to exceed \(\frac{n}{m}t\). There will be \(\binom{n-1}{m-1}\) such terms in the expansion of \(U_{n}\). ### Finite sample upper bound Without loss of generality, we will operate under the assumption that the kernel is centered. **Assumption 1**.: _Suppose \(h\) is centered, i.e._ \[\mathbb{E}h(X_{1},\ldots,X_{m})=0.\] We need to define rate functions \(I,J\) as below. **Definition 2.1**.: _Let \(I,J:\mathbb{R}^{\geq 0}\rightarrow\mathbb{R}^{\geq 0}\) be two non-decreasing functions that upper bound the right tails of \(h\) and \(|X_{i}|\) exponentially. In other words:_ \[\mathbb{P}\left(h(X_{1},...,X_{m})>t\right) \leq\exp\left(-I(t)\right),\quad\forall t\geq 0, \tag{2.1}\] \[\mathbb{P}\left(\left|X_{i}\right|>t\right) \leq\exp\left(-J(t)\right),\quad\forall t\geq 0. \tag{2.2}\] Note that one can simply take \(I(t)=-\log\mathbb{P}\left(h(X_{1},...,X_{m})>t\right),\ J(t)=-\log\mathbb{P}\left(\left|X_{i}\right|>t\right)\). The whole point of Definition 2.1 is to allow one to work with possibly simpler upper bounds. The Lemma below is a simplified, modified version of Lemma 3.1 from [9]. **Lemma 2.2**.: _Let \(k\triangleq\lfloor\frac{n}{m}\rfloor\). 
Then, given any \(L\geq 0\) and \(\lambda\geq 0\), the following holds:_ \[\mathbb{E}\left[\exp\left(\lambda\frac{1}{\binom{n}{m}}\sum_{1\leq i_{1}<...<i_{m}\leq n}h_{L}(X_{i_{1}},\ldots,X_{i_{m}})\right)\right]\leq\mathbb{E}\left[\exp\left(\frac{\lambda}{k}h_{L}(X_{1},...,X_{m})\right)\right]^{k},\] _where \(h_{L}(X_{i_{1}},\ldots,X_{i_{m}})\triangleq h(X_{i_{1}},\ldots,X_{i_{m}})\mathbf{1}(h(X_{i_{1}},\ldots,X_{i_{m}})\leq L)\)._ Proof.: Define \(B(X_{1},...,X_{n})=\frac{1}{k}(h_{L}(X_{1},...,X_{m})+h_{L}(X_{m+1},...,X_{2m})+...+h_{L}(X_{km-m+1},...,X_{km}))\), then we have \[\frac{1}{\binom{n}{m}}\sum_{i_{1}<...<i_{m}}h_{L}(X_{i_{1}},...,X_{i_{m}})=\frac{1}{n!}\sum_{\sigma\in S_{n}}B(X_{\sigma(1)},...,X_{\sigma(n)}).\] Since \(h_{L}\leq L\) is bounded, its moment generating function is finite at any positive point, hence we obtain \[\mathbb{E}\left[\exp\left(\lambda\frac{1}{\binom{n}{m}}\sum_{1\leq i_{1}<...<i_{m}\leq n}h_{L}(X_{i_{1}},\ldots,X_{i_{m}})\right)\right] =\mathbb{E}\left[\exp\left(\frac{\lambda}{n!}\sum_{\sigma}B(X_{\sigma(1)},...,X_{\sigma(n)})\right)\right]\] \[\overset{*}{\leq}\frac{1}{n!}\sum_{\sigma}\mathbb{E}\left[\exp\left(\lambda B(X_{\sigma(1)},...,X_{\sigma(n)})\right)\right]\] \[=\mathbb{E}\left[\exp\left(\lambda B(X_{1},...,X_{n})\right)\right]\] \[=\mathbb{E}\left[\exp\left(\frac{\lambda}{k}h_{L}(X_{1},...,X_{m})\right)\right]^{k}.\] To obtain the inequality marked by \(*\), we used the fact \(\exp\left(\frac{1}{n!}\sum\lambda B\circ\sigma\right)\leq\frac{1}{n!}\sum\exp\left(\lambda B\circ\sigma\right)\) by convexity of the exponential function. \(\Box\) For the sake of simplicity we drop the arguments of \(h(X_{1},...,X_{m}),h_{L}(X_{1},...,X_{m})\) and only write \(h,h_{L}\). **Lemma 2.3** (Lemma 1 of [2]).: _If \(\mathbb{E}\left[h\right]=0\), for any \(\eta,L\geq 0\) we have_ \[\mathbb{E}\left[\exp(\eta h_{L})\right]\leq\exp\left(\frac{v(L,\eta)}{2}\eta^{2}\right),\] _where \(v(L,\eta)\triangleq\mathbb{E}\left[h_{L}^{2}\mathbf{1}(h\leq 0)+h_{L}^{2}\exp(\eta h_{L})\mathbf{1}(h>0)\right]\)._ **Theorem 2.4**.: _Under Assumption 1, for any \(0<\beta\leq 1,\ t\geq 0\)_ \[\mathbb{P}\left(U_{n}>t\right)\leq\exp\left(-\frac{kt^{2}}{2v(kt,\beta\frac{I(kt)}{kt})}\right)+\exp\left(-\beta I(kt)\max(\frac{1}{2},c(t,\beta,k))\right)+\binom{n}{m}\exp\left(-I(kt)\right), \tag{2.3}\] _where \(v(\cdot,\cdot)\) is the same as in Lemma 2.3, \(c(t,\beta,k)\triangleq 1-\frac{\beta}{2t}\frac{I(kt)}{kt}v(kt,\beta\frac{I(kt)}{kt})\), and \(k=\lfloor\frac{n}{m}\rfloor\)._ Proof.: Let us denote the U-statistic with kernel \(h\) by \(U_{n}(h)\). \[\mathbb{P}\left(U_{n}(h)>t\right) \leq\mathbb{P}\left(U_{n}(h_{L})>t\right)+\mathbb{P}\left(\exists i_{1},...,i_{m},\quad h(X_{i_{1}},...,X_{i_{m}})>L\right)\] \[\leq\exp\left(-\lambda t\right)\mathbb{E}\left[\exp\left(\lambda U_{n}(h_{L})\right)\right]+\binom{n}{m}\exp\left(-I(L)\right)\] \[\leq\exp\left(-\lambda t\right)\left(\mathbb{E}\left[\exp\left(\frac{\lambda}{k}h_{L}\right)\right]\right)^{k}+\binom{n}{m}\exp\left(-I(L)\right)\] Lemma 2.2 \[\leq\exp\left(-\lambda t\right)\left(\exp\left(\frac{v(L,\frac{\lambda}{k})}{2}\frac{\lambda^{2}}{k^{2}}\right)\right)^{k}+\binom{n}{m}\exp\left(-I(L)\right)\] Lemma 2.3 \[=\exp\left(-\lambda t+\frac{v(L,\frac{\lambda}{k})}{2k}\lambda^{2}\right)+\binom{n}{m}\exp\left(-I(L)\right).\] Choose \(L=kt\). 
To conclude the proof we only need to show that there are always choices for \(\lambda\) which make \[\exp\left(-\lambda t+\frac{v(kt,\frac{\lambda}{k})}{2k}\lambda^{2}\right)\leq \exp\left(-\frac{kt^{2}}{2v(kt,\beta\frac{I(kt)}{kt})}\right)+\exp\left(-\beta I(kt)\max(\frac{1}{2},c(t,\beta,k))\right)\] We consider two cases: 1. if \(\frac{t}{v\left(kt,\beta\frac{I(kt)}{kt}\right)}\leq\frac{\beta I(kt)}{kt}\) choose \(\lambda=\frac{kt}{v(kt,\beta\frac{I(kt)}{kt})},\) so \(\frac{\lambda}{k}=\frac{t}{v\left(kt,\beta\frac{I(kt)}{kt}\right)}\leq\frac{\beta I(kt)}{kt}\) 2. if \(\frac{t}{v\left(kt,\beta\frac{I(kt)}{kt}\right)}>\frac{\beta I(kt)}{kt}\) choose \(\lambda=\frac{\beta I(kt)}{t}.\) Then, in case 1 since \(\frac{\lambda}{k}\leq\frac{\beta I(L)}{L}\), we have \(v(L,\frac{\lambda}{k})\leq v(L,\frac{\beta I(kt)}{kt})\) (Note that \(v(L,\cdot)\) is increasing in its second argument). Hence, \[-\lambda t+\frac{v(kt,\frac{\lambda}{k})}{2k}\lambda^{2}\leq-\lambda t+\frac{v(kt,\frac{\beta I(kt)}{kt})}{2k}\lambda^{2}=-\frac{kt^{2}}{2v(kt,\frac{\beta I(kt)}{kt})}.\] In the second case one just needs to substitute \(\lambda\) to obtain \[-\lambda t+\frac{v(kt,\frac{\lambda}{k})}{2k}\lambda^{2} =-\beta I(kt)+\frac{v(kt,\frac{\beta I(kt)}{kt})\beta^{2}I(kt)^{2}}{2kt^{2}}\] \[=-\beta I(kt)\left(1-\frac{v(kt,\frac{\beta I(kt)}{kt})\beta I(kt)}{2kt^{2}}\right)\] \[=-\beta c(t,\beta,k)I(kt)\] \[=-\beta\max(\frac{1}{2},c(t,\beta,k))I(kt).\] Note that since in this case \(\frac{t}{v\left(kt,\beta\frac{I(kt)}{kt}\right)}>\frac{\beta I(kt)}{kt}\), we have \(c(t,\beta,k)>\frac{1}{2}\) so we have \(\max(\frac{1}{2},c(t,\beta,k))=c(t,\beta,k)\). The max operator controls this term when we are in the first case, so the upper bound does not blow up. **Remark 2.5** (Two regions of deviations).: _Inequality (2.3) reveals two different decay rates for the tail of \(U_{n}\). For small values of \(t\), the first term, i.e. \(\exp\left(-\frac{kt^{2}}{2v}\right)\), will be dominant, hence we observe Gaussian-like deviation. This behavior has been studied already by the CLT for U-statistics [12]. For larger values of \(t\), the last couple of terms on the right hand side of (2.3) will be dominant. We call this region the **large deviation** region. Asymptotically, the sum of the last two terms decays like \(\binom{n}{m}\exp\left(-I(kt)\right)\) for both subWeibull and polynomial tail kernels (see Section III of [2] for detailed discussion)._ _Inequality (2.3) denotes large deviation behavior whenever_ \[\frac{kt^{2}}{v(kt,\beta)}\gg I(kt). \tag{2.4}\] _For instance, when \(I(kt)=\sqrt[\alpha]{kt},\ \alpha\geq 1\) we have large deviation behavior for \(t\gg k^{-\frac{\alpha-1}{2\alpha-1}}=\lfloor\frac{n}{m}\rfloor^{-\frac{\alpha-1}{2\alpha-1}}\). This means the region of Gaussian deviation shrinks to \(0\) as \(n\rightarrow\infty\), when \(\alpha>1\)._ #### 2.1.1 Parameters of inequality (2.3) Theorem 2.4 bounds the tail of \(U_{n}\) in terms of \(k=\lfloor\frac{n}{m}\rfloor\) and the tail of kernel \(h\). The only terms of (2.3) that might seem unfamiliar are \(c(t,\beta,k),v(kt,\beta\frac{I(kt)}{kt})\). What typically happens in the asymptotic setting \(n\rightarrow\infty\) is \(c(t,\beta,k)\to 1,\ v(kt,\beta\frac{I(kt)}{kt})\to Var(h)\). Moreover, \(\beta\) can be chosen arbitrarily close to \(1\). Hence, for large \(n\), one can think of upper bound (2.3) as \(\exp\left(-\frac{kt^{2}}{2Var(h)}\right)+(1+\binom{n}{m})\exp\left(-I(kt)\right)\). 
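For intuition, the following small numerical sketch (illustrative only; the constants, sample size, and tail parameters below are arbitrary choices, not values from the paper) evaluates the Gaussian term and the asymptotic heavy-tail term \(\binom{n}{m}\exp(-I(kt))\) of the simplified bound for a subWeibull tail \(I(t)=c\,t^{1/\alpha}\), showing how the dominant term switches as \(t\) grows.

```python
import numpy as np
from math import comb

def bound_terms(t, n, m, var_h, c=3.0, alpha=2.0):
    # Gaussian term and heavy-tail term of the simplified bound for I(t) = c * t**(1/alpha)
    k = n // m
    I_kt = c * (k * t) ** (1.0 / alpha)
    gaussian_term = np.exp(-k * t ** 2 / (2.0 * var_h))
    heavy_term = (1 + comb(n, m)) * np.exp(-I_kt)
    return gaussian_term, heavy_term

# for small deviations the Gaussian term dominates; past the crossover the
# heavy-tail term (driven by a single large kernel value) takes over
for t in [0.5, 1.0, 2.0, 5.0]:
    g, ht = bound_terms(t, n=2000, m=2, var_h=10.0)
    print(f"t={t:4.1f}  gaussian={g:.3e}  heavy-tail={ht:.3e}")
```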
For logarithmic \(I(t)\), which corresponds to polynomial tail kernels, there are more restrictions on the constant \(\beta\). Nevertheless, in the large deviation regime, the dominant term of (2.3) will still be \(\binom{n}{m}\exp\left(-I(kt)\right)\). This Section contains several statements that make the above claims precise. **Remark 2.6**.: _Lemma 4 of [2] states that \(v(kt,\beta\frac{I(kt)}{kt})\xrightarrow{k\to\infty}Var(h)\) in either of the following setups:_ 1. \(I(t)=c\sqrt[\alpha]{t},\;\alpha>1,\;\beta<1\)__ 2. \(I(t)=\gamma\log(t),\;\gamma>2,\;\beta<1-\frac{2}{\gamma}\)_._ _Hence, for large values of \(k\), one should be able to upper bound the first term of (2.3) with \(\exp\left(-\frac{kt^{2}}{2CVar(h)}\right)\), where \(C<1\), but can get arbitrarily close to \(1\)._ _In the above, Case 1 includes all subWeibull variables, and Case 2 includes variables with polynomial tails and finite variances._ **Remark 2.7**.: _When \(kt^{2}\gg I(kt)\), and \(v(kt,\beta\frac{I(kt)}{kt})\) is bounded, we have_ \[c(t,\beta,k)\xrightarrow{k\to\infty}1. \tag{2.5}\] _This includes both cases of Remark 2.6 with \(t\gg k^{-\frac{\alpha-1}{2\alpha-1}}\) and \(t\gg\sqrt{\frac{\log k}{k}}\), respectively._ _To verify (2.5) it suffices to note that \(c(t,\beta,k)=1-\frac{\beta}{2t}\frac{I(kt)}{kt}v(kt,\beta\frac{I(kt)}{kt})\), that \(\frac{I(kt)}{kt}\xrightarrow{k\to\infty}0\), and that all other terms in the definition of \(c(t,\beta,k)\) remain bounded._ While Remark 2.6 provides asymptotic bounds for \(v(kt,\beta\frac{I(kt)}{kt})\), one might need bounds for the finite sample case to utilize Theorem 2.4. The next Lemma and Remark provide such bounds. **Lemma 2.8**.: _If \(I(t)\geq c\sqrt[\alpha]{t}\) for some \(\alpha\geq 1\), \(\text{Var}(h)<\infty\), and \(\beta<1\) is fixed, then there is a fixed number \(v<\infty\) such that for any \(L>1\) and \(\eta\leq\beta\frac{I(L)}{L}\) we have \(v(L,\eta)\leq v\)._ Proof.: Since \(\eta\leq\beta\frac{I(L)}{L}\), we have \(v(L,\eta)\leq v(L,\beta\frac{I(L)}{L})\). Moreover, by Corollary 2 of [2] we obtain \[v(L,\beta\frac{I(L)}{L}) \leq\mathbb{E}\left[h^{2}\mathbf{1}(h\leq 0)\right]+\frac{\Gamma(2\alpha+1)}{((1-\beta)c)^{2\alpha}}+L^{\frac{1}{\alpha}-1}\frac{\beta c\Gamma(3\alpha+1)}{3((1-\beta)c)^{3\alpha}}\] \[\leq\mathbb{E}\left[h^{2}\right]+\frac{\Gamma(2\alpha+1)}{((1-\beta)c)^{2\alpha}}+\frac{\beta c\Gamma(3\alpha+1)}{3((1-\beta)c)^{3\alpha}}. \tag{2.6}\] The right hand side of (2.6) does not depend on \(L\), hence it remains constant as \(L\to\infty\). Note that there is a slight change of variable between the function \(v(L,\eta)\) defined here and the one defined in [2], which of course has been taken into account in quoting Corollary 2 from [2]. **Remark 2.9**.: _If \(I(t)\geq\gamma\log t\) with \(\gamma>2\), which includes kernels with polynomial tails and finite variances, Corollary 3 of [2] yields_ \[v(L,\beta\frac{I(L)}{L})\leq CL^{2-(1-\beta)\gamma}\log L, \tag{2.7}\] _for some \(C\) independent of \(L\)._ _Hence, for \(\beta<1-\frac{1}{\gamma}\), while \(v(L,\beta\frac{I(L)}{L})\) can grow as \(L\to\infty\), still the last two terms of (2.3) are the dominant terms of the right hand side. As discussed in [2], \(\beta<1-\frac{1}{\gamma}\) is sufficient for obtaining sharp upper bounds for the deviation probability of the sum of iid variables with polynomial tails, i.e. 
U-statistics with order \(m=1\)._ ### Lower bound Let \(J(t)\) be the function defined in Definition 2.1, and \(A_{n}=\left[-J^{-1}(\log 2n),J^{-1}(\log 2n)\right]^{n-1}\), where \(J^{-1}\) denotes the generalized inverse of \(J\) and \([\cdot,\cdot]\) is the closed interval of given limits. Define: **Definition 2.10**.: \[\varphi_{n}(X)\triangleq\inf_{(X_{1},...,X_{n-1})\in A_{n}}h(X_{1},X_{2},...,X_{m-1},X).\] (2.8) Note that we force \(X_{1},...,X_{n-1}\) to be in \(A_{n}\), but only use the first \(m-1\) in the argument of the inf. Then we have: **Lemma 2.11**.: _Assume kernel \(h\) has a finite variance. Then_ \[\mathbb{P}\left(U_{n}>t\right)\geq C\mathbb{P}\left(\varphi_{n}(X_{n})\geq\frac{nt}{m}\right), \tag{2.9}\] _where \(C>0\) is an absolute constant independent of \(n\)._ Proof.: \[\mathbb{P}\left(U_{n}>t\right) \geq\mathbb{P}\left(U_{n-1}(X_{1},...,X_{n-1})\geq 0,\ \sum_{i_{1}<...<i_{m-1}<n}h(X_{i_{1}},X_{i_{2}},...,X_{i_{m-1}},X_{n})>\binom{n}{m}t\right)\] \[\geq\mathbb{P}\left(U_{n-1}(X_{1},...,X_{n-1})\geq 0,\ (X_{1},...,X_{n-1})\in A_{n},\binom{n-1}{m-1}\varphi_{n}(X_{n})>\binom{n}{m}t\right)\] \[\geq\mathbb{P}\left(U_{n-1}(X_{1},...,X_{n-1})\geq 0,\ |X_{i}|\leq J^{-1}(\log 2n)\ \ \forall i\leq n-1\right)\mathbb{P}\left(\varphi_{n}(X_{n})>\frac{nt}{m}\right).\] Note that \[\mathbb{P}\left(\left|X_{i}\right|\leq J^{-1}(\log 2n)\ \ \forall i\leq n-1\right) =\left(1-\mathbb{P}\left(\left|X_{i}\right|>J^{-1}(\log 2n)\right)\right)^{n-1}\] \[\geq\left(1-\exp\left(-J(J^{-1}(\log 2n))\right)\right)^{n-1}\] \[\geq(1-\frac{1}{2n})^{n-1}\xrightarrow{n\to\infty}\frac{1}{\sqrt{\mathrm{e}}}.\] Moreover, \(\mathbb{P}\left(U_{n-1}\geq 0\right)\xrightarrow{n\to\infty}\frac{1}{2}\) by the CLT for U-statistics [12]. Hence, for large enough \(n\) we obtain: \[\mathbb{P}\left(U_{n-1}(X_{1},...,X_{n-1})\geq 0,\ |X_{i}|\leq J^{-1}(\log 2n)\ \ \forall i\leq n-1\right)\geq 0.9\left(\frac{1}{2}+\frac{1}{\sqrt{\mathrm{e}}}-1\right)>0.\] Choosing \(C<0.9\left(\frac{1}{\sqrt{\mathrm{e}}}-\frac{1}{2}\right)\), and small enough to cover all the finite cases before the above asymptotics become valid, concludes the proof. **Remark 2.12**.: \(A_{n}=\left[-J^{-1}(\log 2n),J^{-1}(\log 2n)\right]^{n-1}\) _in Lemma 2.11 can be replaced with any sequence of events \(A_{n}\subset\mathbb{R}^{n-1}\) for which \(\liminf\limits_{n\to\infty}\mathbb{P}\left(A_{n}\right)>\frac{1}{2}\). Also, one can work with \(\mathbb{P}\left(U_{n-1}\geq-\varepsilon,\varphi_{n}(X_{n})>\frac{nt}{m}+\frac{\varepsilon}{\binom{n}{m}}\right)\) to relax this condition to \(\liminf\limits_{n\to\infty}\mathbb{P}\left(A_{n}\right)>0\)._ ### Large Deviation Principle In this Section, we show the upper bound (2.3) is asymptotically tight in certain cases. The idea is to show that the rate function for the large deviation of a U-statistic is asymptotically equivalent to the right hand side of (2.3). A trivial requirement for such sharpness to hold is that the functions \(I(t),J(t)\) defined in Definition 2.1 be asymptotically tight. This is formalized in the next assumption. **Assumption 2**.: _Suppose_ \[\lim_{t\to\infty}\frac{-\log\mathbb{P}\left(h(X_{1},...,X_{m})>t\right)}{I(t)}=\lim_{t\to\infty}\frac{-\log\mathbb{P}\left(\left|X_{i}\right|>t\right)}{J(t)}=1.\] Hereafter, we focus on subWeibull distributions. The tail of such a distribution is bounded by that of some Weibull distribution; hence, all of its moments are finite. Nonetheless, its exponential moments are not finite. Assumption 3 encodes the class of heavy-tailed subWeibull random variables. 
**Assumption 3**.: _Assume there is \(\alpha>1\) and \(c>0\) such that \(I(t)\geq c\sqrt[\alpha]{t},\;\forall t>0\)._ Although the LDP is derived under Assumptions 3 and 6 here, we think one can obtain similar results for distributions with polynomial tails, i.e. logarithmic \(I(t),J(t)\), and finite second moments, following the footsteps of this Section and Bakhshizadeh et al. [2]. However, for the sake of brevity we do not include polynomial tails in the following Sections. Moreover, we need a lower bound for the deviation amount \(t\) to make sure it is in the large deviation regime. The below assumption provides such a bound. **Assumption 4**.: _Suppose \(\frac{kt^{2}}{I(kt)}\to\infty\), i.e. \(kt^{2}\gg I(kt)\), as \(n\to\infty\)._ **Remark 2.13**.: _For \(I(t)=c\sqrt[\alpha]{t},\;\alpha>1\), Assumption 4 holds whenever \(t\gg n^{-\frac{\alpha-1}{2\alpha-1}}\). This includes constant \(t\) as well as a sequence \(t_{n}\) converging to \(0\), as long as it decays more slowly than \((\frac{1}{n})^{\frac{\alpha-1}{2\alpha-1}}\)._ **Lemma 2.14**.: _Suppose Assumptions 1, 2, 3, and 4 hold. For \(\varphi_{n}(\cdot)\) defined in (2.8) and \(I(\cdot)\) in Theorem 2.4, if one has_ \[\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(\varphi_{n}(X)\geq\frac{n}{m}t\right)}{I(kt)}=1, \tag{2.10}\] _where \(k=\lfloor\frac{n}{m}\rfloor\), then_ \[\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(U_{n}>t\right)}{I(kt)}=1.\] _In other words, \(I(kt)\) is the large deviation rate function for \(U_{n}\)._ We postpone the proof of the above Lemma to Section 4.1. Condition (2.10) essentially says that large values of \(h(X_{1},...,X_{m})\) are determined by a large value of only one coordinate. It can be proved for many commonly used kernels applied to heavy-tailed variables \(X_{i}\). Assuming the tail function of \(|X_{i}|\) is shifted sub-additive, a usual property of heavy tails, we can prove (2.10) for several kernels. **Assumption 5**.: _Suppose a constant shift on \(J(t)\), defined in Definition 2.1, makes it sub-additive on non-negative Real numbers, i.e._ \[J(t_{1}+t_{2})\leq J(t_{1})+J(t_{2})+b,\quad\forall t_{1},t_{2}\geq 0, \tag{2.11}\] _where \(b\in\mathbb{R}\) is an absolute constant._ **Remark 2.15**.: _Assumption 5 is, in effect, a heavy-tailed distribution requirement for the random variable \(X\). While it does not directly state \(J\) is a sublinear function, which is equivalent to the formal definition of a heavy-tailed distribution, it controls \(J\)'s growth to be equal to or slower than linear functions. Lemma 2.16 and Remarks 2.17, 2.18 denote that most well-known heavy-tailed distributions can have a shifted sub-additive tail function \(J\)._ **Lemma 2.16**.: _If \(J:\mathbb{R}\to\mathbb{R}\) is a concave function on non-negative Real numbers, then \(J\) satisfies (2.11) with \(b=-J(0)\)._ Proof.: By concavity of \(J\) we have \[J(t_{1})=J\left(\frac{t_{1}}{t_{1}+t_{2}}(t_{1}+t_{2})+\frac{t_{2}}{t_{1}+t_{2}}0\right)\geq\frac{t_{1}}{t_{1}+t_{2}}J(t_{1}+t_{2})+\frac{t_{2}}{t_{1}+t_{2}}J(0).\] Similarly, we obtain \(J(t_{2})\geq\frac{t_{2}}{t_{1}+t_{2}}J(t_{1}+t_{2})+\frac{t_{1}}{t_{1}+t_{2}}J(0)\). Summing up the above two inequalities shows \[J(t_{1}+t_{2})\leq J(t_{1})+J(t_{2})-J(0).\] **Remark 2.17**.: _For the below distributions, one can find a function \(J(t)\) as defined in Definition 2.1 which is both asymptotically tight and shifted sub-additive, i.e. satisfies Assumptions 2, 5._ 1. _Exponential_ 2. \(\left|\mathcal{N}(0,1)\right|^{\alpha},\quad\alpha\geq 2\)__ 3. _Log-Normal_ 4. 
_Weibull distribution with shape parameter_ \(s\leq 1\)__ 5. _Log Logistic_ 6. _Pareto_ We postpone proof of Remark 2.17 to Section 4.2. **Remark 2.18**.: _In general, with simple modifications, we expect the tail function \(J(t)\) for heavy-tailed distributions satisfy Assumption 5. This includes distributions that are not named in Remark 2.17. Note that \(J(t)=-\log\mathbb{P}\left(\left|X\right|>t\right)\) is an increasing function that grows to infinity, and by heavy-tailed assumption is supposed to be sub-linear. Hence, it is expected that \(J(t)\) becomes a concave function after some point \(t\geq T\). Using the same technique we utilized in the proof of case 3, Log-Normal distribution, of Remark 2.17, one can define a function \(J_{2}(t)\) which is equal to \(J(t)\) on \([T,\infty)\), and linearly extends to \([0,T]\) such that it is less than \(J(t)\) and remains concave on the whole non-negative Real numbers. At this point, Lemma 2.16 shows \(J_{2}(t)\) should satisfy (2.11)._ **Assumption 6**.: _Suppose \(J(t)\gg\log t\) as \(t\to\infty\), i.e. \(\lim\limits_{t\to\infty}\frac{\log t}{J(t)}=0\)._ **Lemma 2.19**.: _Under Assumptions 2, 5, 6 condition (2.10) holds with any bounded \(t\) in the following cases:_ 1. \(h(X,Y)=\left|X-Y\right|-\mathbb{E}\left[\left|X-Y\right|\right]\)__ 2. \(h(X,Y)=(X-Y)^{2}-\mathbb{E}\left[(X-Y)^{2}\right]\)__ 3. \(h(X_{1},...,X_{m})=\max(\left|X_{1}\right|,...,\left|X_{m}\right|)-\mathbb{E }\left[\max(\left|X_{1}\right|,...,\left|X_{m}\right|)\right]\)__ 4. \(h(X,Y)=\frac{1}{2}(X^{2}+Y^{2})-\max\left(X,Y\right)-\mathbb{E}\left[X^{2} \right]+\mathbb{E}\left[\max\left(X,Y\right)\right]\)__ _Hence, under extra Assumptions 3, 4 Lemma 2.14 yields_ \[\lim\limits_{n\to\infty}\frac{-\log\mathbb{P}\left(U_{n}>t\right)}{I(kt)}=1,\] _for U-statistics constructed with the above kernels._ Proof of the above Lemma is postponed to Section 4.3. **Remark 2.20**.: _The last kernel in Lemma 2.19 is related to \(\omega^{2}\)-statistics for the goodness of fit [23]._ **Remark 2.21**.: _Similar to case 3 of Lemma 2.19, if one takes \(J(t)\) to be the tail function of \(X\) instead of \(X|\), i.e. \(\mathbb{P}\left(X>t\right)\lesssim\exp\left(-J(t)\right)\), she can show condition (2.10) also holds for the below kernel_ \[h(X_{1},...,X_{m})=\max(X_{1},...,X_{m})-\mathbb{E}\left[\max(X_{1},...,X_{m}) \right].\] ### Discussion on the necessity of condition (2.10) Lemma 2.14 denotes the upper bound given in Theorem 2.4 is asymptotically sharp if (2.10) holds. Lemma 2.19 lists some common kernels of U-statistics for which (2.10) holds. One might ask if (2.10) is necessary to obtain asymptotic sharpness and LDP. Below, we study an example for which the bound given by Theorem 2.4 is not sharp. This shows kernels need to satisfy certain conditions to have the same asymptotic as the right hand side of (2.3). Determining the necessary conditions for such kernels is not given in this work, but is an interesting question to be addressed in future studies. Consider \(m=2\) and \(h(x,y)=xy\). Also, assume \(X\) has a symmetric distribution around origin, and \(J(t)=ct^{\alpha},\ \alpha<1\) (e.g \(X\sim\text{Weibull or }X=\mathcal{N}(0,1)^{\frac{\alpha}{2}}\)). 
In this case, \[\mathbb{P}\left(XY>u\right) \leq\mathbb{P}\left(\left|X\right|>\sqrt{u}\text{ or }\left|Y\right|>\sqrt{u}\right)\simeq 2\exp \left(-J(\sqrt{u})\right)=2\exp\left(-cu^{\frac{\alpha}{2}}\right),\] \[\mathbb{P}\left(XY>u\right) \geq\mathbb{P}\left(X>\sqrt{u}\right)\mathbb{P}\left(Y>\sqrt{u} \right)\simeq\exp\left(-2J(\sqrt{u})\right)=\exp\left(-2cu^{\frac{\alpha}{2}} \right),\] for large enough \(u\). Hence, for the tail function \(I(u)\) we will have \[C_{1}u^{\frac{\alpha}{2}}\leq I(u)\leq C_{2}u^{\frac{\alpha}{2}},\quad C_{1}, C_{2}>0. \tag{2.12}\] If one directly applies Theorem 2.4, she obtains \[\mathbb{P}\left(U_{n}>t\right)\leq\exp\left(-\frac{kt^{2}}{2v}\right)+\exp \left(-\beta I(kt)\max(\frac{1}{2},c(t,\beta,k))\right)+\binom{n}{2}\exp\left( -I(kt)\right), \tag{2.13}\] where constant \(v\) is given by Lemma 2.8, \(\beta<1\) is arbitrary for large \(n\), and \(c(t,\beta,k)\xrightarrow{n\to\infty}1\). The right hand side of (2.13) is at \(\exp\left(-I(kt)\right)>\exp\left(-\frac{C_{2}}{2^{\frac{\alpha}{2}}}n^{\frac {\alpha}{2}}\right)=\exp\left(-C_{2}^{\prime}n^{\frac{\alpha}{2}}\right)\). This bound is loose since we will show \(\mathbb{P}\left(U_{n}>t\right)\) decays like \(\exp\left(-C_{3}n^{\alpha}\right)\), and \(n^{\alpha}\gg n^{\frac{\alpha}{2}}\) as \(n\to\infty\). Observe that \[U_{n}=\frac{1}{n(n-1)}\left(\sum_{i=1}^{n}X_{i}\right)^{2}-\frac{1}{n(n-1)} \sum_{i=1}^{n}X_{i}^{2}=\frac{n}{n-1}S_{n}^{2}-\frac{1}{n(n-1)}T_{n},\] where \(S_{n}=\frac{1}{n}\sum X_{i},\ T_{n}=\sum X_{i}^{2}\). Note that \(S_{n}\) is an order \(1\) U-statistic with kernel \(h_{2}(x)=x\), and \(U_{n}\leq\frac{n}{n-1}S_{n}^{2}\leq 2S_{n}^{2}\). Also, for kernel \(h_{2}\) the tail function \(I_{2}(t)=J(t)-\log 2\). Hence, by Theorem 2.4 we obtain \[\mathbb{P}\left(U_{n}>t\right) \leq\mathbb{P}\left(S_{n}>\sqrt{\frac{t}{2}}\right)\] \[\leq\exp\left(-\frac{nt}{4v}\right)+C_{3}\exp\left(-\beta\max( \frac{1}{2},c(t,\beta,k))J(n\sqrt{t/2})\right)+C_{3}n\exp\left(-J(n\sqrt{t/2}) \right). \tag{2.14}\] As discussed in Section 2.3, the right hand side of (2.14) decays like \(\exp\left(-J(n\sqrt{t/2})\right)=\exp\left(-C_{3}n^{\alpha}t^{\frac{\alpha}{2}}\right)\) which is much smaller than (2.13) when \(n\) is large. Indeed, we can show (2.14) has the right order of decay. Considering the event \(E=(X_{n-1},X_{n}>\sqrt{n(n-1)t}\) and \(U_{n-2}\geq 0)\) for which \[\mathbb{P}\left(U_{n}>t\right)\geq\mathbb{P}\left(E\right)\simeq\exp\left(-C_{4 }n^{\alpha}t^{\frac{\alpha}{2}}\right).\] Therefore, one can show \(n^{\alpha}\) is the correct speed for the large deviation decay of \(U_{n}\). In other words, there are constants \(C_{4},C_{5}>0\) such that for large enough \(n\) \[\exp\left(-C_{4}(n\sqrt{t})^{\alpha}\right)\leq\mathbb{P}\left(U_{n}>t\right) \leq\exp\left(-C_{5}(n\sqrt{t})^{\alpha}\right).\] **Remark 2.22**.: _Note that \(h(x,y)=xy\) is a degenerate unbounded kernel. Both Nikitin and Ponikarov [18] and Chakraborty and Kuchibhotla [5] claim sharp exponential bounds or large deviation limits for such kernels have not been addressed in works preceding them. As discussed in the current Section, while product kernel does not satisfy (2.10), a slight modification in the usage of Theorem 2.4 can still yield an exponential bound which is sharp up to a constant. 
This shows the strength of Theorem 2.4 even beyond scenarios in which sharpness has been shown through Lemma 2.14._

## 3 Future works

While we have documented several important facts about the concentration of U-statistics with heavy-tailed samples, some questions remain unanswered. Addressing some of them seems to require only extra effort along the same path of reasoning used in this manuscript; we exclude such questions purely for the sake of brevity, which we believe helps convey the main message of the current work more clearly. Other questions appear more challenging and may require different techniques. In this Section, we point out both types of questions and leave them for future studies.

Note that Lemmas 2.14 and 2.19 show that the upper bound (2.3) is sharp for certain U-statistics when \(t\) is larger than the region of Gaussian decay. However, the first term of (2.3), the Gaussian term, does not have a sharp constant in general. The remark below makes this precise.

**Remark 3.1**.: _Let \(h_{1}(X_{1})=\mathbb{E}\left[h(X_{1},...,X_{m})\,|\,X_{1}\right]\). The asymptotic variance of \(U_{n}\) is \(\frac{m^{2}}{n}Var(h_{1})\)[12], \(v(kt,\beta\frac{I(kt)}{kt})\rightarrow\text{Var}(h)\), and \(Var(h)\geq mVar(h_{1})\). In fact, \(Var(h)-mVar(h_{1})=Var(h-\sum\limits_{i\leq m}h_{1}(X_{i}))\). This means_

\[\lim_{n\rightarrow\infty}\frac{\frac{kt^{2}}{2v(kt,\beta\frac{I(kt)}{kt})}}{ \frac{m^{2}}{2n\text{ Var}(U_{n})}}<1.\]

_It would be interesting to improve (2.3) so that it has sharp constants on both the Gaussian and the heavy-tailed regions of deviation. A promising direction is to use the Hoeffding decomposition [12] and apply Theorem 2.4 to the projections of \(U_{n}\) individually._

_Another possible improvement is to extend the results of Section 2.3 beyond kernels with subWeibull tails, i.e. when Assumption 3 does not hold. This has already been done for sums of iid samples, i.e. U-statistics of order \(m=1\), in Bakhshizadeh et al. [2]. Moreover, Theorem 2.4 offers non-trivial bounds as long as \(\text{Var}(X_{i})<\infty\); Assumption 3 is used to remove all logarithmic terms when \(n\rightarrow\infty\). Taking such terms into account needs more effort, but does not change the spirit of our reasoning. Let \(I(t)=\gamma\log t\) have logarithmic growth. In light of (2.3), \(f(t)=\frac{I(kt)-\log\binom{n}{m}}{\log n}\simeq\frac{I(kt)-m\log n}{\log n}\) seems a reasonable rate function for an LDP of \(U_{n}\) with speed \(b_{n}=\log n\)._

_Moreover, condition (2.10) is only a sufficient condition for the sharpness of inequality (2.3). It would be interesting to determine necessary conditions for the sharpness of Theorem 2.4 in the sense of rough logarithmic limits. In addition, developing sharp upper bounds when such conditions do not hold would be a natural way to extend the results of the current paper._

_Finally, perhaps the most important message of this paper is that one can extend concentration results developed for subGaussian variables to distributions with heavier tails simply by a truncation technique with a precisely tuned truncation level. While this technique is applied to U-statistics here, the question is to which other self-averaging processes the same approach can be applied while still yielding sharp concentration. We hope this work motivates future studies that obtain upper bounds with sharp constants for a larger class of self-averaging processes._

## Acknowledgment

The author is thankful for the inspiring discussions he has had with Dr. Nabarun Deb during the development of this work.
## 4 Proofs This Section includes the proofs of statements in the previous Sections. First, let recall the following Lemma from Bakhshizadeh et al. [2] which turns out to be useful in the calculation of logarithmic limits. **Lemma 4.1** (Lemma 5 of [2]).: _Let \(a_{n},b_{n}\) and \(c_{n}\) be sequences of positive numbers such that_ \[\lim_{n\to\infty}\frac{\log a_{n}}{c_{n}}=a,\ \lim_{n\to\infty}\frac{\log b_{n} }{c_{n}}=b,\ \lim_{n\to\infty}c_{n}=\infty.\] _Then_ \[\lim_{n\to\infty}\frac{\log(a_{n}+b_{n})}{c_{n}}=\max\left\{a,b\right\}.\] ### Proof of Lemma 2.14 Proof.: By Lemma 2.8, for any \(\beta<1\), we have \(v(kt,\beta\frac{I(kt)}{kt})<v\), where \(v\) is a constant independent of \(k\). Therefore, we obtain the following \[\lim_{k\to\infty}\frac{\frac{-kt^{2}}{2v(kt,\beta\frac{I(kt)}{kt})}}{I(kt)} \leq\lim_{k\to\infty}\frac{\frac{-kt^{2}}{2v}}{I(kt)}=-\infty,\] because \(\frac{kt}{I(kt)}\to\infty\) as \(kt\to\infty\). Moreover, by Remark 2.7, \(c(t,\beta,k)\to 1\), hence, \[\lim_{k\to\infty}\frac{-\beta I(kt)\max(\frac{1}{2},c(t,\beta,k))}{I(kt)}=- \beta,\quad\lim_{k\to\infty}\frac{\log\binom{n}{m}-I(kt)}{I(kt)}=-1.\] Having inequality (2.3) and the above equations, Lemma 4.1 yields \(\lim_{n\to\infty}\frac{\log\mathbb{P}\left(U_{n}>t\right)}{I(kt)}\leq-\beta,\ \forall \beta<1\). Multiplying by \(-1\) and taking supremum over \(\beta<1\) implies \[\lim_{k\to\infty}\frac{-\log\mathbb{P}\left(U_{n}>t\right)}{I(kt)}\geq 1. \tag{4.1}\] On the other hand, Lemma 2.11 yields \(\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(U_{n}>t\right)}{-\log\mathbb{P} \left(\varphi_{n}(X)>\frac{nt}{m}\right)}\leq 1\), so by (2.10) we obtain \[\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(U_{n}>t\right)}{I(kt)}\leq 1,\] ### Proof Remark 2.17 Proof.: 1. If \(X\sim\mathrm{Exp}(\lambda)\) we have \(\mathbb{P}\left(X>t\right)=\exp\left(-\lambda t\right)\), so \(J(t)=-\log\mathbb{P}\left(X>t\right)=\lambda t\) is a linear function which satisfies (2.11) with equality and \(b=0\). 2. Let \(X=\left|Z\right|^{\alpha},\;Z\sim\mathcal{N}(0,1)\). Observe that \[\mathbb{P}\left(X>t\right)=\mathbb{P}\left(\left|Z\right|>\sqrt[\alpha]{t} \right)\leq 2\exp\left(-\frac{1}{2}t^{\frac{2}{\alpha}}\right)\] Hence, setting \(J(t)=\frac{\sqrt[\alpha]{t}}{2}-\log 2\) gives us a tail upperbound as in (2.2). Utilizing LDP rate function of the Normal distribution shows \(J(t)\) is asymptotically tight too. Also, for \(\alpha\geq 2,\;J(t)\) is a concave function, hence by Lemma 2.16 satisfies Assumption 5. 3. Let \(X=\exp\left(Z\right),\;Z\sim\mathcal{N}(0,1)\). Note that \[\mathbb{P}\left(X>t\right)=\mathbb{P}\left(Z>\log t\right)\leq\exp\left(-J(t) \right),\] where \(J(t)=\begin{cases}0&0\leq t\leq 1\\ \frac{1}{2}\log^{2}t&t>1\end{cases}\). Similar to the previous case, asymptotic tightness of \(J\) comes from rate function of the Normal distribution. Instead of showing (2.11) for \(J(t)\), we replace it with a concave asymptotically tight lower bound \(J_{2}(t)\). This useful technique can be applied to several other cases as well. Let \[J_{2}(t)=\begin{cases}\frac{1}{\mathrm{e}}(t-\mathrm{e})+\frac{1}{2}&0\leq t \leq\mathrm{e}\\ \frac{1}{2}\log^{2}t&t>\mathrm{e}\end{cases}.\] Then, \(J(t)\geq J_{2}(t)\;\forall t\geq 0\), i.e. \(\mathbb{P}\left(X>t\right)\leq\exp\left(-J_{2}(t)\right)\), and \(J_{2}\) is a concave function on \(\mathbb{R}^{\geq 0}\), hence by Lemma 2.16 satisfies (2.11). 
To verify above claims, one only needs to note that \(J(t)\) is a convex function on \([0,\mathrm{e}]\) and is a concave function on \([\mathrm{e},\infty)\). \(J_{2}(t)\) on \([0,\mathrm{e}]\) is simply the linear approximation of \(J(t)\) at \(t=\mathrm{e}\) to make it concave everywhere. 4. If \(X\sim\mathrm{Weibull}(\lambda,s)\), then \(\mathbb{P}\left(\left|X\right|>t\right)=\exp\left(-(\frac{t}{\lambda})^{s}\right)\). Hence, one can take the trivial tail bound \(J(t)=-\log\mathbb{P}\left(X>t\right)=\left(\frac{t}{\lambda}\right)^{s}\). This function satisfies \(J(0)=0\), and is concave for \(s\leq 1\). Hence, Lemma 2.16 yeilds \(J(t_{1}+t_{2})\leq J(t_{1})+J(t_{2})\). 5. Let \(X\sim\text{Log-logistic}(\alpha,\beta),\;\alpha,\beta>0\) \[J(t)=-\log\mathbb{P}\left(\|X|>t\right)=-\log(1-\frac{1}{1+(x/\alpha)^{-\beta}}) =\log(1+(x/\alpha)^{\beta}).\] Note that \[2^{\beta}(1+x^{\beta})(1+y^{\beta})\geq 2^{\beta}+(2\max\{x,y\})^{\beta}\geq 1+(x+y)^{ \beta},\quad\forall x,y\geq 1.\] therefore \[2^{\beta}(1+(t_{1}/\alpha)^{\beta})(1+(t_{2}/\alpha)^{\beta})\geq(1+((t_{1}+t _{2})/\alpha)^{\beta}).\] Applying \(\log\) to the above inequality yields \(J(t_{1})+J(t_{2})+\beta\log 2\geq J(t_{1}+t_{2})\). 6. Let \(X\sim\text{Pareto}(x_{m},\alpha)\). Then, \[J(t)=-\log\mathbb{P}\left(X>t\right)=\alpha\log\frac{t}{x_{m}},\quad t>x_{m},\] is a concave function on the support of \(X\), i.e. \([x_{m},\infty)\). Similar to the Log-Normal case above, one can linearly extend \(J(t)\) to a concave function on \([0,\infty)\) and utilize Lemma 2.16 to verify \(J(t)\) is shifted sub-additive. ### Proof of Lemma 2.19 Proof.: _1._ The strategy to show (2.10) is to show \(\lim\frac{-\log\mathbb{P}\left(\varphi_{n}(X)>\frac{n}{m}t\right)}{J(kt)}=\lim \frac{I(kt)}{J(kt)}=1\) as \(n\to\infty\). Let us write \(c\triangleq\mathbb{E}\left[\|X-Y|\right]\). Note that \[\varphi_{n}(X) =\inf_{|y|\leq J^{-1}(\log 2n)}|X-y|-c\] \[=\begin{cases}-c&\text{if }|X|\leq J^{-1}(\log 2n)\\ |X|-J^{-1}(\log 2n)-c&\text{if }|X|>J^{-1}(\log 2n).\end{cases}.\] With the above display in mind, observe that \[\mathbb{P}(\varphi_{n}(X)>\frac{n}{m}t)=\mathbb{P}\left(|X|>\frac{n}{m}t+J^{-1 }(\log 2n)+c\right).\] Note that, by Assumption 2 we have: \[\lim_{n\to\infty}\frac{-\log\mathbb{P}(X|>\frac{n}{m}t+J^{-1}(\log 2n)+c)}{J( \frac{n}{m}t+J^{-1}(\log 2n)+c)}=1.\] Also, note that for large \(n\), \(J(kt)\leq J(\frac{n}{m}t+J^{-1}(\log 2n)+c)\leq J(kt+t+J^{-1}(\log 4km)+c)\). Using (4.16) from Lemma 4.2 with \(u=4km,\;c_{1}=\frac{t}{4m},\;c_{2}=c+t\) we obtain \(\lim_{n\to\infty}\frac{J(kt+t+J^{-1}(\log 4km)+c)}{J(kt)}=1\), therefore \[\lim_{n\to\infty}\frac{-\log\mathbb{P}(\varphi_{n}(X)>\frac{n}{m}t)}{J(kt)}= \lim_{n\to\infty}\frac{J(\frac{n}{m}t+J^{-1}(\log 2n)+c)}{J(kt)}=1. \tag{4.2}\] Next, we try to approximate the term \(I(kt)\) from (2.10). Towards this direction, we begin by observing that \[\mathbb{P}(|X_{1}-X_{2}|\geq c+kt)\geq\mathbb{P}(X_{2}\leq c^{\prime})\mathbb{ P}(X_{1}\geq c^{\prime}+c+kt),\] for some \(c^{\prime}\in\mathbb{R}\) such that \(\mathbb{P}\left(X_{2}\leq c^{\prime}\right)>0\). Consequently, \[\limsup_{n\to\infty}\frac{-\log\mathbb{P}(h(X_{1},X_{2})\geq kt)}{J(kt+c^{\prime }+c)}\leq 1,\] Hence, \[\limsup_{n\to\infty}\frac{I(kt)}{J(kt+c^{\prime}+c)}=\limsup_{n\to\infty}\frac{ I(kt)}{J(kt)}\leq 1. \tag{4.3}\] We used Lemma 4.2 to drop \(c^{\prime}+c\). For the other direction, we need to establish an upper bound for \(\mathbb{P}\left(h(X_{1},X_{2})>kt\right)\). 
Let \(u>0\), then \[\mathbb{P}(|X_{1}-X_{2}|\geq u) \leq\mathbb{P}\left(\!|X_{1}|>u\right)+\mathbb{P}\left(\!|X_{2}|>u \right)+\mathbb{P}\left(\!|X_{1}-X_{2}|\geq u,|X_{1}|,\!|X_{2}|\leq u\right)\] \[\leq 2\exp\left(-J(u)\right)+2\mathbb{P}\left(X_{1}>X_{2}+u,|X_{ 1}|,\!|X_{2}|\leq u\right)\] \[\leq 2\exp\left(-J(u)\right)+2\sum_{i=0}^{\left[u\right]}\exp \left(-\frac{i}{\left\lceil u\right\rceil}u\leq X_{2}\leq-\frac{i-1}{\left \lceil u\right\rceil}u\right)\mathbb{P}\left(X_{1}\geq(1-\frac{i}{\left\lceil u \right\rceil})u\right)\] \[\leq 2\exp\left(-J(u)\right)+2\sum_{i=0}^{\left\lceil u\right\rceil }\exp\left(-J\left(\frac{i-1}{\left\lceil u\right\rceil}u\right)-J\left((1- \frac{i}{\left\lceil u\right\rceil})u\right)\right)\] \[\stackrel{{*}}{{\leq}}2\exp\left(-J(u)\right)+2\sum_{ i=0}^{\left\lceil u\right\rceil}\exp\left(-J\left(u-\frac{u}{\left\lceil u \right\rceil}\right)+b\right)\] \[\leq 2\exp\left(-J(u)\right)+2(u+2)\exp\left(b\right)\exp\left(-J (u-1)\right)\] \[\leq 3\mathrm{e}^{b}u\exp\left(-J(u-1)\right),\quad\text{for }u\geq 2 \mathrm{e}^{-b}+4.\] To obtain inequality marked by \(*\), we used Assumption 5. Taking \(-\log\) of above inequality we get \[\liminf_{u\to\infty}\frac{-\log\mathbb{P}\left(\!|X_{1}-X_{2}|>u\right)}{-\log 3 u-b+J(u-1)}\geq 1. \tag{4.4}\] Set \(u=kt+c\). Since, by Lemma 4.2, \(\lim_{k\to\infty}\frac{-\log 3(kt+c)-b+J(kt+c-1)}{J(kt)}=1\), we obtain: \[\liminf_{u\to\infty}\frac{I(kt)}{J(kt)}\geq 1. \tag{4.5}\] Equations (4.2), (4.3), and (4.5) yield (2.10). _2._ Once again, we set \(c\triangleq\mathbb{E}(X-Y)^{2}\) and note that, in this case, \[\varphi_{n}(X) =\inf_{|y|\leq J^{-1}(\log 2n)}(X-y)^{2}-c\] \[=\begin{cases}-c&\text{if }|X|\leq J^{-1}(\log 2n),\\ (|X|-J^{-1}(\log 2n))^{2}-c&\text{if }|X|>J^{-1}(\log 2n).\end{cases}\] With the above display in mind, observe that \[\mathbb{P}(\varphi_{n}(X)>\frac{n}{m}t)=\mathbb{P}(\!|X|>J^{-1}(\log 2n)+\sqrt{c+ \frac{n}{m}t}).\] Note that for large enough \(n\) \[\sqrt{kt}\leq J^{-1}(\log 2n)+\sqrt{c+\frac{n}{m}t}\leq J^{-1}(\log 2n)+\sqrt{|c|}+ \sqrt{t}+\sqrt{kt}.\] Hence, by Assumptions 5, 6 we obtain \[\lim_{n\to\infty}\frac{J(J^{-1}(\log 2n)+\sqrt{c+\frac{n}{m}t})}{J(\sqrt{kt})}=1.\] (see proof of Lemma 4.2 for details.) We therefore have \[\lim_{n\to\infty}\frac{-\log\mathbb{P}(\varphi_{n}(X)>\frac{n}{m}t)}{J(\sqrt{ kt})}=1. \tag{4.6}\] Next, we try to approximate the term \(I(kt)\) from (2.10). Note that \[\mathbb{P}((X_{1}-X_{2})^{2}\geq c+kt)=\mathbb{P}(X_{1}-X_{2}|\geq\sqrt{c+kt}) \geq\mathbb{P}(\!|X_{2}|\leq c^{\prime})\mathbb{P}(\!|X_{1}|\geq c^{\prime}+ \sqrt{c+kt}), \tag{4.7}\] for a constant \(c^{\prime}\) such that \(\mathbb{P}\left(\!|X_{2}|\leq c^{\prime}\right)>0\). Consequently, \[\limsup_{n\to\infty}\frac{-\log\mathbb{P}((X_{1}-X_{2})^{2}\geq c+kt)}{J(c^{ \prime}+\sqrt{c+kt})}\leq 1,\] which in turn yields \[\limsup_{n\to\infty}\frac{I(kt)}{J(\sqrt{kt})}\leq 1. \tag{4.8}\] We utilized Lemma 4.2 to drop \(c,c^{\prime}\) from denominator in limits. For the other direction, we need to establish an upper bound for the left hand side of (4.7). As proved in (4.4), with \(u=\sqrt{c+kt}\), we have \[\liminf_{k\to\infty}\frac{-\log\mathbb{P}\left(\!|X_{1}-X_{2}|>\sqrt{c+kt} \right)}{-\log 3\sqrt{c+kt}-b+J(\sqrt{c+kt}-1)}\geq 1. \tag{4.9}\] Since \(\frac{\log\sqrt{c+kt}}{J(\sqrt{kt})}\to 0,\ \frac{J(\sqrt{c+kt}-1)}{J(\sqrt{kt})}\to 1\) as \(k\to\infty\), from (4.9) we obtain \[\liminf_{k\to\infty}\frac{-\log\mathbb{P}\left(h(X_{1},X_{2})\geq kt\right)}{ J(\sqrt{kt})}\geq 1. 
\tag{4.10}\] This completes the proof. _3._ We write \(c=\mathbb{E}\max(\!|X_{1}|\,,...,\!|X_{m}|)\). Note that \[\varphi_{n}(X)=\!|X|-c.\] Hence, \(\mathbb{P}\left(\varphi_{n}(X)\geq\frac{n}{m}t\right)=\mathbb{P}\left(\!|X| \geq\frac{n}{m}t+c\right)\), and similar to the above cases we can show: \[\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(\varphi_{n}(X)\geq\frac{n}{m}t \right)}{J(kt)}=\lim_{n\to\infty}\frac{I(kt)}{J(kt)}=\lim_{n\to\infty}\frac{- \log\mathbb{P}\left(\varphi_{n}(X)\geq\frac{n}{m}t\right)}{I(kt)}=1. \tag{4.11}\] 4. Assume \(n\) is large enough so \(1\leq J^{-1}(\log 2n)\). Calling \(\mathbb{E}\left[X^{2}\right]-\mathbb{E}\left[\max\left\{X,Y\right\}\right]=c\) we have \[\varphi_{n}(X)=\begin{cases}\frac{1}{2}X^{2}-X-c,&\text{if }X>\frac{1}{2},\\ \frac{1}{2}X^{2}-\frac{1}{2}-c,&\text{if }X\leq\frac{1}{2}.\end{cases}\] Therefore, \[\mathbb{P}\left(\varphi_{n}(X)>\frac{n}{m}t\right) =\mathbb{P}\left(\frac{X^{2}}{2}-X>\frac{n}{m}t+c,X>0\right)+ \mathbb{P}\left(X\leq-\sqrt{\frac{2n}{m}t+2c+1}\right)\] \[=\mathbb{P}\left(X>1+\sqrt{\frac{2n}{m}t+2c+1}\right)+\mathbb{P} \left(X\leq-\sqrt{\frac{2n}{m}t+2c+1}\right).\] Hence, \[\mathbb{P}\left(\left|X\right|>\sqrt{\frac{2n}{m}t+2c+1}+1\right)\leq\mathbb{ P}\left(\varphi_{n}(X)>\frac{n}{m}t\right)\leq\mathbb{P}\left(\left|X\right|> \sqrt{\frac{2n}{m}t+2c+1}\right).\] Similar to the previous cases above, by Lemma 4.2, we can then show \[\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(\varphi_{n}(X)>\frac{n}{m}t\right) }{J\left(\sqrt{\frac{2n}{m}t+2c+1}+1\right)}=\lim_{n\to\infty}\frac{-\log \mathbb{P}\left(\varphi_{n}(X)>\frac{n}{m}t\right)}{J\left(\sqrt{\frac{2n}{m}t +2c+1}\right)}=\lim_{n\to\infty}\frac{-\log\mathbb{P}\left(\varphi_{n}(X)> \frac{n}{m}t\right)}{J(\sqrt{2k}t)}=1. \tag{4.12}\] Moreover, since \(h(X,Y)\geq\min(\frac{1}{2}X^{2}-\frac{1}{2},\frac{1}{2}X^{2}-X)-c\), we obtain \[\mathbb{P}\left(h(X,Y)>kt\right) \geq\mathbb{P}\left(\min(\frac{1}{2}X^{2}-\frac{1}{2},\frac{1}{2} X^{2}-X)>kt+c\right)\] \[\geq\mathbb{P}\left(\left|X\right|>\sqrt{2kt+2c+1}+1\right).\] Hence, \[\limsup_{n\to\infty}\frac{-\log\mathbb{P}\left(h(X,Y)>kt\right)}{J(\sqrt{2kt+ 2c+1}+1)}=\limsup_{n\to\infty}\frac{I(kt)}{J(\sqrt{2kt})}\leq 1. \tag{4.13}\] Again, for dropping extra constants from the denominator we used Lemma 4.2. Furthermore, \(h(X,Y)\leq\frac{1}{2}X^{2}+\left|X\right|+\frac{1}{2}Y^{2}+\left|Y\right|-c\), and \(\frac{1}{2}X^{2}+\left|X\right|>u\iff\left|X\right|>-1+\sqrt{2u+1},\;\forall u\geq 0\). 
Hence, \[\mathbb{P}\left(h(X,Y)>kt\right) \leq\mathbb{P}\left(\frac{1}{2}X^{2}+|X|+\frac{1}{2}Y^{2}+|Y|>kt+c\right)\] \[\leq\mathbb{P}\left(\frac{1}{2}X^{2}+|X|>kt+c\right)+\mathbb{P} \left(\frac{1}{2}Y^{2}+|Y|>kt+c\right)\] \[\quad+\sum_{i=1}^{\lceil kt+c\rceil}\mathbb{P}\left(\frac{i-1}{ \lceil kt+c\rceil}(kt+c)\leq\frac{1}{2}X^{2}+|X|\leq\frac{i}{\lceil kt+c \rceil}(kt+c)\right)\times\] \[\qquad\qquad\mathbb{P}\left(\frac{1}{2}Y^{2}+|Y|>(1-\frac{i}{ \lceil kt+c\rceil})(kt+c)\right)\] \[\leq 2\mathbb{P}\left(|X|>-1+\sqrt{2kt+2c+1}\right)\] \[\quad+\sum_{i=1}^{\lceil kt+c\rceil}\mathbb{P}\left(|X|\geq-1+ \sqrt{\frac{2(i-1)}{\lceil kt+c\rceil}(kt+c)+1}\right)\times\] \[\qquad\qquad\mathbb{P}\left(|Y|>-1+\sqrt{2(1-\frac{i}{\lceil kt+c \rceil})(kt+c)+1}\right)\] \[\leq 2\exp\left(-J(-1+\sqrt{2kt+2c+1})\right)\] \[\quad+\sum_{i=1}^{\lceil kt+c\rceil}\exp\left(-J(-1+\sqrt{\frac{ 2(i-1)}{\lceil kt+c\rceil}(kt+c)+1})-J(-1+\sqrt{2(1-\frac{i}{\lceil kt+c \rceil})(kt+c)+1})\right)\] \[\stackrel{{*}}{{\leq}}2\exp\left(-J(-1+\sqrt{2kt+2c+ 1})\right)+\sum_{i=1}^{\lceil kt+c\rceil}\mathrm{e}^{b}\exp\left(-J(-2+\sqrt{ 2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2})\right)\] \[=2\exp\left(-J(-1+\sqrt{2kt+2c+1})\right)+\lceil kt+c\rceil \mathrm{e}^{b}\exp\left(-J(-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2} )\right).\] To obtain inequality marked by \(*\) we used Assumption 5 and the fact that \(\sqrt{x+y}\leq\sqrt{x}+\sqrt{y}\) for non-negative \(x,y\). Hence, by Lemma 4.3 we obtain \[\liminf_{n\to\infty}\frac{I(kt)}{J(\sqrt{2kt})}\geq 1, \tag{4.14}\] which concludes the proof. **Lemma 4.2**.: _If tail function \(J(t)\), defined in (2.2), satisfy Assumptions 5, 6, then for any \(c_{1}>0\) and \(c_{2},c_{3}\in\mathbb{R}\) we have_ \[\lim_{u\to\infty}\frac{J(u+c_{2})}{J(u)}=1 \tag{4.15}\] \[\lim_{u\to\infty}\frac{J(c_{1}u+J^{-1}(\log u)+c_{2})}{J(c_{1}u)}=1 \tag{4.16}\] \[\lim_{u\to\infty}\frac{J(c_{2}+\sqrt{u+c_{3}})}{J(\sqrt{u})}=1 \tag{4.17}\] Proof.: To prove (4.15), one only need to note that by Assumption 5 we have \[J(u)-J(|c_{2}|)-b\leq J(u-|c_{2}|)\leq J(u+c_{2})\leq J(u+|c_{2}|)\leq J(u)+J(|c_{2 }|)+b, \tag{4.18}\] and that \(J\) is a non-decreasing function, and \(J(u)\xrightarrow{u\to\infty}\infty\). To show (4.16) observe that \(J^{-1}(\log u)+c_{2}>0\) for large enough \(u\). By Assumption 5 we obtain \[J(c_{1}u)\leq J(c_{1}u+J^{-1}(\log u)+c_{2})\leq J(c_{1}u)+\log u+J(|c_{2}|)+2b.\] Note that \(\lim\frac{\log u}{J(u)}=\lim\frac{C}{J(u)}=0\) as \(u\to\infty\), so dividing by \(J(c_{1}u)\) and taking limit of the above inequalities yields (4.16). For the third part observe that when \(u\) is large enough we have \[\sqrt{u}-\sqrt{|c_{3}|}-|c_{2}|\leq c_{2}+\sqrt{u+c_{3}}\leq\sqrt{u}+\sqrt{|c _{3}|}+|c_{2}|\,.\] Then, since \(J\) is non-decreasing and satisfies Assumption 5 we obtain \[J(\sqrt{u})-J(\sqrt{|c_{3}|}+|c_{2}|)-b\leq J(c_{2}+\sqrt{u+c_{3}})\leq J( \sqrt{u})+J(\sqrt{|c_{3}|}+|c_{2}|)+b.\] Once again, dividing by \(J(\sqrt{u})\) and taking \(u\to\infty\) yields (4.17). 
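As a concrete illustration of Lemma 4.2 (included here purely for the reader's convenience, with this particular \(J\) being our own illustrative choice), take the subWeibull tail \(J(t)=\sqrt{t}\), corresponding to a Weibull tail with shape parameter \(1/2\), so that \(J^{-1}(y)=y^{2}\). Then \[\frac{J(u+c_{2})}{J(u)}=\sqrt{\frac{u+c_{2}}{u}}\to 1,\qquad\frac{J(c_{1}u+J^{-1}(\log u)+c_{2})}{J(c_{1}u)}=\sqrt{\frac{c_{1}u+\log^{2}u+c_{2}}{c_{1}u}}\to 1,\] as \(u\to\infty\), and similarly \(J(c_{2}+\sqrt{u+c_{3}})/J(\sqrt{u})\to 1\), in line with (4.15)-(4.17).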
**Lemma 4.3**.: _Under Assumptions 5, 6 we have_ \[\lim_{k\to\infty}\frac{-\log\left(2\exp\left(-J(-1+\sqrt{2kt+2c+1})\right)+\lceil kt+c\rceil\mathrm{e}^{b}\exp\left(-J(-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2})\right)\right)}{J(\sqrt{2kt})}=1\] Proof.: Given Lemma 4.1, it suffices to show that \[\lim_{k\to\infty}\frac{-\log\left(2\exp\left(-J(-1+\sqrt{2kt+2c+1})\right)\right)}{J(\sqrt{2kt})}=1 \tag{4.19}\] \[\lim_{k\to\infty}\frac{-b-\log\left(\lceil kt+c\rceil\exp\left(-J(-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2})\right)\right)}{J(\sqrt{2kt})}=1. \tag{4.20}\] Note that \[\lim_{k\to\infty}\frac{-\log\left(2\exp\left(-J(-1+\sqrt{2kt+2c+1})\right)\right)}{J(\sqrt{2kt})}=\lim_{k\to\infty}\frac{J(-1+\sqrt{2kt+2c+1})}{J(\sqrt{2kt})}=1,\] by Lemma 4.2. Moreover, \[\lim_{k\to\infty}\frac{-b-\log\left(\lceil kt+c\rceil\exp\left(-J(-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2})\right)\right)}{J(\sqrt{2kt})}\] \[=\lim_{k\to\infty}\frac{-b-\log\lceil kt+c\rceil}{J(\sqrt{2kt})}+\frac{J(-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2})}{J(\sqrt{2kt})}.\] By Assumption 6 we have \(\lim\limits_{k\rightarrow\infty}\frac{-b-\log\lceil kt+c\rceil}{J(\sqrt{2kt})}=\lim\limits_{k\rightarrow\infty}\frac{-2\log\sqrt{\lceil kt+c\rceil}}{J(\sqrt{2kt})}=0\). We also have \[\lim\limits_{k\rightarrow\infty}\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2}=\sqrt{2kt+2c+2},\] Hence, for large enough \(k\), there are fixed \(c_{2},c_{3}\in\mathbb{R}\) such that \[-2+\sqrt{2kt+c_{2}}\leq-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2}\leq-2+\sqrt{2kt+c_{3}},\quad\forall k>K.\] Given \(J(t)\) is an increasing function, and \(\lim_{k\rightarrow\infty}\frac{J(-2+\sqrt{2kt+c_{i}})}{J(\sqrt{2kt})}=1,\ i=2,3\) by Lemma 4.2 we obtain \[\lim\limits_{k\rightarrow\infty}\frac{J(-2+\sqrt{2(1-\frac{1}{\lceil kt+c\rceil})(kt+c)+2})}{J(\sqrt{2kt})}=1. \tag{4.21}\]
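The statements above are asymptotic, but the mechanism they rest on, namely that a large value of \(U_{n}\) is typically produced by a single large observation as in Lemma 2.11 and Lemma 2.19, can already be glimpsed numerically at moderate sample sizes. The following Python sketch is not part of the argument; the distribution (Weibull with shape \(1/2\)), the kernel (case 3 of Lemma 2.19), the sample size, and the thresholds are all illustrative choices of ours, and at such small \(n\) only rough, order-of-magnitude agreement with the one-large-observation proxy should be expected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choices (not from the paper): Weibull(shape=1/2) samples, so
# J(t) = -log P(X > t) = sqrt(t), and the order-2 kernel of Lemma 2.19, case 3:
# h(x, y) = max(|x|, |y|) - E[max(|X|, |Y|)]   (samples are nonnegative here).
shape, n, m, reps = 0.5, 30, 2, 200_000

def u_stat_max(x):
    """Average of max(x_i, x_j) over all pairs i < j, for each row of x.

    For a row sorted increasingly, sum_{i<j} max(x_i, x_j) = sum_j (j-1) * x_(j),
    which avoids forming the n x n pairwise matrix.
    """
    xs = np.sort(x, axis=-1)
    npts = x.shape[-1]
    weights = np.arange(npts)
    return 2.0 * (xs * weights).sum(axis=-1) / (npts * (npts - 1))

# Center the kernel with a separately estimated c = E[max(X, Y)].
pair = rng.weibull(shape, size=(2, 1_000_000))
c = np.maximum(pair[0], pair[1]).mean()

U = u_stat_max(rng.weibull(shape, size=(reps, n))) - c

for t in (1.0, 2.0, 3.0):
    p_emp = (U > t).mean()
    # One-large-observation proxy: U_n > t is essentially driven by some X_i
    # exceeding roughly n*t/m + c, so compare with n * P(X > n*t/m + c).
    p_proxy = n * np.exp(-np.sqrt(n * t / m + c))
    print(f"t={t:.1f}  empirical P(U_n > t) = {p_emp:.2e}"
          f"  n*P(X > nt/m + c) = {p_proxy:.2e}")
```

Any agreement produced by such a small experiment is only qualitative; the identification of the rate function \(I(kt)\) in Lemma 2.19 is an \(n\to\infty\) statement.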
2302.07651
A constrained mean curvature type flow for capillary boundary hypersurfaces in space forms
In this paper, we introduce a new constrained mean curvature type flow for capillary boundary hypersurfaces in space forms. We show the flow exists for all time and converges globally to a spherical cap. Moreover, the flow preserves the volume of the bounded domain enclosed by the hypersurface and decreases the total energy. As a by-product, we give a flow proof of the capillary isoperimetric inequality for the starshaped capillary boundary hypersurfaces in space forms.
Xinqun Mei, Liangjun Weng
2023-02-15T13:34:31Z
http://arxiv.org/abs/2302.07651v1
# A constrained mean curvature type flow for capillary boundary hypersurfaces in space forms ###### Abstract. In this paper, we introduce a new constrained mean curvature type flow for capillary boundary hypersurfaces in space forms. We show the flow exists for all time and converges globally to a spherical cap. Moreover, the flow preserves the volume of the bounded domain enclosed by the hypersurface and decreases the total energy. As a by-product, we give a flow proof of the capillary isoperimetric inequality for the star-shaped capillary boundary hypersurfaces in space forms. Key words and phrases:Mean curvature type flow, space forms, capillary boundary, the capillary isoperimetric inequality 2020 Mathematics Subject Classification: Primary 53C44, Secondary 35K93 ## 1. Introduction Let \(M^{n+1}(K)\) be a complete simply-connected Riemann manifold with constant sectional curvature \(K\). Up to homoteties, we may assume that \(K=-1,0,+1\). The case \(K=0\), \(M^{n+1}(K)\) is just the Euclidean space \(\mathbb{R}^{n+1}\). If \(K=-1\), \(M^{n+1}(K)\) is the hyperbolic space \(\mathbb{H}^{n+1}\), and we use the Poincare ball model for \(\mathbb{H}^{n+1}\), which is given by \(\left(\mathbb{B}^{n+1},\bar{g}_{\mathbb{H}}\right)\), where \[\mathbb{B}^{n+1}=\{x\in\mathbb{R}^{n+1}:|x|<1\},\quad\bar{g}_{\mathbb{H}}=e^{2 u}\delta_{\mathbb{B}^{n+1}}:=\frac{4}{(1-|x|^{2})^{2}}|dx|^{2}.\] Let \(B_{R}^{\mathbb{H}}\) be a geodesic ball in \(\mathbb{H}^{n+1}\) with the hyperbolic radius \(R\in(0,+\infty)\), by using the isometry of \(\mathbb{H}^{n+1}\), one can view \(B_{R}^{\mathbb{H}}\) as a Euclidean ball \(B_{r_{0}}\subset\mathbb{B}^{n+1}\) of radius \[r_{0}:=\sqrt{\frac{\cosh R-1}{\cosh R+1}}\in(0,1),\] in \(\mathbb{R}^{n+1}\) with the hyperbolic metric \(\bar{g}_{\mathbb{H}}\). If \(K=1\), \(M^{n+1}(K)\) is the spherical space form \(\mathbb{S}^{n+1}\), and we use the model \[(\mathbb{R}^{n+1},\bar{g}_{\mathbb{S}}),\quad\bar{g}_{\mathbb{S}}=e^{2v} \delta_{\mathbb{R}^{n+1}}:=\frac{4}{(1+|x|^{2})^{2}}|dx|^{2}.\] to represent \(\mathbb{S}^{n+1}\setminus\{\mathcal{S}\}\), the unit ball without the south pole. Let \(B_{R}^{\mathbb{S}}\) be a geodesic ball in \(\mathbb{S}^{n+1}\) with radius \(R\in(0,\pi)\) center at the north pole. Analog to hyperbolic space, one can view \(B_{R}^{\mathbb{S}}\) as a Euclidean ball \(B_{r_{0}}\subset\mathbb{R}^{n+1}\) of radius \[r_{0}:=\sqrt{\frac{1-\cos R}{1+\cos R}}\in(0,\infty).\] with the spherical metric \(\bar{g}_{\mathbb{S}}\). For brevity, and without causing ambiguity, we uniformly denote \(\bar{g}\) for the metric \(\bar{g}_{\mathbb{H}}\) or \(\bar{g}_{\mathbb{S}}\) and \(B_{R}\) for the geodesic ball \(B_{R}^{\mathbb{H}}\) in \(\mathbb{H}^{n+1}\) or \(B_{R}^{\mathbb{S}}\) in \(\mathbb{S}^{n+1}\) with the geodesic radius \(R\) in the rest of this paper. For a constant unit vector \(a\in\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\), let \(X_{a}\) be the vector field in \(M^{n+1}(K)\) (\(K=-1,+1\)), given by \[X_{a}:=\frac{2}{1+Kr_{0}^{2}}\left[\langle x,a\rangle x-\frac{1}{2}(|x|^{2}+r_{0 }^{2})a\right],\] \(\langle,\rangle\) denotes the Euclidean metric. It has been observed by Wang-Xia [23, Proposition 4.1] that \(X_{a}\) is a conformal Killing vector field such that \[L_{X_{a}}\bar{g}=V_{a}\bar{g},\] where \[V_{a}:=\frac{2\langle x,a\rangle}{1+K|x|^{2}}.\] Besides, \(X_{a}\) satisfies \(\bar{g}(X_{a},\bar{N})=0\) on \(\partial B_{R}\), where \(\bar{N}\) is the unit normal vector along \(\partial B_{R}\). 
From those properties of \(X_{a}\), Wang-Xia [23, Proposition 4.4] obtain a new Minkowski formula in space forms. That is, for a hypersurface \(\Sigma\subset\bar{B}_{R}\) with boundary \(\partial\Sigma\subset\partial B_{R}\) such that \(\partial\Sigma\) intersects \(\partial B_{R}\) at a constant contact angle \(\theta\in(0,\pi)\), in \(\mathbb{H}^{n+1}\), it holds \[\int_{\Sigma}n\left(V_{a}+\sinh R\cos\theta\bar{g}(Y_{a},\nu)\right)dA=\int_{ \Sigma}H\bar{g}(X_{a},\nu)dA, \tag{1.1}\] and in \(\mathbb{S}^{n+1}\) respectively, \[\int_{\Sigma}n\left(V_{a}+\sin R\cos\theta\bar{g}(Y_{a},\nu)\right)dA=\int_{ \Sigma}H\bar{g}(X_{a},\nu)dA, \tag{1.2}\] where \(H\nu\) is mean curvature vector of \(\Sigma\) and \(Y_{a}:=\frac{1}{2}(1-K|x^{2}|)a+K\langle x,a\rangle x\). In this paper, we consider a new type mean curvature flow for capillary boundary hypersurfaces (see Section 2.1 for the definition) supported in the geodesic ball in space forms \(M^{n+1}(K)\) for \(K=-1\) and \(K=1\), while the case \(K=0\) was studied previously in [21, 22] respectively. Such kind of locally constrained curvature type flow was first used by Guan-Li [4] for closed hypersurfaces in space forms, which was motivated by the Minkowski formula. See also [5, 6, 7, 8, 13, 20] and references therein for various general setting, which include the hyperbolic space \(\mathbb{H}^{n+1}\) and spherical space \(\mathbb{S}^{n+1}\). Recently, it attracts high interest to study the flow of hypersurface with non-empty boundary, especially due to their close connection with geometric inequalities. For example, some new class of constrained curvature flows was studied for hypersurface with free boundary in Euclidean space by [19, 22], and [16, 21, 24] for capillary boundary. Subsequently, a class of new Alexandrov-Fenchel inequalities was obtained after establishing the long-time existence and convergence of those flows. One can refer to [11, 12] for the studying of the inverse mean curvature flow with free boundary in a Euclidean ball and its application in geometric inequality. Therefore, besides the Euclidean space, it is natural to ask the same question in general ambient space, say space forms. To be more precise, let \(\Sigma_{t}\) be a family of hypersurfaces with boundary in \(\bar{B}_{R}\) given by a family of isometric embeddings \(x(\cdot,t):M\to\bar{B}_{R}\) from a compact \(n\)-dimensional manifold \(M\) with the boundary \(\partial M\) (\(n\geq 2\)) such that \[\operatorname{int}(\Sigma_{t})=x\left(\operatorname{int}(M),t\right)\subset B_{ R},\quad\partial\Sigma_{t}=x(\partial M,t)\subset\partial B_{R}.\] And \(x(\cdot,t)\) satisfy \[\left\{\begin{array}{ll}(\partial_{t}x)^{\perp}(\cdot,t)=F(\cdot,t)\nu( \cdot,t),&\text{in }M\times[0,T),\\ \bar{g}(\nu(\cdot,t),\bar{N}\circ x(\cdot,t))=-\cos\theta&\text{on }\partial M \times[0,T),\\ x(\cdot,0)=x_{0}(\cdot)&\text{in }M,\end{array}\right. \tag{1.3}\] In the case of \(\mathbb{H}^{n+1}\), we choose the speed function in (1.3) as \[F:=nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{a},\nu)-H\bar{g}(X_{a},\nu). \tag{1.4}\] In the case of \(\mathbb{S}^{n+1}\), we choose the speed function in (1.3) as \[F:=nV_{a}+n\sin R\cos\theta\bar{g}(Y_{a},\nu)-H\bar{g}(X_{a},\nu). \tag{1.5}\] Such kind of flow is motivated by the Minkowski formula (1.1) and (1.2) respectively. And the speed function \(F\) is chosen to be (1.4) or (1.5) to ensure that the enclosed volume of \(\Sigma_{t}\subset\bar{B}_{R}\) is preserved along the flow (1.3), while the total energy functional (1.8) is monotone decreasing. 
We will discuss this later.

**Definition 1.1**.: We say \(\Sigma\subset\bar{B}_{R}\subset M^{n+1}(K)\) is star-shaped with respect to \(a\) if \(\bar{g}(X_{a},\nu)>0\) along \(\Sigma\).

Our main result in this paper is the following theorem.

**Theorem 1.2**.: _If the initial hypersurface \(\Sigma_{0}\) is a star-shaped capillary boundary hypersurface in the space form \(M^{n+1}(K)\) and the contact angle satisfies \(|\cos\theta|<\frac{3n+1}{5n-1}\), then the flow (1.3) exists for all time. Moreover, \(x(\cdot,t)\) converges to a spherical cap in the \(C^{\infty}\) topology as \(t\to\infty\), whose enclosed domain has the same volume as the one enclosed by \(\Sigma_{0}\)._

The family of spherical caps consists of the geodesic balls of radius \(r\) and the totally geodesic balls in the hyperbolic space or in the spherical space, which can be viewed as the Euclidean sets \[C_{\theta,r}(a):=\left\{x\in B_{R}:\left|x-\sqrt{r^{2}+2rr_{0}\cos\theta+r_{0}^{2}}a\right|=r\right\}, \tag{1.6}\] and \[C_{\theta,\infty}(a):=\{x\in B_{R}:\langle x,a\rangle=\cos\theta\}, \tag{1.7}\] endowed with the metric \(\bar{g}\). In particular, when \(K=0\), or when \(\theta=\frac{\pi}{2}\) and \(K=-1\), Theorem 1.2 was proved in [21] and [17] respectively. As in [21], we have a technical restriction on the range of the contact angle, \(|\cos\theta|<\frac{3n-1}{5n-1}\), which is crucial for us to obtain the uniform gradient estimate in space forms, see Section 3. However, we expect the result to hold for the whole range \(\theta\in(0,\pi)\) in space forms.

It is worth noting that the isoperimetric inequality for hypersurfaces with non-empty boundary in a Euclidean ball was studied in [1, 2], where it is proved that, among the hypersurfaces whose enclosed domain \(\Omega\) has fixed volume, the spherical caps are the minimizers of the area functional. Instead of just considering the area functional, it is interesting to consider the total energy functional, that is \[E(\Sigma):=\mathrm{Area}(\Sigma)-\cos\theta\mathrm{Area}(T) \tag{1.8}\] for a constant contact angle \(\theta\in(0,\pi)\). The second term \(T:=\partial\Omega\setminus\Sigma\) is known as the wetting part of \(\partial\Omega\) in the theory of capillarity, see [3] for example. Further, by combining Theorem 1.2 with our higher order Minkowski formulas (2.4) and (2.8) for \(k=2\), we give a flow proof of the capillary isoperimetric inequality in space forms for (1.8), which can also be viewed as the hyperbolic and spherical counterpart of [21, Theorem 1.1] or [15, Chapter 19].
**Corollary 1.3**.: _Among the star-shaped capillary boundary hypersurfaces with fixed volume of enclosed domain in a geodesic ball \(\bar{B}_{R}\subset M^{n+1}(K)\) for \(K=\pm 1\), the spherical caps are the only minimizers of the total energy (1.8), provided that the contact angle \(\theta\) satisfies \(|\cos\theta|<\frac{3n+1}{5n-1}\)._ Proof.: For \(K=-1\), along the flow (1.3) with \(F\) being (1.4), using (2.2) and (1.1), we know \[\frac{d}{dt}\mathrm{Vol}(\Omega_{t}) = \int_{\Sigma_{t}}\left[nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{a}, \nu)-H\bar{g}(X_{a},\nu)\right]dA_{t}\] \[= 0.\] Note that \(\left.\partial_{t}x\right|_{\partial M}\in T(\partial B_{R}^{\mathbb{H}})\), combining with (2.3) and (2.4) for \(k=2\), it follows \[\frac{d}{dt}E(\Sigma_{t}) = \int_{\Sigma_{t}}H\left[nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{a}, \nu)-H\bar{g}(X_{a},\nu)\right]dA_{t}\] \[= \int_{\Sigma_{t}}(\frac{2n}{n-1}\sigma_{2}-H^{2})\bar{g}(X_{a}, \nu)dA_{t}\] \[= -\frac{1}{n-1}\int_{\Sigma_{t}}\sum_{1\leq i<j\leq n}(\kappa_{i} -\kappa_{j})^{2}\bar{g}(X_{a},\nu)dA_{t}\leq 0.\] For \(K=1\), the proof is similar to above. Hence Corollary 1.3 follows directly from Theorem 1.2. **This article is structured as follows.** In Section 2, we give some preliminaries for capillary boundary hypersurfaces supported in a geodesic ball of space forms, and prove the high-order Minkowksi type formula in the hyperbolic space \(\mathbb{H}^{n+1}\) and in the spherical space \(\mathbb{S}^{n+1}\) respectively. Then we convert the flow (1.3) to a scalar parabolic equation on semi-sphere with the help of a conformal transformation. The last Section 3 is devoted to obtain uniform a priori estimates and prove Theorem 1.2. ## 2. Preliminaries In this section, in the first part, we collect some basic facts about capillary boundary hypersurfaces supported in a geodesic ball of space forms, and we establish Minkowski formula which can be viewed as a high order counterpart of (1.1) and (1.2). In the second part, we reduce (1.3) to a scalar flow, provided that the evolving hypersurface is star-shaped in the sense of Definition 1.1. ### Capillary boundary hypersurfaces in space forms Let \(x:M\to\bar{B}_{R}\) be an isometric embedding of an orientable \(n\)-dimensional compact manifold \(M\) with smooth boundary \(\partial M\), denote \(\Sigma:=x(M)\) and \(\partial\Sigma:=x(\partial M)\) which satisfying \(\mathrm{int}(\Sigma)\subset\mathrm{int}(B_{R})\) and \(\partial\Sigma\subset\partial B_{R}\). If there is no confusion, we will identity \(M\) with \(\Sigma\) and \(\partial M\) with \(\partial\Sigma\). We denote by \(\bar{\nabla},D\) the Levi-Civita connection of \((B_{R},\bar{g})\) and \((\Sigma,g)\) respectively, where \(g\) is the induced metric from embedding \(x\). \(\Sigma\) divides the ball \(B_{R}\) into two parts, we denote one part by \(\Omega\) and \(\nu\) be the unit outward normal vector of \(\Sigma\) w.r.t \(\Omega\) and \(T:=\partial\Omega\cap\partial B_{R}\) be the wetting part of \(\partial\Omega\). Let \(\mu\) be the unit outward conormal vector field along \(\partial\Sigma\subset\Sigma\) and \(\bar{\nu}\) be the unit normal to \(\partial\Sigma\subset\partial B_{R}\) such that \(\{\nu,\mu\}\) and \(\{\bar{\nu},\bar{N}\}\) have the same orientation in the normal bundle of \(\partial\Sigma\subset B_{R}\). 
Denote by \(h\) and \(\sigma_{k}\) the second fundamental form and \(k\)-th mean curvature of the immersion \(x\) respectively, precisely, \(h(X,Y):=\bar{g}(\bar{\nabla}_{X}\nu,Y)\) and \(\sigma_{k}:=\sigma_{k}(\kappa)\), where \(X,Y\in T\Sigma\) and \(\kappa:=(\kappa_{1},\cdots,\kappa_{n})\in\mathbb{R}^{n}\) are the eigenvalues of Weingarten matrix \((h^{j}_{i})\). We define the _contact angle_\(\theta\in(0,\pi)\) between the hypersurface \(\Sigma\) and the geodesic ball \(B_{R}\) by \[\bar{g}(\nu,\bar{N})=\cos(\pi-\theta),\quad\text{ on }\partial\Sigma.\] (See for example Figure 1 in the case \(\mathbb{H}^{n+1}\).) And we call such hypersurface \(\Sigma\) as the _capillary boundary hypersurface_. In particular, when \(\theta=\frac{\pi}{2}\), it is known as _free boundary hypersuface_. Moreover, it follows that \[\mu = \sin\theta\bar{N}+\cos\theta\bar{\nu},\] \[\nu = -\cos\theta\bar{N}+\sin\theta\bar{\nu}.\] Figure 1. A capillary boundary hypersurface \(\Sigma\) in hyperbolic space The so-called wetting energy \(W(\Sigma)\) is just the area of the region \(T\), which is bounded by \(\partial\Sigma\) on \(\partial B_{R}\). And the total energy functional is defined as \[E(\Sigma):=\operatorname{Area}(\Sigma)-\cos\theta\,W(\Sigma). \tag{2.1}\] In order to show the monotonicity property of total energy functional \(E\) along our flow (1.3). Let us consider an admissible variation of \(\Sigma:=x(M)\), given by \(x_{t}:M\times(-\varepsilon,\varepsilon)\to\bar{B}_{R}\) satisfying that \(x_{t}(\cdot):=x(\cdot,t):M\to\bar{B}_{R}\) is an immersion with \[\operatorname{int}(\Sigma_{t})\subset B_{R},\quad\text{and}\quad\partial \Sigma_{t}\subset\partial B_{R},\] where \(\Sigma_{t}:=x(M,t)\). Denote \(\Omega_{t}\) the enclosed domain by \(\Sigma_{t}\) in \(\bar{B}_{R}\). Let \(Y:=\frac{\partial}{\partial t}x(\cdot,t)\big{|}_{t=0}\) be the associated variational vector field of \(x_{t}\), then the first variational formula of \(\operatorname{Vol}(\Omega_{t})\) and \(E(\Sigma_{t})\) are known as (cf. [18, Section 4]) \[\frac{d}{dt}\Big{|}_{t=0}\operatorname{Vol}(\Omega_{t})=\int_{M}\bar{g}(Y, \nu)dA, \tag{2.2}\] and \[\frac{d}{dt}\Big{|}_{t=0}E(\Sigma_{t})=\int_{M}H\bar{g}(Y,\nu)dA+\int_{\partial M }\bar{g}(Y,\mu-\cos\theta\bar{\nu})ds, \tag{2.3}\] where \(dA\) and \(ds\) are the area element of \(M\) and \(\partial M\) respectively. As the Minkowski formula plays an important role for the closed hypersurface in space forms, cf.[4, 5, 6] etc. In the following, we firstly establish a new Minkowski formula for the capillary boundary hypersurfaces supported in a geodesic ball in space forms \(M^{n+1}(K)\) for \(K=\pm 1\). While the case \(K=0\) was shown recently by Weng-Xia in [24, Proposition 2.8]. It may also have independent interest for \(K=\pm 1\), since we only use the special case \(k=2\) of (2.4) and (2.8) to show the monotonicity of the total energy functional (2.1) in this paper, which has been indicated in the proof of Corollary 1.3. **Proposition 2.1**.: _Let \(x:M\to B_{R}^{\mathbb{H}}\) be an isometric immersion of \(\Sigma:=x(M)\) into the hyperbolic ball \(B_{R}^{\mathbb{H}}\), whose boundary \(\partial\Sigma\) intersects \(\partial B_{R}^{\mathbb{H}}\) at a constant angle \(\theta\in(0,\pi)\), then it holds_ \[(n-k+1)\int_{\Sigma}\big{[}\sigma_{k-1}V_{a}+\sinh R\cos\theta\sigma_{k-1} \bar{g}(Y_{a},\nu)\big{]}dA=k\int_{\Sigma}\sigma_{k}\bar{g}(X_{a},\nu)dA. \tag{2.4}\] In particular, when \(\theta=\frac{\pi}{2}\), formula (2.4) was proved by Wang-Xia [23, Proposition 5.1]. 
For completeness, we contain a proof here for general \(\theta\). Proof.: Let \(\{e_{i}\}_{i=1}^{n}\) be an othonormal frame on \(\Sigma\), by [23, equation (4.7)], we have \[\frac{1}{2}(D_{i}(X_{a}^{T})_{j}+D_{j}(X_{a}^{T})_{i})=V_{a}\bar{g}_{ij}-h_{ij }\bar{g}(X_{a},\nu),\] where \(X_{a}^{T}\) is the tangential projection of \(X_{a}\) on \(\Sigma\). Set \[Z_{a}:=\bar{g}(\nu,e^{-u}a)x-\bar{g}(x,\nu)(e^{-u}a).\] It is known that \(x\) is a conformal Killing vector [4, Lemma 2.1], satisfying \[\bar{\nabla}x=V_{0}\bar{g}, \tag{2.5}\] where \(V_{0}:=\cosh R=\frac{1+|x|^{2}}{1-|x|^{2}}\) and \(R:=R(x)\) is the hyperbolic distance to the origin. By [23, Proposition 4.3], \[\bar{\nabla}_{Z}(e^{-u}a)=e^{-u}\left[\bar{g}(x,e^{-u}a)Z-\bar{g}(Z,e^{-u}a)x \right], \tag{2.6}\] for any \(Z\in T\Sigma\). Using (2.5) and (2.6), it yields \[D_{i}(\bar{g}(\nu,e^{-u}a)x^{T})_{j} = e^{-u}h_{ik}\bar{g}(e_{k},a)\bar{g}(x,e_{j})-e^{-u}\bar{g}(e_{i},e^{-u}a)\bar{g}(x,e_{j})\bar{g}(\nu,x)\] \[+\bar{g}(\nu,e^{-u}a)\left[V_{0}\bar{g}_{ij}-h_{ij}\bar{g}(x,\nu) \right],\] and \[D_{i}(\bar{g}(x,\nu)(e^{-u}a)^{T})_{j} = e^{-u}h_{ik}\bar{g}(x,e_{k})\bar{g}(a,e_{j})+\bar{g}(x,\nu)\big{[} e^{-u}\big{(}\bar{g}(x,e^{-u}a)\bar{g}_{ij}\] \[-\bar{g}(e_{i},e^{-u}a)\bar{g}(x,e_{j})\big{)}-h_{ij}\bar{g}(e^{-u }a,\nu)\big{]}.\] On the other hand, we have \[\bar{g}(Z_{a}^{T},\mu)|_{\partial\Sigma}=r_{0}\bar{g}(\bar{\nu},a),\] and \[\bar{g}(X_{a}^{T},\mu)|_{\partial\Sigma}=-\frac{2r_{0}^{2}}{1-r_{0}^{2}}\cos \theta\bar{g}(a,\bar{\nu}),\] combining with \(\sinh R=\frac{2r_{0}}{1-r_{0}^{2}}\), it follows \[\bar{g}(X_{a}^{T}+\sinh R\cos\theta Z_{a}^{T},\mu)|_{\partial \Sigma}=0. \tag{2.7}\] Denote \(\sigma_{k-1}^{ij}:=\frac{\partial\sigma_{k}}{\partial h_{j}^{i}}\) be the \(k\)-th Newton transformation, note that \[V_{0}\bar{g}(\nu,e^{-u}a)-e^{-u}\bar{g}(x,\nu)\bar{g}(x,e^{-u}a) = \bar{g}\left(\nu,\frac{1}{2}(|x|^{2}+1)a-\langle x,a\rangle x\right)\] \[= \bar{g}(\nu,Y_{a}),\] we have \[\sigma_{k-1}^{ij}D_{i}(X_{a}^{T}+\sinh R\cos\theta Z_{a}^{T})_{j}\] \[= (n-k+1)\sigma_{k-1}V_{a}-k\sigma_{k}\bar{g}(X_{a},\nu)+\sinh R \cos\theta\bar{g}(\nu,e^{-u}a)V_{0}\sigma_{k-1}^{ij}\bar{g}_{ij}\] \[-\sinh R\cos\theta e^{-u}\bar{g}(x,\nu)\bar{g}(x,e^{-u}a)\sigma_{ k-1}^{ij}\bar{g}_{ij}\] \[= (n-k+1)\sigma_{k-1}V_{a}-k\sigma_{k}\bar{g}(X_{a},\nu)+(n-k+1) \sinh R\cos\theta\sigma_{k-1}\big{[}V_{0}\bar{g}(\nu,e^{-u}a)\] \[-e^{-u}\bar{g}(x,\nu)\bar{g}(x,e^{-u}a)\big{]}\] \[= (n-k+1)\sigma_{k-1}V_{a}-k\sigma_{k}\bar{g}(X_{a},\nu)+(n-k+1) \sinh R\cos\theta\sigma_{k-1}\bar{g}(Y_{a},\nu),\] integrating above identity on \(\Sigma\) and by the divergence theorem, we have \[\int_{\Sigma}\left[(n-k+1)\sigma_{k-1}V_{a}+(n-k+1)\sinh R\cos \theta\sigma_{k-1}\bar{g}(Y_{a},\nu)-k\sigma_{k}\bar{g}(X_{a},\nu)\right]dA\] \[= \int_{\Sigma}\sigma_{k-1}^{ij}D_{i}(X_{a}^{T}+\sinh R\cos\theta Z _{a}^{T})_{j}dA=\int_{\Sigma}D_{i}\left(\sigma_{k-1}^{ij}\bar{g}(X_{a}^{T}+ \sinh R\cos\theta Z_{a}^{T},e_{j})\right)dA\] \[= \int_{\partial\Sigma}\sigma_{k-1}^{ij}\bar{g}(X_{a}^{T}+\sinh R \cos\theta Z_{a}^{T},e_{j})\overline{g}(\mu,e_{i})ds=\int_{\partial\Sigma} \sigma_{k-1}^{\mu\mu}\bar{g}(X_{a}^{T}+\sinh R\cos\theta Z_{a}^{T},\mu)ds=0,\] where the last line follows from that \(\mu\) is the principal direction of \(\partial\Sigma\subset\Sigma\) (cf. [23, Propostion 2.1]) and (2.7). Thus we complete the proof of Proposition 2.1. 
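For the reader's convenience, we note two special cases of (2.4), both obtained by direct substitution. For \(k=1\), since \(\sigma_{0}=1\) and \(\sigma_{1}=H\), formula (2.4) reduces to the Minkowski formula (1.1). For \(k=2\), it reads \[(n-1)\int_{\Sigma}\sigma_{1}\left(V_{a}+\sinh R\cos\theta\bar{g}(Y_{a},\nu)\right)dA=2\int_{\Sigma}\sigma_{2}\bar{g}(X_{a},\nu)dA,\] that is, \[\int_{\Sigma}H\left(V_{a}+\sinh R\cos\theta\bar{g}(Y_{a},\nu)\right)dA=\frac{2}{n-1}\int_{\Sigma}\sigma_{2}\bar{g}(X_{a},\nu)dA,\] which is precisely the identity used in the proof of Corollary 1.3.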
Except the case \(\mathbb{H}^{n+1}\), we obtain the corresponding Minkowski formula for capillary boundary hypersurfaces supported in a geodesic ball of spherical space \(\mathbb{S}^{n+1}\). **Proposition 2.2**.: _Let \(x:M\to B^{\mathbb{S}}_{R}\) be an isometric immersion of \(\Sigma:=x(M)\) into a ball \(B^{\mathbb{S}}_{R}\) in \(\mathbb{S}^{n+1}\), whose boundary \(\partial\Sigma\) intersects \(\partial B^{\mathbb{S}}_{R}\) at a constant angle \(\theta\in(0,\pi)\), then it holds_ \[\int_{\Sigma}\left[(n-k+1)\sigma_{k-1}V_{a}+(n-k+1)\sin R\cos \theta\sigma_{k-1}\bar{g}(Y_{a},\nu)\right]dA=k\int_{\Sigma}\sigma_{k}\bar{g} (X_{a},\nu)dA. \tag{2.8}\] The proof is similar to Proposition 2.1, we leave it to the interested readers. ### A scalar equation In this section, we firstly reduce the flow (1.3) to a scalar equation on \(\bar{\mathbb{S}}^{n}_{+}\), if the evolving hypersurfaces are star-shaped in the sense of Definition 1.1. With loss of generality, we assume \(a:=-e_{n+1}=(0,\cdots,0,-1)\) from now on, and the half-space \[\mathbb{R}^{n+1}_{+}:=\{x|x=(x_{1},\cdots,x_{n+1})\in\mathbb{R}^{n +1}:x_{n+1}>0\}.\] We use the polar coordinate \((\rho,\beta,\theta)\in[0,+\infty)\times[0,\frac{\pi}{2}]\times\mathbb{S}^{n-1}\) in \(\mathbb{R}^{n+1}_{+}\), then the standard Euclidean metric in \(\mathbb{R}^{n+1}_{+}\) has the form \[|dz|^{2}=d\rho^{2}+\rho^{2}g_{\mathbb{S}^{n}_{+}}=d\rho^{2}+\rho^ {2}(d\beta^{2}+\sin^{2}\beta g_{\mathbb{S}^{n-1}}).\] Let \(f\) the conformal diffeomorphism from the unit ball to the half space (cf. [21, Section 3.2] or [17, 23]) given by \[f: \bar{\mathbb{R}}^{n+1}_{+} \to\bar{\mathbb{B}}^{n+1},\] \[(z^{{}^{\prime}},z_{n+1}) \mapsto(\frac{2z^{{}^{\prime}}}{|z^{{}^{\prime}}|^{2}+(1+z_{n+1}) ^{2}},\frac{|z|^{2}-1}{|z^{{}^{\prime}}|^{2}+(1+z_{n+1})^{2}}):=(y^{{}^{\prime }},y_{n+1}).\] where \(z^{{}^{\prime}}:=(z_{1},\cdots,z_{n})\in\mathbb{R}^{n},y^{{}^{\prime}}:=(y_{1},\cdots,y_{n})\in\mathbb{R}^{n}\). Define \[\phi(y)=r_{0}y:=x,\] which maps \(\bar{\mathbb{B}}^{n+1}\) to \(\bar{B}_{r_{0}}\). Then map \(\bar{f}:=\phi\circ f:\bar{\mathbb{R}}^{n+1}_{+}\to\bar{B}_{r_{0}}\) satisfies \[\bar{f}(\mathbb{R}^{n+1}_{+})=B_{r_{0}},\quad\bar{f}(\partial \mathbb{R}^{n+1}_{+})=\partial B_{r_{0}}.\] \(\bar{f}\) is a conformal transformation from \((\bar{\mathbb{R}}^{n+1}_{+},\delta_{\mathbb{R}^{n+1}_{+}}:=|dz|^{2})\) to \((\bar{B}_{r_{0}},\bar{g}_{\mathbb{H}}=\frac{4}{(1-|x|^{2})^{2}}|dx|^{2})\) or \((\bar{B}_{r_{0}},\bar{g}_{\mathbb{S}}=\frac{4}{(1+|x|^{2})^{2}}|dx|^{2})\) (cf. [17, Proposition 2.1] ) and \[\bar{f}^{*}\bar{g}_{\mathbb{H}}=e^{2U}|dz|^{2},\qquad\bar{f}^{*} \bar{g}_{\mathbb{S}}=e^{2V}|dz|^{2},\] where \[e^{U}:=\frac{4r_{0}}{(1-r_{0}^{2})(1+|z|^{2})+2(1+r_{0}^{2})z_{n +1}},\] and \[e^{V}:=\frac{4r_{0}}{(1+r_{0}^{2})(1+|z|^{2})+2(1-r_{0}^{2})z_{n +1}}.\] Let \(\Sigma\subset\bar{B}_{r_{0}}\) be a properly embedded compact hypersurface with capillary boundary, given by an embedding \(x:\bar{\mathbb{S}}^{n}_{+}\to\bar{B}_{r_{0}}\). 
We associate \(\Sigma\) with a corresponding hypersurface \(\widehat{\Sigma}\subset\bar{\mathbb{R}}^{n+1}_{+}\), given by embedding \[\widehat{x}=\bar{f}^{-1}\circ x:\ \bar{\mathbb{S}}^{n}_{+}\to\bar{\mathbb{R}}^{n+1}_ {+}.\] Since \((\bar{B}_{r_{0}},\bar{g})\) and \((\bar{\mathbb{R}}^{n+1}_{+},e^{2U}|dz|^{2})\) or \((\bar{\mathbb{R}}^{n+1}_{+},e^{2V}|dz|^{2})\) are isometric, then \(x:\bar{\mathbb{S}}^{n}_{+}\to\bar{B}_{r_{0}}\) can be identified as the embedding \(\bar{x}:\mathbb{S}^{n}_{+}\to(\bar{\mathbb{R}}^{n+1}_{+},e^{2U}|dz|^{2})\) or \((\bar{\mathbb{R}}^{n+1}_{+},e^{2V}|dz|^{2})\). It is easy to check that \(\widehat{X}_{a}:=(\bar{f}^{-1})_{*}(X_{a})=\frac{2r_{0}}{1+Kr_{0}^{2}}(\rho \partial_{\rho})\), so a hypersurface \(\widehat{\Sigma}\subset(\bar{\mathbb{R}}^{n+1}_{+},e^{2U}|dz|^{2})\) or \((\bar{\mathbb{R}}^{n+1}_{+},e^{2V}|dz|^{2})\) is star-shaped (classically) with respect to the origin if and only if \(\Sigma\subset(\bar{B}_{r_{0}},\bar{g})\subset M^{n+1}(-1)\) or \(\Sigma\subset(\bar{B}_{r_{0}},\bar{g})\subset M^{n+1}(+1)\) is star-shaped with respect to \(a\in\mathbb{S}^{n}\). Then there exists some positive function \(\rho(y)\) defined on \(\bar{\mathbb{S}}^{n}_{+}\), such that \[\widehat{x}=\rho(y)y=\rho(\beta,\theta)y,\quad y:=(\beta,\theta)\in[0,\frac{ \pi}{2}]\times\mathbb{S}^{n-1}.\] We denote \(\mathrm{div},\nabla\) as the divergence operator, covariant derivative on \(\bar{\mathbb{S}}^{n}_{+}\) with respect to the standard spherical metric \(\sigma\) on \(\bar{\mathbb{S}}^{n}_{+}\) respectively. Set \(u:=\log\rho\) and \(v:=\sqrt{1+|\nabla u|^{2}}\). We have the following identities for \(\mathbb{H}^{n+1}\) and \(\mathbb{S}^{n+1}\) respectively. **Proposition 2.3**.: _In a geodesic ball \(B^{\mathbb{H}}_{R}\) of \(\mathbb{H}^{n+1}\), it holds,_ 1. \(\bar{g}(X_{a},\nu)=e^{U}\frac{2r_{0}}{1-r_{0}^{2}}\frac{\rho}{v}\)_._ 2. \(V_{a}=\frac{2r_{0}(1-\rho^{2})}{(1-r_{0}^{2})(\rho^{2}+1)+2(1+r_{0}^{2})\rho \cos\beta}\)_._ 3. _The mean curvature_ \(\widetilde{H}\) _of_ \(\widehat{\Sigma}\) _in_ \((\bar{\mathbb{R}}^{n+1}_{+},e^{2U}|dz|^{2})\) _is_ \[\widetilde{H} = -\Bigg{[}\frac{1}{\rho ve^{U}}\sum_{i,j=1}^{n}(\sigma^{ij}-\frac{ u^{i}u^{j}}{v^{2}})u_{ij}+\frac{1+r_{0}^{2}}{2r_{0}}\frac{n\sin\beta\nabla_{ \partial_{\beta}}u}{v}\] \[+\frac{n(\rho^{2}-1)(1-r_{0}^{2})}{4r_{0}\rho v}\Bigg{]}.\] 4. \(\bar{g}(Y_{a},\nu)=-\frac{r_{0}\rho e^{U}}{v}+\frac{r_{0}^{2}-1}{2r_{0}}\frac {e^{U}}{2v}\left(\rho^{2}\cos\beta+2\rho+\cos\beta-(\rho^{2}-1)\sin\beta\nabla _{\partial_{\beta}}u\right).\)__ Proof.: (1)-(3) were shown in [17, Proposition 2.2]. We only need to show (4). 
Note that \[\widehat{e}_{n+1}: = (\bar{f}^{-1})_{*}(e_{n+1})\] \[= \sum_{i=1}^{n}\frac{\partial z_{i}}{\partial x_{n+1}}\frac{ \partial}{\partial z_{i}}+\frac{\partial z_{n+1}}{\partial x_{n+1}}\frac{ \partial}{\partial z_{n+1}}\] \[= \frac{1+z_{n+1}}{r_{0}}\sum_{i=1}^{n}z_{i}\frac{\partial}{ \partial z_{i}}+\frac{(1+z_{n+1})^{2}-|z|^{2}}{2r_{0}}\frac{\partial}{\partial z _{n+1}}\] \[= \frac{1}{r_{0}}\rho\cos\beta(\rho\sin^{2}\beta\partial_{\rho}+ \frac{\sin\beta}{2}\partial_{\beta})+\frac{1+\rho^{2}\cos 2\beta}{2r_{0}}(\cos\beta \partial_{\rho}-\frac{\sin\beta}{\rho}\partial_{\beta})+\frac{1}{r_{0}}\rho \partial_{\rho}\] \[= \frac{\rho^{2}\cos\beta+2\rho+\cos\beta}{2r_{0}}\partial_{\rho}+ \frac{(\rho^{2}-1)\sin\beta}{2r_{0}\rho}\partial_{\beta},\] and the unit normal of \(\widehat{\Sigma}\subset(\bar{\mathbb{R}}_{+}^{n+1},e^{2U}|dz|^{2})\) is \[\widehat{\nu}:=(\bar{f}^{-1})_{*}(\nu)=e^{-U}\frac{\partial_{\rho}-\rho^{-1} \nabla u}{v},\] By the definition \(X_{a}\) and \(Y_{a}\), we have \[Y_{a}=\frac{r_{0}^{2}-1}{2}e_{n+1}-\frac{1-r_{0}^{2}}{2}X_{a}.\] Hence, it follows \[\bar{g}(Y_{a},\nu) = \bar{g}(\frac{r_{0}^{2}-1}{2}e_{n+1}-\frac{1-r_{0}^{2}}{2}X_{a},\nu)\] \[= \bar{f}^{*}\bar{g}(\frac{r_{0}^{2}-1}{2}\widehat{e}_{n+1}-\frac{1 -r_{0}^{2}}{2}\widehat{X}_{a},\widehat{\nu})\] \[= -\frac{r_{0}\rho e^{U}}{v}+\frac{(r_{0}^{2}-1)e^{U}}{2r_{0}}\left( \frac{\rho^{2}\cos\beta+2\rho+\cos\beta}{2v}-\frac{(\rho^{2}-1)\sin\beta}{2v} \nabla_{\partial_{\beta}}u\right).\] Similarly, in the case \(\mathbb{S}^{n+1}\). **Proposition 2.4**.: _In a geodesic ball \(B_{R}^{\mathbb{S}}\) of \(\mathbb{S}^{n+1}\), it holds_ 1. \(\bar{g}(X_{a},\nu)=e^{V}\frac{2r_{0}}{1+r_{0}^{2}}\frac{\rho}{v}\)_._ 2. \(V_{a}=\frac{2r_{0}(1-\rho^{2})}{(1+r_{0}^{2})(\rho^{2}+1)+2(1-r_{0}^{2})\rho \cos\beta}\)_._ 3. _The mean curvature_ \(\widetilde{H}\) _of_ \(\widehat{\Sigma}\) _in_ \((\bar{\mathbb{R}}_{+}^{n+1},e^{2V}|dz|^{2})\) _is_ \[\widetilde{H} = -\Bigg{[}\frac{1}{\rho ve^{V}}\sum_{i,j=1}^{n}(\sigma^{ij}-\frac{ u^{i}u^{j}}{v^{2}})u_{ij}+\frac{1-r_{0}^{2}}{2r_{0}}\frac{n\sin\beta\nabla_{ \partial_{\beta}}u}{v}\] \[+\frac{n(\rho^{2}-1)(1+r_{0}^{2})}{4r_{0}\rho v}\Bigg{]}.\] 4. \(\bar{g}(Y_{a},\nu)=\frac{r_{0}\rho e^{V}}{v}-\frac{r_{0}^{2}+1}{2r_{0}}\frac{ e^{V}}{2v}\left(\rho^{2}\cos\beta+2\rho+\cos\beta-(\rho^{2}-1)\sin\beta\nabla_{ \partial_{\beta}}u\right).\)__ The proof of Proposition 2.4 is the similar to Proposition 2.3, we omit it here. With the help of Proposition 2.3 and 2.4, following the argument as in [21, Section 3.3], we can reduce the first equation in (1.3) to the following scalar equation \[\partial_{t}u=\frac{v}{\rho e^{U}}\widehat{F}. 
\tag{2.9}\] Moreover, in the case \(\mathbb{H}^{n+1}\), \[\widehat{F} := nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{a},\nu)-\widetilde{H}\bar{g} (X_{a},\nu)\] \[= \frac{2nr_{0}(1-\rho^{2})}{(1-r_{0}^{2})(1+\rho^{2})+2(1+r_{0}^{ 2})\rho\cos\beta}\frac{|\nabla u|^{2}}{v^{2}}-\frac{2n\cos\theta r_{0}^{2}}{1 -r_{0}^{2}}\frac{\rho e^{U}}{v}\] \[-\frac{n\cos\theta e^{U}}{2v}\left(\rho^{2}\cos\beta+2\rho+\cos \beta-(\rho^{2}-1)\sin\beta\nabla_{\partial_{\beta}}u\right)\] \[+\frac{n(1+r_{0}^{2})\rho e^{U}\sin\beta\nabla_{\partial_{\beta}}u }{(1-r_{0}^{2})v^{2}}+\frac{2r_{0}}{1-r_{0}^{2}}\frac{1}{v^{2}}\sum_{i,j=1}^ {n}(\sigma^{ij}-\frac{u^{i}u^{j}}{v^{2}})u_{ij},\] and it is now easy to see that (2.9) is equivalent to \[\partial_{t}u = \frac{2r_{0}}{1-r_{0}^{2}}\mathrm{div}(\frac{\nabla u}{\rho ve^{U}}) -\frac{2(n+1)r_{0}}{(1-r_{0}^{2})v}\sigma\left(\nabla u,\nabla(\frac{1}{\rho e ^{U}})\right)-\frac{2n\cos\theta r_{0}^{2}}{1-r_{0}^{2}} \tag{2.10}\] \[-\frac{n\cos\theta}{2\rho}\left(\rho^{2}\cos\beta+2\rho+\cos \beta-(\rho^{2}-1)\sin\beta\nabla_{\partial_{\beta}}u\right)\] \[:= G(\nabla^{2}u,\nabla u,\rho,\beta).\] Similarly, in the case \(\mathbb{S}^{n+1}\), \(\widehat{F}\) in flow (2.9) has the form \[\widehat{F} := nV_{a}+n\sin R\cos\theta\bar{g}(Y_{a},\nu)-\widetilde{H}\bar{g} (X_{a},\nu)\] \[= \frac{2nr_{0}(1-\rho^{2})}{(1+r_{0}^{2})(1+\rho^{2})+2(1-r_{0}^{ 2})\rho\cos\beta}\frac{|\nabla u|^{2}}{v^{2}}+\frac{2n\cos\theta r_{0}^{2}}{1+ r_{0}^{2}}\frac{\rho e^{V}}{v}\] \[-\frac{n\cos\theta e^{V}}{2v}\left(\rho^{2}\cos\beta+2\rho+\cos \beta-(\rho^{2}-1)\sin\beta\nabla_{\partial_{\beta}}u\right)\] \[+\frac{n(1-r_{0}^{2})\rho e^{V}\sin\beta\nabla_{\partial_{\beta}} u}{(1+r_{0}^{2})v^{2}}+\frac{2r_{0}}{1+r_{0}^{2}}\frac{1}{v^{2}}\sum_{i,j=1}^{n}( \sigma^{ij}-\frac{u^{i}u^{j}}{v^{2}})u_{ij},\] which follows that (2.9) is equivalent to \[\partial_{t}u = \frac{2r_{0}}{1+r_{0}^{2}}\mathrm{div}(\frac{\nabla u}{\rho ve^{ V}})-\frac{2(n+1)r_{0}}{(1+r_{0}^{2})v}\sigma\left(\nabla u,\nabla(\frac{1}{\rho e ^{V}})\right)+\frac{2n\cos\theta r_{0}^{2}}{1+r_{0}^{2}}\] \[-\frac{n\cos\theta}{2\rho}\left(\rho^{2}\cos\beta+2\rho+\cos \beta-(\rho^{2}-1)\sin\beta\nabla_{\partial_{\beta}}u\right)\] \[:= G(\nabla^{2}u,\nabla u,\rho,\beta).\] Next we derive the boundary condition. We show the case \(\mathbb{H}^{n+1}\) here, since the case \(\mathbb{S}^{n+1}\) is similar. The capillary boundary condition in flow (1.3) implies \[-\cos\theta=\bar{g}(\nu,\bar{N}\circ x)=e^{2U}\left\langle\widehat{\nu},(\bar{ f}^{-1})_{*}(\bar{N}\circ x)\right\rangle.\] Note that \((\bar{f}^{-1})_{*}(\bar{N}\circ x)=-e^{-U}\frac{\partial_{\beta}}{\rho}\) on \(\partial\mathbb{R}_{+}^{n+1}\), then \[\nabla_{\partial_{\beta}}u=\cos\theta\sqrt{1+|\nabla u|^{2}},\quad\text{on }\partial\mathbb{S}_{+}^{n}. \tag{2.12}\] In summary, the flow (1.3) is equivalent to the following scalar parabolic equation on \(\bar{\mathbb{S}}_{+}^{n}\), \[\begin{cases}\frac{\partial u}{\partial t}=G(\nabla^{2}u,\nabla u,\rho,\beta) &\text{in}\quad\mathbb{S}_{+}^{n}\times[0,T),\\ \nabla_{\partial_{\beta}}u=\cos\theta\sqrt{1+|\nabla u|^{2}}&\text{on}\quad \partial\mathbb{S}_{+}^{n}\times[0,T),\\ u(\cdot,0)=u_{0}(\cdot)&\text{in}\quad\mathbb{S}_{+}^{n},\end{cases} \tag{2.13}\] where \(G\) has the form as (2.10) in \(\mathbb{H}^{n+1}\) and (2.11) in \(\mathbb{S}^{n+1}\) respectively, \(u_{0}:=\log\rho_{0}\) and \(\rho_{0}\) corresponds to the hypersurface \(x_{0}(M)\) under the transformation \(\bar{f}\). ## 3. 
A priori estimates and convergence In this section, we focus on establishing the uniform height and gradient estimates for the solution of scalar parabolic equation (2.13). The key ingredient is gradient estimate. Since the expression of \(G\) in (2.13) for \(\mathbb{H}^{n+1}\) and \(\mathbb{S}^{n+1}\) are similar, we only show the case \(\mathbb{H}^{n+1}\) below. One can obtain the same a priori estimates for the case \(\mathbb{S}^{n+1}\) by just adapting the same approach as below with minor modifications. The short-time existence of the flow (1.3) follows from the standard PDE theory (cf. [9]), due to our assumption of star-shaped, \(\bar{g}(X_{a},\nu)>0\) on \(x_{0}(M)\). Next, in order to establish the long-time existence of the flows, we need to obtain the uniform height and gradient estimates for the solutions of flow, then the long-time existence and uniform \(C^{\infty}\) estimates follows from the standard quasi-linear parabolic PDE theory with strictly oblique boundary condition (cf. [10]). In the following, we use the Einstein summation convention, i.e., if not stated otherwise, the repeated arabic indices \(i,j,k\) should be summed from \(1\) to \(n\). Besides, we introduce the following notations. \(a^{ij}:=(\sigma^{ij}-\frac{u^{i}u^{j}}{v^{2}})\), \(u_{\beta}:=\nabla_{\partial_{\beta}}u=\sigma(\nabla u,\partial_{\beta})\) and \[G^{ij} := \frac{\partial G(r,p,\rho,\beta)}{\partial r_{ij}}\Big{|}_{r= \nabla^{2}u,p=\nabla u}=\frac{2r_{0}}{1-r_{0}^{2}}\frac{1}{\rho ve^{U}}a^{ij},\] \[G_{p_{k}} := \frac{\partial G(r,p,\rho,\beta)}{\partial p_{k}}\Big{|}_{r= \nabla^{2}u,p=\nabla u}\] \[= -\frac{2r_{0}}{1-r_{0}^{2}}\frac{u_{k}}{e^{U}\rho v^{3}}a^{ij}u_ {ij}+\frac{2r_{0}}{1-r_{0}^{2}}\frac{1}{e^{U}\rho v}(-\frac{\delta_{i}^{k}u^{j }+\delta_{j}^{k}u^{i}}{v^{2}}+\frac{2u_{k}u^{i}u^{j}}{v^{4}})u_{ij}\] \[+\frac{n(1+r_{0}^{2})}{1-r_{0}^{2}}\frac{\sin\beta\sigma(\partial _{\beta},\partial_{k})}{v}-\frac{n(1+r_{0}^{2})}{1-r_{0}^{2}}\frac{\sin\beta u _{\beta}u_{k}}{v^{3}}+\frac{n(1-\rho^{2})}{\rho}\frac{u_{k}}{v}\] \[+\frac{n(\rho^{2}-1)|\nabla u|^{2}u_{k}}{2\rho v^{3}}+\frac{\rho^ {2}-1}{2\rho}\sin\beta\sigma(\partial_{\beta},\partial_{k}),\] \[G_{\rho} := \frac{\partial G(r,p,\rho,\beta)}{\partial\rho}\Big{|}_{r=\nabla ^{2}u,p=\nabla u}\] \[= \frac{1}{2v}(1-\frac{1}{\rho^{2}})a^{ij}u_{ij}-\frac{n}{2}(1+ \frac{1}{\rho^{2}})\frac{|\nabla u|^{2}}{v}-\frac{n\cos\theta\cos\beta}{2}(1- \frac{1}{\rho^{2}})\] \[+\frac{n}{2}(1+\frac{1}{\rho^{2}})\cos\theta\sin\beta u_{\beta},\] \[G_{\beta} := \frac{\partial G(r,p,\rho,\beta)}{\partial\beta}\Big{|}_{r= \nabla^{2}u,p=\nabla u}\] \[= \frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{-a^{ij}u_{ij}\sin\beta+n\cos \beta u_{\beta}}{v}+\frac{n\cos\theta}{2\rho}\Big{[}\sin\beta\rho^{2}+\sin\beta\] \[+(\rho^{2}-1)\cos\beta u_{\beta}\Big{]}.\] Firstly, we have the height estimate for the solution of flow (2.13) for the case \(\mathbb{H}^{n+1}\). **Proposition 3.1**.: _Assume \(\Sigma_{0}:=x_{0}(M)\subset B_{R}^{\mathbb{H}}\) is star-shaped with respect to \(a\in\mathbb{S}^{n}\) and satisfies_ \[\Sigma_{0}\subset C_{\theta,r_{1}}(a)\setminus C_{\theta,r_{2}}(a), \tag{3.1}\] _for some \(0<r_{1}<r_{2}\), where \(C_{\theta,r}(a)\) is defined by (1.6). 
Then the solution \(\Sigma_{t}:=x(M,t)\) of (1.3) satisfies_ \[\Sigma_{t}\subset C_{\theta,r_{2}}(a)\setminus C_{\theta,r_{1}}(a).\] _Moreover, if \(u\) solves (2.13) and \(G\) has the form (2.10), then_ \[||u||_{C^{0}\left(\bar{\mathbb{S}}_{+}^{n}\times[0,T)\right)}\leq C,\] _where \(C\) is a positive constant, depending only on the initial datum._ Proof.: Since the spherical cap \(C_{\theta,r}(a)\) is the static solution to flow (1.3) for each \(r>0\), that is, it satisfies \[nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{a},\nu)-H\bar{g}(X_{a},\nu)=0.\] Then the assertion follows from the avoidance principle for the strictly parabolic equation with a capillary boundary condition (see [21, Proposition 4.2]). In order to get the gradient estimate, we need to employ the distance function \(d(x):=dist_{\sigma}(x,\partial\mathbb{S}_{+}^{n})\). It is well-known that \(d\) is well-defined and smooth for \(x\) near \(\partial\mathbb{S}_{+}^{n}\) and \(\nabla d=-\partial_{\beta}\) on \(\partial\mathbb{S}_{+}^{n}\), where \(\partial_{\beta}\) is the unit outer normal vector field of \(\partial\mathbb{S}_{+}^{n}\) in \(\mathbb{S}_{+}^{n}\). We can extend \(d\) to be a smooth function defined in \(\bar{\mathbb{S}}_{+}^{n}\) and satisfy that \[d\geq 0,\qquad|\nabla d|\leq 1,\quad\text{ in }\bar{\mathbb{S}}_{+}^{n}.\] We will use \(O(s)\) to denote terms that are bounded by \(Cs\) for some constant \(C>0\), which depends only on the \(C^{0}\) norm of \(u\). And the constant \(C\) may change from line to line. Next, we choose a suitable auxiliary function that had been used in [21] to obtain the uniform gradient estimate for the flow (2.13) in the case \(\mathbb{H}^{n+1}\). **Proposition 3.2**.: _If \(u:\bar{\mathbb{S}}_{+}^{n}\times[0,T)\to\mathbb{R}\) solves (2.13) and \(G\) has the form (2.10), \(|\cos\theta|<\frac{3n+1}{5n-1}\), then for any \((x,t)\in\bar{\mathbb{S}}_{+}^{n}\times[0,T)\),_ \[|\nabla u|(x,t)\leq C,\] _where \(C\) is a positive constant, depending only on the initial datum._ Proof.: Define the function as \[\Phi:=(1+Kd)v+\cos\theta\sigma(\nabla u,\nabla d),\] where \(K\) is a positive constant to be determined later. For any \(T^{\prime}<T\), assume \(\Phi\) attains its maximum value at some point, say \((x_{0},t_{0})\in\bar{\mathbb{S}}_{+}^{n}\times[0,T^{\prime}]\). Following the same argument as in [21, Proposition 4.3, Case 1], by choosing \(K>0\) sufficiently large, \(\Phi\) does not attain its maximum value on \(\partial\mathbb{S}_{+}^{n}\), hence we have either \(x_{0}\in\mathbb{S}_{+}^{n}\) or \(t_{0}=0\). If \(t_{0}=0\), it is easy to see \[\sup_{\bar{\mathbb{S}}_{+}^{n}\times[0,T^{\prime}]}|\nabla u|\leq C, \tag{3.2}\] where \(C\) is a positive constant depending only on \(n\) and \(u_{0}\). Next we analyze the case \(x_{0}\in\mathbb{S}_{+}^{n}\) and complete the gradient estimate. By rotating the geodesic coordinate \(\{\frac{\partial}{\partial x_{i}}\}_{i=1}^{n}\) at \(x_{0}\), we assume that \[|\nabla u|=u_{1}>0,\text{ and }\{u_{\alpha\beta}\}_{2\leq\alpha,\beta\leq n} \text{ is diagonal}.\] Assume that \(u_{1}(x_{0},t_{0})\) is large enough, otherwise we finish the proof. All the computation below are done at the point \((x_{0},t_{0})\). 
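Before carrying out the computation at \((x_{0},t_{0})\), we record why a bound on the auxiliary function \(\Phi\) yields the desired gradient bound. This is a short supplementary remark: here we use that \(v=\sqrt{1+|\nabla u|^{2}}\) in this graphical setting (consistent with the boundary condition (2.12)), together with \(d\geq 0\) and \(|\nabla d|\leq 1\). By the Cauchy-Schwarz inequality, \[|\cos\theta\,\sigma(\nabla u,\nabla d)|\leq|\cos\theta|\,|\nabla u|\,|\nabla d|\leq|\cos\theta|\,v,\qquad\text{so}\qquad\Phi\geq(1+Kd)v-|\cos\theta|\,v\geq(1-|\cos\theta|)\,v.\] Since \(|\cos\theta|<1\), an upper bound on \(\max\Phi\) therefore gives an upper bound on \(v\), and hence on \(|\nabla u|\), on all of \(\bar{\mathbb{S}}_{+}^{n}\times[0,T^{\prime}]\).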
Note that \[0=\nabla_{i}\Phi=(1+Kd)v_{i}+Kd_{i}v+\cos\theta(u_{li}d_{l}+u_{l}d_{li}),\ 1 \leq i\leq n.\] It follows that \[\left[(1+Kd)\frac{u_{1}}{v}+\cos\theta d_{1}\right]u_{1\alpha}=\cos\theta u_{ \alpha\alpha}d_{\alpha}-\cos\theta u_{a}d_{1\alpha}-Kd_{\alpha}v,\] and \[\left[(1+Kd)\frac{u_{1}}{v}+\cos\theta d_{1}\right]u_{11}=-\cos\theta u_{ \alpha 1}d_{\alpha}-\cos\theta u_{1}d_{11}-Kd_{1}v.\] Denote \(S:=(1+Kd)\frac{u_{1}}{v}+\cos\theta d_{1}\), it is easy to see that \(0<C(\delta,\theta)\leq S\leq 2+K\), if we assume \(u_{1}\geq\delta>0\), otherwise we complete the proof. Hence \[u_{1\alpha} = -\frac{\cos\theta d_{\alpha}}{S}u_{\alpha\alpha}-\frac{1}{S}( \cos\theta u_{1}d_{1\alpha}+Kd_{\alpha}v) \tag{3.3}\] \[= -\frac{\cos\theta d_{\alpha}}{S}u_{\alpha\alpha}+O(v),\ 2\leq \alpha\leq n,\] and \[u_{11} = -\frac{1}{S}\cos\theta u_{\alpha 1}d_{\alpha}+\frac{1}{S}(-\cos \theta u_{1}d_{11}-Kd_{1}v) \tag{3.4}\] \[= \frac{\cos^{2}\theta}{S^{2}}\sum_{\alpha=2}^{n}d_{\alpha}^{2}u_{ \alpha\alpha}+O(v).\] On the other hand, we have \[0 \leq (\partial_{t}-G^{ij}\nabla_{ij}-G_{p_{i}}\nabla_{i})\Phi \tag{3.5}\] \[= \frac{(1+Kd)}{v}u_{l}(u_{lt}-G^{ij}u_{lij}-G_{p_{i}}u_{li})+d_{k }\cos\theta(u_{kt}-G^{ij}u_{kij}-G_{p_{i}}u_{ki})\] \[+(1+Kd)(\frac{G^{ij}u_{l}u_{li}u_{k}u_{kj}}{v^{3}}-\frac{G^{ij}u_ {li}u_{lj}}{v})-(2G^{ij}u_{ki}d_{kj}\cos\theta+2KG^{ij}d_{i}v_{j})\] \[-(G^{ij}u_{k}d_{kij}\cos\theta+KG^{ij}d_{ij}v)-G_{p_{i}}(Kd_{i}v +\cos\theta u_{k}d_{ki})\] \[:= I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}.\] Next, we carefully handle those terms one by one. Differentiating the first equation in (2.13), \[u_{tl}=G^{ij}u_{ijl}+G_{p_{i}}u_{il}+G_{\rho}\rho u_{l}+G_{\beta}\sigma( \partial_{\beta},\partial_{l}),\] Combining with the communicative formula on \(\mathbb{S}_{+}^{n}\), \[u_{ijl}=u_{lij}+u_{j}\sigma_{li}-u_{l}\sigma_{ij},\] it follows \[u_{tl}=G^{ij}u_{lij}+G^{ij}u_{j}\sigma_{li}-\sum_{i=1}^{n}G^{ii}u_{l}+G_{p_{i} }u_{il}+G_{\rho}\rho u_{l}+G_{\beta}\sigma(\partial_{\beta},\partial_{l}).\] First, we deal with the term \(I_{1}\). 
\[I_{1} = \frac{(1+Kd)}{v}u_{l}(u_{lt}-G^{ij}u_{lij}-G_{p_{l}}u_{il})\] \[= \frac{(1+Kd)}{v}G^{ij}u_{j}u_{l}\sigma_{li}+\frac{(1+Kd)u_{l}}{v} \left(G_{\rho}\rho u_{l}+G_{\beta}\sigma(\partial_{\beta},\partial_{l})\right)\] \[-\frac{(1+Kd)|\nabla u|^{2}}{v}\sum_{i=1}^{n}G^{ii}\] \[= \left[\frac{(1+Kd)|\nabla u|^{2}}{2v^{4}}(\rho-\frac{1}{\rho})u_ {11}-\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{(1+Kd)\sin\beta u_{\beta}}{v^{4}}u_ {11}\right]\] \[+\left[\frac{(1+Kd)|\nabla u|^{2}}{2v^{2}}(\rho-\frac{1}{\rho}) \sum_{\alpha=2}^{n}u_{\alpha\alpha}\right]\] \[-\left[\frac{n(1+Kd)|\nabla u|^{4}}{2v^{2}}(\rho+\frac{1}{\rho}) -\frac{n(1+Kd)|\nabla u|^{2}}{2v}(\rho+\frac{1}{\rho})\cos\theta\sin\beta u_{ \beta}\right]\] \[-\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{(1+Kd)u_{\beta}\sin\beta}{ v^{2}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}+\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{n(1+ Kd)\cos\beta u_{\beta}^{2}}{v^{2}}\] \[-\frac{n\cos\theta(1+Kd)u_{\beta}}{2\rho v}\left(\sin\beta\rho^{ 2}+\sin\beta+(\rho^{2}-1)\cos\beta u_{\beta}\right)\] \[-\frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)(1-n)|\nabla u|^{2}}{\rho v ^{2}e^{U}}+\frac{n(1+Kd)|\nabla u|^{2}\cos\beta\cos\theta}{2v}(\rho-\frac{1}{ \rho})\Bigg{]}\] \[:= I_{11}+I_{12}+I_{13}+I_{14}.\] By (3.4), we obtain \[I_{11} = \frac{(1+Kd)|\nabla u|^{2}}{2v^{4}}(\rho-\frac{1}{\rho})u_{11}- \frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{(1+Kd)\sin\beta u_{\beta}}{v^{4}}u_{11}\] \[= \left[\frac{(1+Kd)|\nabla u|^{2}}{2v^{4}}(\rho-\frac{1}{\rho})- \frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{(1+Kd)\sin\beta u_{\beta}}{v^{4}}\right]\] \[\cdot\left(\frac{\cos^{2}\theta}{S^{2}}\sum_{\alpha=2}^{n}d_{ \alpha}^{2}u_{\alpha\alpha}+O(v)\right)\] \[= O(\frac{1}{v^{2}})\sum_{\alpha=2}^{n}|u_{\alpha\alpha}|+O(\frac{ 1}{v}).\] and \[I_{14} = -\Bigg{[}\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{(1+Kd)u_{\beta} \sin\beta}{v^{2}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}+\frac{1+r_{0}^{2}}{1-r_{0 }^{2}}\frac{n(1+Kd)\cos\beta u_{\beta}^{2}}{v^{2}}\] \[-\frac{n\cos\theta(1+Kd)u_{\beta}}{2\rho v}\left(\sin\beta\rho^{ 2}+\sin\beta+(\rho^{2}-1)\cos\beta u_{\beta}\right)\] \[-\frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)(1-n)|\nabla u|^{2}}{\rho v ^{2}e^{U}}+\frac{n(1+Kd)|\nabla u|^{2}\cos\beta\cos\theta}{2v}(\rho-\frac{1}{ \rho})\Bigg{]}\] \[=O(\frac{1}{v^{2}})\sum_{\alpha=2}^{n}|u_{\alpha\alpha}|+O(v).\] For the term \(I_{2}\), \[I_{2} = d_{k}\cos\theta(u_{kt}-G^{ij}u_{kij}-G_{p_{i}}u_{ki})\] \[= \cos\theta\sigma(\nabla u,\nabla d)\rho G_{\rho}+\cos\theta G_{ \beta}d_{\beta}+\cos\theta G^{11}\sigma(\nabla u,\nabla d)\] \[-\cos\theta\sigma(\nabla u,\nabla d)\sum_{i=1}^{n}G^{ii}\] \[= \left[\frac{\cos\theta\sigma(\nabla u,\nabla d)}{2v^{3}}(\rho- \frac{1}{\rho})u_{11}-\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{\sin\beta\cos \theta d_{\beta}}{v^{3}}u_{11}\right]\] \[+\left[\frac{\cos\theta\sigma(\nabla u,\nabla d)}{2v}(\rho- \frac{1}{\rho})\sum_{\alpha=2}^{n}u_{\alpha\alpha}\right]\] \[-\left[\frac{n\cos\theta\sigma(\nabla u,\nabla d)}{2}(\rho+\frac{ 1}{\rho})(\frac{|\nabla u|^{2}}{v}-\cos\theta\sin\beta\nabla_{\partial_{\beta}} u)\right]\] \[-\left[\frac{n\cos^{2}\theta\cos\beta\sigma(\nabla u,\nabla d)}{2 }(\rho-\frac{1}{\rho})-\frac{n\cos\theta\cos\beta d_{\beta}\nabla_{\partial_{ \beta}}u}{v}\right.\] \[+\frac{2(n-1)r_{0}}{\rho ve^{U}(1-r_{0}^{2})}\cos\theta\sigma( \nabla u,\nabla d)+\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{\cos\theta\sin\beta d _{\beta}}{v}\sum_{\alpha=2}^{n}u_{\alpha\alpha}\] \[-\frac{n\cos^{2}\theta d_{\beta}}{2\rho}\left(\sin\beta\rho^{2}+ \sin\beta+(\rho^{2}-1)\cos\beta\nabla_{\partial_{\beta}}u\right)\Bigg{]}\] \[:= 
I_{21}+I_{22}+I_{23}+I_{24}.\] For the term \(I_{21},I_{24}\), we see \[I_{21} = \frac{\cos\theta\sigma(\nabla u,\nabla d)}{2v^{3}}(\rho-\frac{1} {\rho})u_{11}-\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{\sin\beta\cos\theta d_{ \beta}}{v^{3}}u_{11}\] \[= \left[\frac{\cos\theta\sigma(\nabla u,\nabla d)}{2v^{3}}(\rho- \frac{1}{\rho})-\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{\sin\beta\cos\theta d_{ \beta}}{v^{3}}\right]\] \[\cdot\left(\frac{\cos^{2}\theta}{S^{2}}\sum_{\alpha=2}^{n}d_{ \alpha}^{2}u_{\alpha\alpha}+O(v)\right)\] \[= O(\frac{1}{v^{2}})\sum_{\alpha=2}^{n}|u_{\alpha\alpha}|+O(\frac{ 1}{v^{2}}),\] and \[I_{24} = -\Bigg{[}\frac{n\cos\theta^{2}\cos\beta\sigma(\nabla u,\nabla d )}{2}(\rho-\frac{1}{\rho})-\frac{n\cos\theta\cos\beta d_{\beta}\nabla_{ \partial_{\beta}}u}{v}\] \[+\frac{2(n-1)r_{0}}{\rho ve^{U}(1-r_{0}^{2})}\cos\theta\sigma( \nabla u,\nabla d)+\frac{1+r_{0}^{2}}{1-r_{0}^{2}}\frac{\cos\theta\sin\beta d _{\beta}}{v}\sum_{\alpha=2}^{n}u_{\alpha\alpha}\] \[-\frac{n\cos^{2}\theta d_{\beta}}{2\rho}\left(\sin\beta\rho^{2}+ \sin\beta+(\rho^{2}-1)\cos\beta\nabla_{\partial_{\beta}}u\right)\Bigg{]}\] \[= O(\frac{1}{v})\sum_{\alpha=2}^{n}|u_{\alpha\alpha}|+O(v).\] Next, we handle the term \(I_{3}\). \[I_{3} = (1+Kd)(\frac{G^{ij}u_{l}u_{li}u_{k}u_{ki}}{v^{3}}-\frac{G^{ij}u_{li}u _{lj}}{v})\] \[= \frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)}{\rho ve^{U}}(-\frac{1}{v^{ 5}}u_{11}^{2}-\frac{2}{v^{3}}\sum_{\alpha=2}^{n}u_{1\alpha}^{2})-\frac{2r_{0}} {1-r_{0}^{2}}\frac{(1+Kd)}{\rho v^{2}e^{U}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}^ {2}\] \[= \frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)}{\rho ve^{U}}(-\frac{1}{v^ {5}}u_{11}^{2}-\frac{2}{v^{3}}\sum_{\alpha=2}^{n}u_{1\alpha}^{2})\] \[-(1-\varepsilon)\frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)}{\rho v^{2 }e^{U}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}^{2}-\varepsilon\frac{2r_{0}}{1-r_{ 0}^{2}}\frac{(1+Kd)}{\rho v^{2}e^{U}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}^{2}\] \[:= I_{31}+I_{32}+I_{33}.\] Hence \[I_{31} = \frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)}{\rho ve^{U}}(-\frac{1}{v ^{5}}u_{11}^{2}-\frac{2}{v^{3}}\sum_{\alpha=2}^{n}u_{1\alpha}^{2})\] \[\leq O(\frac{1}{v^{4}})\sum_{\alpha=2}^{n}u_{\alpha\alpha}^{2}+O(v).\] Finally, we deal with the other remaining terms in (3.5). 
\[I_{4}+I_{5}+I_{6} = -(2G^{ij}u_{ki}d_{kj}\cos\theta+2KG^{ij}d_{i}v_{j})-(G^{ij}u_{k}d _{kij}\cos\theta+KG^{ij}d_{ij}v)\] \[-G_{p_{i}}(Kd_{i}v+\cos\theta u_{k}d_{ki})\] \[= O(\frac{1}{v})\sum_{\alpha=2}^{n}|u_{\alpha\alpha}|+O(v).\] By the arithmetic-geometric inequality, we have \[I_{12}+I_{22}+I_{32} = \frac{(1+Kd)|\nabla u|^{2}}{2v^{2}}(\rho-\frac{1}{\rho})\sum_{ \alpha=2}^{n}u_{\alpha\alpha}+\frac{\cos\theta\sigma(\nabla u,\nabla d)}{2v}( \rho-\frac{1}{\rho})\sum_{\alpha=2}^{n}u_{\alpha\alpha}\] \[-(1-\varepsilon)\frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)}{\rho v^{2 }e^{U}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}^{2}\] \[= \frac{u_{1}}{2v}S(\rho-\frac{1}{\rho})\sum_{\alpha=2}^{n}u_{\alpha \alpha}-(1-\varepsilon)\frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd)}{\rho v^{2}e^{U }}\sum_{\alpha=2}^{n}u_{\alpha\alpha}^{2}\] \[\leq \frac{1-r_{0}^{2}}{2r_{0}}\frac{(n-1)}{1+Kd}\frac{(\rho-\frac{1}{ \rho})^{2}S^{2}\rho e^{U}}{16(1-\varepsilon)}u_{1}^{2}\] \[\leq \frac{1-r_{0}^{2}}{2r_{0}}\frac{(n-1)(1+|\cos\theta|)S}{16(1- \varepsilon)}(\rho-\frac{1}{\rho})^{2}\rho e^{U}u_{1}^{2}.\] Fix a positive constant \(a_{0}\in\Big{(}|\cos\theta|,\frac{3n+1}{5n-1}\Big{)}\), if \[\frac{|\nabla u|^{2}}{v}-\cos\theta\sin\beta u_{\beta}<(1-a_{0})u_{1},\] then \[\frac{|\nabla u|^{2}}{v}-|\cos\theta|u_{1}\leq\frac{|\nabla u|^{2}}{v}-\cos \theta\sin\beta u_{\beta}<(1-a_{0})u_{1},\] which implies \[u_{1}^{2}\leq\frac{\left(\cos\theta+(1-a_{0})\right)^{2}}{1-\left(\cos\theta+(1-a _{0})\right)^{2}},\] and this finish the proof. Therefore, we assume for any fixed positive constant \(a_{0}\in(|\cos\theta|,\frac{3n+1}{5n-1})\), it holds \[\frac{|\nabla u|^{2}}{v}-\cos\theta\sin\beta u_{\beta}\geq(1-a_{0})u_{1},\] then it follows \[I_{13}+I_{23} = -\left[\frac{n(1+Kd)|\nabla u|^{4}}{2v^{2}}(\rho+\frac{1}{\rho})- \frac{n(1+Kd)|\nabla u|^{2}}{2v}(\rho+\frac{1}{\rho})\cos\theta\sin\beta u_{ \beta}\right]\] \[-\left[\frac{n\cos\theta\sigma(\nabla u,\nabla d)}{2}(\rho+\frac{ 1}{\rho})(\frac{|\nabla u|^{2}}{v}-\cos\theta\sin\beta\nabla_{\partial_{\beta}} u)\right]\] \[= -\frac{n}{2}u_{1}S(\rho+\frac{1}{\rho})(\frac{|\nabla u|^{2}}{v} -\cos\theta\sin\beta u_{\beta})\] \[\leq -\frac{n}{2}(1-a_{0})Su_{1}^{2}(\rho+\frac{1}{\rho}).\] Since \(|\cos\theta|<a_{0}\) and we choose \(\varepsilon:=\frac{\varepsilon_{0}}{2}\in(0,1)\) with \(\varepsilon_{0}:=\frac{3n+1-a_{0}(5n-1)}{4n(1-a_{0})}>0\), we have \((n-1)(1+a_{0})-4(1-\varepsilon)(1-a_{0})n<0\), then \[I_{13}+I_{23}+I_{12}+I_{22}+I_{32}\] \[\leq -\frac{n}{2}(1-a_{0})Su_{1}^{2}(\rho+\frac{1}{\rho})+\frac{1-r_{0 }^{2}}{2r_{0}}\frac{(n-1)(1+|\cos\theta|)S}{16(1-\varepsilon)}(\rho-\frac{1}{ \rho})^{2}\rho e^{U}u_{1}^{2}\] \[\leq u_{1}^{2}S\left[\frac{(n-1)(1+|\cos\theta|)}{16(1-\varepsilon)}( \rho-\frac{1}{\rho})^{2}\frac{2\rho}{\rho^{2}+1}-\frac{n}{2}(1-a_{0})(\rho+ \frac{1}{\rho})\right]\] \[= \frac{u_{1}^{2}S}{8\rho(\rho^{2}+1)(1-\varepsilon)}\Big{[}\left( (n-1)(1+a_{0})-4n(1-\varepsilon)(1-a_{0})\right)(\rho^{4}+1)\] \[-\left(2(n-1)(1-a_{0})+8n(1-\varepsilon)(1-a_{0})\right)\rho^{2} \Big{]}\] \[\leq -\alpha_{0}u_{1}^{2},\] where \(\alpha_{0}\) is a positive constant, which depends on \(n,a_{0},||u||_{C^{0}}\). 
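For the reader's convenience, we verify this sign condition explicitly; this supplementary computation is exactly where the restriction \(a_{0}<\frac{3n+1}{5n-1}\) (and hence the hypothesis \(|\cos\theta|<\frac{3n+1}{5n-1}\)) enters. With \(\varepsilon=\frac{\varepsilon_{0}}{2}\) and \(\varepsilon_{0}=\frac{3n+1-a_{0}(5n-1)}{4n(1-a_{0})}\), we have \[\begin{split}(n-1)(1+a_{0})-4(1-\varepsilon)(1-a_{0})n&=(n-1)(1+a_{0})-4n(1-a_{0})+2n(1-a_{0})\varepsilon_{0}\\ &=\big(a_{0}(5n-1)-(3n+1)\big)+\tfrac{1}{2}\big((3n+1)-a_{0}(5n-1)\big)\\ &=-\tfrac{1}{2}\big((3n+1)-a_{0}(5n-1)\big)<0,\end{split}\] since \(a_{0}<\frac{3n+1}{5n-1}\).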
Adding all above terms into (3.4), we deduce \[0 \leq -\frac{\varepsilon_{0}}{2}\frac{2r_{0}}{1-r_{0}^{2}}\frac{(1+Kd) }{\rho v^{2}e^{U}}\sum_{\alpha=2}^{n}u_{\alpha\alpha}^{2}-\alpha_{0}u_{1}^{2} +O(\frac{1}{v})\sum_{\alpha=2}^{n}|u_{\alpha\alpha}|+O(v)\] \[\leq -\alpha_{0}u_{1}^{2}+O(v),\] which follows \[u_{1}\leq C.\] where the positive constant \(C\) depends only on \(n,r_{0}\), and \(\|u\|_{C^{0}}\). Hence we complete the proof. Following the above argument for the case \(\mathbb{H}^{n+1}\), we can get the uniform height and gradient estimates for the scalar parabolic equation (2.13) in the case \(\mathbb{S}^{n+1}\). That is. **Proposition 3.3**.: _If \(u:\bar{\mathbb{S}}_{+}^{n}\times[0,T)\to\mathbb{R}\) solves (2.13) and \(G\) has the form (2.11), \(|\cos\theta|<\frac{3n+1}{5n-1}\), then_ \[\|u\|_{C^{1}\left(\bar{\mathbb{S}}_{+}^{n}\times[0,T)\right)}\leq C, \tag{3.6}\] _where the constant \(C\) is a positive constant, depending on the intial datum._ For the concise of this paper, we leave the proof of Proposition 3.3 to the interested readers. In conclusion, we have the following convergence for the flow (1.3) both in hyperbolic space and spherical space. **Proposition 3.4**.: _The smooth solution of flow (1.3) exists for all time and has uniform \(C^{\infty}\)-estimates, if the initial hypersurface \(\Sigma_{0}\subset\bar{B}_{R}\subset M^{n+1}(K)\) with \(K=\pm 1\) is star-shaped in the sense of Definition 1.1 and \(|\cos\theta|<\frac{3n+1}{5n-1}\)._ Proof.: Proposition 3.1, 3.2 and 3.4 say that \(u\) is uniformly bounded in \(C^{1}(\bar{\mathbb{S}}_{+}^{n}\times[0,T))\), then the scalar equation in (2.13) is uniformly parabolic. Since \(|\cos\theta|<1\), hence the desired conclusion follows from the standard quasi-linear parabolic theory with strictly oblique boundary condition theory (cf. [14, 10]). Finally, we obtain the convergence result by using the argument in [19, 24], that is, we complete the proof of Theorem 1.2. **Proposition 3.5**.: _If the initial hypersurface \(\Sigma_{0}\subset\bar{B}_{R}\subset M^{n+1}(K)\) is star-shaped capillary boundary hypersurface and \(|\cos\theta|<\frac{3n+1}{5n-1}\), then the flow (1.3) smoothly converges to a uniquely determined spherical cap \(C_{\theta,r}(a)\) given by (1.6) with capillary boundary, as \(t\to+\infty\)._ Proof.: In the following, we present a complete proof of the convergence for flow (1.3) in the case \(\mathbb{H}^{n+1}\). Since the proof for the case \(\mathbb{S}^{n+1}\) is similar, we omit it here. From the proof of Corollary 1.3 and uniform \(C^{\infty}\)-estimate, we see \[\int_{0}^{\infty}\int_{\Sigma_{t}}\sum_{1\leq i<j\leq n}(\kappa_{ i}(x,t)-\kappa_{j}(x,t))^{2}\bar{g}(X_{a},\nu)dA_{t}\leq C,\] where the \(\kappa_{i}(x,t),i=1,\cdots,n\) are the principal curvatures of the radial graph at the point \((x,t)\). Together with the uniform estimate, we see \(\bar{g}(X_{a},\nu)\) and \(dA_{t}\) are uniformly bounded, it follows \[\max_{\begin{subarray}{c}x\in\Sigma_{t}\\ 1\leq i<j\leq n\end{subarray}}|\kappa_{i}(x,t)-\kappa_{j}(x,t)|=o_{t}(1),\] where \(o_{t}(1)\) denotes a quantity which goes to zero as \(t\to+\infty\). Hence any convergent subsequence of \(x(\cdot,t)\) converges to a spherical cap as \(t\to+\infty\). Next, we show that the limit spherical cap is unique by following the argument in [19, 24]. First, we know any convergent subsequence of \(x(\cdot,t)\) smoothly converges to a spherical cap \(C_{\theta,\rho_{\infty}}(a_{\infty})\). 
Since the volume is preserved along with the flow (1.3), the radius is independent of the choice of the subsequence of \(t\). Now we just need to show that \(a_{\infty}=a\). Denote \(\rho(\cdot,t)\) be the radius of the unique spherical cap centered at the point \(\sqrt{\rho^{2}(\cdot,t)+r_{0}^{2}+2\rho(\cdot,t)r_{0}\cos\theta}a\) with contact angle \(\theta\) passing through the point \(x(\cdot,t)\). Following from the same barrier argument in Proposition 3.1, \[\rho_{\max}(t):=\max_{x\in M}\rho(x,t)=\rho(\xi_{t},t),\] is non-increasing with respect to \(t\), for some point \(\xi_{t}\in M\), hence the limit \(\lim_{t\to+\infty}\rho_{\max}(t)\) exists and it is clear that \(\rho_{\max}(t)\geq\rho_{\infty}\). We claim that \[\lim_{t\to+\infty}\rho_{\max}(t)=\rho_{\infty}. \tag{3.7}\] We prove the above claim by a contradiction. Suppose (3.7) is not true, then there exists a constant \(\varepsilon>0\), when \(t\) is large enough, such that \[\rho_{\max}(t)>\rho_{\infty}+\varepsilon. \tag{3.8}\] By the definition of \(\rho(\cdot,t)\), \[2\langle x,a\rangle\sqrt{\rho^{2}+r_{0}^{2}+2\rho r_{0}\cos\theta}=|x|^{2}+r_ {0}^{2}+2\rho r_{0}\cos\theta, \tag{3.9}\] taking the time derivative on the both sides for (3.9), we get \[\langle x_{t},x-\sqrt{\rho^{2}+r_{0}^{2}+2\rho r_{0}\cos\theta}a\rangle=\left( \frac{(\rho+r_{0}\cos\theta)\langle x,a\rangle}{\sqrt{\rho^{2}+r_{0}^{2}+2\rho r _{0}\cos\theta}}-r_{0}\cos\theta\right)\partial_{t}\rho.\] We evaluate at point \((\xi_{t},t)\), note that \(\Sigma_{t}\) is tangential to \(C_{\theta,\rho_{\max}}(a)\) at \((\xi_{t},t)\), it implies \[(\nu_{0})_{\Sigma_{t}}(\xi_{t},t)=(\nu_{0})_{\partial C_{\theta,\rho_{\max}(a )}}=\frac{x-\sqrt{\rho_{\max}^{2}(t)+r_{0}^{2}+2\rho_{\max}(t)r_{0}\cos\theta }a}{\rho_{\max}(t)},\] therefore \[\begin{split}&\left(\frac{(\rho_{\max}(t)+r_{0}\cos\theta) \langle x,a\rangle}{\sqrt{\rho_{\max}^{2}(t)+r_{0}^{2}+2\rho_{\max}(t)r_{0} \cos\theta}}-r_{0}\cos\theta\right)\partial_{t}\rho_{\max}(t)\\ &=e^{-U}F\left\langle(\nu_{0})\big{|}_{\Sigma_{t}}(\xi_{t},t),x- \sqrt{\rho_{\max}^{2}(t)+r_{0}^{2}+2\rho_{\max}(t)r_{0}\cos\theta}a\right\rangle \\ &=e^{-U}\rho_{\max}(t)\left(nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{ a},\nu)-H\bar{g}(X_{a},\nu)\right).\end{split} \tag{3.10}\] Since the spherical \(C_{\theta,\rho_{\max}}(a)\) is a static solution to flow (1.3), the mean curvature \(\bar{H}\) of \(C_{\theta,\rho_{\max}}(a)\) in \((\bar{B}_{R}^{\mathbb{H}},\bar{g})\) is \[\bar{H}=e^{-u}\left[\frac{n}{\rho_{\max}(t)}+\frac{2n}{1-|x|^{2}}\langle x, \nu_{0}\big{|}_{\partial C_{\theta,\rho_{\max}}(a)}\rangle\right]=\frac{n \left(1-r_{0}^{2}-2r_{0}\rho_{\max}(t)\cos\theta\right)}{2\rho_{\max}(t)},\] then \[\frac{nV_{a}+n\sinh R\cos\theta g(Y_{a},\nu)}{\bar{g}(X_{a},\nu)}\Big{|}_{C_{ \theta,\rho_{\max}}(a)}=\frac{n\left(1-r_{0}^{2}-2r_{0}\rho_{\max}(t)\cos \theta\right)}{2\rho_{\max}(t)}. \tag{3.11}\] Since \(x(\cdot,t)\) converges to \(C_{\theta,\rho_{\infty}}(a_{\infty})\) and \(\rho_{\infty}\) is uniquely determined, we have \[\bar{H}\to\frac{n(1-r_{0}^{2}-2\rho_{\infty}r_{0}\cos\theta)}{2\rho_{\infty}}. \tag{3.12}\] We claim that there exist a positive constant \(\delta>0\), such that \[\frac{\left(\rho_{\max}(t)+r_{0}\cos\theta\right)\langle x,a\rangle}{\sqrt{\rho_{ \max}^{2}(t)+r_{0}^{2}+2\rho_{\max}(t)r_{0}\cos\theta}}-r_{0}\cos\theta\geq\delta. 
\tag{3.13}\] In fact, by (3.9), we have \[\begin{split}&\langle x,a\rangle^{2}(\rho^{2}+2\rho r_{0}\cos \theta+r_{0}^{2})\\ &=\frac{1}{4}(|x|^{2}+r_{0}^{2})^{2}+\rho r_{0}\cos\theta(|x|^{2} +r_{0}^{2})+\rho^{2}r_{0}^{2}\cos^{2}\theta,\end{split} \tag{3.14}\] combining with (3.9), it yields \[(\rho+r_{0}\cos\theta)\langle x,a\rangle-r_{0}\cos\theta\sqrt{ \rho^{2}+2\rho r_{0}\cos\theta+r_{0}^{2}}\] \[= (\rho+r_{0}\cos\theta)\langle x,a\rangle-r_{0}\cos\theta\frac{|x |^{2}+r_{0}^{2}+2\rho r_{0}\cos\theta}{2\langle x,a\rangle}\] \[= \frac{1}{\rho\langle x,a\rangle}\left[\rho(\rho+r_{0}\cos\theta) \langle x,a\rangle^{2}-\rho^{2}r_{0}^{2}\cos^{2}\theta-\rho r_{0}\cos\theta \frac{|x|^{2}+r_{0}^{2}}{2}\right]\] \[= \frac{1}{\rho\langle x,a\rangle}\left[\frac{(|x-r_{0}a||x+r_{0}a |)^{2}}{4}+\frac{1}{2}\rho r_{0}\cos\theta(|x|^{2}+r_{0}^{2})-\rho r_{0}\cos \theta\langle x,a\rangle^{2}\right],\] together with Proposition 3.1, it yields that Claim (3.13) is true. On the other hand, by (3.7), (3.11), (3.12) and the uniform estimates we established before, then there exists some large constant \(T_{0}\) satisfying for \(t>T_{0}\), it holds \[e^{-U}\left(nV_{a}+n\sinh R\cos\theta\bar{g}(Y_{a},\nu)-H\bar{g}(X_{a},\nu) \right)\big{|}_{x(\xi_{t},t)}\leq-C\varepsilon.\] Finally, by (3.10) and (3.13), we conclude that there exists a positive constant \(C_{0}\) such that \[\frac{d}{dt}\left(\rho_{\max}(t)\right)\leq-C_{0}\varepsilon.\] This contradicts to the fact that \(\lim\limits_{t\to+\infty}\frac{d}{dt}\left(\rho_{\max}(t)\right)=0\), so (3.7) is true. Similarly, one can obtain \[\lim\limits_{t\to+\infty}\rho_{\min}(t)=\rho_{\infty}.\] Therefore, \(\lim\limits_{t\to+\infty}\rho(\cdot,t)=\rho_{\infty}\). This implies any limit of the convergent subsequence is the spherical cap \(C_{\theta,\rho_{\infty}}(a)\) around \(a\) with radius \(\rho_{\infty}\). We complete the proof of Proposition 3.5, which follows also Theorem 1.2. **Acknowledgment.** Both authors would like to express sincere gratitude to Prof. Xinan Ma and Prof. Guofang Wang for their constant encouragement and many inspiring conversations in this subject. XM is partially supported by CSC (No. 202106340053) and the doctoral dissertation creation project of USTC. LW is partially supported by China Postdoctoral Science Foundation (No. 2021M702143) and NSFC (No. 12201003, 12171260).
2308.09627
Simplicial presheaves of Green complexes and twisting cochains
We construct three simplicial presheaves on the site of ringed spaces, and in particular on that of complex manifolds. The descent objects for these simplicial presheaves yield Toledo--Tong's twisting cochains, simplicial twisting cochains, and complexes that appear in Green's thesis on Chern classes for coherent analytic sheaves, respectively. We thus extend the aforementioned constructions to the equivariant setting, and more generally to stacks. This is the first step in achieving push-forwards in K-theory and Riemann--Roch theorems for appropriate stacks, as was achieved by Toledo and Tong for arbitrary complex manifolds, and further pursued by O'Brian and Green.
Timothy Hosgood, Mahmoud Zeinalian
2023-08-18T15:41:17Z
http://arxiv.org/abs/2308.09627v1
# Simplicial presheaves of ###### Abstract We construct three simplicial presheaves on the site of ringed spaces, and in particular on that of complex manifolds. The descent objects for these simplicial presheaves yield Toledo-Tong's twisting cochains, simplicial twisting cochains, and complexes that appear in Green's thesis on Chern classes for coherent analytic sheaves, respectively. We thus extend the aforementioned constructions to the equivariant setting, and more generally to stacks. This is the first step in achieving push-forwards in K-theory and Riemann-Roch theorems for appropriate stacks, as was achieved by Toledo and Tong for arbitrary complex manifolds, and further pursued by O'Brian and Green. ###### Contents * 1 Introduction * 1.1 History * 1.2 Purpose * 1.3 Overview * 1.4 Acknowledgments * 2 Preliminaries * 2.1 Spaces via simplicial sets * 2.2 Cosimplicial simplicial sets * 2.3 Totalisation and homotopy limits * 2.4 The Cech nerve and the categorical nerve * 2.5 The dg-nerve and Maurer-Cartan elements * 2.6 The pair subdivision * 2.7 Homotopy theory of simplicial presheaves * 2.8 Cech totalisation * 2.9 Perfectness of complexes * 3 Three simplicial presheaves * 3.1 Narrative * 3.2 Twisting cochains * 3.3 Green complexes * 3.4 Simplicial twisting cochains * 4 Complex-analytic examples * 4.1 Points in all three * 4.2 Edges in twisting cochains 4.3 Edges in Green complexes * 4.4 Edges in simplicial twisting cochains * 5 Relations between the three presheaves * 5.1 Horn filling conditions * 5.2 Inclusions * 5.3 Equivalences * 5.4 Green's resolution * 6 Future work ## 1 Introduction ### History The problem of resolving a coherent sheaf by locally free sheaves is fundamental in geometry. Indeed, one of the main tools in proving the Hirzebruch-Riemann-Roch theorem for holomorphic bundles on smooth projective complex varieties is a resolution of the pushforward along the diagonal of the structure sheaf by a bounded complex of locally free sheaves. To prove the analogous statement in the non-algebraic setting, various tools from differential geometry, such as heat kernels, are used. These tools rely on the choice of a metric, which, outside the context of Kahler manifolds within complex geometry, is unnatural, preventing us from generalising to the equivariant setting and to that of stacks. To resolve a coherent sheaf on a compact complex manifold by vector bundles, it suffices to have a positive line bundle. Such a line bundle, readily available in the algebraic setting by the canonical line bundle, does not exist in general. As such, outside the algebraic setting, coherent analytic sheaves cannot always be resolved by locally free sheaves. 
Nevertheless, in a series of papers, Toledo and Tong, partly in joint work with O'Brian, showed how to resolve a coherent analytic sheaf by complexes of locally free sheaves that are defined only locally, on the open sets of a cover, and that are compatible on overlaps only up to a coherent system of homotopies; they called the resulting collection of local resolutions, comparison maps, and associated homotopies a _twisting cochain_. In modern language, these objects would be described in terms of the \(\infty\)-stackification of the presheaf of perfect complexes on the site of complex manifolds (as in [10]).
They used these homotopic methods to give a proof of the Hirzebruch-Riemann-Roch theorem for coherent analytic sheaves ([1]) and extended it to a proof of the more general Grothendieck-Riemann-Roch theorem ([1]). Another consequence of their work was to answer an appeal of Bott, in [11], amongst other places, for _"the construction of characteristic classes of bundles in terms of transition functions"_. Indeed, when working with characteristic classes of bundles on foliations, or quotients by group actions, such an approach is necessary. In 1980, a philosophically related but technically different approach to resolving coherent sheaves appeared in the thesis [12] of Green, a student of O'Brian and Eells, in which Chern classes in de Rham cohomology of coherent analytic sheaves were constructed from local free resolutions that globally clutch together via a simplicial system of strictly invertible chain maps and inclusions, now referred to as _Green complexes_. Green's key insight was turning holomorphic twisting cochains into simplicial objects satisfying strict identities on the nose. To relate these different approaches, Toledo and Tong in [13] gave a reformulation of Green's simplicial resolution in terms of objects that simultaneously generalise twisting cochains and the complexes of sheaves on the Cech nerve arising in [12], namely _simplicial twisting cochains_. As mentioned in [11], the work of O'Brian, Toledo, and Tong responds to a question posed in [14] concerning Riemann-Roch formulas using Cech calculations that are an example of descent for complexes of vector bundles, but _"a better general framework for these calculations could contribute to our understanding of Riemann-Roch formulas"_. Since then, there has been important work on better understanding the homotopy theory of twisting cochains (such as [14, 1, 15, 16, 17, 18]), but the full story of how this applies to various open problems in complex-analytic geometry is one that has yet to be fully told. Even the abstract foundations are in need of further study: as mentioned in [14, Remark 2.16], the connections between twisting cochains and the dg-nerve should be further explored. ### Purpose The fundamental idea of this paper is the following: to construct a simplicial presheaf \(s\mathcal{T}\mathit{wist}\) of simplicial twisting cochains that recovers, via some analogue of sheafification, the simplicial twisting cochains of [13] in the complex-analytic setting. As special cases, we will also obtain two more simplicial presheaves, \(\mathcal{T}\mathit{wist}\) and \(\mathcal{G}\mathit{reen}\), that recover twisting cochains and Green complexes, respectively. We motivate these constructions via perfectness conditions on sheaves of \(\mathcal{O}_{X}\)-modules, and "homotopical weakening", in Section 3.1. These simplicial presheaves are defined on the category of connected ringed spaces. We show (Theorem 4.1.1) that, if one picks the specific ringed space corresponding to some complex-analytic manifold, then this aforementioned analogue of sheafification (which we call _Cech totalisation_) recovers the classical definitions that we would expect from the three simplicial presheaves. Although we do not discuss what happens in the case of other geometries (such as locally Noetherian schemes with affine covers), the formal machinery that we provide can immediately be extended to these settings. One sees that the dg-nerve and twisting cochains should be related to one another, since both are given by the Maurer-Cartan condition (see e.g.
[16, Remark 2.16]). In constructing the simplicial presheaf for twisting cochains, we show how the defining equations of the dg-nerve translate exactly to those for twisting cochains, via intermediary results concerning Maurer-Cartan elements in Cech bicomplexes (such as Theorem 2.5.17) which can be thought of as upgrades of certain technical lemmas from [11] to the case of _presheaves_ of dg-categories. Furthermore, we show that not only twisting cochains, but also the weak equivalences between them (as defined in [16, Definition 2.27]), arise from the dg-nerve (Theorem 4.2.2). As mentioned above, we endow the simplicial presheaves with geometry via Cech totalisation (Section 2.8), which consists of evaluating a simplicial presheaf on the Cech nerve of some fixed cover and then taking the totalisation of the resulting cosimplicial simplicial set, which we show computes the homotopy limit. In this way, we are providing the space analogue of [1, Proposition 4.9], showing that twisting cochains arise as a homotopy limit of bounded complexes of free modules evaluated on the Cech nerve; we similarly characterise Green complexes and simplicial twisting cochains as homotopy limits of other cosimplicial simplicial sets. We make this analogy precise via a comparison result (Lemma 2.8.6) for presheaves of dg-categories that preserve finite limits. The results of the present article concerning only twisting cochains can thus be seen as a sort of synthesis of [11], [1], and [16]. Although weak equivalences between Green complexes were defined in [14] as level-wise quasi-isomorphisms, the \(1\)-simplices of the Cech totalisation of \(\mathcal{G}\mathit{reen}\) here provide a seemingly more fitting notion (Section 4.3). We conjecture that the description of Green's resolution given in [11] actually describes a morphism of simplicial presheaves \(s\mathcal{T}\mathit{wist}\to\mathcal{G}\mathit{reen}\) (Conjecture 5.4.1), though to prove this would require a refinement of the constructions given here, as we justify in Section 5.4. Given that we construct three simplicial presheaves, it is natural to ask how they relate to one another in the category of simplicial presheaves, which can be endowed with a model structure. By construction, \(\mathcal{T}\mathit{wist}\) is globally fibrant in the Kan-Quillen model structure, and will thus give a space after Cech totalisation (Lemma 2.8.2), but neither \(\mathcal{G}\mathit{reen}\) nor \(s\mathcal{T}\mathit{wist}\) is immediately seen to be globally fibrant. However, we provide some partial results (Section 5.1) which ensure that the simplicial \(\pi_{0}\) are well defined for all three presheaves, and then show that they are all equivalent, noting a particularly nice application of Green's resolution in strictifying quasi-isomorphisms of complexes of free modules to isomorphisms (Remark 5.3.2). As a consequence of global fibrancy, this implies that the \(\pi_{0}\) of their Cech totalisations are also all equivalent (Corollary 5.3.4). In an attempt to make this paper as useful a reference as possible, we try not to leave any proofs as exercises for the reader: in Appendix A and Appendix B.2 we provide explicit descriptions and calculations of \(1\)-simplices in the Cech totalisation of a simplicial presheaf. ### Overview * Section 1: Historical context and brief summary of main results of this paper. * Section 2: Preliminary notation and definitions, as well as some general technical lemmas.
Mostly classical, but some new or folklore results as well, especially concerning the dg-nerve (Section 2.5), pair subdivision (Section 2.6), and Cech totalisation (Section 2.8). * Section 3: Motivation for our approach via simplicial presheaves, the dg-nerve, and simplicial labelling (Section 3.1), and definitions of the three simplicial presheaves. * Section 4: Details of the holomorphic case, including comparisons to other results. * Section 5: Study of the three simplicial presheaves in the context of the Kan-Quillen model structure. * Section 6: Summary of main results, including questions for future research. * Appendix A: Worked example of Cech totalisation for constructing the space of principal \(\operatorname{GL}_{n}(\mathbb{R})\)-bundles. * Appendix B: Details of lengthier proofs of some of the more technical lemmas. ### Acknowledgments We thank Cheyne Glass, Michah Miller, and Thomas Tradler for providing us with an early copy of [13]; versions of both Lemma 2.5.7 and Theorem 2.5.12 can be found there. The first author would also like to thank Evan Cavallo for his patience in helping with the numerous sign issues in an early draft of Appendix B.2, as well as Ivan Di Liberti and Josefien Kuijper for continual interesting conversations. ## 2 Preliminaries The majority of content in this section is classical, and we simply gather it together here for convenience of the reader, as well as to establish notation. Some sections (such as Section 2.5, Section 2.6, and Section 2.8) contain material that is either slightly more general than what can be found in existing literature, or that is more difficult to find references for. _Throughout this entire paper, whenever we say "manifold" we mean "paracompact smooth manifold"; whenever we say "cover" we mean "open cover". For categories \(\mathscr{C}\) and \(\mathscr{D}\), we denote the set (or category) of functors from \(\mathscr{C}\to\mathscr{D}\) by \([\mathscr{C},\mathscr{D}]\). We always use \(\subset\) to mean strict subset, and \(\subseteq\) to mean non-strict subset, entirely analogous to \(<\) and \(\leqslant\)._ ### Spaces via simplicial sets **Definition 2.1.1**.: Let \(\Delta\) be the abstract simplex category, whose objects are the finite totally ordered sets \([p]=\{0<1<\ldots<p\}\) for \(p\in\mathbb{N}\), and whose morphisms \([p]\to[q]\) are order-preserving functions. There are injections \(f_{p}^{i}\colon[p-1]\to[p]\) for \(i\in\{0,\ldots,p-1\}\), called _coface_ maps; there are surjections \(s_{i}^{p}\colon[p+1]\to[p]\) for \(i\in\{0,\ldots,p\}\), called _codegeneracy_ maps. The _topological_\(p\)_-simplex_, denoted by \(\Delta^{p}\), is the \(p\)-dimensional polytope given by the convex hull of the affinely independent1 set of \(p+1\) points \(\{e_{0},e_{1},\ldots,e_{p}\}\) inside \(\mathbb{R}^{p}\), where \(e_{i}\) is the standard basis unit vector for \(i\in\{1,\ldots,p\}\), and \(e_{0}\) is the zero vector. Ordered non-empty subsets \(\sigma\subseteq[p]\) of cardinality \(k+1\) then correspond bijectively to non-degenerate sub-\(k\)-simplices \(\Delta^{k}\subseteq\Delta^{p}\), since \(\sigma\) corresponds to a subset of the aforementioned set of \(p+1\) points, and we can then take its convex hull (cf. Figure 2.1.i). When we talk of \(p\)-simplices, we always mean _non-degenerate_\(p\)-simplices, unless otherwise stated. Footnote 1: That is, the set \(\{e_{i}-e_{0}\,|\,i=1,\ldots,p\}\) is linearly independent. 
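For later use, we record the relations satisfied by the coface and codegeneracy maps just introduced (a standard fact, included here only for convenience; we write \(f^{i}\) and \(s_{i}\), suppressing the indices recording the degree): \[f^{j}f^{i}=f^{i}f^{j-1}\ (i<j),\qquad s_{j}s_{i}=s_{i}s_{j+1}\ (i\leq j),\qquad s_{j}f^{i}=\begin{cases}f^{i}s_{j-1}&i<j,\\ \operatorname{id}&i=j,\,j+1,\\ f^{i-1}s_{j}&i>j+1.\end{cases}\] Dually, the face and degeneracy maps of any simplicial set (Definition 2.1.2 below) satisfy the opposite relations.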
**Definition 2.1.2**.: A _simplicial set_ is a contravariant functor \(X_{\star}\colon[p]\to X_{p}\) from \(\Delta\) to the category of sets, i.e. an object of the category \(\operatorname{\mathsf{sSet}}\coloneqq[\Delta^{\operatorname{op}},\operatorname {\mathsf{Set}}]\). The coface maps \(f_{p}^{i}\) of \(\Delta\) induce _face_ maps \(X_{\star}f_{p}^{i}\colon X_{p}\to X_{p-1}\); the codegeneracy maps \(s_{i}^{p}\) of \(\Delta\) induce _degeneracy_ maps \(X_{\star}s_{i}^{p}\colon X_{p}\to X_{p+1}\). Given a category \(\mathscr{C}\), a _simplicial presheaf_ on \(\mathscr{C}\) is a presheaf with values in simplicial sets, i.e. an object in \([\mathscr{C}^{\operatorname{op}},\operatorname{\mathsf{sSet}}]\). We often simply write \(X\) instead of \(X_{\star}\), and \(f^{i}\) (resp. \(s_{i}\)) instead of \(X_{\star}f_{p}^{i}\) (resp. \(X_{\star}s_{i}^{p}\)). Since simplicial sets give a model for topological spaces (through geometric realisation), we often refer to the \(0\)-simplices \(x\in X_{0}\) of a simplicial set \(X_{\star}\) as _points_ or _vertices_, and the \(1\)-simplices \(x\in X_{1}\) as _lines_ or _edges_. We reserve the use of the word "_space_" to refer to either topological spaces or Kan complexes (defined below), and if we do not make precise to which one we are referring then it is because one can pick either meaning, depending on preference. Figure 2.1.i. Inclusions of subsets correspond to inclusions of sub-simplices. **Definition 2.1.3**.: The prototypical simplicial sets are those of the form \[\Delta[p]\coloneqq\operatorname{Hom}_{\Delta}(-,[p])\] for \(p\in\mathbb{N}\). We call \(\Delta[p]\) the _standard \(p\)-simplex_. Although we work almost entirely with the "abstract" simplices \(\Delta[p]\), when drawing diagrams we are really drawing the topological simplices \(\Delta^{p}\), which are related to the abstract simplices by _geometric realisation_. One needs to be careful about the definition of the category \(\operatorname{Space}\) in the definition of geometric realisation, but we do not need to worry about the details here. For us, what is important is the intuitive understanding of geometric realisation: we take a simplicial set \(\boldsymbol{X}_{\boldsymbol{\cdot}}\), replace every copy of the abstract \(p\)-simplex with a topological \(p\)-simplex, and then glue these together exactly in the way that the abstract simplices glue together via the face and degeneracy maps. **Definition 2.1.4**.: We define _geometric realisation_\(|\cdot|\colon\operatorname{\mathsf{sSet}}\to\operatorname{Space}\) as the functor given on the standard \(p\)-simplices by \(|\Delta[p]|\coloneqq\Delta_{p}\), and then extend this to arbitrary simplicial sets \(\boldsymbol{X}_{\boldsymbol{\cdot}}\) via \[|\boldsymbol{X}_{\boldsymbol{\cdot}}|\coloneqq\lim_{\Delta[p]\to\boldsymbol{X }_{\boldsymbol{\cdot}}}|\Delta[p]|.\] More formally, geometric realisation is the left adjoint to the functor \(\operatorname{Sing}\colon\operatorname{Space}\to\operatorname{\mathsf{sSet}}\) given by \(\operatorname{Sing}(Y)_{p}\coloneqq\operatorname{Hom}_{\operatorname{Space}}( \Delta^{p},Y)\). 
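As a concrete illustration (a standard computation, recorded here only for orientation): the \(q\)-simplices of the standard simplex \(\Delta[p]\) are exactly order-preserving maps, and its non-degenerate simplices recover the sub-simplices of Definition 2.1.1, \[\Delta[p]_{q}=\operatorname{Hom}_{\Delta}([q],[p]),\qquad\{\text{non-degenerate }q\text{-simplices of }\Delta[p]\}\;\cong\;\{\sigma\subseteq[p]\,:\,|\sigma|=q+1\},\] since a non-degenerate \(q\)-simplex is precisely an injective order-preserving map \([q]\hookrightarrow[p]\), i.e. the choice of a \((q+1)\)-element subset of \([p]\); under geometric realisation these correspond to the sub-\(q\)-simplices \(\Delta^{q}\subseteq\Delta^{p}\) described in Definition 2.1.1.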
**Definition 2.1.5**.: Given \(0\leq i\leq p\), the _\(ith\) horn_\(\Lambda_{i}[p]\) of the \(p\)-simplex_ is the simplicial set defined by \[\Lambda_{i}[p]([q])=\left\{\alpha\in\operatorname{Hom}_{\Delta}([q],[p])\mid[p ]\not\simeq\alpha([q])\cup\{i\}\right\}\subset\Delta[p]([q]).\] Topologically, the \(i\)th horn is what remains after removing the interior of the \(p\)-simplex and then deleting the \((p-1)\)-dimensional face opposite the \(i\)th vertex (cf. Figure 2.1.ii); more simply, it is the collection of all simplices that contain the \(i\)th vertex. We write \(\Lambda_{i}^{p}\) to mean the geometric realisation of \(\Lambda_{i}[p]\), so that \(\Lambda_{i}^{p}\subset\Delta^{p}\). Figure 2.1.ii. Top: \(\Lambda_{1}^{2}\subset\Delta^{2}\); Bottom: \(\Lambda_{0}^{3}\subset\Delta^{3}\) (where, on the left, all \(2\)-dimensional faces are filled in except for \(\{1<2<3\}\)). **Definition 2.1.6**.: A simplicial set \(X_{\star}\) is a _Kan complex_ if all horns can be filled, i.e. if any map \(\Lambda_{i}[p]\to X_{\star}\) can be extended to a map \(\Delta[p]\to X_{\star}\) for all \(p\in\mathbb{N}\) and all \(0\leq i\leq p\), i.e. if the natural map \(\operatorname{Hom}_{\operatorname{sSet}}(\Delta[p],X_{\star})\to\operatorname{ Hom}(\Lambda_{i}[p],X_{\star})\) is surjective. If the same condition holds only for \(0<i<p\) (for all \(p\in\mathbb{N}\)), then we say that only _inner_ horns can be filled, and we say that the simplicial set is a _quasi-category_. This defines two full subcategories of the category of simplicial sets: the category \(\operatorname{Kan}\) of Kan complexes, and the category \(\operatorname{Quasi-Cat}\) of quasi-categories. **Definition 2.1.7**.: The inclusion \(\operatorname{Kan}\hookrightarrow\operatorname{Quasi-Cat}\) has both a left and a right adjoint, where the right adjoint is called the _core_. It can be shown ([11]) that the core is given by taking the _maximal Kan complex_, i.e. the largest (by inclusion) Kan complex contained inside the quasi-category. We denote this maximal-Kan-complex functor by \([-]\colon\operatorname{Quasi-Cat}\to\operatorname{Kan}\). **Remark 2.1.8**.: The core of an arbitrary simplicial set is not a priori well defined, but there is a "model" of the core which is indeed a functor defined on all of \(\operatorname{sSet}\). We return to this point in Remark 2.8.7. ### Cosimplicial simplicial sets **Definition 2.2.1**.: A _cosimplicial simplicial set_ is a covariant functor \(X_{\star}^{\star}\colon[p]\mapsto X_{\star}^{p}\) from \(\Delta\) to the category of simplicial sets, i.e. an object of the category \(\operatorname{csSet}\coloneqq[\Delta,\operatorname{sSet}]\). The coface maps \(f_{p}^{i}\) of \(\Delta\) induce _coface_ maps \(X_{\star}^{\star}f_{p}^{i}\colon X_{\star}^{p-1}\to X_{\star}^{p}\); the codegeneracy maps \(s_{i}^{p}\) of \(\Delta\) induce _codegeneracy_ maps \(X_{\star}^{\star}s_{i}^{p}\colon X_{\star}^{p+1}\to X_{\star}^{p}\). Note that we can enrich \(\operatorname{csSet}\) over \(\operatorname{sSet}\) by defining \[\big{(}\operatorname{\underline{Hom}}_{\operatorname{csSet}}(A_{\star}^{ \star},B_{\star}^{\star})\big{)}_{p}=\operatorname{Hom}_{\operatorname{csSet }}(A_{\star}^{\star}\times\Delta[p],B_{\star}^{\star}).\] **Remark 2.2.2**.: Just to be clear: since we are using simplicial sets as models for spaces, when we talk about the coface maps of a cosimplicial simplicial set, we mean the coface maps coming from the _cosimplicial structure_, not the face maps coming from the simplicial structure. 
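The main geometric example of a cosimplicial simplicial set in this paper arises from covers. We sketch its shape here, paraphrasing the Cech totalisation described in the introduction (the precise construction is the subject of Section 2.8): given a simplicial presheaf \(\mathcal{F}\) and a cover \(\mathcal{U}=\{U_{\alpha}\}_{\alpha\in I}\) of a space \(X\), one obtains a cosimplicial simplicial set \[[p]\;\longmapsto\;\prod_{\alpha_{0},\ldots,\alpha_{p}\in I}\mathcal{F}\big(U_{\alpha_{0}\ldots\alpha_{p}}\big),\] whose coface and codegeneracy maps are induced (contravariantly) by the face and degeneracy maps of the Cech nerve of Definition 2.4.1, i.e. by dropping and repeating indices; totalising this object (Section 2.3) yields the descent objects studied in this paper.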
**Definition 2.2.3**.: The prototypical cosimplicial simplicial set is \[\Delta[\star]\colon[p]\mapsto\Delta[p]=\operatorname{Hom}_{\Delta}(-,[p]),\] i.e. "collecting all the simplicial sets \(\Delta[p]\) for \(p\in\mathbb{N}\) together". ### Totalisation and homotopy limits One functor of particular interest to us regarding simplicial sets and cosimplicial simplicial sets is the _totalisation_ functor. **Definition 2.3.1**.: Let \(L\colon\operatorname{sSet}\to\operatorname{csSet}\) be the functor given by \(Y_{\star}\mapsto Y_{\star}\times\Delta[\star]\). We define the _totalisation_ functor \(\operatorname{Tot}\colon\operatorname{csSet}\to\operatorname{sSet}\) as the right adjoint to \(L\). We often simply write \((\operatorname{Tot}Y)_{p}\) instead of \((\operatorname{Tot}Y_{\star}^{\star})_{p}\). This functor is of particular interest in the setting of homotopy theory, as explained by the following technical lemma, which we provide without further context. **Lemma 2.3.2** ([20, Theorem 18.7.4]).: _If \(Y_{\star}^{\star}\in\operatorname{csSet}\) is Reedy fibrant, then the totalisation \(\operatorname{Tot}Y_{\star}^{\star}\) and the homotopy limit \(\operatorname{holim}Y_{\star}^{\star}\) are naturally weakly equivalent._ Here the homotopy limit is defined as usual (e.g. as the right derived functor of the right adjoint to the constant-diagram functor), but we will not need to appeal to the technical definition in this paper. There are many ways ([17, SS5.3]) to think of totalisation (e.g. as the dual to geometric realisation), but one particularly useful point-of-view for our purposes is the following. Given a cosimplicial simplicial set \(Y_{\star}^{\star}\), we can show that \[\operatorname{Tot}Y_{\star}^{\star}=\underline{\operatorname{Hom}}_{ \operatorname{csSet}}(\Delta[\star],Y_{\star}^{\star}).\] Morally, this is a version of the tensor-hom adjunction. Using this, it can be proven that \(\operatorname{Tot}\) is also given by an equaliser in \(\operatorname{sSet}\): \[\operatorname{Tot}Y_{\star}^{\star}=\operatorname{eq}\left(\prod_{p} \operatorname{Hom}_{\operatorname{sSet}}(\Delta[p],Y_{\star}^{p})=\prod_{[p] -[q]}\operatorname{Hom}_{\operatorname{sSet}}(\Delta[p],Y_{\star}^{q})\right)\] (for details, see [20, Definition 18.6.3]). With this definition of \(\operatorname{Tot}\) as an equaliser, we can show the following: _a point in \(\operatorname{Tot}Y_{\star}^{\star}\) consists of \((y^{0},y^{1},y^{2},\ldots)\), with \(y^{p}\in Y_{p}^{p}\), such that_ * _the images of_ \(y^{p}\) _under the coface maps_ \(f_{p+1}^{i}\colon Y_{\star}^{p}\to Y_{\star}^{p+1}\) _are exactly the_ \(p\)_-dimensional faces of_ \(y^{p+1}\)_; and_ * _the images of_ \(y^{p}\) _under the codegeneracy maps_ \(s_{i}^{p-1}\colon Y_{\star}^{p}\to Y_{\star}^{p-1}\) _are exactly (up to degeneracy)_ \(y^{p-1}\)_._ (cf. Figure 2.3.i). More generally, we can show that a \(k\)-simplex in \(\operatorname{Tot}Y\) consists of morphisms \(\Delta[k]\times\Delta[p]\to Y_{\star}^{p}\) for \(p\in\mathbb{N}\) such that some analogous conditions hold. This all follows from the definition of the totalisation as \(\underline{\operatorname{Hom}}_{\operatorname{csSet}}(\Delta[\star],-)\) along with the description as an equaliser. For a worked example of \(1\)-simplices in the totalisation, i.e. for morphisms from \(\Delta[1]\times\Delta[p]\), see Appendix B.2 (or Appendix A, though the situation there is rather more trivial); for the more general case, see [11, Appendix B]. Figure 2.3.i. 
_Visualising a point \(y=(y^{0},y^{1},y^{2},\ldots)\) in the totalisation of a cosimplicial simplicial set \(Y_{\star}^{\star}\). For aesthetic purposes, we have not drawn the codegeneracy maps, nor anything above degree \(2\)._ ### The Cech nerve and the categorical nerve **Definition 2.4.1**.: Given a topological space \(X\) with a cover \(\mathcal{U}=\{U_{\alpha}\}_{\alpha\in I}\), we define the _Cech nerve_ of the pair \((X,\mathcal{U})\) to be the simplicial space \((\tilde{\mathcal{N}}\mathcal{U})\), \(\in[\Delta^{\mathrm{op}},\mathrm{Space}]\) whose \(p\)-simplices are given by the disjoint union of all \(p\)-fold intersections, i.e. \[(\tilde{\mathcal{N}}\mathcal{U})_{p}=\coprod_{\alpha_{0},\ldots,\alpha_{p}\in I }U_{\alpha_{0}\ldots\alpha_{p}}\] and where the face (resp. degeneracy) maps are given by dropping (resp. repeating) indices. \(\lrcorner\) **Definition 2.4.2**.: Given a category \(\mathcal{C}\), we define the _ordinary nerve_ (or simply the _nerve_) to be the simplicial set \((\mathcal{N}\mathcal{C})\), whose \(p\)-simplices are sequences of length \(p\) of composable morphisms, i.e. \[(\mathcal{N}\mathcal{C})_{p}=\left\{x_{0}\xrightarrow{f_{1}}x_{1}\xrightarrow {f_{2}}\ldots\xrightarrow{f_{p}}x_{p}\mid f_{i}\in\mathrm{Hom}_{\mathcal{C}}( x_{i-1},x_{i})\right\}_{x_{0},\ldots,x_{p}\in\mathcal{C}}\] where, for \(p=0\), such a sequence is simply a single object of \(\mathcal{C}\), and where the face (resp. degeneracy) maps are given by composing morphisms (resp. inserting identity morphisms). \(\lrcorner\) Given a \(p\)-simplex \((f_{1},\ldots,f_{p})\) in \(\mathcal{N}\mathcal{C}\), we can "fill it out" by taking the \(1\)-skeleton of the convex hull of an affinely independent embedding of \(p+1\) points (as described in Section 2.1), labelling the \(i\)th point with the domain of \(f_{i+1}\) (or, equivalently, the codomain of \(f_{i}\)), labelling the edge connecting the \((i-1)\)th point to the \(i\)th point with \(f_{i}\), and labelling any remaining edges with the composition of the other two morphisms on the same triangular face, so that all triangles commute. That is, we label the _spine_ of the standard \(p\)-simplex, and use the fact that the constituent \(1\)-simplices are exactly the _generating_ simplices. For example, given a \(2\)-simplex \[x_{0}\xrightarrow{f_{1}}x_{1}\xrightarrow{f_{2}}x_{2},\] we obtain the labelling of (the \(1\)-skeleton of \(\Delta[2]\). By "filling out" (or "blowing up") the nerve like this, the length-\(p\) sequence of morphisms uniquely determines the other \(\binom{p}{2}\) by composition, giving \(p+\binom{p}{2}=\binom{p}{1}+\binom{p}{2}=\binom{p+1}{2}\) many morphisms in total. **Definition 2.4.3**.: Whenever we think of the nerve in this way, we refer to it as the _blown-up nerve_. \(\lrcorner\) ### The dg-nerve and Maurer-Cartan elements Throughout this section, and the rest of the paper, whenever we speak of _complexes_, we mean _bounded, non-negatively graded, cochain complexes_. **Definition 2.5.1**.: A _dg-category_ is a category enriched in complexes. That is, a category \(\mathcal{D}\) such that the \(\operatorname{hom}\)-set \(\operatorname{Hom}_{\mathcal{D}}(x,y)\) is actually a complex for any \(x,y\in\mathcal{D}\), with differential denoted by \(\partial\), and such that composition is associative, unital, bilinear, and satisfies the Leibniz rule (for details, see e.g. [13]). 
We often say _dg-category of complexes_ to mean a dg-category whose objects are cochain complexes of some objects in an abelian category \(\mathcal{A}\), and whose morphisms are degree-wise morphisms in \(\mathcal{A}\), and whose hom-differential is given by the standard formula. More precisely, a morphism \(f^{\star}\in\operatorname{Hom}^{p}(C^{\star},D^{\star})\) of degree \(p\) consists of morphisms \(f^{n}\colon C^{n}\to D^{n+p}\) (not necessarily commuting with the differentials \(\operatorname{d}_{C}\) and \(\operatorname{d}_{D}\)), and the differential \(\partial\colon\operatorname{Hom}^{p}(C^{\star},D^{\star})\to\operatorname{ Hom}^{p+1}(C^{\star},D^{\star})\) is given by defining \(\partial f\) as consisting of the morphisms \((\partial f)^{n}\coloneqq f^{n+1}\circ\operatorname{d}_{C}+(-1)^{p+1} \operatorname{d}_{D}\circ f^{n}\colon C^{n}\to D^{n+p+1}\). \(\lrcorner\) **Definition 2.5.2**.: Let \(\mathcal{D}\) be a dg-category. We define the _dg-nerve_ of \(\mathcal{D}\) to be the simplicial set \(\mathcal{N}^{\operatorname{dg}}\mathcal{D}\) constructed as follows. * The \(0\)-simplices of \(\mathcal{N}^{\operatorname{dg}}\mathcal{D}\) are labellings2 of the standard \(0\)-simplex by objects of \(\mathcal{D}\), i.e. \((\mathcal{N}^{\operatorname{dg}}\mathcal{D})_{0}\) is in bijection with \(\operatorname{Ob}\mathcal{D}\). Footnote 2: We use this language of _labellings_ to be consistent with the constructions later on. It is entirely equivalent, however, to the more standard way of phrasing the definition: “\((\mathcal{N}^{\operatorname{dg}}\mathcal{D})_{1}\)_consists of triples \((x_{0},x_{1},f_{(0<1)})\), where \(x_{i}\in\mathcal{D}\) and \(f_{(0<1)}\colon x_{0}\to x_{1}\) is of degree \(0\), such that \(\ldots\)”. Figure 2.4.i. Given a sequence of three composite morphisms, we can fold them to lie along the spine of the \(3\)-simplex, and then (uniquely) label the rest of the \(1\)-skeleton using the compositions. * The \(1\)-simplices of \(\mathcal{N}^{\mathrm{dg}}\mathcal{D}\) are labellings of the standard \(1\)-simplex \(\{0<1\}\) by morphisms \(f_{(0<1)}\in\mathrm{Hom}_{\mathcal{D}}^{0}(x_{1},x_{0})\), where \(x_{i}\) labels the \(0\)-face \(\{i\}\subset\{0<1\}\), such that \(\partial f_{(0<1)}=0\) (i.e. such that \(f_{0<1}\) is a chain map: it commutes with the differentials). * The \(2\)-simplices of \(\mathcal{N}^{\mathrm{dg}}\mathcal{D}\) are labellings of the standard \(2\)-simplex \(\{0<1<2\}\) by morphisms \(f_{(0<1<2)}\in\mathrm{Hom}_{\mathcal{D}}^{-1}(x_{2},x_{0})\) (where \(x_{i}\) labels the \(0\)-face \(\{i\}\), and \(f_{i<j}\) labels the \(1\)-face \(\{i<j\}\)) such that \(\partial f_{\{0<1<2\}}=f_{\{0<2\}}-f_{\{1<2\}}f_{\{0<1\}}\). 
* Generally, for \(p\geq 2\), the \(p\)-simplices of \(\mathcal{N}^{\mathrm{dg}}\mathcal{D}\) are labellings of every (non-degenerate) face of the standard \(p\)-simplex; the vertex corresponding to the singleton subset \(\{i\}\subset[p]\) is labelled by some object \(x_{i}\) of \(\mathcal{D}\); for \(k\geq 1\), the \(k\)-dimensional face corresponding to some \(I=\{i_{0}<i_{1}<\ldots<i_{k}\}\subseteq[p]\) is labelled by a morphism \(f_{I}\in\mathrm{Hom}_{\mathcal{D}}^{1-k}(x_{i_{k}},x_{i_{0}})\); for all non-empty \(I\subseteq[p]\) with \(|I|-1=k\geq 2\), the following relation is satisfied: \[\partial f_{I}=\sum_{j=1}^{k-1}(-1)^{j-1}f_{I\setminus\{i_{j}\}}+\sum_{j=1}^{k -1}(-1)^{k(j-1)+1}f_{\{i_{0}<\ldots<i_{j}\}}\circ f_{\{i_{j}<\ldots<i_{k}\}}.\] (2.5.2.1) The face maps are given by the "topological" face maps of simplices: since a \(p\)-simplex in the dg-nerve is, in particular, a labelling of the \(p\)-simplex, we obtain face maps by simply looking at the data that labels the faces, which are \((p-1)\)-simplices. The degeneracy maps are given by inserting identity morphisms. For details, see [14, Definition 2.8, Proposition 2.9, Corollary 2.10]. (N.B. the sign convention differs from that of [13, Construction 1.3.1.6] and [15, Tag 00PL]). Note also that the direction of the morphisms is "backwards", in that \(f_{i_{0}<\ldots<i_{k}}\) goes from \(x_{i_{k}}\) to \(x_{i_{0}}\), cf. Remark 2.5.3. \(\lrcorner\) In the case where \(\mathcal{D}\) is a dg-category of complexes, we could also think of the \(0\)-simplices as being labelled by morphisms \(f_{\{0\}}\in\mathrm{Hom}^{1}(x_{0},x_{0})\), which would be exactly the differentials of the complex \(x_{0}\). **Remark 2.5.3**.: In this paper, the morphisms in the dg-nerve go in the "backwards" direction, but this is purely a matter of convention: the dg-nerve commutes with \((-)^{\mathrm{op}}\) up to isomorphism. Furthermore, since we are almost exclusively interested in the _core_ of the dg-nerve, we can even use the fact that every quasi-groupoid is equivalent to its opposite. \(\lrcorner\) The following lemma ensures that we can indeed talk of the maximal Kan complex of the dg-nerve of a dg-category. **Lemma 2.5.4** ([13, Proposition 1.3.1.10]).: _Let \(\mathcal{D}\) be a dg-category. Then the simplicial set \(\mathcal{N}^{\mathrm{dg}}\mathcal{D}\) is a quasi-category._ **Definition 2.5.5**.: Every dg-category \(\mathcal{D}\) has an underlying "ordinary" category \(K_{0}\mathcal{D}\), where \[\operatorname{Hom}_{K_{0}\mathcal{D}}(x,y)=\{f\in\operatorname{Hom}_{\mathcal{ D}}(x,y)\,|\,\partial f=0\}.\] For example, when \(\mathcal{D}\) is a dg-category of chain complexes, the morphisms in \(K_{0}\mathcal{D}\) are exactly chain maps, i.e. those that commute with the differential. For notational simplicity, we write \(\mathcal{N}\mathcal{D}\) to mean \(\mathcal{N}(K_{0}\mathcal{D})\). **Lemma 2.5.6**.: _Let \(\mathcal{D}\) be a dg-category of cochain complexes of modules. Then, in the notation of Definition 2.5.2,_ 1. _the ordinary nerve_ \(\mathcal{N}\mathcal{D}\) _sits inside_3 _the dg-nerve_ \(\mathcal{N}^{\operatorname{dg}\mathcal{D}}\) _as the simplicial set of labellings with_ \(f_{I}=0\) _for_ \(|I|\geq 3\)_;_ Footnote 3: Taking Remark 2.5.3 into account, we really mean “the nerve of the \(\mathcal{D}^{\operatorname{op}}\). 2. _the maximal Kan complex_ \([\mathcal{N}^{\operatorname{dg}\mathcal{D}}]\) _of the ordinary nerve is given by requiring that the_ \(f_{\{0<1\}}\) _be isomorphisms; and_ 3. 
_the maximal Kan complex_ \([\mathcal{N}^{\operatorname{dg}\mathcal{D}}]\) _of the dg-nerve is given by requiring that the_ \(f_{\{0<1\}}\) _be quasi-isomorphisms._ Proof.: 1. (cf. [17, Remark 1.3.1.9]). This is immediate from Definition 2.5.5, since \(\mathcal{N}\mathcal{D}:=\mathcal{N}(K_{0}\mathcal{D})\) already consists of morphisms \(f\) such that \(\partial f=0\), so this satisfies the relevant condition in Definition 2.5.2. 2. Note that the simplicial set defined by requiring the \(f_{\{0<1\}}\) to be isomorphisms is exactly the ordinary nerve of the maximal groupoid \(\mathcal{D}^{\prime}\) of \(\mathcal{D}\), and thus a Kan complex. So let \(X_{\star}\) be a Kan complex such that \(\mathcal{N}\mathcal{D}^{\prime}\subseteq X_{\star}\subseteq\mathcal{N} \mathcal{D}\). This immediately implies that \(X_{0}=\operatorname{ob}\mathcal{D}^{\prime}=\operatorname{ob}\mathcal{D}\). Then, since \(X_{\star}\) is Kan, in particular, the outer \(2\)-horns fill, i.e. for any \(f\in X_{1}\), there exist \(g_{I},g_{r}\in X_{1}\) such that \(g_{I}\circ f=\operatorname{id}=f\circ g_{r}\), whence \(f\) is an isomorphism, since \(f^{-1}=g=g_{I}=g_{r}\). That is, \(X_{1}\subseteq(\mathcal{N}\mathcal{D}^{\prime})_{1}\). But then, since the nerve of a category is built entirely from \(1\)-simplices (i.e. is \(2\)-coskeletal), this implies that \(X_{p}\subseteq(\mathcal{N}\mathcal{D}^{\prime})_{p}\) for all \(p\in\mathbb{N}\), whence \(\mathcal{N}\mathcal{D}^{\prime}\) is maximal amongst Kan complexes contained inside \(\mathcal{N}\mathcal{D}\). 3. Since the dg-nerve is a quasi-category [17, Proposition 1.3.1.10], this follows from [11, Corollary 1.5]. The purpose of the rest of this section is to state Theorem 2.5.17, which is a generalisation of some results found in [10]. We take the time to restate and reprove the main result _loc. cit._ which we are generalising, in a way consistent with the notation used in this current paper, to save the reader the effort of translating from one setting to another. **Lemma 2.5.7** ([10, Lemma 2.7]).: _Let \(\mathcal{D}\) be a dg-category, and let \(X=X_{\star}\) be a simplicial set. Write \(\operatorname{ver}_{i}K\) to mean the \(i\)th vertex of a \(p\)-simplex \(K\in X_{p}\). Then the following are equivalent:_ 1. _a morphism_ \(F\colon X\to\mathcal{N}^{\operatorname{dg}}(\mathcal{D})\) _of simplicial sets; and_ _;_ 2. _the data of an object_ \(c_{x}\in\mathcal{D}\) _for each_ \(0\)_-simplex_ \(x\in X_{0}\)_, along with a morphism_ \(f_{K}\in\operatorname{Hom}^{1-p}(c_{\operatorname{ver}_{p}K},c_{\operatorname{ ver}_{0}K})\) _for each_ \(p\)_-simplex_ \(K\in X_{p}\) _for all_ \(p\geq 1\)_, such that_ \[\partial f_{K}=\sum_{j=1}^{p-1}(-1)^{j-1}f_{K\setminus\{\operatorname{ver}_{j} K\}}+\sum_{j=1}^{p-1}(-1)^{p(j-1)+1}f_{\{\operatorname{ver}_{0}K<\ldots< \operatorname{ver}_{j}K\}}\circ f_{\{\operatorname{ver}_{j}K<\ldots< \operatorname{ver}_{p}K\}}\] (2.5.7.1) _where the right-hand side is taken to be zero if_ \(p=1\)_._ The key difference between the equation found in the definition of the dg-nerve (2.5.2.1) and the above (2.5.7.1) is that the former concerns morphisms labelled by _abstract_ simplices, for _all_ (non-empty) faces \(I\subseteq[p]\), whereas the latter concerns morphisms labelled by _simplices of \(X_{\bullet}\)_, and makes _no (direct) reference_ to faces/sub-simplices. The moral of Lemma 2.5.7, however, is that these two descriptions give the same result. 
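For concreteness, the case \(p=3\) of (2.5.7.1) — equivalently, the case \(I=\{0<1<2<3\}\) of (2.5.2.1), the first case beyond the one spelled out in Definition 2.5.2 — reads as follows for a \(3\)-simplex \(K\in X_{3}\): the morphism \(f_{K}\in\operatorname{Hom}^{-2}(c_{\operatorname{ver}_{3}K},c_{\operatorname{ver}_{0}K})\) must satisfy \[\partial f_{K}=f_{K\setminus\{\operatorname{ver}_{1}K\}}-f_{K\setminus\{\operatorname{ver}_{2}K\}}-f_{\{\operatorname{ver}_{0}K<\operatorname{ver}_{1}K\}}\circ f_{\{\operatorname{ver}_{1}K<\operatorname{ver}_{2}K<\operatorname{ver}_{3}K\}}+f_{\{\operatorname{ver}_{0}K<\operatorname{ver}_{1}K<\operatorname{ver}_{2}K\}}\circ f_{\{\operatorname{ver}_{2}K<\operatorname{ver}_{3}K\}},\] which is simply the stated relation with the signs unwound, rewritten in terms of the faces of \(K\).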
Proof.: Let \(F\colon X\to\mathcal{N}^{\operatorname{dg}}(\mathcal{D})\) be a morphism of simplicial sets. Then, by Definition 2.5.2, for any \(p\geq 1\), any \(p\)-simplex \(K\in X_{p}\), and any \(I=\{i_{0}<i_{1}<\ldots<i_{k}\}\subseteq[p]\) with \(1\leq k\leq p\), there exist \(f_{I}\in\operatorname{Hom}^{1-k}(c_{\operatorname{ver}_{k}I},c_{\operatorname{ver}_{0}I})\) satisfying (2.5.7.1). In particular then, for \(k=p\) (and thus for \(\{i_{0}<i_{1}<\ldots<i_{k}\}=[p]\)), we have exactly the data given in the statement of the lemma. Indeed, the difficulty lies in showing the converse: that having such data _only_ for \(k=p\) is enough to recover all lower-dimensional data in a functorial way. Assume that we have the data of the \(c_{x}\) and the \(f_{K}\), satisfying (2.5.7.1), as stated in the lemma; this gives us a map \[F_{p}\colon K\mapsto(\{c_{\operatorname{ver}_{i}K}\}_{0\leq i\leq p},\{f_{K}\})\] for all \(p\)-simplices \(K\in X_{p}\), for all \(p\geq 1\); our goal is to _extend_ this to a _functorial_ map \[\overline{F_{p}}\colon X_{p} \to\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{p}\] \[K \mapsto(\{c_{\operatorname{ver}_{i}K}\}_{0\leq i\leq p},\{f_{|I|}\}_{I\subseteq[p]})\] (where \(|I|\) denotes the face of \(K\) defined by \(I\)), i.e. to extend the singleton set \(\{f_{K}\}\) to a set \(\{f_{|I|}\}_{I\subseteq[p]}\) such that all the \(f_{|I|}\) satisfy (2.5.2.1), in such a way that \(\overline{F_{\bullet}}\) is functorial. So fix \(K\in X_{p}\) and \(I=\{i_{0}<\ldots<i_{k}\}\subset[p]\) for some \(k<p\). Let \(\sigma\colon[k]\to[p]\) be given by the composition of the coface maps \(f^{j_{n}}\) where \(j_{n}\in[p]\setminus I\), i.e. \(\sigma\colon m\mapsto i_{m}\) for all \(0\leq m\leq k\). Since \(\sigma\) is injective, it induces the morphism \[\mathcal{N}^{\operatorname{dg}}(\sigma)\colon\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{p} \to\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{k}\] \[\left(\{x_{i}\}_{0\leq i\leq p},\{f_{|L|}\}_{L\subseteq[p]}\right) \mapsto\left(\{x_{i_{m}}\}_{0\leq m\leq k},\{f_{|\sigma(M)|}\}_{M\subseteq[k]}\right)\] and so we define \[f_{|I|}\coloneqq\mathcal{N}^{\operatorname{dg}}(\sigma)(F_{p}(K))\] which, by construction, is such that \(f_{|I|}=F_{k}(X_{\bullet}(\sigma)(K))\).
That is, we have the commutative diagram \[\begin{array}{ccc}X_{p}&\xrightarrow{\;\overline{F_{p}}\;}&\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{p}\\ \big\downarrow{\scriptstyle X_{\bullet}(\sigma)}&&\big\downarrow{\scriptstyle\mathcal{N}^{\operatorname{dg}}(\sigma)}\\ X_{k}&\xrightarrow{\;\overline{F_{k}}\;}&\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{k}\end{array}\] which tells us that the \(\overline{F_{\star}}\) thus defined is indeed functorial. What remains to be shown is that this definition of \(f_{|I|}\) satisfies the equation (2.5.2.1) in Definition 2.5.2. But, by commutativity again, \[\mathcal{N}^{\mathrm{dg}}(\sigma)(F_{p}(K))=F_{k}(X_{\bullet}(\sigma)(K))=\big(\{c_{\mathrm{ver}_{m}|I|}\}_{0\leq m\leq k},\{f_{|I|}\}\big)\] which satisfies (2.5.7.1), and thus (under the identification \(|I|\leftrightarrow I\), since \(K\) is fixed) (2.5.2.1). An analogous argument shows that, if \(K\in X_{p}\) is a _degenerate_ simplex, then the morphism \(f_{K}\) given by \(F_{k}(K)\) is the identity if \(p=0\), and is the zero morphism if \(p\geq 1\).
**Corollary 2.5.8**.: _The image of the morphism \(F:X\to\mathcal{N}^{\mathrm{dg}}(\mathcal{D})\) defined by the data of (ii) in Lemma 2.5.7 lies in the maximal Kan complex \([\mathcal{N}^{\mathrm{dg}}(\mathcal{D})]\) if and only if, for all \(1\)-simplices \(K\in X_{1}\), the morphisms \(f_{K}\) are quasi-isomorphisms._

Proof.: This follows from Lemma 2.5.6.

**Definition 2.5.9**.: Let \(\mathcal{D}\) be a dg-category, and let \(X_{\star}\) be a simplicial set. Then a _labelling of the \(0\)-simplices of \(X_{\star}\) by objects of \(\mathcal{D}\)_ is a map of sets \(\mathcal{L}:X_{0}\to\mathcal{D}\). Alternatively, we can think of such a labelling as a set \(\mathcal{L}=\{c_{x}\in\mathcal{D}\}_{x\in X_{0}}\).

**Definition 2.5.10**.: Let \(\mathcal{D}\) be a dg-category of chain complexes, and let \(X=X_{\star}\) be a simplicial set. Fix some labelling \(\mathcal{L}=\{c_{x}\in\mathcal{D}\}_{x\in X_{0}}\) of the \(0\)-simplices of \(X_{\star}\) by objects of \(\mathcal{D}\). We define a bigraded dg-algebra \(C^{\star,\star}(X,\mathcal{D};\mathcal{L})\) by setting \[C^{p,q}(X,\mathcal{D};\mathcal{L})=\left\{\big(f_{K}\in\mathrm{Hom}^{q}_{\mathcal{D}}(c_{\mathrm{ver}_{p}K},c_{\mathrm{ver}_{0}K})\big)_{K\in X_{p}}\right\}\] for \(p\geq 1\) and \(q\in\mathbb{Z}\), and define a (_deleted_) _Cech differential_ by Footnote 4: The missing \(p=0\) term corresponds to the already existing (internal) differential \(\partial\) of \(\mathcal{D}\), and the fact that we prescribe the degree-\(0\) part separately as the labelling \(\mathcal{L}\). \[\hat{\delta}\colon C^{p,q}(X,\mathcal{D};\mathcal{L}) \longrightarrow C^{p+1,q}(X,\mathcal{D};\mathcal{L})\] \[(f_{K})_{K\in X_{p}} \longmapsto\left(\sum_{i=1}^{p}(-1)^{i}f_{L\setminus\{\mathrm{ver}_{i}L\}}\right)_{L\in X_{p+1}}\] and an _internal differential_ by \[\partial\colon C^{p,q}(X,\mathcal{D};\mathcal{L}) \longrightarrow C^{p,q+1}(X,\mathcal{D};\mathcal{L})\] \[(f_{K})_{K\in X_{p}} \longmapsto((-1)^{q+1}\partial f_{K})_{K\in X_{p}}\] (where \(\partial f_{K}\) is given by the dg-structure of \(\mathcal{D}\), with sign conventions as in Definition 2.5.1), with graded multiplication \[C^{p,q}(X,\mathcal{D};\mathcal{L})\times C^{r,s}(X,\mathcal{D};\mathcal{L}) \longrightarrow C^{p+r,q+s}(X,\mathcal{D};\mathcal{L})\] \[\big((f_{K})_{K\in X_{p}},(g_{L})_{L\in X_{r}}\big) \longmapsto\big((-1)^{qr}f_{\{\mathrm{ver}_{0}M<\ldots<\mathrm{ver}_{p}M\}}\circ g_{\{\mathrm{ver}_{p}M<\ldots<\mathrm{ver}_{p+r}M\}}\big)_{M\in X_{p+r}}.\] We then define the _total complex_ \(\operatorname{Tot}^{\star}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\) by \[\operatorname{Tot}^{n}(C^{p,q}(X,\mathcal{D};\mathcal{L}))=\bigoplus_{p+q=n}C^{p,q}(X,\mathcal{D};\mathcal{L})\] with _total differential_ \[\operatorname{D}\colon\operatorname{Tot}^{n}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\longrightarrow\operatorname{Tot}^{n+1}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\] defined by \(\operatorname{D}=\hat{\delta}+(-1)^{p}\partial\). In words, * the degree-\((p,q)\) elements of \(C^{\star,\star}(X,\mathcal{D};\mathcal{L})\) are labellings of the \(p\)-simplices of \(X_{\bullet}\)
by morphisms \(f\colon c_{x}\to c_{y}[-q]\) of \(\mathcal{D}\), where \(x\) is the first vertex of the \(p\)-simplex and \(y\) is the last; * the differential \(\hat{\delta}\) of degree \((1,0)\) is given by taking the alternating sum of the morphisms labelling the \(p\)-simplices given by removing the \(i\)th vertex for \(i=1,2,\ldots,k\) (but _not_ for \(i=0\) or \(i=k+1\)); * the differential \(\partial\) of degree \((0,1)\) is given by simply applying the differential of \(\mathcal{D}\) coming from its dg-structure, with a global sign depending on the degree of \(f\); * the graded multiplication takes an element \(f=(f_{K})\) of degree \((p,q)\) and an element \(g=(g_{L})\) of degree \((r,s)\) and gives an element \(f\cdot g\) of degree \((p+r,q+s)\) by labelling the \((p+r)\)-simplices as follows: split each \((p+r)\)-simplex into a _front half_ (given by taking the first \(p+1\) vertices) and a _back half_ (given by taking the last \(r+1\) vertices), and then label the front half with \(f\) and the back half with \(g\) (with some global sign depending on the degrees of \(f\) and \(g\)).

**Definition 2.5.11**.: An element \(f\in\operatorname{Tot}^{\star}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\) is said to be _Maurer-Cartan_ if it satisfies the Maurer-Cartan equation: \[\operatorname{D}f+f\cdot f=0.\] Note that, for degree reasons, all Maurer-Cartan elements are of the form \[f=\big(f_{p}\in C^{p,1-p}(X,\mathcal{D};\mathcal{L})\big)_{p\geq 1}\] and so \(\deg f=1\). That is, the Maurer-Cartan elements of \(\operatorname{Tot}^{\star}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\) are exactly the Maurer-Cartan elements of \(\operatorname{Tot}^{1}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\).

**Theorem 2.5.12** ([1, Corollary 3.5]).: _Let \(\mathcal{D}\) be a dg-category of cochain complexes of modules, let \(X=X_{\bullet}\) be a simplicial set, and let \(\mathcal{L}=\{c_{x}\in\mathcal{D}\}_{x\in X_{0}}\) be a labelling of the \(0\)-simplices of \(X_{\bullet}\) by \(\mathcal{D}\). Then there is a bijection_ \[\Big\{f\in\operatorname{Tot}^{1}(C^{p,q}(X,\mathcal{D};\mathcal{L}))\mid\operatorname{D}f+f\cdot f=0\Big\}\longrightarrow\Big\{F\colon X\rightarrow\mathcal{N}^{\operatorname{dg}}(\mathcal{D})\mid F(x)=c_{x}\text{ for all }x\in X_{0}\Big\}\] _between Maurer-Cartan elements of \(C^{\star,\star}(X,\mathcal{D};\mathcal{L})\) and morphisms of simplicial sets from \(X_{\bullet}\) to the dg-nerve of \(\mathcal{D}\) that agree with the labelling \(\mathcal{L}\)._

Proof.: By Lemma 2.5.7, we know that an element of the set on the right-hand side (i.e. a morphism \(F\colon X\to\mathcal{N}^{\operatorname{dg}}(\mathcal{D})\) such that \(F(x)=c_{x}\) for all \(x\in X_{0}\)) is equivalent to the data of a morphism \(f_{K}\in\operatorname{Hom}^{1-p}(c_{\operatorname{ver}_{p}K},c_{\operatorname{ver}_{0}K})\) for each \(p\)-simplex \(K\in X_{p}\), for all \(p\geq 1\), such that (2.5.7.1) holds (unless \(K\) is degenerate, in which case \(f_{K}\) is the identity when \(p=0\) and the zero morphism when \(p\geq 1\)).
The collection of all these \(f_{K}\) is then an element of the bigraded dg-algebra: \[f_{p}=\big(f_{K}\in\operatorname{Hom}^{1-p}(c_{\operatorname{ver}_{p}K},c_{\operatorname{ver}_{0}K})\big)_{K\in X_{p}}\in C^{p,1-p}(X,\mathcal{D};\mathcal{L}).\] Now, by Definition 2.5.10, for \(p,r\geq 1\) \[f_{p}\cdot f_{r}=\big((-1)^{(1-p)r}f_{\{\operatorname{ver}_{0}M<\ldots<\operatorname{ver}_{p}M\}}\circ f_{\{\operatorname{ver}_{p}M<\ldots<\operatorname{ver}_{p+r}M\}}\big)_{M\in X_{p+r}}\] and \[\operatorname{D}(f_{p}) =\hat{\delta}f_{p}+(-1)^{p}\partial f_{p}\] \[=\left(\sum_{i=1}^{p}(-1)^{i}f_{L\setminus\{\operatorname{ver}_{i}L\}}\right)_{L\in X_{p+1}}+\big(\underbrace{(-1)^{p}(-1)^{(1-p)+1}}_{=1}\partial f_{K}\big)_{K\in X_{p}}\] whence, for \(\lambda\geq 2\), \[\big(\operatorname{D}f+f\cdot f\big)_{\lambda} =\bigg(\sum_{j=1}^{\lambda-1}(-1)^{j}f_{M\setminus\{\operatorname{ver}_{j}M\}}\] \[\quad+\partial f_{M}\] \[\quad+\sum_{j=1}^{\lambda-1}(-1)^{(1-j)(\lambda-j)}f_{\{\operatorname{ver}_{0}M<\ldots<\operatorname{ver}_{j}M\}}\circ f_{\{\operatorname{ver}_{j}M<\ldots<\operatorname{ver}_{\lambda}M\}}\bigg)_{M\in X_{\lambda}}\] but (2.5.7.1) says that \[\partial f_{M}=-\left(\sum_{j=1}^{\lambda-1}(-1)^{j}f_{M\setminus\{\operatorname{ver}_{j}M\}}+\sum_{j=1}^{\lambda-1}(-1)^{\lambda(j-1)}f_{\{\operatorname{ver}_{0}M<\ldots<\operatorname{ver}_{j}M\}}\circ f_{\{\operatorname{ver}_{j}M<\ldots<\operatorname{ver}_{\lambda}M\}}\right)\] and so it suffices to show that \[(-1)^{\lambda(j-1)}=(-1)^{(1-j)(\lambda-j)}\] but \((1-j)\equiv(j-1)\mod 2\), and \(j(j-1)\equiv 0\mod 2\). This means that \((\operatorname{D}f+f\cdot f)_{\lambda}=0\) for all \(\lambda\geq 2\), and so \(\operatorname{D}f+f\cdot f=0\). Conversely, if we start with some Maurer-Cartan element \(f=(f_{p})\), then, by the exact same argument, the fact that \(\operatorname{D}f+f\cdot f=0\) is satisfied implies that the collection of the \(f_{K}\) that define the \(f_{p}\) also satisfy (2.5.7.1).

A fundamental example of Theorem 2.5.12 is given by taking \(X_{\bullet}\) to be the prototypical simplicial set \(\Delta[n]=\operatorname{Hom}_{\Delta}(-,[n])\) for some fixed \(n\in\mathbb{N}\). By the Yoneda lemma, \[\operatorname{Hom}_{\operatorname{sSet}}\big(\Delta[n],\mathcal{N}^{\operatorname{dg}}(\mathcal{D})\big)\cong\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{n}\] whence the following corollary.

**Corollary 2.5.13**.: _With the notation and hypotheses of Theorem 2.5.12, we have a bijection_ \[\left\{\text{Maurer-Cartan elements of }\operatorname{Tot}^{1}\left(C^{p,q}(\Delta[n],\mathcal{D};K)\right)\right\}\longleftrightarrow\left\{\text{$n$-simplices }K\in\mathcal{N}^{\operatorname{dg}}(\mathcal{D})_{n}\right\}\] _where the \(n\)-simplex \(K\) defines the labelling \(\{i\}\mapsto\operatorname{ver}_{i}K\)._

For our purposes, we need a version of Definition 2.5.10 that is both generalised and specialised: generalising a single dg-category to a _presheaf_ of dg-categories, but specialising to the specific example where the simplicial set is the Cech nerve.

**Definition 2.5.14**.: Let \(\mathcal{D}\) be a presheaf of dg-categories on the category of spaces, and \(\tilde{\mathcal{N}}\mathcal{U}_{\bullet}\) the Cech nerve of the cover \(\mathcal{U}\) of some space \(X\). Fix some labelling \(\mathcal{L}=\{c_{\alpha}\in\mathcal{D}(U_{\alpha})\}_{U_{\alpha}\in\mathcal{U}}\) of \(\tilde{\mathcal{N}}\mathcal{U}_{0}\) by \(\mathcal{D}\).
Footnote 5: Note that this usage of "labelling" is slightly more general than Definition 2.5.9, since each label \(c_{\alpha}\) lives in a different dg-category \(\mathcal{D}(U_{\alpha})\).

We define a bigraded dg-algebra \(\hat{\mathcal{C}}^{\star,\star}(\mathcal{U},\mathcal{D};\mathcal{L})\), which we call the _(deleted) Cech (bi)algebra_, by setting \[\hat{\mathcal{C}}^{p,q}(\mathcal{U},\mathcal{D};\mathcal{L})=\left\{\left(f_{\alpha_{0}\ldots\alpha_{p}}\in\operatorname{Hom}^{q}_{\mathcal{D}(U_{\alpha_{0}\ldots\alpha_{p}})}(c_{\alpha_{p}}|U_{\alpha_{0}\ldots\alpha_{p}},c_{\alpha_{0}}|U_{\alpha_{0}\ldots\alpha_{p}})\right)_{U_{\alpha_{0}\ldots\alpha_{p}}\in\tilde{\mathcal{N}}\mathcal{U}_{p}}\right\}\] for \(p\geq 1\) and \(q\in\mathbb{Z}\). We then define the deleted Cech differential, internal differential, graded multiplication, and total differential entirely analogously to Definition 2.5.10, so that e.g. the graded multiplication is given by \[\hat{\mathcal{C}}^{p,q}(\mathcal{U},\mathcal{D};\mathcal{L})\times\hat{\mathcal{C}}^{r,s}(\mathcal{U},\mathcal{D};\mathcal{L}) \longrightarrow\hat{\mathcal{C}}^{p+r,q+s}(\mathcal{U},\mathcal{D};\mathcal{L})\] \[\left((f_{\alpha_{0}\ldots\alpha_{p}})_{U_{\alpha_{0}\ldots\alpha_{p}}},(g_{\beta_{0}\ldots\beta_{r}})_{U_{\beta_{0}\ldots\beta_{r}}}\right) \longmapsto\left((-1)^{qr}f_{\gamma_{0}\ldots\gamma_{p}}\circ g_{\gamma_{p}\ldots\gamma_{p+r}}\right)_{U_{\gamma_{0}\ldots\gamma_{p+r}}}\] and the deleted Cech differential is given by \[\hat{\delta}\colon\hat{\mathcal{C}}^{p,q}(\mathcal{U},\mathcal{D};\mathcal{L}) \rightarrow\hat{\mathcal{C}}^{p+1,q}(\mathcal{U},\mathcal{D};\mathcal{L})\] \[(f_{\alpha_{0}\ldots\alpha_{p}})_{U_{\alpha_{0}\ldots\alpha_{p}}} \mapsto\left(\sum_{i=1}^{p}(-1)^{i}f_{\alpha_{0}\ldots\widehat{\alpha_{i}}\ldots\alpha_{p+1}}\right)_{U_{\alpha_{0}\ldots\alpha_{p+1}}}\] where, as per usual, the hat denotes omission. Note that this is well defined since the deleted Cech differential preserves the first and last vertices of the simplex, so that \(f_{\alpha_{0}\ldots\widehat{\alpha}_{i}\ldots\alpha_{p}}\) is still a morphism from \(c_{\alpha_{p}}\) to \(c_{\alpha_{0}}\) for all \(0<i<p\). \(\lrcorner\)

**Remark 2.5.15**.: The very definition of the set \(\hat{\mathcal{C}}^{p,q}(\mathcal{U},\mathcal{D};\mathcal{L})\) in Definition 2.5.14 relies upon the fact that the Cech nerve gives us restriction maps \(\mathcal{D}(U_{\alpha_{i}})\rightarrow\mathcal{D}(U_{\alpha_{0}\ldots\alpha_{p}})\) induced by \(U_{\alpha_{0}\ldots\alpha_{p}}\hookrightarrow U_{\alpha_{i}}\). It would be possible to give the definition for arbitrary simplicial sets possessing this property, but we are only ever interested in the Cech nerve in this paper. \(\lrcorner\)

Something that will turn up in Section 2.8 is the idea of pulling back a simplicial presheaf along the opposite of the Cech nerve. There we will better motivate and justify the importance of this construction, but for now we content ourselves with its definition.
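Before giving that definition, it may help to record what the Maurer-Cartan equation of Definition 2.5.11 looks like in this Cech bialgebra in low degrees. Writing a degree-\(1\) element as \(f=(f_{p})_{p\geq 1}\) with \(f_{p}=(f_{\alpha_{0}\ldots\alpha_{p}})\in\hat{\mathcal{C}}^{p,1-p}(\mathcal{U},\mathcal{D};\mathcal{L})\), and unwinding the differentials and the multiplication exactly as in the proof of Theorem 2.5.12, the first two components of \(\operatorname{D}f+f\cdot f=0\) read \[\partial f_{\alpha_{0}\alpha_{1}}=0\qquad\text{and}\qquad\partial f_{\alpha_{0}\alpha_{1}\alpha_{2}}=f_{\alpha_{0}\alpha_{2}}-f_{\alpha_{0}\alpha_{1}}\circ f_{\alpha_{1}\alpha_{2}},\] with everything implicitly restricted to \(U_{\alpha_{0}\alpha_{1}}\) and \(U_{\alpha_{0}\alpha_{1}\alpha_{2}}\) respectively. That is, each \(f_{\alpha_{0}\alpha_{1}}\) is a chain map \(c_{\alpha_{1}}|U_{\alpha_{0}\alpha_{1}}\to c_{\alpha_{0}}|U_{\alpha_{0}\alpha_{1}}\), on triple intersections the composite \(f_{\alpha_{0}\alpha_{1}}\circ f_{\alpha_{1}\alpha_{2}}\) agrees with \(f_{\alpha_{0}\alpha_{2}}\) up to the specified homotopy \(f_{\alpha_{0}\alpha_{1}\alpha_{2}}\), and the higher components impose coherence conditions between these homotopies.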
**Definition 2.5.16**.: The opposite of the Cech nerve \(\hat{\mathscr{N}}^{\rm op}\colon{\rm Space}^{\rm op}_{\mathscr{U}}\to[\Delta,{\rm Space }^{\rm op}]\) is a functor from the category of spaces with a chosen cover to that of cosimplicial spaces, and so we can pre-compose any simplicial presheaf \(\mathscr{T}\colon{\rm Space}^{\rm op}\to{\rm sSet}\) with this to obtain a cosimplicial simplicial set \(\mathscr{T}(\hat{\mathscr{N}}\mathscr{U}_{\star})\) whenever we evaluate on any given space \(X\) with cover \(\mathscr{U}\). We call this process _evaluating \(\mathscr{T}\) on the Cech nerve of \(\mathscr{U}\)_. Note also that \(\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}}\mathscr{U}_{0})\) is a simplicial set concentrated in dimension \(0\), with \(\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}}\mathscr{U}_{0})_{0}\cong \mathscr{D}(\hat{\mathscr{N}}\mathscr{U}_{0})\) (as sets), by definition of the dg-nerve; the latter is exactly \(\mathscr{D}(\coprod_{a}U_{a})\) by definition of the Cech nerve. For our purposes, we will be interested in presheaves of dg-categories such that \(\mathscr{D}(\coprod_{a}U_{a})\cong\coprod_{a}\mathscr{D}(U_{a})\). Using this, we can prove a theorem that is to Definition 2.5.14 what Corollary 2.5.13 is to Definition 2.5.10. **Theorem 2.5.17**.: _Let \(\mathscr{D}\) be a presheaf of dg-categories of cochain complexes of modules on the category of spaces, let \(\hat{\mathscr{N}}\mathscr{U}_{\star}\) be the Cech nerve of the cover \(\mathscr{U}\) of some space \(X\), and let \(\mathscr{L}=\{c_{\alpha}\in\mathscr{D}(U_{\alpha})\}_{U_{\alpha}\in\mathscr{U}}\) be a labelling of \(\hat{\mathscr{N}}\mathscr{U}_{0}\) by \(\mathscr{D}\). Assume that \(\mathscr{D}\) turns disjoint unions into products, i.e. that \(\mathscr{D}(\sqcup_{\alpha}U_{\alpha})\cong\coprod_{\alpha}\mathscr{D}(U_{ \alpha})\) is a bijective-on-objects equivalence.6 Then there is a bijection_ Footnote 6: The one specific example of \(\mathscr{D}\) that we are interested in for the purposes of this current paper is that which sends a ringed space \((X,\wp_{X})\) to the dg-category of bounded complexes of free \(\wp_{X}\)-modules on \(X\). For this choice of \(\mathscr{D}\), and in the case where the ringed spaces have representable structure sheaves, it is indeed the case that \(\mathscr{D}(\coprod_{\alpha}U_{\alpha})\cong\coprod_{\alpha}\mathscr{D}(U_{ \alpha})\) is an _isomorphism_ of categories. However, “bijective-on-objects equivalence” seems to be preferable terminology. 
\[\left\{f\in{\rm Tot}^{1}(\hat{\mathscr{C}}^{p,q}(\mathscr{U},\mathscr{D}; \mathscr{L}))\,|\,{\rm D}\,f+f\cdot f=0\right\}\longleftrightarrow\left\{F \colon\Delta[\star]\to\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}} \mathscr{U}_{\star})\,|\,F(\Delta[0])=\mathscr{L}\right\}\] _between Maurer-Cartan elements of the Cech algebra \(\hat{\mathscr{C}}^{\star,\star}(\mathscr{U},\mathscr{D};\mathscr{L})\) and morphisms of cosimplicial simplicial sets from \(\Delta[\star]\) to \(\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}}\mathscr{U}_{\star})\) that send the unique non-degenerate \(0\)-simplex in \(\Delta[0]\) to the element \((c_{\alpha})_{U_{\alpha}\in\mathscr{U}}\in\prod_{\alpha}\mathscr{D}(U_{\alpha} )\simeq\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}}\mathscr{U}_{0})\)._ The proof of this theorem is given below, but the reader might find it helpful to also read the proof of Theorem 4.1.1 (b) (which is indeed the intended application for this result), where the specific case of degree \(2\) is spelled out in more detail. The main idea of the proof is deceptively simple. If we fix some cosimplicial degree \(q\in\mathbb{N}\) then a morphism \(F\) of cosimplicial simplicial sets simply becomes a morphism of simplicial sets, and we can apply Corollary 2.5.13 to obtain a bijection with Maurer-Cartan elements in the bialgebra from Definition 2.5.10. Then, using the fact that the Cech nerve gives us restriction maps, and that a morphism of cosimplicial simplicial sets is, in particular, functorial with respect to the cosimplicial structure, we can "glue together" all these Maurer-Cartan elements for each \(q\in\mathbb{N}\) to obtain the desired result. We now explain this in detail. Proof.: Let \(F\colon\Delta[\star]\to\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}} \mathscr{U}_{\star})\) be a morphism of cosimplicial simplicial sets that sends the unique non-degenerate \(0\)-simplex in \(\Delta[0]\) to the element \((c_{\alpha})_{U_{\alpha}\in\mathscr{U}}\). For each \(q\in\mathbb{N}\), we thus have a morphism \[F^{q}\colon\Delta[q]\to\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}} \mathscr{U}_{q})\] of simplicial sets, but, by the Yoneda lemma, this is exactly the data of a \(q\)-simplex of \(\mathscr{N}^{\rm dg}\mathscr{D}(\hat{\mathscr{N}}\mathscr{U}_{q})\), which we will also denote by \(F^{q}\). But since \(\mathscr{D}\) turns disjoint unions into products, and since the dg-nerve is a right adjoint and thus preserves products, such a \(q\)-simplex is exactly the data of a \(q\)-simplex of \(\mathcal{N}^{\mathrm{dg}}\mathcal{D}(U_{a_{0}\ldots a_{q}})\) for all \(U_{a_{0}\ldots a_{q}}\); we denote this by \[F^{q}=(F^{q}_{a_{0}\ldots a_{q}})_{U_{a_{0}\ldots a_{q}}\in\hat{\mathcal{N}} \mathcal{U}_{q}}.\] By Corollary 2.5.13, to each \(F^{q}_{a_{0}\ldots a_{q}}\) there corresponds a Maurer-Cartan element in \(\mathrm{Tot}^{1}C^{m,n}(\Delta[q],\mathcal{D}(U_{a_{0}\ldots a_{q}});F^{q}_{a_ {0}\ldots a_{q}})\). If we denote the \(0\)-simplices of some \(F^{q}_{a_{0}\ldots a_{q}}\) by \(x_{0},\ldots,x_{q}\) then the corresponding Maurer-Cartan element \(\varphi=(\varphi^{p})_{p\geq 1}\) has components \[\big{(}\varphi^{p}_{L}\in\mathrm{Hom}^{1-p}_{\mathcal{D}(U_{a_{0}-a_{q}})}(x_ {\mathrm{ver}_{p+1}L},x_{\mathrm{ver}_{0}L})\big{)}_{L\in\Delta[q]_{p}}\] for each \(p\geq 1\). However, since the morphisms corresponding to degenerate simplices are zero (cf. 
the proof of Lemma 2.5.7) we can rewrite this as \[\varphi=\big{(}\varphi_{K}\in\mathrm{Hom}^{1-k}_{\mathcal{D}(U_{a_{0}\ldots a _{q}})}(x_{i_{k}},x_{i_{0}})\big{)}_{K\subset[q]}\] where \(k=|K|-1\geq 1\). The hypothesis on the morphism \(F\) of cosimplicial simplicial sets tells us that the image of \(\{0\}\in\Delta[0]\) is exactly \((c_{a})_{U_{a}\in\mathcal{N}}\); the fact that \(F\) is a morphism tells us, in particular, that it is functorial with respect to the cosimplicial structure, and so the \(0\)-simplices of all the \(F^{q}_{a_{0}\ldots a_{q}}\) are determined entirely by this data, and are given by \(x_{i}=c_{a_{i}}|U_{a_{0}\ldots a_{q}}\). More generally, for any \(p<q\), the \(p\)-simplices in \(F^{q}\) are exactly the restrictions of the corresponding \(p\)-simplices in \(F^{p}\). This tells us that the functorial collection of elements in \(\mathrm{Tot}^{1}C^{m,n}(\Delta[q],\mathcal{D}(U_{a_{0}\ldots a_{q}});F^{q})\) for all \(q\in\mathbb{N}\) is exactly the same as an element in \(\mathrm{Tot}^{1}\hat{\mathcal{C}}^{m,n}(\mathcal{U},\mathcal{D};\mathcal{L})\); furthermore, under this correspondence, the two definitions of \(\hat{\delta}\) agree (as do the two definitions of \(\partial\), though this is more immediate), which means that if the functorial collection of elements all satisfy the Maurer-Cartan condition, then so too does the resulting element in the Cech bialgebra. We will use Theorem 2.5.17 to prove that the points in the totalisation of a certain cosimplicial simplicial presheaf are exactly Maurer-Cartan elements in a well-known Cech algebra (Theorem 4.1.1 (b)), but we are also interested in the \(paths\) of this space. To study these, we need to understand the result analogous to Theorem 2.5.17 but where we replace morphisms \(F\colon\Delta[\star]\to\ldots\) by morphisms \(F\colon\Delta[\star]\times\Delta[1]\to\ldots\), and it turns out that such morphisms correspond to _closed_ elements in some relevant bialgebra. Although one could give a general statement, analogous to Theorem 2.5.17, we consider only the specific application that is of interest to us: this is the content of Theorem 4.2.2, and the explanation of how this relates to closed elements in a bialgebra is given in Appendix B.2. ### The pair subdivision We will now briefly discuss the _barycentric subdivision_ of a simplex, only for the purpose of contrasting it with the _pair subdivision_. Similar to how non-empty ordered subsets of \([p]\) are in bijective correspondence with sub-simplices of \(\Delta[p]\), we can describe the \(k\)-simplices of the barycentric subdivision of \(\Delta[p]\) in a combinatorial way. We write \(\Delta[p]_{\mathrm{bary}}\) to mean the _barycentric subdivision_ of \(\Delta[p]\), which we now define as a simplicial set (see also Figure 2.6.i). * The \(0\)-simplices of \(\Delta[p]_{\mathrm{bary}}\) correspond exactly to the \(k\)-simplices of \(\Delta[p]\) for \(k\leq p\), i.e. the (non-degenerate) faces of \(\Delta[p]\). But, as we have already said, these are in bijection with non-empty ordered subsets of \([p]=\{0,1,\ldots,p\}\), and there are \(2^{p+1}-1\) of these. 
* The \(1\)-simplices of \(\Delta[p]_{\mathrm{bary}}\) correspond exactly to a choice of a \(k\)-simplex \(\sigma\) of \(\Delta[p]\) along with an \(\ell\)-simplex \(\tau\) such that \(\tau\subset\sigma\), and these are in bijective correspondence with pairs \((S,T)\) of non-empty subsets of \([p]\) such that \(T\subset S\), of which there are \(\sum_{k=1}^{p}\sum_{\ell=0}^{k-1}\binom{p+1}{k+1}\binom{k+1}{\ell+1}\). * More generally, the \(q\)-simplices of \(\Delta[p]_{\mathrm{bary}}\) correspond exactly to a choice of \(k_{j}\)-simplex \(\sigma_{j}\) of \(\Delta[p]\), for \(j=0,\ldots,q\), such that \(\sigma_{q}\subset\sigma_{q-1}\subset\ldots\subset\sigma_{0}\); these are in bijective correspondence with tuples \((S_{q},\ldots,S_{0})\) of non-empty subsets of \([p]\) such that \(S_{0}\subset S_{1}\subset\ldots\subset S_{q}\).

For our purposes, one defect of the barycentric subdivision is the fact that it doesn't take the codimension of inclusions into account. That is, both the inclusion \(\{0\}\subset[2]\) of a _point_ into the \(2\)-simplex and the inclusion \(\{0<1\}\subset[2]\) of a _line_ into the \(2\)-simplex correspond to \(0\)-simplices of the barycentric subdivision, even though the former is of codimension \(2\) and the latter of codimension \(1\). What we would like is a method of subdivision where codimension-\(k\) inclusions correspond to \(k\)-simplices, and where we forget about the length of the flags and look only at pairs \(\tau\subset\sigma\). It turns out that such a subdivision exists.

Figure 2.6.i. Points in the barycentric subdivision correspond to sub-simplices of the abstract simplex.

**Definition 2.6.1**.: Given the standard \(p\)-simplex \(\Delta[p]\), we define the _pair subdivision_ \(\Delta[p]_{\mathrm{pair}}\) as follows (see [26, §2] and [10, §3.2] for more details; see also Figure 2.6.ii). The vertices are the original vertices of \(\Delta[p]\) along with the barycentres of each face (just as in the barycentric subdivision), and these are labelled by pairs \((\sigma,\sigma)\), where \(\sigma\subset\Delta[p]\) is exactly the face in question. In general, the \(k\)-cells are given by pairs \((\tau,\sigma)\), where \(\tau\subseteq\sigma\subseteq\Delta[p]\) and \(k=\operatorname{codim}_{\sigma}\tau\coloneqq\dim\sigma-\dim\tau\); the vertices of such a \(k\)-cell are the barycentres of the simplices \(\eta\) such that \(\tau\subseteq\eta\subseteq\sigma\); the set of codimension-\(\ell\) faces of \((\tau,\sigma)\) is the union of the set of cells of the form \((\underline{\tau},\sigma)\), where \(\underline{\tau}\subset\tau\) is a codimension-\(\ell\) face, and the set of cells of the form \((\tau,\overline{\sigma})\), where \(\sigma\subset\overline{\sigma}\) is a codimension-\(\ell\) face. We define the boundary of a pair \((\tau,\sigma)\) by \[\partial(\tau,\sigma)\coloneqq(\tau,\partial\sigma)+(-1)^{\dim\tau}(d\tau,\sigma)\] where \(d\) is the coboundary operator, sending an \(\ell\)-simplex to the (signed) sum of all \((\ell+1)\)-simplices that contain it as a face.
\(\lrcorner\)

For example, given the pair \((\bullet,\blacktriangle)\), where \(\Delta[0]=\bullet=\{0\}\hookrightarrow\{0<1<2\}=\blacktriangle=\Delta[2]\), we can calculate the boundary: \[\partial(\bullet,\blacktriangle)=(\bullet,\partial\blacktriangle)+(\mathrm{d}\bullet,\blacktriangle),\] whose four terms (with the signs prescribed above) are the \(1\)-cells \((\{0\},\{0<1\})\), \((\{0\},\{0<2\})\), \((\{0<1\},\blacktriangle)\), and \((\{0<2\},\blacktriangle)\), i.e. exactly the pairs involving the two edges of \(\blacktriangle\) that contain \(\bullet\).

### Homotopy theory of simplicial presheaves

There is an extensive theory of homotopy groups of simplicial presheaves, but we will not need the vast majority of it for our purposes: we will content ourselves with a combinatorial definition of \(\pi_{0}\) and some black-boxed statements about the weak equivalences in certain model structures. Using geometric realisation, one can give a simple definition of homotopy groups of simplicial sets: we define the _\(n\)th topological homotopy group_ to be \(\pi_{n}|X_{\bullet}|\). However, passing to the geometric realisation can be computationally fiddly, and one might desire a more combinatorial approach to homotopy groups. _For Kan complexes_, there is a definition of _simplicial homotopy groups_ which uses the notion of _simplicial homotopy_, and for which there exist rather neat results expressing when two homotopy classes are equal in terms of their representatives bounding a common simplex of one dimension higher. This is all covered in e.g. [10, Chapter I]. One of the most important results concerning these simplicial homotopy groups is [10, Chapter I, Proposition 11.1], which implies that they are naturally isomorphic to the topological homotopy groups (i.e. those of the geometric realisation). We do not prove that \(\bar{G}^{\prime}\)_even_ or \(\beta\bar{Y}\)_wid_ are presheaves of Kan complexes, so we will not be able to make use of this statement, but there is a partial result that we can apply: if a simplicial set is such that all 2-horns fill, then the simplicial \(\pi_{0}\) is well defined and agrees with the topological \(\pi_{0}\). Recall that, if \(Y\) is a space, then \(\pi_{0}(Y)\) is the set of path-connected components of \(Y\). This means that, if \(X_{\bullet}\) is a Kan complex, the set \(\pi_{0}|X_{\bullet}|\) consists of equivalence classes of 0-simplices of \(X_{\bullet}\), where two 0-simplices are equivalent if they can be connected by a zig-zag of 1-simplices in \(X_{\bullet}\), i.e. \(v\sim w\) if and only if there exist 1-simplices \(e_{0},\dots,e_{n}\in X_{1}\) such that \(v\) is an endpoint of \(e_{0}\), \(w\) is an endpoint of \(e_{n}\), and each pair \((e_{i},e_{i+1})\) share a common endpoint but \((e_{i},e_{i+2})\) do not. The definition of the simplicial \(\pi_{0}\) is simpler, reducing the zig-zag to length 1.

**Definition 2.7.1**.: Let \(X_{\bullet}\) be a Kan complex. Then we define the _\(0\)th simplicial homotopy group_ \(\pi_{0}X_{\bullet}\) to be equivalence classes of 0-simplices of \(X_{\bullet}\), where two 0-simplices are equivalent if and only if they can be connected by a single 1-simplex in \(X_{\bullet}\), i.e. \(v\sim w\) if and only if there exists a 1-simplex \(e\in X_{1}\) such that \(v=f^{1}(e)\) and \(w=f^{0}(e)\). \(\lrcorner\)

A priori, these two definitions of \(\pi_{0}\) need not agree: the equivalence relation on the simplicial \(\pi_{0}\) is finer than that on the topological \(\pi_{0}\). As mentioned above, it turns out that all simplicial \(\pi_{n}\) are indeed naturally isomorphic to the topological \(\pi_{n}\), but the \(n=0\) case still holds under much weaker conditions.
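For a concrete illustration of why some horn-filling hypothesis is needed, consider the simplicial subset of \(\Delta[2]\) generated by the two edges \(\{0<1\}\) and \(\{1<2\}\) (that is, the \(2\)-horn missing the edge \(\{0<2\}\) and the non-degenerate \(2\)-simplex). Its geometric realisation is connected, so the topological \(\pi_{0}\) is a single point, but the vertices \(0\) and \(2\) are not joined by any \(1\)-simplex, so the relation of Definition 2.7.1 identifies \(0\sim 1\) and \(1\sim 2\) without identifying \(0\sim 2\): it is not even transitive. Filling the \(2\)-horn is precisely what would provide the missing edge, and this is exactly the mechanism used in the proof of the following lemma.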
**Lemma 2.7.2**.: _Let \(X_{\bullet}\) be a simplicial set such that all 2-horns fill. Then the simplicial homotopy group \(\pi_{0}X_{\bullet}\) is naturally isomorphic7 to the topological homotopy group \(\pi_{0}|X_{\bullet}|\)._ Footnote 7: Recall that \(\pi_{0}\) is merely a set, so here “isomorphic to” means “in bijective correspondence with”. Proof.: Since the equivalence relation on \(\pi_{0}X_{\bullet}\) is finer than that on \(\pi_{0}|X_{\bullet}|\), it suffices to show that if two vertices are in the same equivalence class in the latter then they are in the same equivalence class in the former. So suppose that \(v,w\in X_{0}\) are such that \([v]=[w]\) in \(\pi_{0}|X_{\bullet}|\). This means that there exists a zig-zag of 1-simplices \(e_{0},\dots,e_{n}\in X_{1}\) connecting \(v\) and \(w\). But consider the pair of \(1\)-simplices \((e_{n-1},e_{n})\), which, by hypothesis, share a common endpoint. This means that they form a \(2\)-horn in \(X_{\star}\), and, again by hypothesis, we can fill this \(2\)-horn to obtain, in particular, a \(1\)-simplex \(\widetilde{e}_{n-1}\) such that \(v\) and \(w\) are joined by the zig-zag \(e_{0},\ldots,\widetilde{e}_{n-1}\). Repeating this \(n-2\) more times we obtain a single \(1\)-simplex \(\widetilde{e}_{0}\) whose endpoints are exactly \(v\) and \(w\), whence \([v]=[w]\) in \(\pi_{0}X_{\star}\). The category of simplicial sets can be endowed with different model structures, but the model structure of interest to us here is the one that models the \((\infty,1)\)-category of topological spaces, namely the _Kan-Quillen_ (or _classical_) model structure ([13, Definition 7.10.8]), which induces the _global projective_ model structure on the category of simplicial presheaves. We will not provide here many details about these structures except for those of which we will later have need. Since we only consider these model structures, we fix some terminology now. **Definition 2.7.3**.: We say that a simplicial set is _fibrant_ if it is a Kan complex, and that a simplicial presheaf is _globally fibrant_ if it is a presheaf of Kan complexes. We say that a morphism \(f\colon X_{\star}\to Y_{\star}\) of simplicial sets is a _weak equivalence_ if it is a weak equivalence in the Kan-Quillen model structure, i.e. if it induces an isomorphism on topological homotopy groups \(f\colon\pi_{n}|X_{\star}|\cong\pi_{n}|Y_{\star}|\) for all \(n\in\mathbb{N}\). We say that a morphism \(f\colon\mathcal{F}\to\mathcal{G}\) of simplicial presheaves is a _(global) weak equivalence_ if it is a weak equivalence in the induced global projective model structure, i.e. if it induces an isomorphism on topological homotopy groups \(f\colon\pi_{n}|\mathcal{F}(X)|\cong\pi_{n}|\mathcal{G}(X)|\) for all objects \(X\) and all \(n\in\mathbb{N}\). ### Cech totalisation Combining the Cech nerve and the totalisation of a cosimplicial simplicial set gives us a construction which we will use repeatedly, and it deserves a name. **Definition 2.8.1**.: Let \(X\) be a space with cover \(\mathcal{U}\), and let \(\mathcal{T}\colon\mathsf{Space}^{\mathrm{op}}\to\mathsf{sSet}\) be a simplicial presheaf on the category of spaces. 
We define the _Cech totalisation_ of \(\mathcal{T}\) (_at the cover \(\mathcal{U}\)_) to be the simplicial set \(\operatorname{\mathrm{Tot}}\mathcal{T}(\check{\mathscr{N}}\mathcal{U}_{\star})\) given by the totalisation of the cosimplicial simplicial set \(\mathcal{T}(\check{\mathscr{N}}\mathcal{U}_{\star})\) given by evaluating \(\mathcal{T}\) on the Cech nerve (Definition 2.5.16). This procedure of Cech totalisation looks very similar to some sort of sheafification: indeed, if applied to a presheaf of sets (considered as a constant simplicial presheaf) then it recovers the usual construction of sheafification via taking sections of the espace etale. Not only that, but Corollary 2.8.5 tells us that, in the case of presheaves of Kan complexes, this totalisation is really the homotopy limit. More generally, there are certain conditions under which Cech totalisation does compute the sheafification of a simplicial presheaf (for example, one should expect to have to take a colimit over refinements of covers, so the cover \(\mathcal{U}\) should be particularly nice somehow). However, we do not concern ourselves with such questions in this paper, referring the interested reader instead to [11, SS5.1]. For us it is sufficient that this construction returns "interesting" results in many examples, and that it satisfies the following useful properties. **Lemma 2.8.2** ([11, Proposition C.1]).: _Let \(\mathcal{T}\) be a presheaf of Kan complexes on \(\mathsf{Space}\). Then \(\operatorname{\mathrm{Tot}}\mathcal{T}(\check{\mathscr{N}}\mathcal{U}_{\star})\) is a Kan complex._ Proof.: The idea of the proof is relatively straightforward: we use the fact that \(\operatorname{Tot}\) is the right-adjoint part of a Quillen equivalence between the Reedy model structure on cosimplicial simplicial sets and the Kan-Quillen model structure on simplicial sets, and thus preserves fibrant objects; we then apply Lemma 2.8.4. **Lemma 2.8.3**.: _Let \(\mathcal{T}\) and \(\mathcal{G}\) be presheaves of Kan complexes on \(\operatorname{Space}\). If \(\mathcal{T}\) and \(\mathcal{G}\) are weakly equivalent, then so too are their Cech totalisations \(\operatorname{Tot}\mathcal{T}(\check{\mathcal{N}}\mathcal{U}_{\star})\) and \(\operatorname{Tot}\mathcal{G}(\check{\mathcal{N}}\mathcal{U}_{\star})\)._ _In other words, Cech totalisation sends weak equivalences of presheaves of Kan complexes to weak equivalences of Kan complexes._ Proof.: Since we are using the global projective model structure on simplicial presheaves (Section 2.7), we know that \(\mathcal{T}(U)\) and \(\mathcal{G}(U)\) are weakly equivalent for all spaces \(U\). Since \(\mathcal{T}\) and \(\mathcal{G}\) are presheaves of Kan complexes, we can apply Lemma 2.8.4. Then we again use the fact that \(\operatorname{Tot}\) is a Quillen right adjoint, and thus preserves weak equivalences between fibrant objects. Our initial justification in Section 2.3 for studying the totalisation was that it computes the homotopy limit in the case of Reedy fibrant objects, and we do indeed find ourselves in this case whenever we have a presheaf of Kan complexes. **Lemma 2.8.4** ([11, Lemma C.5]).: _Let \(\mathcal{T}\) be a presheaf of Kan complexes on \(\operatorname{Space}\), and let \(X\) be a space with cover \(\mathcal{U}\). 
Then the Cech totalisation \(\mathcal{T}(\check{\mathcal{N}}\mathcal{U}_{\star})\) is a Reedy fibrant cosimplicial simplicial set._ **Corollary 2.8.5**.: _Let \(\mathcal{T}\) be a presheaf of Kan complexes on \(\operatorname{Space}\), and let \(X\) be a space with cover \(\mathcal{U}\). Then the Cech totalisation \(\mathcal{T}(\check{\mathcal{N}}\mathcal{U}_{\star})\) of \(\mathcal{T}\) computes the homotopy limit \(\operatorname{holim}\mathcal{T}(\check{\mathcal{N}}\mathcal{U}_{\star})\)._ Proof.: This follows immediately from Lemma 2.3.2. **Lemma 2.8.6**.: _Let \(\mathcal{T}\colon\operatorname{Space}^{\operatorname{op}}\to\operatorname{dg -Cat}\) be a presheaf of dg-categories that sends finite products to coproducts. Then there is a weak equivalence of Kan complexes_ \[\operatorname{Tot}[\mathcal{N}^{\operatorname{dg}}\mathcal{T}(\check{ \mathcal{N}}\mathcal{U})]\simeq[\mathcal{N}^{\operatorname{dg}}\big{(} \operatorname{Tot}\mathcal{T}(\check{\mathcal{N}}\mathcal{U})\big{)}]\] _where on the left-hand side we take the totalisation of cosimplicial simplicial sets, and on the right-hand side we take the totalisation of cosimplicial dg-categories._ We haven't formally defined Cech totalisation for dg-categories, and we will not do so, nor will we explain the Dwyer-Kan model structure on dg-Cat, since we expect this Lemma to mainly be of interest to those already somewhat familiar with these: it is somewhat of a comparison result for [1], as we explain in Remark 4.2.5. Proof.: Recalling that evaluation on the Cech nerve is given exactly by pre-composition with the functor \(\check{\mathcal{N}}\mathcal{U}_{\star}^{\operatorname{op}}\colon\Delta\to \operatorname{Space}^{\operatorname{op}}\) (Definition 2.5.16), the statement of the lemma is equivalent to the commutativity (up to weak equivalence) of the diagram where \([\mathsf{Space}^{\mathsf{op}},\mathsf{dg-Cat}]_{\mathsf{fpp}}\) denotes the subcategory of \([\mathsf{Space}^{\mathsf{op}},\mathsf{dg-Cat}]\) consisting of those presheaves that preserve finite products. The two smaller squares on the left commute on the nose, since composition of functors is strictly associative, and the horizontal arrows are given by pre-composition with the opposite of the Cech nerve and the vertical arrows are given by post-composition with the dg-nerve or maximal-Kan-complex functor. Next, we can apply [1, Proposition 4.3], which says that the pre-composition of a finite-product-preserving presheaf of dg-categories with a split simplicial object is a Reedy fibrant cosimplicial dg-category, since the Cech nerve of an open cover is always a split simplicial object, and the presheaves preserve finite products by assumption. This means that the top-left horizontal arrows lands inside the subcategory of fibrant objects of \([\Delta,\mathsf{dg-Cat}]\). Similarly, Lemma 2.8.4 tells us that a presheaf of Kan complexes evaluated on the Cech nerve is Reedy fibrant cosimplicial simplicial set, and so the composite of the two leftmost vertical arrows followed by the bottom left horizontal arrow also lands inside the subcategory of fibrant objects of \([\Delta,\mathsf{SSet}]\). This, combined with the strict commutativity of the two smaller squares in the above diagram, allows us to reduce to studying the diagram since every object in dg-Cat is fibrant in the Dwyer-Kan model structure, and Lemma 2.8.2 tells us that the Cech totalisation of any presheaf of Kan complexes is a Kan complex and thus fibrant in sSet. 
Note that there is a small abuse of notation which makes this square look like it should trivially commute, but there is indeed something to prove: the two totalisations take place in different categories, and the right-hand vertical arrow is the pointwise version of the left-hand one. Since we are only considering Reedy fibrant simplicial objects, the totalisation computes the homotopy limit. More precisely, we have a natural weak equivalence \(\mathsf{Tot}\,Y_{\star}^{\star}\simeq\hom Y_{\star}^{\star}\) for all \(Y_{\star}^{\star}\) in either dg-Catfib or sSetfib. This means that, _under the _assumption that the vertical arrows send weak equivalences to weak equivalences_, it suffices to show that the diagram commutes. We shall first prove this, and then show that the two vertical arrows do indeed satisfy this hypothesis.8 Footnote 8: We could give a much more succinct, but more abstract, proof from here on, simply appealing to the fact that \(k^{!}\circ\mathcal{N}^{\mathrm{dg}}\) is a Quillen right adjoint and that \(k^{!}\simeq[-]\) (see Remark 2.8.7), but we opt to continue “by hand”. So let \(\mathcal{D}^{*}\in[\Delta,\mathrm{dg}\text{-}\mathrm{Cat}]^{\mathrm{fib}}\) be a Reedy fibrant cosimplicial \(\mathrm{dg}\)-category given by evaluating some finite-product-preserving presheaf of \(\mathrm{dg}\)-categories on the \(\mathrm{C}\mathrm{e}\mathrm{e}\mathrm{n}\)erve. Since it is Reedy fibrant, we know that \[\operatorname{holim}\mathcal{D}^{*}\simeq\lim\mathcal{D}^{*}\] and, by the assumption that the vertical arrows send weak equivalences to weak equivalences, we thus have that \[[\mathcal{N}^{\mathrm{dg}}\operatorname{holim}\mathcal{D}^{*}]\simeq[ \mathcal{N}^{\mathrm{dg}}\lim\mathcal{D}^{*}].\] Now note that \([\mathcal{N}^{\mathrm{dg}}(-)]\) is the composition of three right adjoints \[\mathrm{dg}\text{-}\mathrm{Cat}\xrightarrow{\mathcal{N}^{\mathrm{dg}}} \operatorname{Quasi}\text{-}\mathrm{Cat}\xrightarrow{[-]}\operatorname{Kan} \hookrightarrow\operatorname{sSet}\] (since the inclusion \(\operatorname{Kan}\hookrightarrow\operatorname{sSet}\) also admits a left adjoint, as mentioned in Definition 2.1.7), which means that it itself is a right adjoint and thus commutes with limits, whence \[[\mathcal{N}^{\mathrm{dg}}\lim\mathcal{D}^{*}]\cong\lim[\mathcal{N}^{\mathrm{ dg}}\mathcal{D}^{*}].\] But we have already argued that \([\mathcal{N}^{\mathrm{dg}}\mathcal{D}^{*}]\) is Reedy fibrant (by strict commutativity of the two leftmost squares in the original diagram), and so its limit actually computes the homotopy limit: \[\lim[\mathcal{N}^{\mathrm{dg}}\mathcal{D}^{*}]\simeq\operatorname{holim}[ \mathcal{N}^{\mathrm{dg}}\mathcal{D}^{*}].\] Chaining these equivalences together, we see that \[[\mathcal{N}^{\mathrm{dg}}\operatorname{holim}\mathcal{D}^{*}]\simeq \operatorname{holim}[\mathcal{N}^{\mathrm{dg}}\mathcal{D}^{*}]\] and so the diagram commutes up to weak equivalence. It remains only to show that \([\mathcal{N}^{\mathrm{dg}}(-)]\) sends weak equivalences to weak equivalences both individually and pointwise, i.e. both as a functor \(\mathrm{dg}\text{-}\mathrm{Cat}^{\mathrm{fib}}\to\operatorname{sSet}^{ \mathrm{fib}}\) and as a functor \([\Delta,\mathrm{dg}\text{-}\mathrm{Cat}]^{\mathrm{fib}}\to[\Delta,\operatorname {sSet}]^{\mathrm{fib}}\). 
The functor \(\mathcal{N}^{\mathrm{dg}}\colon\mathrm{dg}\text{-}\mathrm{Cat}\to \operatorname{sSet}_{\mathrm{Joyal}}\) is a Quillen right adjoint _when we endow \(\operatorname{\mathsf{sSet}}\) with the Joyal model structure_, and thus preserves weak equivalences between fibrant objects. Since all dg-categories are fibrant in the Dwyer-Kan model structure, this means that a weak equivalence \(\mathscr{C}\simeq\mathscr{D}\) of dg-categories gets sent to a weak equivalence \(\mathscr{N}^{\text{dg}}\mathscr{C}\simeq\mathscr{N}^{\text{dg}}\mathscr{D}\) in the Joyal model structure_. But a categorical equivalence of quasi-categories (i.e. a weak equivalence in the Joyal model structure) induces a weak equivalence (in the Kan-Quillen model structure) of their maximal Kan complexes ([11, Lemma 34]). Thus we get a weak equivalence \([\mathscr{N}^{\text{dg}}\mathscr{C}]\simeq[\mathscr{N}^{\text{dg}}\mathscr{D}]\), as required. As for the induced functor \([\Delta,\text{dg-Cat}]^{\text{fib}}\to[\Delta,\operatorname{\mathsf{sSet}}]^{ \text{fib}}\), since the weak equivalences in the Reedy model structure on any \([\mathscr{R},\mathscr{M}]\) are simply those that are object-wise weak equivalences in \(\mathscr{M}\), we are done. **Remark 2.8.7**.: In the proof of Lemma 2.8.6, one might wonder why we don't simply show that the maximal-Kan functor is a _Quillen_ right adjoint, since it is already a right adjoint by definition, and then commutativity with the homotopy limit would be immediate. But note that the domain of the maximal-Kan functor \([-]\) is Quasi-Cat, not all of \(\operatorname{\mathsf{sSet}}\), and so we cannot simply compose it with \(\mathscr{N}^{\text{dg}}\colon\text{dg-Cat}\to\operatorname{\mathsf{sSet}}\). It _is_ true that the image of the dg-nerve actually lies entirely inside \(\operatorname{\mathsf{Quasi-Cat}}\hookrightarrow\operatorname{\mathsf{sSet}}\), but the dg-nerve only gives a Quillen right adjoint when considered with codomain equal to all of \(\operatorname{\mathsf{sSet}}\). It is possible to "model" the maximal Kan functor by a functor \(k^{!}\colon\operatorname{\mathsf{sSet}}\to\operatorname{\mathsf{sSet}}\) which then does realise a Quillen adjunction (indeed, even a homotopy colocalisation) between the Kan-Quillen and the Joyal model structures on \(\operatorname{\mathsf{sSet}}\) ([11, Proposition 1.16 through to Proposition 1.20]), so that \(k^{!}\mathscr{N}^{\text{dg}}\colon\text{dg-Cat}\to\operatorname{\mathsf{sSet}} _{\text{Kan-Quillen}}\) is indeed a Quillen right adjoint, but for our purposes it is convenient to work with the direct definition of the maximal Kan complex instead. **Remark 2.8.8**.: In the proof of Lemma 2.8.6, we use the fact that weak equivalences in the Reedy model structure are defined object-wise. If we had opted to use the Quillen right adjoint \(k^{!}\) from Remark 2.8.7 instead, then we could also appeal to a more general fact about Reedy model structures: if we have a Quillen adjunction \(\mathscr{M}\rightleftarrows\mathscr{N}\) then this induces a Quillen adjunction \([\mathscr{R},\mathscr{M}]\rightleftarrows[\mathscr{R},\mathscr{N}]\) between Reedy model structures for any Reedy category \(\mathscr{R}\). To prove this, note that e.g. 
a right adjoint preserves limits and thus sends matching objects in \([\mathscr{R},\mathscr{N}]\) to matching objects in \([\mathscr{R},\mathscr{M}]\), and a Quillen right adjoint also preserves fibrations; these two facts combined tell us that the Quillen right adjoint will send Reedy fibrations to Reedy fibrations. #### Example: the space of principal \(G\)-bundles **Remark 2.8.9**.: This example can be seen as a \(1\)-categorical version of [13, §3.2.1]; we will see a full \(\infty\)-categorical example when we define the simplicial presheaf \(\mathcal{T}\mathit{wist}\) in Section 3.2. By considering an example of a simplicial presheaf built from the categorical nerve, we can start to see how Cech totalisation can be thought of as "introducing geometry". Here we sketch a general construction that provides inspiration for our main object of study, introduced at the end of Section 2.9. We provide details of the specific case of principal \(\operatorname{GL}_{n}(\mathbb{R})\)-bundles in Appendix A. Let \(G\) be a Lie group, so that \(G\) is, in particular, also an object of the category of smooth manifolds \(\operatorname{Man}\), and consider the presheaf on \(\operatorname{Man}\) given by Yoneda: \[\text{よ}(G)=\operatorname{Man}(-,G).\] Using the Lie group structure of \(G\), we can endow \(\text{よ}(G)\) with the structure of a group pointwise, and thus consider \(\text{よ}(G)\) as a presheaf of groups. This means that we can deloop \(\text{よ}(G)\) to obtain a presheaf of one-object groupoids: \[\mathbb{B}\text{よ}(G)(-).\] That is, for \(X\in\operatorname{Man}\), the groupoid \(\mathbb{B}\text{よ}(G)(X)\) has one object, which we denote by \(*\), and with \(\operatorname{Hom}(*,*)\cong\operatorname{Man}(X,G)\), where we again use the group structure of \(G\) to endow \(\operatorname{Man}(X,G)\) with a group structure. We can then take the categorical nerve of this to obtain a presheaf of simplicial sets: \[\mathcal{N}\mathbb{B}\text{よ}(G)(-).\] Abstractly, then, we have a functor \[\mathcal{N}\mathbb{B}\text{よ}\colon\operatorname{LieGroup}\to[\operatorname{Man}^{ \operatorname{op}},\operatorname{sSet}].\] Next, write \(\operatorname{Man}_{\mathcal{U}}\) to mean the category whose objects are pairs \((X,\mathcal{U})\), where \(X\in\operatorname{Man}\), and \(\mathcal{U}\) is a good9 cover of \(X\), and whose morphisms \((X,\mathcal{U})\to(Y,\mathcal{V})\) are the morphisms \(f\colon X\to Y\) in \(\operatorname{Man}\) such that \(\mathcal{U}\) is a refinement of \(f^{-1}(\mathcal{V})\). Then we have the Cech nerve Footnote 9: That is, all non-empty finite intersections \(U_{\alpha_{0}\dots\alpha_{p}}\) (including the case where \(p=0\)) of open sets in the cover are contractible.
\[\check{\mathcal{N}}\colon\operatorname{Man}_{\mathcal{U}}\to[\Delta^{ \operatorname{op}},\operatorname{Man}]\] which, using the fact that \([\mathcal{C},\mathcal{D}]^{\operatorname{op}}\cong[\mathcal{C}^{\operatorname {op}},\mathcal{D}^{\operatorname{op}}]\) for any categories \(\mathcal{C}\) and \(\mathcal{D}\), induces a functor \[\check{\mathcal{N}}^{\operatorname{op}}\colon\operatorname{Man}^{ \operatorname{op}}_{\mathcal{U}}\to[\Delta^{\operatorname{op}},\operatorname {Man}]^{\operatorname{op}}\cong[\Delta,\operatorname{Man}^{\operatorname{ op}}].\] So pre-composing \(\mathcal{N}\mathbb{B}\text{よ}\) with the opposite of the Cech nerve, we obtain a functor \[(\check{\mathcal{N}}^{\operatorname{op}})^{*}\mathcal{N}\mathbb{B}\text{よ}\colon \operatorname{LieGroup}\to[\operatorname{Man}^{\operatorname{op}}_{\mathcal{U }},[\Delta,\operatorname{sSet}]]=[\operatorname{Man}^{\operatorname{op}}_{ \mathcal{U}},\operatorname{csSet}].\] This means that, given any \(G\in\operatorname{LieGroup}\), we obtain a presheaf of cosimplicial simplicial sets on \(\operatorname{Man}_{\mathcal{U}}\). To simplify notation, we write \[\mathbf{N}\coloneqq(\check{\mathcal{N}}^{\operatorname{op}})^{*}\mathcal{N} \mathbb{B}\text{よ}.\] Finally then, we can apply totalisation to obtain a functor with values in presheaves of simplicial sets: \[\operatorname{Tot}(\mathbf{N})\colon\operatorname{LieGroup}\to[\operatorname {Man}^{\operatorname{op}}_{\mathcal{U}},\operatorname{sSet}].\] **Remark 2.8.10**.: Before taking the totalisation, \(\mathbf{N}\) took values in cosimplicial simplicial sets. The _cosimplicial_ structure came from pulling back along the opposite10 of the Cech nerve, and the _simplicial_ structure came from the ordinary nerve; we totalise over the _former_. \(\lrcorner\) Footnote 10: The Čech nerve itself is a simplicial object, so the opposite turns it into a cosimplicial one. So what is the purpose of this functor? If we apply it to a specific Lie group \(G\), then, since \(\mathbb{B}\text{よ}(G)(X)\) is a groupoid for any manifold \(X\), the resulting simplicial set \(\operatorname{Tot}(\mathbf{N})(G)(X)\) will be a Kan complex, i.e. a space. It turns out that the points of this space are exactly principal \(G\)-bundles, and the paths are exactly isomorphisms of principal \(G\)-bundles: this space deserves the name "the space of principal \(G\)-bundles". We provide the details of this argument for the case where \(G=\operatorname{GL}_{n}(\mathbb{R})\) in Appendix A. **Remark 2.8.11**.: What is very important in the construction described above is that we pull back along the _opposite_ of the Cech nerve. Of course, we are required to do this in order to compose the functors, but it also has an important geometric significance: when working with (pre)sheaves of functions on open sets, it ensures that we will have trivial codegeneracy maps and _restriction_ coface maps, and not trivial face maps and _extension_ degeneracy maps. To understand what we mean by this, consider the Cech nerve, which has face maps \(f_{p}^{i}\colon U_{\alpha_{0}\dots\alpha_{p}}\to U_{\alpha_{0}\dots\widehat{\alpha_{i}}\dots\alpha_{p}}\) and degeneracy maps \(s_{p}^{i}\colon U_{\alpha_{0}\dots\alpha_{p}}\to U_{\alpha_{0}\dots\alpha_{i}\alpha_{i}\dots\alpha_{p}}\). The degeneracy maps are trivial: \(U_{\alpha_{0}\dots\alpha_{p}}=U_{\alpha_{0}\dots\alpha_{i}\alpha_{i}\dots\alpha_{p}}\); the face maps are (in general) non-trivial: \(U_{\alpha_{0}\dots\alpha_{p}}\subseteq U_{\alpha_{0}\dots\widehat{\alpha_{i}}\dots\alpha_{p}}\).
If we are considering, say, a (pre)sheaf \(\mathcal{T}\) such that \(\mathcal{T}(U)\) consists of some sort of functions on \(U\), then defining a map \(\mathcal{T}(U_{\alpha})\to\mathcal{T}(U_{\alpha\alpha})\) is trivial, since we can simply take the identity; defining a map \(\mathcal{T}(U_{\alpha\beta})\to\mathcal{T}(U_{\alpha})\) is _hard_, since we might not be able to extend functions. Working with the _opposite_ of the Cech nerve, however, means that we will not have this problem: we will have to construct maps of the form \(\mathcal{T}(U_{\alpha})\to\mathcal{T}(U_{\alpha\beta})\), and this can be done by simply restricting the functions on the former. This is explained in the context of a worked example in Appendix A. \(\lrcorner\) ### Perfectness of complexes The classical references for the various notions relating to perfectness are [12, 13]; see also [14, §2.1] and [15, 16]. One important fact of complex-analytic geometry is that not every coherent analytic sheaf can be resolved by a complex of locally free sheaves, but it can be _locally_ resolved. Indeed, throughout this paper, _the motivating example is always when \((X,\mathcal{O}_{X})\) is a complex-analytic manifold with the sheaf of holomorphic functions_. Perfectness conditions allow us to study this phenomenon more generally. **Definition 2.9.1**.: Let \((X,\mathcal{O}_{X})\) be a locally ringed space, and \(M^{\star}\) a cochain complex of \(\mathcal{O}_{X}\)-modules. * We say that \(M^{\star}\) is _finitely generated free_ if it is bounded and such that each \(M^{i}\) is a finite11 free \(\mathcal{O}_{X}\)-module. * We say that \(M^{\star}\) is _strictly perfect_ if it is bounded and such that each \(M^{i}\) is a finite locally free \(\mathcal{O}_{X}\)-module. * We say that \(M^{\star}\) is _perfect_ if it is locally quasi-isomorphic to a strictly perfect complex. That is, if, for all \(x\in X\), there exists some open neighbourhood \(U\) of \(x\), and some bounded complex \(L^{\star}_{U}\) of finite locally free \(\mathcal{O}_{X}\)-modules on \(U\), such that \(M^{\star}|_{U}\simeq L^{\star}_{U}\). We write \(\mathsf{Free}(X)\) to denote the dg-category of finitely generated free complexes on \((X,\mathcal{O}_{X})\). \(\lrcorner\) Here we are mainly interested in finitely generated free complexes, and we mention strictly perfect and perfect complexes simply for context. The "full" story about these finiteness conditions involves the fact that twisting cochains constitute a dg-enhancement of the category of perfect complexes (a result of [13]), something to which we later allude in Theorem 4.2.2 and Remark 4.2.5. Now we can restate the fact about local resolutions of coherent analytic sheaves using this more abstract terminology. Indeed, [11, Exposé I, Exemple 5.11] tells us that the derived category of bounded complexes of coherent analytic sheaves on a complex-analytic manifold \(X\) is equivalent to the derived category of perfect complexes on \(X\). But the fact that there exist coherent analytic sheaves that do _not_ admit global resolutions by locally free sheaves is an example of the fact that, although strictly perfect clearly implies perfect, the converse is not necessarily true (see also [13, Remark 2.4] for examples of how this converse is also false in the algebraic case).
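For readers who would like a concrete instance of these notions, here is a standard example (it is not taken from the text or references above, and is only meant as an illustration): on \(X=\mathbb{C}^{2}\) with coordinates \((z,w)\), the skyscraper sheaf \(\mathcal{O}_{X}/(z,w)\) supported at the origin is coherent but not locally free, and it admits the Koszul resolution \[0\to\mathcal{O}_{X}\xrightarrow{\;f\;}\mathcal{O}_{X}^{\oplus 2}\xrightarrow{\;g\;}\mathcal{O}_{X}\to 0,\qquad f(a)=(-wa,za),\quad g(b,c)=zb+wc,\] a bounded complex of finite free \(\mathcal{O}_{X}\)-modules. This complex is thus finitely generated free (and in particular strictly perfect), and the skyscraper sheaf, viewed as a complex concentrated in a single degree, is quasi-isomorphic to it and hence perfect.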
**Remark 2.9.2**.: The hypothesis that \((X,\mathcal{O}_{X})\) be _locally_ ringed is necessary for our definition of _strictly_ perfect in Definition 2.9.1: for an arbitrary ringed space, it is not necessarily true that a direct summand of a finite free \(\mathcal{O}_{X}\)-module is finite and locally free, and we have used this statement to simplify the definition of strictly perfect complexes. However, in the rest of this paper we do not deal with the notion of strictly perfect complexes, and so we will instead work in the more general setting of arbitrary ringed spaces. \(\lrcorner\) **Remark 2.9.3**.: The constructions that we are going to give build things out of free \(\mathcal{O}_{X}\)-modules, so if we want any hope of recovering _coherent_ sheaves of \(\mathcal{O}_{X}\)-modules at the end somehow, then it needs to be the case that _free modules are themselves coherent_. In the complex-analytic setting, this is ensured by the Oka coherence theorem, which tells us that \(\mathcal{O}_{X}\) is coherent; in the complex-algebraic setting, this is ensured if we work with a locally Noetherian scheme ([Stacks, Tag 01XZ]). Although the constructions still "make sense" in settings where \(\mathcal{O}_{X}\) is _not_ coherent, we do not know how exactly the objects that we construct will relate to coherent sheaves. To deal with such questions, one would need to appeal to the more general definition of _pseudo-coherence_ ([11, Exposé I, §0. Introduction]). More generally, the relation between perfectness and coherence is an interesting subject of study. One particularly useful result is that, if the local rings \(\mathcal{O}_{X,x}\) are all regular (which is the case if, for example, \((X,\mathcal{O}_{X})\) is a complex-analytic manifold, and thus smooth), then every coherent sheaf is perfect, and, more generally, every pseudo-coherent complex with locally bounded cohomology is perfect [11, Exposé I, Corollaire 5.8.1]. ## 3 Three simplicial presheaves Using the Cech totalisation from Section 2.8, we can "apply geometry" to presheaves of simplicial sets, and it turns out that many familiar geometric objects arise in this way, such as complexes of locally free sheaves. We start by considering exactly this example, building it from the category of finitely generated free complexes (Definition 2.9.1), and then describe how to make this functorial, turning it into a presheaf on connected ringed spaces. From this presheaf we will construct three generalisations (Section 3.2, Section 3.3, and Section 3.4), which form the main objects of study of this paper. As a gentle reminder, we draw attention to Remark 3.1.2: **these presheaves are in fact only _pseudo_-presheaves in general, and if applied to anything other than the Cech nerve must possibly first be rectified**. ### Narrative Since we will eventually be interested in _local_ properties of simplicial presheaves on ringed spaces, from now on we freely switch between \((U,\mathcal{O}_{U})\) and \((X,\mathcal{O}_{X})\) as notation for an arbitrary ringed space. We are interested in the most restrictive of the notions of perfectness from Definition 2.9.1, namely that of finitely generated free complexes. Given a ringed space \((U,\mathcal{O}_{U})\), the objects of \(\operatorname{Free}(U)\) are _bounded_ complexes of _finite-rank free_ \(\mathcal{O}_{U}\)-modules12 Footnote 12: Formally, we really work with the skeleton of this category: a free sheaf is uniquely determined by its rank.
\[C=\big{(}0\to\mathcal{O}_{U}^{\oplus r_{k}}\xrightarrow{d_{k-1}}\mathcal{O}_{U}^{\oplus r _{k-1}}\xrightarrow{d_{k-2}}\quad\ldots\quad\xrightarrow{d_{2}}\mathcal{O}_{U}^{ \oplus r_{2}}\xrightarrow{d_{1}}\mathcal{O}_{U}^{\oplus r_{1}}\to 0\big{)}\] and the morphisms are given by \[\operatorname{Hom}_{\operatorname{Free}(U)}^{n}(C,D)=\prod_{m\in\mathbb{Z}} \operatorname{Hom}_{\mathcal{O}_{U}}(C^{m},D^{m+n}).\] This gives a dg-category by defining the differential \(\partial\) on the hom-sets as in Definition 2.5.1. Taking the ordinary nerve of this category13 gives us a simplicial set \(\mathcal{N}\operatorname{Free}(U)\), whose \(p\)-simplices are composable sequences of \(p\)-many morphisms: Footnote 13: Recall Definition 2.5.5: this really means the ordinary nerve of \(K_{0}\) of this category. \[\mathcal{N}\operatorname{Free}(U)_{p}=\{C_{0}\xrightarrow{\varphi_{1}}C_{1} \xrightarrow{\varphi_{2}}\ldots\xrightarrow{\varphi_{p}}C_{p}\mid\varphi_{i} \in\operatorname{Hom}_{K_{0}\operatorname{Free}(U)}(C_{i-1},C_{i})\}\] (though it will prove useful to think of the _blown-up_ nerve (Definition 2.4.3), so that we really have \(\binom{p+1}{2}\) many morphisms). Finally, we take the maximal Kan complex \[[\mathcal{N}\operatorname{Free}(U)]\subseteq\mathcal{N}\operatorname{Free}(U)\] which, by Lemma 2.5.6 (ii), is equivalent to asking that all the \(\varphi_{i}\) be _isomorphisms_. Now we have a simplicial presheaf on ringed spaces given by \[(U,\mathcal{O}_{U})\mapsto[\mathcal{N}\operatorname{Free}(U)]\] and so we can try to understand the Cech totalisation (at a given space \(X\) and cover \(\mathcal{U}\)) of this simplicial presheaf through its homotopy groups: \[\pi_{n}\operatorname{Tot}[\mathcal{N}\operatorname{Free}(\check{\mathcal{N}} \mathcal{U}_{\star})].\] (\(\divide\)) Indeed, from one point of view, finding good generalisations of this construction is one key motivation for this entire paper. To explain this, let us take a step back and first explain why we care about (\(\divide\)). The fact that (\(\divide\)) describes any sort of interesting mathematical object is at least partially justified by an example. In Appendix A we show how this machinery recovers a space whose points are principal bundles and whose paths are gauge transformations. For our applications, we use locally free sheaves instead of principal bundles, since these admit morphisms that are not simply automorphisms, which is necessary for the following. So we start with the notion of locally free sheaves (on some fixed ringed space), and think about useful ways in which we can generalise this. If we let the rank of the sheaf change across open subsets, and allow things to be _surjected on_ by something free instead of being free themselves, all in a "controlled" way, then we arrive at the notion of _coherent_ sheaf. The category of coherent sheaves is also very well behaved: it is an abelian category, whereas the category of vector bundles is not. Because of this (amongst many other reasons), coherent sheaves turn out to be very useful objects to study. For some particularly nice ringed spaces, the derived category of coherent sheaves is equivalent to the derived category of bounded cochain complexes of locally free sheaves, which suggests that we might eventually wish to consider cochain complexes. To relate this back to the story we are trying to tell, let's consider the subcategory of \(\operatorname{Free}(U)\) spanned by complexes concentrated in degree zero (i.e. each complex is just a single free sheaf).
Then a point in the \(\pi_{0}\) case of (\(\divide\)) describes the data of a free sheaf over each open subset, with isomorphisms between them on overlaps wherever possible. In particular, the rank of the free sheaf on each open subset is the same. That is, we are describing exactly locally free sheaves _of constant rank_. If we consider all of \(\operatorname{Free}(U)\) then we end up with something similar: we have a complex of free sheaves on each open subset, with isomorphisms between them on overlaps, so we are describing cochain complexes of locally free sheaves of constant rank14 (the formal version of this statement is Theorem 4.1.1 (a)). But we wanted to be able to talk about objects where the rank can jump across open subsets, so we see that (\(\divide\)) is too strict or discrete in some sense, since sheaves of different ranks cannot interact with one another. There are (at least) two natural ways to solve this problem: Footnote 14: In the case where \((U,\mathcal{O}_{U})\) is _locally_ ringed, these are exactly strictly perfect complexes, following Remark 2.9.2. 1. we could use (\(\divide\)) as local input for some simplicial construction, obtaining an infinite tower of homotopical data; or 2. we could replace the nerve in (\(\divide\)) by the dg-nerve, since Lemma 2.5.6 tells us that then we won't be restricted to isomorphisms, but will instead be allowing _quasi-isomorphisms_.15 We will end up formalising both of these approaches: the first in Section 3.3, and the second in Section 3.2. The obvious question then presents itself: "how do these two constructions relate to one another?". One way of answering this is to consider what happens when we apply both simultaneously, and to ask if this lets us mediate between them -- this is what we do in Section 3.4 and further in Section 5. The first step is to generalise the construction described above to obtain a simplicial presheaf \([\mathcal{N}\operatorname{Free}(-)]\) on the category of connected ringed spaces. From this, we will construct three simplicial presheaves which are the main subjects of this current paper. All of these presheaves are constructed precisely so that Theorem 4.1.1 holds, i.e. so that we recover _Green complexes_, _twisting cochains_, and _simplicial twisting cochains_ after Cech totalisation. On this note, we also point out that these three presheaves will be named for what they become _after_ applying Cech totalisation in the complex-analytic case, not for what they are beforehand. The category on which these presheaves are defined is the category \(\operatorname{RingSp}_{\operatorname{conn}}\) of _connected_ ringed spaces, where we require connectedness in order for the rank of a free sheaf to be constant.
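Returning for a moment to the rank-jumping point above, here is a minimal illustration (it is not spelled out in the text, but it is an immediate instance of Lemma 3.3.3 below, with all notation as above): an isomorphism in \(\operatorname{Free}(U)\) forces the degree-wise ranks of two complexes to agree, whereas a quasi-isomorphism does not, since for any \(C\in\operatorname{Free}(U)\) the inclusion \[C\;\hookrightarrow\;C\oplus\big{(}0\to\mathcal{O}_{U}\xrightarrow{\operatorname{id}}\mathcal{O}_{U}\to 0\big{)}\] is a chain homotopy equivalence even though the ranks on the two sides differ. Passing from the ordinary nerve to the dg-nerve is precisely what makes such maps available when gluing over overlaps.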
**Lemma 3.1.1**.: _The assignment \((U,\mathcal{O}_{U})\mapsto\operatorname{Free}(U)\) that sends a ringed space to the dg-category of bounded complexes of free modules on that space defines a pseudofunctor \(\operatorname{Free}\colon(\operatorname{RingSp})^{\operatorname{op}}\to \operatorname{dg\text{-}Cat}\) by sending a morphism \((f,f^{\sharp})\colon(U,\mathcal{O}_{U})\to(V,\mathcal{O}_{V})\) to the pullback \(f^{*}=f^{-1}(-)\otimes_{f^{-1}\mathcal{O}_{V}}\mathcal{O}_{U}\colon\mathcal{O }_{V}\text{-}\mathsf{Mod}\to\mathcal{O}_{U}\text{-}\mathsf{Mod}\)._ Proof.: First, note that \(f^{*}\) sends free \(\mathcal{O}_{V}\)-modules to free \(\mathcal{O}_{U}\)-modules: \[f^{*}(\mathcal{O}_{V}^{r})\cong f^{-1}\mathcal{O}_{V}^{r}\otimes_{f^{-1} \mathcal{O}_{V}}\mathcal{O}_{U}\cong\mathcal{O}_{U}^{r}.\] Pseudofunctoriality follows from the fact that \((gf)^{*}\cong f^{*}g^{*}\) is a natural isomorphism but not necessarily an equality. The fact that \(\operatorname{Free}\) only defines a _pseudo_functor means that a diagram in \(\operatorname{RingSp}\) will _not_ give us a diagram in dg-Cat when we compose with \(\operatorname{Free}\), but merely a "pseudo-diagram", and we cannot a priori take (homotopy) limits of such things. One solution to this problem is via _rectification_, which is a strictification procedure: [1, Proposition 4.2] shows that one can replace any pseudo-presheaf of dg-categories with a dg-equivalent strict presheaf. However, for our purposes in this paper, we can make use of a much more elementary fact (which also appears as [1, Remark 4.5]), which is that evaluating \(\operatorname{Free}\) specifically on the Cech nerve does result in a strict cosimplicial diagram, since the coface maps are then given by restriction to open subsets. Because of this, we will not worry about rectification here, but this disclaimer is important enough to merit a remark. **Remark 3.1.2**.: Since \(\operatorname{Free}(\check{\mathcal{N}}\mathscr{U}_{\star})\) is a _strict_ cosimplicial dg-category, we do not need to first rectify \(\operatorname{Free}\) and obtain a strict presheaf of dg-categories. Throughout this paper, since we only work with the Cech nerve, we will continue to refer to \(\operatorname{Free}\) (and the other "presheaves" that we define in Section 3) as a presheaf instead of a pseudo-presheaf, **but this really is an abuse of terminology**. Note, however, that this pseudo/strict distinction only really matters when considering \(\mathsf{Free}\) applied to some diagram of ringed spaces -- whenever we talk about \(\mathsf{Free}(U)\) for some fixed \((U,\mathcal{O}_{U})\), the issue of pseudofunctoriality does not arise. It is also important to understand that \((-)^{*}\) is a map \(\operatorname{Hom}_{\mathsf{RingSp}}(U,V)\to[\mathsf{Free}(V),\mathsf{Free}(U)]\) that forms part of a _pseudofunctor_, whereas, for any specific \(f\colon U\to V\), the map \(f^{*}\colon\mathcal{O}_{V}\text{-}\mathsf{Mod}\to\mathcal{O}_{U}\text{-}\mathsf{Mod}\) is a _strict_ functor. **Lemma 3.1.3**.: _The assignment \((U,\mathcal{O}_{U})\mapsto[\mathcal{N}\mathsf{Free}(U)]\) defines a simplicial presheaf on \(\mathsf{RingSp}\)._ Proof.: Let \[C_{0}\xrightarrow{\varphi_{1}}C_{1}\xrightarrow{\varphi_{2}}\ldots\xrightarrow {\varphi_{p}}C_{p}\] be a \(p\)-simplex in \(\mathcal{N}\mathsf{Free}(V)\), so that each \(C_{i}\) is a bounded chain complex of finite free \(\mathcal{O}_{V}\)-modules, and each \(\varphi_{i}\) is a chain map.
Since \(f^{*}\) is a functor from \(\mathcal{O}_{V}\)-modules to \(\mathcal{O}_{U}\)-modules, it gives objects \(f^{*}C_{0},\ldots,f^{*}C_{p}\) in \(\mathsf{Free}(U)\), as well as degree-wise maps \(f^{*}\varphi_{1},\ldots,f^{*}\varphi_{p}\), but we need to justify why these are indeed still chain maps in order to obtain a \(p\)-simplex in \(\mathcal{N}\mathsf{Free}(U)\). However, this follows immediately from the functoriality of \(f^{*}\), since functors preserve commutative squares, and so the \(f^{*}\varphi_{i}\) are indeed chain maps. So we have a map on objects \(f^{*}\colon\mathcal{N}\mathsf{Free}(V)\to\mathcal{N}\mathsf{Free}(U)\), but for this to be a morphism of simplicial sets we need to show that it commutes with the simplicial structure of the nerve. But since \(f^{*}\) is a functor, it sends identities to identities and compositions to compositions, which means that it respects the face and degeneracy maps of the nerve, and thus indeed gives a morphism of simplicial sets \(f^{*}\colon\mathcal{N}\mathsf{Free}(V)\to\mathcal{N}\mathsf{Free}(U)\). Finally, for \(f^{*}\) to induce a morphism \([\mathcal{N}\mathsf{Free}(V)]\to[\mathcal{N}\mathsf{Free}(U)]\), we need to know that it sends isomorphisms to isomorphisms (since Lemma 2.5.6 (ii) tells us that this defines the maximal Kan complex of the ordinary nerve). But this is again immediate: any functor preserves isomorphisms, by functoriality. ### Twisting cochains **Definition 3.2.1**.: Define \[\mathcal{T}\mathit{wist}(U)=[\mathcal{N}^{\operatorname{dg}}\mathsf{Free}(U)]\] for any ringed space \((U,\mathcal{O}_{U})\). Note that this is, by definition, a simplicial set, and indeed even a Kan complex. **Lemma 3.2.2**.: _The assignment \((U,\mathcal{O}_{U})\mapsto\mathcal{T}\mathit{wist}(U)\) defines a simplicial presheaf on \(\mathsf{RingSp}_{\mathsf{conn}}\)._ Proof.: The proof of this statement is almost identical to that of Lemma 3.1.3, but we just need to modify the argument to account for the fact that we are now taking the dg-nerve instead of the ordinary nerve. First of all, note that \(f^{*}\) does indeed induce a dg-functor \(\mathsf{Free}(V)\to\mathsf{Free}(U)\), since ([Stacks, Tag 09LB]) it is an additive functor from \(\mathcal{O}_{V}\)-modules to \(\mathcal{O}_{U}\)-modules. Secondly, we need to know that \(f^{*}\) sends quasi-isomorphisms to quasi-isomorphisms, which is equivalent to \(f^{*}\) being exact16, but this follows from the fact that we are working only with chain complexes of _free_ modules, which are, in particular, flat.17 Footnote 16: This is a “standard” fact, but we sketch a proof here for completeness. If \(f^{*}\) preserves quasi-isomorphisms then it will in particular preserve the quasi-isomorphism between a short exact sequence (viewed as an acyclic complex) and the zero complex, and so the image will also be quasi-isomorphic to zero, i.e. short exact; if \(f^{*}\) is exact, then it preserves acyclic complexes, but a quasi-isomorphism is exactly a morphism whose mapping cone is acyclic, and since \(f^{*}\) is additive we know that it preserves mapping cones, and thus sends quasi-isomorphisms to quasi-isomorphisms.
Footnote 17: Since Tor is symmetric with respect to its two arguments, the functor \((-\otimes N)\) is exact if \(N\) is flat but also if it is restricted to a full subcategory consisting of flat modules: given a short exact sequence \(0\to A\to B\to C\to 0\), the associated long exact sequence is \(\ldots\to\operatorname{Tor}_{1}(C,N)\to A\otimes N\to B\otimes N\to C\otimes N\to 0\), and all the Tor terms vanish if all of \(A\), \(B\), and \(C\) are flat. **Remark 3.2.3**.: If we were not working with the dg-categories of complexes of _free_ modules, but instead arbitrary modules, then we would need to restrict the presheaf to the wide subcategory of (connected) ringed spaces with e.g. flat morphisms. The key point is that dg-functors do not a priori preserve quasi-isomorphisms, as one might hope. \(\lrcorner\) ### Green complexes **Definition 3.3.1**.: Let \(R\) be a commutative ring. Given \(R\)-modules \(M_{1},\ldots,M_{r}\), we say that a complex of \(R\)-modules is \(\{M_{1},\ldots,M_{n}\}\)-_elementary_ if it is a direct sum \(\epsilon^{i_{1}}_{p_{1}}\oplus\ldots\oplus\epsilon^{i_{m}}_{p_{m}}\) of complexes of the form \[\epsilon^{i_{j}}_{p_{j}}\coloneqq(0\to M_{i_{j}}\stackrel{{ \operatorname{id}}}{{\to}}M_{i_{j}}\to 0)[p_{j}]\] for some \(p_{1},p_{2},\ldots,p_{m}\in\mathbb{Z}\). More generally, we simply say that a complex is _elementary_ if there exists some finite set of modules \(\{M_{1},\ldots,M_{n}\}\) for which it is \(\{M_{1},\ldots,M_{n}\}\)-elementary. \(\lrcorner\) Note that the definition of elementary complexes extends immediately to complexes of sheaves of \(\Theta_{X}\)-modules. **Definition 3.3.2**.: Given two elementary complexes, we define _the elementary morphism_ between them to be the (unique) morphism given by the "maximal" direct sum of elementary identity morphisms. That is, if \(E\) is \(\{M_{1},\ldots,M_{n}\}\)-elementary and \(E^{\prime}\) is \(\{M^{\prime}_{1},\ldots,M^{\prime}_{n^{\prime}}\}\)-elementary \(E^{\prime}\), then we can write \[E=\bigoplus_{j=1}^{m}\epsilon^{i_{j}}_{p_{j}}\quad\text{and}\quad E^{\prime}= \bigoplus_{j=1}^{m^{\prime}}\epsilon^{i^{\prime}_{j}}_{p^{\prime}_{j}}\] and the elementary morphism from \(E\) to \(E^{\prime}\), which we denote by \(E\dashrightarrow E^{\prime}\), is then defined to be \[\bigoplus_{j}\operatorname{id}_{\epsilon^{i_{j}}_{p_{j}}}:E\dashrightarrow E ^{\prime}\] where the sum is taken over all \(j\) such that \(\epsilon^{i_{j}}_{p_{j}}=\epsilon^{i^{\prime}_{j}}_{p^{\prime}_{j}}\). In the dg-category of complexes of modules, we define _the elementary morphism of degree \(k\)_ to be exactly the elementary morphism defined above when \(k=0\), and exactly the zero map when \(k\neq 0\). Concretely then, the elementary morphism is either zero or the inclusion into a direct sum (and thus, in particularly nice cases, the identity map). By construction, elementary complexes are acyclic. Morally, we can think of them as the algebraic analogue of _contractible_ spaces. **Lemma 3.3.3**.: _Taking the direct sum with an elementary complex induces a quasi-isomorphism. More explicitly, if \(C^{\star}\) is a complex of modules, and \(E=(M\stackrel{{\mathrm{id}}}{{\longrightarrow}}M)[0]\) is an elementary complex, then the inclusion_ \[i\colon C^{\star}\hookrightarrow C^{\star}\oplus E^{\star}\] _is a chain homotopy equivalence (and thus, in particular, a quasi-isomorphism). 
The quasi-inverse is given by the projection_ \[p\colon C^{\star}\oplus E^{\star}\twoheadrightarrow C^{\star}.\] Proof.: One composition \(p\circ i\colon C^{\star}\to C^{\star}\) is the identity on the nose. The other composition \(i\circ p\colon C^{\star}\oplus E^{\star}\to C^{\star}\oplus E^{\star}\) is homotopic to the identity, as witnessed by the homotopy that is zero in all degrees except for in degree \(0\), where it is taken to be \((0,-\mathrm{id}_{M})\), i.e. We now give a fundamental definition, which will be used in constructing two of the three simplicial presheaves in which we are interested. **Definition 3.3.4**.: A _GTT-labelling of \(\Delta[p]\) by \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U)]\)_ (or simply a _GTT-labelling of \(\Delta[p]\)_) consists of a labelling of \(\Delta[p]_{\mathrm{pair}}\) (Definition 2.6.1) subject to some conditions. More precisely, it consists of the following data: 1. To each \(0\)-cell \[(\sigma,\sigma)\quad\longleftrightarrow\quad\{i_{0}<\ldots<i_{k}\}\subseteq \{i_{0}<\ldots<i_{k}\}\subseteq[p]\] we assign a \(k\)-simplex of \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U)]\), i.e. bounded complexes of finite-rank free \(\mathcal{O}_{U}\)-modules \[C_{i_{0}}(\sigma),C_{i_{1}}(\sigma),\ldots,C_{i_{k}}(\sigma)\in\mathsf{Free}(U)\] along with, for all non-empty subsets \(J=\{j_{0}<\ldots<j_{\ell}\}\subseteq\{i_{0}<\ldots<i_{k}\}\), morphisms \[\varphi_{J}(\sigma)\!\in\operatorname{Hom}_{\mathsf{Free}(U)}^{1-\ell}\big{(}C_ {j_{\ell}}(\sigma),C_{j_{0}}(\sigma)\big{)}\] such that * if \(|J|=1\), then \(\varphi_{J}(\sigma)\) is the differential on \(C_{J}(\sigma)\); * if \(|J|=2\), then \(\varphi_{J}(\sigma)\colon C_{j_{1}}(\sigma)\!\to C_{j_{0}}(\sigma)\) is a chain map; * if \(|J|\geqslant 3\), then \[\partial\varphi_{J}(\sigma)=\sum_{m=1}^{\ell-1}(-1)^{m-1}\varphi_{J\setminus\{ j_{m}\}}(\sigma)+\sum_{m=1}^{\ell-1}(-1)^{\ell(m-1)+1}\varphi_{\{j_{m}<\ldots<j_{k}\}} (\sigma)\circ\varphi_{\{j_{0}<\ldots<j_{m}\}}(\sigma).\] * To each \((k-\ell)\)-cell \[(\tau,\sigma)\quad\longleftarrow\quad(j_{0}<\ldots<j_{\ell})\subset\{i_{0}< \ldots<i_{k}\}\subseteq[p]\] (where \(0\leqslant l<k\)) we assign an \((\ell+1)\)-tuple of objects \[\Big{(}C_{j_{m}}^{\perp\sigma}(\tau)\in\mathsf{Free}(U)\Big{)}_{0\leqslant m \leqslant\ell}\] where each \(C_{j_{m}}^{\perp\sigma}(\tau)\) is elementary.18 We refer to the \(C_{j_{m}}^{\perp\sigma}(\tau)\) as the _elementary (orthogonal) complements_. Footnote 18: See Remark 3.3.6. This data is subject to the conditions that, for any \((k-\ell)\)-cell \[(\tau,\sigma)\quad\longleftrightarrow\quad\{j_{0}<\ldots<j_{\ell}\}\subset \{i_{0}<\ldots<i_{k}\}\subseteq[p]\] (where \(0\leqslant l<k\)) the following are satisfied: * There is a direct-sum decomposition \[\theta_{j_{m}}^{\perp\sigma}(\tau)\colon C_{j_{m}}(\tau)\oplus C_{j_{m}}^{ \perp\sigma}(\tau)\stackrel{{\cong}}{{\rightharpoonup}}C_{j_{m}} (\sigma)\] for all \(0\leqslant m\leqslant\ell\). We refer to the isomorphism \(\theta_{j_{m}}^{\perp\sigma}(\tau)\) as the \(\tau\)_-trivialisation of \(C_{j_{m}}(\sigma)\)_. 
* For any non-empty subset \(K\subseteq\{j_{0}<\ldots<j_{\ell}\}\) with \(|K|\geqslant 2\), the diagram commutes, where the \(\hookrightarrow\) are the inclusions and the \(\twoheadrightarrow\) are the projections of the direct sum, the dashed arrow on the far right is the elementary morphism19 of degree \((2-|K|)\), and we write \(\varphi_{K}(\sigma)_{\tau}\) to mean the composition Footnote 19: Recall Definition 3.3.2: for \(|K|\neq 2\), this is zero; for \(|K|=2\) (i.e. for \(K=\{j_{a}<j_{b}\}\)) this is a sum of identity maps. In the latter case, although \(C_{j_{a}}^{\perp\sigma}(\tau)\) is elementary in \(C_{j_{a}}(\sigma)\), and \(C_{j_{b}}^{\perp\sigma}(\tau)\) is elementary in \(C_{j_{b}}(\sigma)\), both \(C_{j_{a}}(\sigma)\) and \(C_{j_{a}}(\sigma)\) consist of free modules over the same ring, namely \(\mathcal{O}(U)\), and so the elementary morphism will “often” (i.e. in practice, when the GTT-labelling arises from the twisting cochain constructed from a coherent sheaf) be non-zero. \[C_{\mathrm{ver}_{|K|}K}(\tau)\oplus C_{\mathrm{ver}_{|K|}K}^{\perp\sigma}(\tau )\xrightarrow{\theta_{\mathrm{ver}_{|K|}K}^{\perp\sigma}(\tau)}C_{\mathrm{ver} _{|K|}K}(\sigma)\xrightarrow{\varphi_{K}(\sigma)}C_{\mathrm{ver}_{0}K}(\sigma )\xrightarrow{\theta_{\mathrm{ver}_{0}K}^{\perp\sigma}(\tau)^{-1}}C_{\mathrm{ ver}_{0}K}(\tau)\oplus C_{\mathrm{ver}_{0}K}^{\perp\sigma}(\tau)\] which we refer to as the \(\tau\)_-trivialisation of \(\varphi_{K}(\sigma)\)_._ **Remark 3.3.5**.: There are some important comments to make concerning Definition 3.3.4, which hopefully elucidate the rather opaque specificities. Firstly, the direct-sum decomposition in condition (ii) is "strict", i.e. the morphism \(C_{i_{m}}(\tau)\hookrightarrow C_{i_{m}}(\sigma)\) is exactly the inclusion into the direct sum, and not just some arbitrary monomorphism (and similarly for the projection \(\twoheadrightarrow\) out of the direct sum). Another way of expressing this condition would be to ask for the \(\varphi_{K}\) to induce a morphism of short exact sequences where now the middle vertical arrow is \(\varphi_{K}(\sigma)\) instead of \(\varphi_{K}(\sigma)_{\tau}\), and the horizontal arrows contain the composition with the \(\theta_{j_{m}}^{\perp\sigma}(\tau)\). The moral reason for this condition is that we want for the homotopies on higher-dimensional faces to restrict down to agree exactly with those already present on the lower-dimensional faces, and to also restrict down to be exactly the elementary morphism on the elementary orthogonal complements. Alternatively, writing morphisms between direct sums in block matrix form, this condition says that, in the \(\tau\)-trivialisation, \[\varphi_{K}(\sigma)=\begin{pmatrix}\varphi_{K}(\tau)&*\\ 0&e\end{pmatrix}\] where \(e\) is the elementary morphism of degree \((2-|K|)\), and \(*\) is some arbitrary morphism \(C_{\mathrm{ver}_{|K|}K}^{\perp\sigma}(\tau)\to C_{\mathrm{ver}_{0}K}(\tau)\). Secondly, it is tempting to try to include the \(|K|=1\) case in condition (ii), since this would seem to express the fact that the isomorphisms \(\theta_{j_{m}}^{\perp\sigma}(\tau)\) from condition (i) do indeed commute with the differentials, allowing us to weaken condition (i) to simply ask for _degree-wise_ isomorphisms of the complexes. 
However, this isn't quite so simple, since we do not want the differential \(\varphi_{K}(\sigma)\) (in the case \(|K|=1\)) to simply be upper triangular (in the block-matrix point of view described above) with respect to the differential \(\varphi_{K}(\tau)\), but instead diagonal: the differential on \(C_{j_{m}}(\sigma)\) should be exactly the direct sum of the differential of \(C_{j_{m}}(\tau)\) with the differential of \(C_{j_{m}}^{\perp\sigma}(\tau)\). Finally, note that we could remove the need for the data of the elementary complements entirely (thus giving only the data of \(k\)-simplices labelling the _vertices_ of \(\Delta[p]_{\mathrm{pair}}\)) and rephrase condition (i) entirely to require two things: that \(C_{j_{m}}(\tau)\hookrightarrow C_{j_{m}}(\sigma)\) for all \(0\leq m\leq\ell\); and that the cokernel of this morphism be elementary (and thus free, implying that the short exact sequence splits). This definition sounds much more concise, and could maybe even be expressed without reference to the pair subdivision at all, but instead as some sort of totalisation. But for the purposes of explicit calculation, we need a specific choice of cokernel and isomorphism with the direct-sum decomposition for each pair of faces \(\tau\subset\sigma\), and these all need to be coherent with one another, whence the definition we give. In summary, there is some matter of taste in how one chooses to phrase this definition, and we have opted for the one that seems closest to that found in [17], but we do not think that this is necessarily the most succinct one possible. Indeed, finding a better definition would potentially allow us to prove the full generalisation of Corollary 5.1.2. \(\lrcorner\) **Remark 3.3.6**.: In [11, SS1.4], Green specifies the modules with respect to which the orthogonal complements in Definition 3.3.4 are elementary; the restatement of this definition in [17, Green's Theorem 1], as well as the very definition of simplicial twisting cochains _loc. cit._, makes no reference to these modules. In practice, this makes no difference: the important fact is that the complexes are elementary with respect to _something_, since being elementary implies being homotopically zero, irrespective of the choice of modules. Furthermore, being elementary ensures that one obtains a _compatible sequence of connections_ ([11, SS4.5]). We _could_ say something more precise about how these complexes are elementary: one can ask for those corresponding to inclusions \(\{i\}\subset\{i<j\}\) to be \(\{C_{i}(i<j),C_{j}(i<j)\}\)-elementary, and for all complements labelling higher-dimensional cells to be elementary with respect to both these and the modules constituting the target complexes (as is the case in the proof of Lemma 5.1.1, for example). \(\lrcorner\) There is a particular case of Definition 3.3.4 which merits its own name. **Definition 3.3.7**.: If a GTT-labelling of \(\Delta[p]\) by \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U)]\) is such that the \(\varphi_{J}(\sigma)\) are zero for all \(|J|\geq 3\) then we call it a _GTT-\(1\)-labelling_. \(\lrcorner\) **Remark 3.3.8**.: By Lemma 2.5.6, asking for \(\varphi_{J}(\sigma)\) to be zero for all \(|J|\geq 3\) in Definition 3.3.7 is equivalent to asking that they lie inside the Kan sub-complex \([\mathcal{N}\mathsf{Free}(U)]\hookrightarrow[\mathcal{N}^{\mathrm{dg}} \mathsf{Free}(U)]\). In particular, the \(\varphi_{J}(\sigma)\) for \(|J|=2\) are then isomorphisms, i.e. 
the \(k\)-simplex assigned to any \(0\)-cell \(\sigma\) in \(\Delta[p]_{\mathrm{pair}}\) is of the form \[C_{i_{0}}(\sigma)\cong C_{i_{1}}(\sigma)\cong\ldots\cong C_{i_{k}}(\sigma)\] with \(\varphi_{i_{j}<i_{j+1}}(\sigma)\) being the isomorphism \(C_{i_{j+1}}(\sigma)\to C_{i_{j}}(\sigma)\). Because of this, we may also refer to a GTT-\(1\)-labelling as a _GTT-labelling by_ \([\mathcal{N}\mathsf{Free}(U)]\). \(\lrcorner\) **Definition 3.3.9**.: Define \[\mathcal{G}\mathit{reen}(U)_{p}\coloneqq\left\{\text{GTT-}1\text{-labellings of }\Delta[p]\right\}\] for any ringed space \((U,\mathcal{O}_{U})\) and any integer \(p\geq 0\). \(\lrcorner\) **Lemma 3.3.10**.: _The sets \(\mathcal{G}\mathit{reen}(U)_{p}\) assemble into a simplicial set \(\mathcal{G}\mathit{reen}(U)\)._ Proof.: The face maps are given by restricting a GTT-\(1\)-labelling of \(\Delta[p]\) along the corresponding face inclusion \(\Delta[p-1]\hookrightarrow\Delta[p]\), so the real work is in constructing the degeneracy maps. The degeneracy map \(s_{p}^{i}\) repeats the vertex \(i\) of \(\Delta[p]\), giving two adjacent vertices that we denote by \(i\) and \(i^{\prime}\) in \(\Delta[p+1]\), and so we must extend a given GTT-\(1\)-labelling of \(\Delta[p]\) to one of \(\Delta[p+1]\). We start with the \(0\)-cells of \(\Delta[p+1]_{\mathrm{pair}}\). Any simplex of \(\Delta[p+1]\) that does not contain both \(i\) and \(i^{\prime}\) is identified (by replacing \(i^{\prime}\) with \(i\)) with a simplex of \(\Delta[p]\), and we label the corresponding vertex of \(\Delta[p+1]_{\mathrm{pair}}\) with whatever already labels that simplex; in particular, if \(C\) is the complex labelling the vertex corresponding to \(\{i\}\), then we also label the vertex corresponding to \(\{i^{\prime}\}\) with \(C\), and we label the vertex corresponding to the \(1\)-simplex \(\{i<i^{\prime}\}\) with the identity morphism \(C\stackrel{{\mathrm{id}}}{{\leftarrow}}C\). Then, for any \(n\)-simplex \(\sigma\) in \(\Delta[p+1]\) of the form \(\{j_{0}<\ldots<i<i^{\prime}<\ldots<j_{n-2}\}\) with \(n\geq 2\), the vertex in \(\Delta[p+1]_{\mathrm{pair}}\) corresponding to the \((n-1)\)-simplex \(\{j_{0}<\ldots<i<\ldots<j_{n-2}\}\) is already labelled with some \[C_{j_{0}}\leftarrow\ldots\gets C_{i}\leftarrow\ldots\gets C_{j_{n-2}}\] and so we label the vertex in \(\Delta[p+1]_{\mathrm{pair}}\) corresponding to \(\sigma\) with \[C_{j_{0}}\leftarrow\ldots\gets C_{i}\stackrel{{\mathrm{id}} }{{\leftarrow}}C_{i}\leftarrow\ldots\gets C_{j_{n-2}}.\] Now for the \(n\)-cells of \(\Delta[p+1]_{\mathrm{pair}}\) for \(n\geq 1\), which correspond to codimension-\(n\) inclusions \(\tau\subset\sigma\) of simplices in \(\Delta[p+1]\). If \(\sigma\) contains \(i^{\prime}\) but _does not contain_ \(i\), then we label the corresponding cell in \(\Delta[p+1]_{\mathrm{pair}}\) with whatever labels the already-labelled cell corresponding to the pair \((\widetilde{\tau},\widetilde{\sigma})\) where we replace \(i^{\prime}\) by \(i\). Otherwise, if \(i^{\prime}\not\in\tau\) but \(i^{\prime}\in\sigma\), then we know that the vertex corresponding to \(\sigma\) is labelled in such a way that the face corresponding to \(\tau\) of the element of the nerve is exactly what labels the vertex corresponding to \(\tau\), and so we can simply label \((\tau,\sigma)\) with all zero objects.
Finally, if \(i^{\prime}\in\tau\), then \(i^{\prime}\in\sigma\), and we label the cell \((\tau,\sigma)\) in \(\Delta[p+1]_{\mathrm{pair}}\) with whatever already labels the cell \((\widetilde{\tau},\widetilde{\sigma})\), where \(\widetilde{\tau}\coloneqq\tau\setminus\{i^{\prime}\}\) and \(\widetilde{\sigma}\coloneqq\sigma\setminus\{i^{\prime}\}\), with one complement corresponding to \(i\) duplicated for \(i^{\prime}\), i.e. \[C_{j}^{\perp\sigma}(\tau)\coloneqq\begin{cases}C_{j}^{\perp\widetilde{\sigma} }(\widetilde{\tau})&\text{if }j\neq i^{\prime};\\ C_{i}^{\perp\widetilde{\sigma}}(\widetilde{\tau})&\text{if }j=i^{\prime}.\end{cases}\] It remains only to check that the conditions of Definition 3.3.4 are satisfied by this new labelling. By construction, we only need to check conditions (i) and (ii) for the \(n\)-cells that contain both \(i\) and \(i^{\prime}\). If \(i^{\prime}\not\in\tau\) but \(i^{\prime}\in\sigma\), then the elementary complements are all zero, and so both conditions hold immediately; if \(i^{\prime}\in\tau\) then the elementary complements are exactly those that come from the original GTT-labelling of \(\Delta[p]\), and so the conditions hold by hypothesis. **Lemma 3.3.11**.: _The assignment \((U,\mathcal{O}_{U})\mapsto\mathcal{G}\mathit{reen}(U)\) defines a simplicial presheaf on \(\mathsf{RingSp}_{\mathrm{conn}}\)._ Proof.: Again, the majority of this proof is identical to the proof of Lemma 3.1.3, and the only thing that we need to prove here is that the GTT-\(1\)-labelling conditions are preserved by \(f^{*}\). But this is immediate, since the only conditions of Definition 3.3.7 are that certain direct sum decompositions exist and that two squares commute, and \(f^{*}\) commutes with direct sums (since it is a left adjoint) and sends commutative squares to commutative squares (since it is a functor). ### Simplicial twisting cochains **Definition 3.4.1**.: Define \[\mathcal{ST}\mathit{wist}(U)_{p}\coloneqq\left\{\text{GTT-labellings of }\Delta[p]\right\}\] for any ringed space \((U,\mathcal{O}_{U})\) and any integer \(p\geq 0\). Figure 3.3.ii. The image of a \(1\)-simplex in \(\mathcal{G}\mathit{reen}(U)\) under the degeneracy map \(s_{1}^{1}\), given by placing the \(1\)-simplex along the \(\{0<1\}\) edge and the \(\{0<2\}\) edge, and then filling in the rest with homotopically trivial data. For space, we "inflate" the triangle into a hexagon. **Lemma 3.4.2**.: _The assignment \((U,\mathcal{O}_{U})\mapsto\mathcal{ST}\mathit{wist}(U)\) defines a simplicial presheaf on \(\mathsf{RingSp}_{\mathsf{conn}}\)._ Proof.: The fact that \(\mathcal{ST}\mathit{wist}(U)\) has the structure of a simplicial set follows almost exactly as in the proof of Lemma 3.3.10. The only real difference is in constructing the degeneracy maps, since we need to label vertices of \(\Delta[p+1]_{\mathrm{pair}}\) with elements of the dg-nerve instead of the regular nerve. However, since all the faces of any element of the blown-up nerve (Definition 2.4.3) commute on the nose, we can enrich to obtain an element of the (blown-up) dg-nerve by simply adding identity homotopies everywhere necessary.
In other words, the degeneracy maps actually land in the subset \(\mathcal{G}\mathit{reen}(U)_{p+1}\subset\mathcal{ST}\mathit{wist}(U)_{p+1}\). Showing functoriality is then exactly the same as in Lemma 3.3.11, since the higher conditions necessary to be a full GTT-labelling (rather than just a GTT-1-labelling) just posit that yet more squares commute, and \(f^{*}\) sends commutative squares to commutative squares. **Remark 3.4.3**.: Just as \(\mathcal{G}\mathit{reen}\) was defined exactly to recover the definition of complexes of "simplicial vector bundles" (satisfying the conditions arising from Green's resolution) given in [11, §1], we have defined \(\mathcal{ST}\mathit{wist}\) exactly to recover the definition of simplicial twisting cochains given in [11, §3], as we will prove in Theorem 4.1.1. We know, by Lemma 2.5.6, how the nerve sits inside the dg-nerve (and similarly for their maximal Kan complexes), and this corresponds exactly to the explanation that "_complexes of simplicial vector bundles are specific examples of simplicial twisting cochains_" given in [11, p. 269]: 1. the fact that the \(f_{[0<1]}\) are isomorphisms (Lemma 2.5.6 (ii)) means that \[E^{*}_{\sigma,\alpha}\ \text{is independent of}\ \alpha\] since all the complexes are isomorphic; and 2. the fact that the \(f_{I}\) are zero for \(|I|\geq 3\) (Lemma 2.5.6 (i)) means that \[{}^{\sigma}\mathrm{a}^{k,1-k}=0\ \text{for}\ k>1.\] This suggests the alternative nomenclature "_dg-Green complex_" to mean "simplicial twisting cochain"; conversely, we might suggest "_homotopy-truncated simplicial twisting cochain_" to mean "Green complex". ## 4 Complex-analytic examples To justify our interest in the three simplicial presheaves \(\mathcal{T}\mathit{wist}\), \(\mathcal{G}\mathit{reen}\), and \(\mathcal{ST}\mathit{wist}\), we now show that, in the setting of complex-analytic manifolds, we recover well-known objects (and, in the case of \(\mathcal{T}\mathit{wist}\), well-known paths between them). This means that we now turn our study towards the Cech totalisation of these presheaves in the case where we have a good Stein cover of a connected complex-analytic manifold \(X\), with \(\mathcal{O}_{X}\) the sheaf of holomorphic functions. After this section, we will return to the more general study of the three simplicial presheaves before Cech totalisation. ### Points in all three **Theorem 4.1.1**.: _Let \(\underline{X}=(X,\mathscr{U})\), with \(X\) a connected complex-analytic manifold with the structure sheaf \(\mathcal{O}_{X}\) of holomorphic functions, and \(\mathscr{U}\) a Stein cover.20 Then_ Footnote 20: For the statement of the theorem we do not necessarily need the cover to be Stein, but any open cover can always be refined to a Stein cover (and, more precisely, one whose intersections are also all Stein) so there is no loss of generality in assuming this, and this will be necessary if one wants to talk about coherent analytic sheaves. 1. \(\operatorname{Tot}^{0}[\mathscr{N}\operatorname{Free}(\check{\mathscr{N}} \mathscr{U}_{\star})]\) _is the set of bounded complexes of locally free sheaves_21 _on_ \((X,\mathscr{U})\)_;_ Footnote 21: Since here \((X,\mathcal{O}_{X})\) is _locally_ ringed, these are exactly the strictly perfect complexes, as mentioned in Remark 2.9.2. 2. \(\operatorname{Tot}^{0}\mathcal{T}\mathit{wist}(\check{\mathscr{N}}\mathscr{U }_{\star})\) _is the set of twisting cochains_ [11, §3] _on_ \((X,\mathscr{U})\)_;_ 3.
\(\operatorname{Tot}^{0}\mathcal{G}\mathit{reen}(\check{\mathscr{N}}\mathscr{U }_{\star})\) _is the set of complexes of "simplicial vector bundles"_ [11, §1] _satisfying the conditions necessary for them to be a Green complex_ [10, §23] _on_ \((X,\mathscr{U})\)_;_ 4. \(\operatorname{Tot}^{0}\mathcal{ST}\mathit{wist}(\check{\mathscr{N}}\mathscr{U }_{\star})\) _is the set of simplicial twisting cochains_ [11, §3] _on_ \((X,\mathscr{U})\)_._ Proof.: The proof of these statements is generally nothing more than "by construction" -- the simplicial presheaves \(\mathcal{G}\mathit{reen}\), \(\mathcal{T}\mathit{wist}\), and \(\mathcal{ST}\mathit{wist}\) were defined exactly so that these results would hold. Here we will assume some familiarity with Tot calculations; we refer the interested reader to Appendix A or Appendix B.1 for examples of worked calculations with more detail, or to [11, Appendix B] for a more general formal discussion. 1. Since \(\mathcal{O}(\coprod_{\alpha}U_{\alpha})\cong\prod_{\alpha}\mathcal{O}(U_{ \alpha})\), a free \(\mathcal{O}(\coprod_{\alpha}U_{\alpha})\)-module is exactly the data of a free \(\mathcal{O}(U_{\alpha})\)-module for all \(\alpha\). Since the nerve and the core functor are both right adjoints, their composition preserves all limits. This means that \[[\mathscr{N}\operatorname{Free}(\coprod_{\alpha}U_{\alpha})]\cong\prod _{\alpha}[\mathscr{N}\operatorname{Free}(U_{\alpha})]\] Thus the data of a \(0\)-simplex in \(\operatorname{Tot}[\mathscr{N}\operatorname{Free}(\check{\mathscr{N}} \mathscr{U}_{\star})]\) is exactly a cohesive choice of * \(E^{\star}_{\alpha}\in[\mathscr{N}\operatorname{Free}(U_{\alpha})]_{0}= \operatorname{Free}(U_{\alpha})\) for all \(U_{\alpha}\in\mathscr{U}\); and * \(\varphi_{\alpha_{0}\dots\alpha_{p}}\in[\mathscr{N}\operatorname{Free}(U_{\alpha_{0} \dots\alpha_{p}})]_{p}\) for all \(U_{\alpha_{0}\dots\alpha_{p}}\) where "cohesive" means that, in particular, the endpoints of \(\varphi_{\alpha\beta}\) are exactly \(E^{\star}_{\alpha}\) and \(E^{\star}_{\beta}\), and the boundary of \(\varphi_{\alpha\beta\gamma}\) consists exactly of \(\varphi_{\alpha\beta}\), \(\varphi_{\beta\gamma}\), and \(\varphi_{\alpha\gamma}\); there are analogous conditions coming from the degeneracy maps. The \(1\)-dimensional face conditions first tell us that \(\varphi_{\alpha\beta}\colon E^{\star}_{\beta}|_{U_{\alpha\beta}}\to E^{\star}_{\alpha}|_{U_{\alpha\beta}}\); the fact that \(\varphi_{\alpha\beta}\) is an element of the maximal Kan complex of the nerve tells us that it is an isomorphism. The \(2\)-dimensional face conditions tell us that \(\varphi_{\beta\gamma}\circ\varphi_{\alpha\beta}=\varphi_{\alpha\gamma}\). Since the ordinary nerve is generated by its \(1\)-simplices, and since composition of chain maps is strictly associative, the higher-dimensional face conditions impose no further restriction nor give any further information; in fact, this also tells us that the \(\varphi_{\alpha_{0}\ldots\alpha_{p}}\) are simply sequences of \(p\)-many composable morphisms, and the face conditions tell us that they are given exactly by the \(\varphi_{\alpha_{i}\alpha_{i+1}}\). The degeneracy conditions tell us that \(\varphi_{\alpha\alpha}=\operatorname{id}_{E^{\star}_{\alpha}}\) since the only degenerate simplices in the nerve are those given by identity morphisms.
In summary then, we have bounded complexes \(E^{\star}_{\alpha}\) of free sheaves for each \(U_{\alpha}\), along with isomorphisms \(\varphi_{\alpha\beta}\colon E^{\star}_{\beta}|_{U_{\alpha\beta}}\to E^{\star}_{\alpha}|_{U_{\alpha\beta}}\) on each overlap \(U_{\alpha\beta}\), satisfying the cocycle condition \(\varphi_{\beta\gamma}\circ\varphi_{\alpha\beta}=\varphi_{\alpha\gamma}\) and the degeneracy condition \(\varphi_{\alpha\alpha}=\operatorname{id}_{E^{\star}_{\alpha}}\). This is exactly the data of a bounded complex of locally free sheaves. 2. Morally, the argument here starts identically to that of Theorem 4.1.1 (a), in that the data of a \(0\)-simplex in \(\operatorname{Tot}[\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(\check{\mathcal{N}}\mathcal{U}_{\star})]\) is exactly a cohesive choice of * \(E_{\alpha}^{*}\in[\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(U_{\alpha})]_{0}\) for all \(U_{\alpha}\in\mathcal{U}\); and * \(\varphi_{\alpha_{0}\ldots\alpha_{p}}\in[\mathcal{N}^{\operatorname{dg}}\operatorname{ Free}(U_{\alpha_{0}\ldots\alpha_{p}})]_{p}\) for all \(U_{\alpha_{0}\ldots\alpha_{p}}\) but since we are working with the \(\operatorname{dg}\)-nerve instead of the ordinary nerve, all \(\varphi_{\alpha_{0}\ldots\alpha_{p}}\) will be relevant, not just the \(\varphi_{\alpha\beta}\). As before, the \(1\)-dimensional face conditions tell us that \(\varphi_{\alpha\beta}\colon E_{\beta}^{*}|_{U_{\alpha\beta}}\to E_{\alpha}^{*}|_{U_{\alpha\beta}}\); the fact that \(\varphi_{\alpha\beta}\) is an element of the maximal Kan complex of the \(\operatorname{dg}\)-nerve tells us that it is a _quasi_-isomorphism. But now it is _not_ the case that these satisfy the cocycle condition: the \(2\)-dimensional face conditions tell us that \(\varphi_{\alpha\beta\gamma}\colon\varphi_{\alpha\gamma}\Rightarrow\varphi_{\beta\gamma}\circ \varphi_{\alpha\beta}\) is a (possibly non-trivial) chain homotopy. As a specific example of this, \(\varphi_{\alpha\beta\alpha}\) and \(\varphi_{\beta\alpha\beta}\) are the chain homotopies witnessing that \(\varphi_{\alpha\beta}\) is a quasi-isomorphism with quasi-inverse \(\varphi_{\beta\alpha}\). More generally, the \(p\)-dimensional face conditions describe homotopies controlling the \((p-1)\)-dimensional elements. As described in [10, (8.2.7)], this seems to correspond exactly to the data of a twisting cochain -- an idea that this theorem now makes precise. To prove this formally, note that \(\mathcal{D}=\operatorname{Free}\) satisfies the hypothesis of Theorem 2.5.17, in that it turns disjoint unions of spaces into products22, and so elements of \(\operatorname{Tot}^{0}\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(\check{\mathcal{N}}\mathcal{U}_{\star})\) are exactly Maurer-Cartan elements in the corresponding Cech algebra. Then we simply appeal to Corollary 2.5.8, which tells us that elements of \(\operatorname{Tot}^{0}[\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(\check{\mathcal{N}}\mathcal{U}_{\star})]\) are exactly those Maurer-Cartan elements whose \(1\)-simplices are quasi-isomorphisms, and these are exactly twisting cochains. To better explain this, it may be helpful to spell out the details of what happens in degree \(2\).
By definition, a \(0\)-simplex of \(\operatorname{Tot}[\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(\check{\mathcal{N}}\mathcal{U}_{\star})]\) is exactly a morphism of cosimplicial simplicial sets
\[\Delta[\star]\to[\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(\check{\mathcal{N}}\mathcal{U}_{\star})]\]
which is exactly a functorial collection of morphisms of simplicial sets
\[\Delta[0]\to\mathsf{Free}(\check{\mathcal{N}}\mathcal{U}_{0})\cong\prod_{\alpha}\mathsf{Free}(U_{\alpha})\]
\[\Delta[1]\to[\mathcal{N}^{\mathrm{dg}}\,\mathsf{Free}(\check{\mathcal{N}}\mathcal{U}_{1})]\cong\prod_{\alpha\beta}[\mathcal{N}^{\mathrm{dg}}\,\mathsf{Free}(U_{\alpha\beta})]\]
\[\Delta[2]\to[\mathcal{N}^{\mathrm{dg}}\,\mathsf{Free}(\check{\mathcal{N}}\mathcal{U}_{2})]\cong\prod_{\alpha\beta\gamma}[\mathcal{N}^{\mathrm{dg}}\,\mathsf{Free}(U_{\alpha\beta\gamma})]\]
and so on, where we again appeal to the fact that both the dg-nerve and the core functor are right adjoints (as explained in Appendix B.1).

Footnote 22: An isomorphism of ringed spaces \(f\colon(X,\mathcal{O}_{X})\cong(Y,\mathcal{O}_{Y})\) induces an equivalence of categories \(f^{*}\colon\operatorname{Free}(Y)\simeq\operatorname{Free}(X)\), since \(f^{*}\circ(f^{-1})^{*}\cong(f^{-1}\circ f)^{*}\cong\operatorname{id}_{\operatorname{Free}(X)}\), and this is bijective on objects since we are working only with free modules, which are uniquely determined by their rank, and this is preserved by pullback: \(f^{*}\mathcal{O}_{Y}^{r}\cong\mathcal{O}_{X}^{r}\). Then we just need to show that \(\mathcal{O}_{(-)}\) turns disjoint unions into products, but in the case of the sheaf of holomorphic functions this follows from the fact that it is representable, with \(\mathcal{O}_{(-)}\cong\operatorname{Hom}(-,\mathbb{C})\).

This data forms a Čech bialgebra, generalising Definition 2.5.10 to a presheaf of dg-categories instead of a single fixed dg-category, though this relies on the fact that the simplicial set in question is exactly the Čech nerve (see Definition 2.5.14 for the precise definition). In particular, we can consider the bidegree-\((2,0)\) parts of a \(0\)-simplex of this totalisation. The image of \(\Delta[2]\) will give us one: some homotopy \(f_{xyz}\) filling the triangle with edges \(f_{xy}\colon y\to x\), \(f_{yz}\colon z\to y\), and \(f_{xz}\colon z\to x\), i.e. a map \(f_{xyz}\colon z\to x\) of degree \(-1\) such that \(\partial f_{xyz}=f_{xz}-f_{yz}\circ f_{xy}\), defined over some \(U_{\alpha\beta\gamma}\), with \(x\in\mathsf{Free}(U_{\alpha})\), \(y\in\mathsf{Free}(U_{\beta})\), and \(z\in\mathsf{Free}(U_{\gamma})\). There are two other bidegree-\((2,0)\) terms that arise when we apply the two differentials: writing \(f=(f_{yz},f_{xz},f_{xy})\), they are \(\hat{\delta}f\) and \(f\cdot f\), defined by
\[(\hat{\delta}f)_{xyz}=f_{xz}\]
\[(f\cdot f)_{xyz}=f_{xy}\cdot f_{yz}\coloneqq f_{yz}\circ f_{xy}\]
(and here we are using functoriality of this collection of morphisms of simplicial sets in the definition of \(\hat{\delta}\), for example). But then the defining equation for \(\partial f\) tells us exactly that the Maurer-Cartan equation is satisfied in bidegree \((2,0)\), since
\[\underbrace{(\partial-\hat{\delta})f}_{=:\,\mathrm{D}f}+f\cdot f=0.\]
One important thing to mention is that the definition of twisting cochain that we recover from this construction is indeed the "classical" one -- we expand upon this comment in Remark 4.2.1. 3. This will follow immediately from Theorem 4.1.1 (d) combined with Remark 3.4.3. 4. 
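For orientation, the analogous unravelling in the neighbouring bidegrees can be sketched as follows; this is only a sketch, with signs depending on the chosen conventions, and with the bidegree-\((0,1)\) component (the local differentials, written \(d_{x}\) here purely for illustration) regarded as part of the Maurer-Cartan element, as in the classical definition of a twisting cochain:
\[(0,2)\colon\quad d_{x}\circ d_{x}=0,\qquad\text{each local complex is indeed a complex;}\]
\[(1,1)\colon\quad \partial f_{xy}=0,\qquad\text{each }f_{xy}\text{ is a chain map, which is automatic for a 1-simplex of the dg-nerve;}\]
\[(2,0)\colon\quad \partial f_{xyz}=f_{xz}-f_{yz}\circ f_{xy},\qquad\text{the homotopy condition derived above,}\]
with the bidegree-\((p,2-p)\) components for \(p\geq 3\) expressing the higher coherences among the chain homotopies.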
This proof consists solely of unravelling definitions; we spell out the full details in Appendix B.1. The intuition is that, in a GTT-labelling, after Cech totalisation, the _vertices_ of \(\Delta[p]_{\mathrm{pair}}\) are labelled exactly with a twisting cochain; the _edges_ (and higher-dimensional cells) are labelled with the extra data of the elementary orthogonal complements; the extra conditions describe how different twisting cochains \({}^{\sigma}\mathfrak{a}\) and \({}^{\tau}\mathfrak{a}\) (in the notation of [17]) fit together in a compatible manner. The natural question to ask next is what the \(\operatorname{Tot}^{1}\) analogue of Theorem 4.1.1 is, or to ask what the \(\pi_{0}\) of these three simplicial presheaves are. In the rest of this section, we give some partial and some complete answers to these questions. Of course, it would be satisfying to have results pertaining to the higher-dimensional simplices (or homotopy groups) as well, but this lies beyond the scope of this paper. ### Edges in twisting cochains Twisting cochains have been well studied, especially in the language of dg-categories. This means that there are, for example, definitions of morphisms and weak equivalences of twisting cochains that can be found elsewhere. We can now study how our construction of twisting cochains via \(\operatorname{Tot}\mathcal{T}\mathit{wist}(\mathcal{N}\mathcal{U}_{\star})\) relates to these other approaches. **Remark 4.2.1**.: In earlier papers on twisting cochains (such as [14]), the condition imposed on the \(\alpha\alpha\) term of a twisting cochain was that it be _equal_ to the identity map of the \(E_{\alpha}\) complex. However, as pointed out in [15], if one wants to construct a pre-triangulated dg-category of twisting cochains, then, in order for mapping cones to exist, this condition needs to be weakened to only asking that the \(\alpha\alpha\) term be _chain homotopic_ to the identity map.23 Footnote 23: In [15] the terminology _twisted (perfect) complex_ is used instead, to differentiate between the classical and the more homotopical definitions. In this present article, we say e.g. “morphism of twisting cochains” to mean “morphism of twisted perfect complexes”, i.e. considering twisting cochains as a full subcategory of twisted perfect complexes. Throughout the literature in general, the use of twisted/twisting cochain/complex is not entirely consistent. One might ask whether we could modify the construction somehow in order to rectify this discrepancy, obtaining the more modern definition. But note that degenerate \(1\)-simplices are defined entirely by an _object_, i.e. by a degeneracy map applied to a \(0\)-simplex. Inherent to the dg- (or, more generally, enriched-) categorical framework is the fact that we only enrich the _morphisms_; the objects remain purely set-theoretical, and so cannot hope to describe any sort of homotopical information. This point aside, more specific to the question in hand, we note that it is not too surprising that we are not able to recover the pre-triangulated structure on twisting cochains. Indeed, we are not trying to construct from them a dg-category, but instead a _space_, and so rather than expecting something resembling a pre-triangulated structure, one should instead look towards studying the higher \(\pi_{n}\) of the resulting space to see what information it contains. 
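Concretely, as a sketch in the notation of Theorem 4.1.1 (b), and with the homotopy \(h_{\alpha}\) introduced here purely for illustration, the weakening discussed in Remark 4.2.1 replaces the strict unit condition by a homotopical one:
\[\varphi_{\alpha\alpha}=\operatorname{id}_{E^{\star}_{\alpha}}\qquad\rightsquigarrow\qquad\varphi_{\alpha\alpha}-\operatorname{id}_{E^{\star}_{\alpha}}=\partial h_{\alpha}\ \text{ for some degree-}(-1)\text{ map }h_{\alpha}\colon E^{\star}_{\alpha}\to E^{\star}_{\alpha},\]
i.e. \(\varphi_{\alpha\alpha}\) is required only to be chain homotopic to the identity.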
\(\lrcorner\) Since \(\mathcal{T}\mathit{wist}\) is a presheaf of Kan complexes by definition, Lemma 2.8.2 says that \(\operatorname{Tot}\mathcal{T}\mathit{wist}(\mathcal{N}\mathcal{U}_{\star})\) is a Kan complex. Morally, this means that we should expect its \(1\)-simplices to describe _invertible_ morphisms. We can make this observation formal by showing that the \(1\)-simplices are not merely morphisms of twisting cochains, but instead _weak equivalences_. **Theorem 4.2.2**.: _Let \(E=(E^{\star},\varphi)\) and \(F=(F^{\star},\psi)\) be points in \(\operatorname{Tot}^{0}\mathcal{T}\mathit{wist}(\mathcal{N}\mathcal{U}_{\star})\), where \((X,\mathcal{U})\) is as in Theorem 4.1.1. Then a path \(\lambda\in\operatorname{Tot}^{1}\mathcal{T}\mathit{wist}(\mathcal{N}\mathcal{U }_{\star})\) from \(E\) to \(F\) gives a weak equivalence of twisting cochains \((F^{\star},\psi)\mathrel{\widetilde{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}}(E^{ \star},\varphi)\) in the sense of [16, Definition 2.27]._ Proof.: This is a purely combinatorial calculation using an explicit description of the non-degenerate simplices of \(\Delta[p]\times\Delta[1]\). We give the full details (including recalling the definition of a weak equivalence of twisting cochains) in Appendix B.2. As an immediate consequence, we can use the language of Section 2.7 (since \(\mathcal{T}\)_wist_ is a presheaf of Kan complexes, and is thus in particular such that \(2\)-horns fill) to say the following. **Corollary 4.2.3**.: _Let \((X,\mathcal{U})\) be as in Theorem 4.1.1. Then \(\pi_{0}\operatorname{Tot}\mathcal{T}\)wist(\(\tilde{\mathcal{N}}\mathcal{U}_{\star}\)) consists of twisting cochains modulo weak equivalence of twisting cochains._ **Remark 4.2.4**.: One important thing to note here is what we mean when we say that a path "gives" a weak equivalence in the statement of Theorem 4.2.2. The data of a \(1\)-simplex in the totalisation is actually slightly more than just a weak equivalence, since it also contains factorisations of the \(\lambda_{a_{0}\dots a_{p}}\) terms for \(p\geq 1\). For example, in Appendix B.2 we _define_ the \(\lambda_{a\beta}\) term of the purported weak equivalence associated to the path \(\lambda\) as the composition of two homotopies, and in doing so thus forget about the common (co)domain through which this factors. This is because the product \(\Delta[1]\times\Delta[1]\) is not simply a square, but instead a square with diagonal, and it is exactly the data associated to the diagonal that we forget when constructing a weak equivalence of twisting cochains. As a consequence, there are multiple \(1\)-simplices in \(\operatorname{Tot}\mathcal{T}\)wist(\(\tilde{\mathcal{N}}\mathcal{U}_{\star}\)) which describe the same weak equivalence of twisting cochains. Rather than quotienting the space by some equivalence relation in order to remedy this (harmless) situation, it seems more desirable to better understand how the dg-nerve has an inherent _cubical_ structure to it, by relating its combinatorics to that of the pair subdivision; the extra data floating around in the \(1\)-simplices in the totalisation comes exactly from the fact that we are working simplicially instead of cubically. **Remark 4.2.5**.: The study of twisting cochains in the dg-category setting is well explained in papers such as [16, 1], where it is shown how they relate to the dg-category of perfect complexes. 
More precisely, in the language of this present paper, [1, Proposition 4.9] says that, for any ringed space \((X,\mathcal{O}_{X})\) with locally finite open cover \(\mathcal{U}\), the dg-category of twisting cochains is exactly the Cech totalisation of the presheaf that sends a ringed space to the dg-category of strictly perfect complexes. One can consider Theorem 4.1.1 (b) and Theorem 4.2.2 as a sort of space-theoretic analogue of these dg-categorical results: Lemma 2.8.6 tells us that if we applied \([\mathcal{N}^{\operatorname{dg}}(-)]\) to the dg-category described as the homotopy limit in [1, Proposition 4.9] then we would obtain the same space as given by our construction here. For example, if \(M^{\star}\) and \(N^{\star}\) are perfect complexes of quasi-coherent (or, in particular, coherent) \(\mathcal{O}_{X}\)-modules on a connected complex-analytic manifold \(X\), and \(\mathcal{U}\) is a locally finite Stein cover, then [16, Proposition 3.21] tells us that we can resolve \(M^{\star}\) and \(N^{\star}\) by twisting cochains \(E\) and \(F\) (respectively), in that their sheafifications [16, Definition 3.1] satisfy \(\mathcal{S}(E)\simeq M^{\star}\) and \(\mathcal{S}(F)\simeq N^{\star}\). Now, if \(M^{\star}\) and \(N^{\star}\) are quasi-isomorphic, then \(\mathcal{S}(E)\simeq M^{\star}\simeq N^{\star}\simeq\mathcal{S}(F)\); we can then apply [11, Corollary 3.10] to show that \(E\) and \(F\) are weakly equivalent twisting cochains. But, by Theorem 4.2.2, this says that if two complexes are connected by a quasi-isomorphism, then we can resolve them by twisting cochains that are connected by a path in \(\pi_{0}\operatorname{Tot}\mathcal{J}\mathit{wist}(\tilde{\mathcal{N}}\mathcal{U}_ {\star})\). In the language of Kan complexes and their homotopy theory, this is exactly saying that there is an isomorphism between \(\pi_{0}\operatorname{Tot}\mathcal{J}\mathit{wist}(\tilde{\mathcal{N}}\mathcal{U} _{\star})\) and the \(\pi_{0}\) of the space whose points are perfect complexes of quasi-coherent sheaves and whose paths are quasi-isomorphisms between them, induced by the sheafification and resolution functors \(\mathcal{S}\) and \(\mathcal{T}\) from [11]. \(\lrcorner\) **Remark 4.2.6**.: Given Remark 4.2.5, it seems natural to try to construct a space of perfect complexes using the same method of Cech totalisation, but the structure of perfect complexes does not really allow for this. Note that, when describing the \(1\)-simplices in the Cech totalisation, the degree-\(0\) term is exactly a \(1\)-simplex in the same simplicial set as the degree-\(1\) terms that constitute the \(0\)-simplices (see e.g. Appendix A.2 or Appendix B.2). For example, in the case of \(\operatorname{Tot}[\mathcal{N}\operatorname{Free}(\tilde{\mathcal{N}}\mathcal{U} _{\star})]\), which is the space of complexes of locally free sheaves (though for simplicity here we will consider a complex concentrated in degree \(0\), i.e. a single locally free sheaf), the degree-\(0\) part \(\lambda_{\alpha}\colon E_{\alpha}\to F_{\alpha}\) of the \(1\)-simplices is of the same type as the degree-\(1\) part \(g_{\alpha\beta}\) of the \(0\)-simplices: the local data of morphisms of locally free sheaves and the transition functions are both \(1\)-simplices in \([\mathcal{N}\operatorname{Free}(U)]\) (for some \(U\)), i.e. isomorphisms of free sheaves. 
But in the case of perfect complexes, the gluing data (playing the role of the transition functions) consists of isomorphisms (since a perfect complex is a single global object), whereas the morphisms consist of \(\mathit{quasi}\)-isomorphisms -- this mismatch means that we cannot describe a space whose points are perfect complexes and whose paths are quasi-isomorphisms via Cech totalisation. In terms of morphisms, this space looks like it arises from the Cech totalisation of some \([\mathcal{N}^{\operatorname{dg}}\mathcal{O}(-)]\); morally, in terms of objects, it sits somewhere between \([\mathcal{N}\mathcal{O}(-)]\) and \([\mathcal{N}^{\operatorname{dg}}\mathcal{O}(-)]\). However, this is not a defect of perfect complexes. Indeed, one reason that perfect complexes are so useful is the fact that they are global objects with homotopically weak local properties (i.e. they are locally quasi-isomorphic to complexes of locally free sheaves). This relates to the other key example that we cannot express in this language of Cech totalisation: the space of quasi-coherent sheaves. \(\lrcorner\) ### Edges in Green complexes When it comes to the \(1\)-simplices in \(\operatorname{Tot}\mathcal{G}\mathit{reen}(\tilde{\mathcal{N}}\mathcal{U}_{ \star})\), there is not really a classical notion of morphism against which we can compare them. The only definition that we know of is implicitly in [11, Definition 3.2], where the category of Green complexes is defined as a full subcategory of the homotopical category of cartesian locally free sheaves on the Cech nerve, meaning that the morphisms are simply chain maps, and the weak equivalences are quasi-isomorphisms. It is with this structure that the \((\infty,1)\)-category of Green complexes is shown _loc. cit._ to be equivalent to that of locally coherent sheaves (after taking a homotopy colimit over refinements of covers). The structure that we obtain here, from \(\operatorname{Tot}\mathcal{G}\mathit{reen}(\tilde{\mathcal{N}}\mathcal{U}_{ \star})\), is very different: since we know that all \(2\)-horns fill in \(\mathcal{G}\mathit{reen}\) (Corollary 5.1.2), we know that the resulting morphisms will all be invertible. But the difference is much more profound than this: as explained in Remark 4.2.6, the \(1\)-simplices in a Cech totalisation are of the same type as the gluing data for objects, so we should not expect to recover something like chain maps or quasi-isomorphisms, but instead something built from GTT-labellings. However, it could be the case that these two notions happen to coincide, in that one can be strictified to recover the other, as we allude to in Section 6. Following the general description of 1-simplices in the totalisation (cf. Appendix B.2), we can start to describe those in. Given Green complexes and in, the degree-0 component of a path consists of 1-simplices in. In other words, the degree-0 component of the morphism is exactly a "common generalisation" of the degree-0 parts of a Green complex (the complexes of free sheaves over each open subset), namely isomorphic complexes and such that are elementary. In terms of the homotopy theory, this says that a necessary condition for there to exist a morphism between and is that they are built from resolutions of "the same" coherent sheaf (or complexes of coherent sheaves, cf. Section 5.4), since and are forced to have the same homology. 
The degree-\(1\) component will then be some common generalisation of the two degree-\(1\) parts, along with two \(2\)-simplices mediating between this diagonal, the degree-\(0\) parts of the morphism, and the corresponding parts of the Green complexes (cf. Figure B.2.i); but the nerve is \(2\)-coskeletal, so it is only the diagonal \(1\)-simplex that actually provides any new data. Continuing on for higher simplices, we see that a path will provide us with these common generalisations in every degree. But since the labels of a Green complex already contain common generalisations of the labels over all sub-faces, composing with these allows us to find common generalisations of _all_ the components in _all_ degrees, and so it seems likely that we can invert the morphism. This (weakly) suggests the following conjecture, which is related to Section 5.1.

**Conjecture 4.3.1**.: _The simplicial presheaf \(\mathcal{G}\mathit{reen}\) is a presheaf of Kan complexes._

Following on from Remark 4.2.5, note that the existence of a \(1\)-simplex connecting two Green complexes built from resolutions of two quasi-isomorphic complexes would follow from Conjecture 5.4.1. We discuss the relation between \(\mathcal{G}\mathit{reen}\) and complexes of coherent sheaves further in Section 6.

### Edges in simplicial twisting cochains

The \(1\)-simplices in \(\operatorname{Tot}s\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\) can be described in exactly the same way as those in Section 4.3, but with two key differences: the isomorphisms become quasi-isomorphisms; and the structure is no longer \(2\)-coskeletal, so there is higher homotopy data than simply a collection of \(1\)-simplices (or of "common generalisations", in the language of Section 4.3). Since [14] does not define any notion of morphism of simplicial twisting cochains, we have nothing against which to compare the \(1\)-simplices in \(\operatorname{Tot}s\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\), beyond those arising from the inclusions of Section 5.2 and from Green's resolution (Section 5.4); the latter goes in the opposite direction from the one in the composite morphism from [17], but we conjecture (Conjecture 5.3.5) that these two morphisms are homotopy inverse to one another. This is an important generalisation in two ways: first of all, this is _independent of the geometry_, since we are working _before_ Čech totalisation; secondly, this contains information about _higher structure_, since we are not just working with the \(0\)-simplices, but instead the entire space.

### Horn filling conditions

By construction, \(\mathcal{T}\mathit{wist}\) is a presheaf of Kan complexes, and thus globally fibrant. This means that it satisfies many nice properties. 
For example, its simplicial homotopy groups are naturally isomorphic to its topological homotopy groups; and its Čech totalisation is a Kan complex (Lemma 2.8.2), so we obtain a _space_ of twisting cochains. It is not so simple to show whether or not \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\) are presheaves of Kan complexes. Indeed, in this paper, we only provide a partial result in this direction: we show that all \(2\)-horns in \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\) fill. Of course, it would be desirable to fully generalise this result and show that _all_ horns fill, but with the current definition of GTT-labelling this is not particularly easy (though we do still conjecture that \(\mathcal{G}\mathit{reen}\) at least is a presheaf of Kan complexes in Conjecture 4.3.1). But, at the very least, showing that \(2\)-horns fill allows us to apply Lemma 2.7.2, which says that the simplicial \(\pi_{0}\) is isomorphic to the topological \(\pi_{0}\). After proving that \(2\)-horns fill, we will remark on how one might try to generalise the proof to higher dimensions, and also on how the outer horns seem to be actually no more difficult to fill than the inner ones.

**Lemma 5.1.1**.: _Any \(2\)-horn in \(s\mathcal{T}\mathit{wist}(U)\) can be filled, for any \((U,\mathcal{O}_{U})\in\mathrm{RingSp}_{\mathrm{conn}}\)._

We give here a proof that the outer horn \(\Lambda_{0}[2]\) lifts, since the same argument can be applied to the other outer horn \(\Lambda_{2}[2]\), and the argument for the inner horn \(\Lambda_{1}[2]\) is strictly simpler (as we explain in the proof).

Proof.: Consider an arbitrary \(2\)-horn \(\Lambda_{0}[2]\to s\mathcal{T}\mathit{wist}(U)\) as in Figure 5.1.i. To fill this horn, we need, in particular, to construct a \(2\)-simplex in \([\mathcal{N}^{\mathrm{dg}}\operatorname{Free}(U)]\) to label the central vertex \(\{0<1<2\}\), and we already have some restrictions on what the vertices of this \(2\)-simplex must look like in terms of the complexes that must sit inside them with elementary orthogonal complements. Labelling the three vertices of this \(2\)-simplex as \(C_{0}(012)\), \(C_{1}(012)\), and \(C_{2}(012)\), we know that, for example, \(C_{0}(012)\) must be such that both \(C_{0}(01)\) and \(C_{0}(02)\) sit inside it as direct summands with elementary direct-sum complements. But we can write both of these as a direct sum of \(C_{0}(0)\) with something elementary, since by assumption
\[C_{0}(01)\cong C_{0}(0)\oplus C_{0}^{\perp 01}(0)\]
\[C_{0}(02)\cong C_{0}(0)\oplus C_{0}^{\perp 02}(0).\]
This suggests that the "minimal" possibility for \(C_{0}(012)\) is
\[C_{0}(012)\coloneqq C_{0}(0)\oplus C_{0}^{\perp 01}(0)\oplus C_{0}^{\perp 02}(0)\]
where we simply add both elementary complements, since then we can mediate between \(C_{0}(01)\) and \(C_{0}(02)\) to try to construct the rest of the \(2\)-simplex in a compatible way. Another condition of being a GTT-labelling (Definition 3.3.4) is that whatever quasi-isomorphisms \(\varphi_{ij}(012)\colon C_{j}(012)\to C_{i}(012)\) constitute the \(2\)-simplex that we construct must be compatible extensions of the already given \(\varphi_{ij}(ij)\). Again, this suggests that the "minimal" possibility for \(\varphi_{ij}(012)\) is simply to take the direct sum of \(\varphi_{ij}(ij)\) with the identity on the remaining elementary component. 
Putting this all together, the given \(2\)-horn in \(s\mathcal{T}\mathit{wist}(U)\) shown in Figure 5.1.i induces a \(2\)-horn in \([\mathcal{N}^{\mathrm{dg}}\operatorname{Free}(U)]\), as shown in Figure 5.1.ii.

Figure 5.1.i. An arbitrary outer \(2\)-horn in \(s\mathcal{T}\mathit{wist}(U)\), which we want to fill. For brevity, we write \(ij\) instead of \(\{i<j\}\).

Figure 5.1.ii. The "minimal" \(2\)-horn in \([\mathcal{N}^{\mathrm{dg}}\operatorname{Free}(U)]\) induced by the \(2\)-horn in \(s\mathcal{T}\mathit{wist}(U)\) from Figure 5.1.i. We write \(\widetilde{\varphi}_{ij}(ij)\) to denote the conjugation of \(\varphi_{ij}(ij)\) by the isomorphisms \(\theta_{k}^{\perp ij}(k)\) between the direct sum \(C_{k}(k)\oplus C_{k}^{\perp ij}(k)\) and \(C_{k}(ij)\) for \(k\in\{i,j\}\), i.e. \(\widetilde{\varphi}_{ij}(ij)\coloneqq\theta_{j}^{\perp ij}(j)^{-1}\varphi_{ij}(ij)\theta_{i}^{\perp ij}(i)\).

But now we have a \(2\)-horn in a Kan complex (by definition, since we are labelling by the maximal Kan complex of the dg-nerve), and so we know that it can be filled. Here we can actually give a slightly more concrete description of how this works: we can apply the generalised Whitehead theorem to invert the quasi-isomorphism \(\widetilde{\varphi}_{01}(01)\oplus\operatorname{id}\) and obtain a chain homotopy witnessing this quasi-inverse; pre-composition with \(\operatorname{id}\oplus\widetilde{\varphi}_{02}(02)\) then gives the desired quasi-isomorphism \(\varphi_{12}(012)\) along with the chain homotopy \(\varphi_{012}(012)\). This gives us a \(2\)-simplex as shown in Figure 5.1.iii. Note here that, if we were filling an inner horn \(\Lambda_{1}[2]\to s\mathcal{T}\mathit{wist}(U)\) then we would not need to apply this argument, since we could simply compose the two existing quasi-isomorphisms and let \(\varphi_{012}(012)\) be the identity chain homotopy.

Using this \(2\)-simplex from Figure 5.1.iii to label the vertex \(\{0<1<2\}\), all that remains to label is the collection of vertices and edges between \(\{1\}\) and \(\{2\}\), and then the three \(2\)-cells. But starting from this \(2\)-simplex we basically have no choices to make in how to label the lower-dimensional components. The only exception is the choice of \(C_{1}(12)\) and \(C_{2}(12)\), since we could conceivably set \(C_{1}(12)\) to be any of \(C_{1}(1)\), \(C_{1}(1)\oplus C_{1}^{\perp 01}(1)\), or \(C_{1}(1)\oplus C_{1}^{\perp 01}(1)\oplus C_{0}^{\perp 02}(0)\). However, since we want \(\varphi_{12}(12)\) to be an _isomorphism_ in the case where we restrict to \(\mathcal{G}\mathit{reen}\) instead of \(s\mathcal{T}\mathit{wist}\) (Corollary 5.1.2), the only option that makes sense is the last one. This gives us the complete labelling as shown in Figure 5.1.iv.

Now we need to check that this labelling does indeed satisfy the GTT-labelling conditions of Definition 3.3.4 in order for it to be a \(2\)-simplex in \(s\mathcal{T}\mathit{wist}(U)\). First of all, the data "type-checks": the vertices are labelled with simplices in \([\mathcal{N}^{\operatorname{dg}}\operatorname{Free}(U)]\) of the right degree, and the higher-dimensional cells are labelled with elementary complexes.24

Footnote 24: Recall Remark 3.3.6, and note how here the elementary complements are not completely arbitrary, but instead built up from the \(C_{k}^{\perp ij}(ij)\). 
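Since only the captions of Figures 5.1.i–5.1.iv are reproduced here, the following display records a sketch of the induced \(2\)-horn and its filler, assuming the "minimal" direct-sum decompositions described above (the ordering of the summands is chosen purely for readability):
\[C_{0}(012)=C_{0}(0)\oplus C_{0}^{\perp 01}(0)\oplus C_{0}^{\perp 02}(0),\]
\[C_{1}(012)=C_{1}(1)\oplus C_{1}^{\perp 01}(1)\oplus C_{0}^{\perp 02}(0),\qquad C_{2}(012)=C_{2}(2)\oplus C_{2}^{\perp 02}(2)\oplus C_{0}^{\perp 01}(0),\]
\[\varphi_{01}(012)=\widetilde{\varphi}_{01}(01)\oplus\operatorname{id},\qquad\varphi_{02}(012)=\operatorname{id}\oplus\widetilde{\varphi}_{02}(02),\]
with \(\varphi_{12}(012)\) and the chain homotopy \(\varphi_{012}(012)\) then produced by the generalised Whitehead theorem, as in the proof.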
Next we need to check that we do indeed have direct-sum decompositions
\[\theta_{j_{m}}^{\perp\sigma}(\tau)\colon C_{j_{m}}(\tau)\oplus C_{j_{m}}^{\perp\sigma}(\tau)\cong C_{j_{m}}(\sigma)\]
for all \(\tau\subset\sigma\), but this is clear by construction.

Figure 5.1.iii. The filling of the induced \(2\)-horn from Figure 5.1.ii given by applying the generalised Whitehead theorem to invert \(\widetilde{\varphi}_{01}(01)\oplus\operatorname{id}\) and then pre-composing with \(\operatorname{id}\oplus\widetilde{\varphi}_{02}(02)\) to obtain \(\varphi_{12}(012)\) and \(\varphi_{012}(012)\).

Finally, we need certain diagrams to commute, and there are three cases to check, corresponding to the three subsets \(\{0<1\}\), \(\{0<2\}\), and \(\{1<2\}\) of \(\{0<1<2\}\). The diagram for \(\{0<1\}\) is the one from Definition 3.3.4, where the \(\hookrightarrow\) (resp. \(\twoheadrightarrow\)) are now not just the inclusion (resp. projection) of the direct sum, but instead the post-composition (resp. pre-composition) with the corresponding \(\theta_{k}^{\perp ij}(k)^{-1}\) (resp. \(\theta_{k}^{\perp ij}(k)\)); but this expresses exactly the same condition as the diagram in Definition 3.3.4, since \(\theta\) is an isomorphism. And this clearly commutes by definition: the left-hand square commutes since \(\widetilde{\varphi}_{01}(01)\) is defined (in Figure 5.1.ii) exactly as the conjugation of \(\varphi_{01}(01)\) by the corresponding \(\theta_{k}^{\perp 01}(k)\), and the right-hand square commutes since the two horizontal arrows are identical. The same argument applies for the diagram corresponding to \(\{0<2\}\). For \(\{1<2\}\), the commutativity is even more immediate: the corresponding diagram commutes by definition, since the \(\hookrightarrow\) are simply identity morphisms, and \(\varphi_{12}(12)\coloneqq\varphi_{12}(012)\).

Figure 5.1.iv. The filling of the \(2\)-horn from Figure 5.1.i, where we use the \(2\)-simplex from Figure 5.1.iii to label the central vertex. Note that the \(2\)-cells are labelled trivially, in that the two paths along their boundary (giving two choices of elementary complement) agree on the nose.

**Corollary 5.1.2**.: _Any \(2\)-horn in \(\mathcal{G}\mathit{reen}(U)\) can be filled, for any \((U,\mathcal{O}_{U})\in\mathrm{RingSp}_{\mathrm{conn}}\)._

Proof.: The construction in the proof of Lemma 5.1.1 restricts to \(\mathcal{G}\mathit{reen}(U)\): when the given horn is labelled by the ordinary nerve, the edges \(\varphi_{ij}(ij)\) are isomorphisms, and so the filler can be taken with \(\varphi_{12}(012)\) an isomorphism and all higher homotopies trivial.

### Inclusions

**Definition 5.2.1**.: The _inclusion of \(\mathcal{T}\mathit{wist}\) into \(s\mathcal{T}\mathit{wist}\)_ is the injective morphism
\[\mathcal{T}\mathit{wist}\hookrightarrow s\mathcal{T}\mathit{wist}\]
defined as follows: given a \(p\)-simplex \(\tau\) of \(\mathcal{T}\mathit{wist}(U)\), we label the central vertex \(([p],[p])\) of \(\Delta[p]_{\mathrm{pair}}\) with \(\tau\), use the face maps of \(\Delta[p]\) to label all other vertices, and finally label all edges with zero elementary orthogonal complements (so that the inclusions into the direct sums are exactly identity maps). By construction, all the conditions of Definition 3.3.4 are then trivially satisfied. For clarity, we spell this out explicitly in dimensions \(0\), \(1\), and \(2\). 
* A \(0\)-simplex of \(\mathcal{T}\mathit{wist}(U)\) is a complex \(A\) of \(\mathcal{O}_{U}\)-modules, which gives us a labelling of \(\Delta[0]_{\mathrm{pair}}=\{*\}\) by simply labelling the single vertex with this complex.
* A \(1\)-simplex \(\sigma\) of \(\mathcal{T}\mathit{wist}(U)\) is a quasi-isomorphism \(A\xleftarrow{\;\sim\;}B\) of complexes. We label the vertex of \(\Delta[1]_{\mathrm{pair}}\) corresponding to \(\{0<1\}\) with this element \(\sigma\), label \(\{0\}\) with \(A\), and \(\{1\}\) with \(B\). Both edges \(\{0\}\subset\{0<1\}\) and \(\{1\}\subset\{0<1\}\) are labelled with \(0\) (so the inclusion into the direct-sum decomposition is given by the corresponding identity map), giving us the required labelling of \(\Delta[1]_{\mathrm{pair}}\).
* A \(2\)-simplex \(\tau\) of \(\mathcal{T}\mathit{wist}(U)\) consists of three quasi-isomorphisms \(A\leftarrow B\), \(B\leftarrow C\), and \(A\leftarrow C\), along with a chain homotopy between \(A\leftarrow C\) and the composite \(A\leftarrow B\leftarrow C\). We label the vertex of \(\Delta[2]_{\mathrm{pair}}\) corresponding to \(\{0<1<2\}\) with \(\tau\), then label the vertex corresponding to \(\{i<j\}\) with the face \(\{i<j\}\) of \(\tau\), and finally the vertex corresponding to \(\{i\}\) with the vertex \(\{i\}\) of \(\tau\); as before, all edges are labelled with zero elementary orthogonal complements, giving us the required labelling of \(\Delta[2]_{\mathrm{pair}}\).

The fact that this defines a morphism follows from the fact that this construction commutes with the face and degeneracy maps in \(\mathcal{T}\mathit{wist}\) and \(s\mathcal{T}\mathit{wist}\).

**Definition 5.2.2**.: The _inclusion of \(\mathcal{G}\mathit{reen}\) into \(s\mathcal{T}\mathit{wist}\)_ is the injective morphism
\[\mathcal{G}\mathit{reen}\hookrightarrow s\mathcal{T}\mathit{wist}\]
induced by the inclusion of the nerve into the dg-nerve. In other words, any GTT-labelling of \(\Delta[p]\) by \([\mathcal{N}\operatorname{\mathsf{Free}}(U)]\) is a specific example of a GTT-labelling of \(\Delta[p]\) by \([\mathcal{N}^{\operatorname{dg}}\operatorname{\mathsf{Free}}(U)]\) where the \(\varphi_{J}\) are zero for \(|J|\geq 3\), by definition. \(\lrcorner\)

### Equivalences

For presheaves of Kan complexes, Lemma 2.8.3 tells us that weak equivalences are preserved by Čech totalisation. Just as it is easier to work with simplicial homotopy groups instead of the homotopy groups of the geometric realisation, it can be easier to work with globally fibrant simplicial presheaves directly instead of their Čech totalisations. If we already know a concrete description of \(\operatorname{Tot}\mathcal{F}(\check{\mathcal{N}}\mathcal{U}_{\star})\) and \(\operatorname{Tot}\mathcal{G}(\check{\mathcal{N}}\mathcal{U}_{\star})\) (as we do for \(\mathcal{T}\mathit{wist}\), \(\mathcal{G}\mathit{reen}\), and \(s\mathcal{T}\mathit{wist}\) on any complex-analytic \((X,\mathcal{U})\), thanks to Theorem 4.1.1), then to show that they are equivalent as spaces it suffices to show the stronger statement that \(\mathcal{F}\simeq\mathcal{G}\). Since we are unable to prove whether or not \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\) are indeed presheaves of Kan complexes, we cannot even work with their simplicial \(\pi_{n}\) for \(n\geq 1\), let alone hope to see the existence of, or obstruction to, an equivalence between \(\operatorname{Tot}\mathcal{G}\mathit{reen}(\check{\mathcal{N}}\mathcal{U}_{\star})\) and \(\operatorname{Tot}s\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\). However, since we know that \(2\)-horns fill (Lemma 5.1.1 and Corollary 5.1.2), their simplicial \(\pi_{0}\) are indeed well defined and calculate the topological \(\pi_{0}\) (Lemma 2.7.2). 
Using this, we can show that \(\pi_{0}\mathcal{T}\mathit{wist}\), \(\pi_{0}\mathcal{G}\mathit{reen}\), and \(\pi_{0}s\mathcal{T}\mathit{wist}\) are all isomorphic.

**Theorem 5.3.1**.: _The inclusion \(i\colon\mathcal{T}\mathit{wist}\hookrightarrow s\mathcal{T}\mathit{wist}\) induces an isomorphism_
\[i_{0}\colon\pi_{0}\mathcal{T}\mathit{wist}(U)\cong\pi_{0}s\mathcal{T}\mathit{wist}(U)\]
_for any \((U,\mathcal{O}_{U})\in\operatorname{RingSp}_{\mathsf{conn}}\)._

Proof.: Since \(\pi_{0}\) is merely a set, we simply need to show that \(i_{0}\) is a bijection. Firstly, note that it is surjective, since \(s\mathcal{T}\mathit{wist}(U)_{0}=\mathcal{T}\mathit{wist}(U)_{0}\). To prove that it is injective, we need to show that, if (the image under \(i\) of) two \(0\)-simplices of \(\mathcal{T}\mathit{wist}(U)\) are connected by some \(1\)-simplex in \(s\mathcal{T}\mathit{wist}(U)\), then they are already connected by some \(1\)-simplex in \(\mathcal{T}\mathit{wist}(U)\). So let \(A,B\in\mathcal{T}\mathit{wist}(U)_{0}\) be such that there exists some \(1\)-simplex
\[A\hookrightarrow A^{\prime}\xleftarrow{\;\sim\;}B^{\prime}\hookleftarrow B\]
in \(s\mathcal{T}\mathit{wist}(U)\) (where we omit the elementary orthogonal complements labelling the \(1\)-simplices since they do not play a role in this proof). Note that, by definition, the morphisms \(A\to A^{\prime}\) and \(B\to B^{\prime}\) are not just quasi-isomorphisms, but also have quasi-inverses (Lemma 3.3.3).25 Finding some \(1\)-simplex in \(\mathcal{T}\mathit{wist}(U)\) connecting \(A\) and \(B\) just means, by definition, finding some quasi-isomorphism between \(A\) and \(B\). But composing the inclusion \(B\hookrightarrow B^{\prime}\) with the quasi-isomorphism \(B^{\prime}\to A^{\prime}\) and then with a quasi-inverse \(A^{\prime}\to A\) of the inclusion \(A\hookrightarrow A^{\prime}\) (which exists by Lemma 3.3.3) gives exactly such a quasi-isomorphism, and thus the required \(1\)-simplex in \(\mathcal{T}\mathit{wist}(U)\).

**Remark 5.3.2**.: One particularly useful special case of Green's construction is given by looking at the Čech-degree-\(1\) part ([11, (8.3.6)] or [10, pp. 23-25]). This shows that any quasi-isomorphism of bounded complexes of free modules can be "strictified" to an isomorphism in the following sense: if \(A\xleftarrow{\;f\;}B\) is a quasi-isomorphism of bounded complexes of free modules, then there exist complexes \(\widetilde{A}\) and \(\widetilde{B}\) such that 
(i) \(\widetilde{A}=A\oplus E_{A}\), where \(E_{A}\) is a (bounded) \(B\)-elementary complex; (ii) \(\widetilde{B}=B\oplus E_{B}\), where \(E_{B}\) is a (bounded) \(A\)-elementary complex; (iii) \(\widetilde{A}\xleftarrow{\;\widetilde{f}\;}\widetilde{B}\) is an isomorphism; (iv) the restriction of \(\widetilde{f}\) to \(B\) is exactly \(A\xleftarrow{\;f\;}B\).

Note that (i) and (ii) imply, in particular, that \(A\simeq\widetilde{A}\) and \(B\simeq\widetilde{B}\) are chain homotopy equivalences. In fact, Green proves something stronger, even in Čech-degree \(1\). He shows that, if we further have coherent homotopies \(p_{i}\) and \(q_{i}\) for \(i\geq 1\) (so, for example, \(p_{1}\) and \(q_{1}\) are the homotopies witnessing that \(f\) has a quasi-inverse, say \(g\); \(p_{2}\) and \(q_{2}\) witness the failure of \(p_{1}\) and \(q_{1}\) to commute with \(f\) and \(g\); etc.), then this construction can still be applied, resulting in a strict isomorphism, with all higher homotopies \(p_{i}\) and \(q_{i}\) being strictly zero. \(\lrcorner\)

**Theorem 5.3.3**.: _The inclusion \(i\colon\mathcal{G}\mathit{reen}\hookrightarrow s\mathcal{T}\mathit{wist}\) induces an isomorphism_
\[i_{0}\colon\pi_{0}\mathcal{G}\mathit{reen}(U)\cong\pi_{0}s\mathcal{T}\mathit{wist}(U)\]
_for any \((U,\mathcal{O}_{U})\in\mathsf{RingSp}_{\mathrm{conn}}\)._

Proof.: Since \(\pi_{0}\) is merely a set, we simply need to show that \(i_{0}\) is a bijection. Firstly, note that it is surjective, since \(\mathcal{G}\mathit{reen}(U)_{0}=s\mathcal{T}\mathit{wist}(U)_{0}\). To prove that it is injective, we need to show that, if (the image under \(i\) of) two \(0\)-simplices of \(\mathcal{G}\mathit{reen}(U)\) are connected by some \(1\)-simplex in \(s\mathcal{T}\mathit{wist}(U)\), then they are already connected by some \(1\)-simplex in \(\mathcal{G}\mathit{reen}(U)\). So let \(A,B\in\mathcal{G}\mathit{reen}(U)_{0}\) be such that there exists some \(1\)-simplex
\[A\hookrightarrow A^{\prime}\xleftarrow{\;\sim\;}B^{\prime}\hookleftarrow B\]
in \(s\mathcal{T}\mathit{wist}(U)\), with \(A^{\prime}\cong A\oplus A^{\perp}\) and \(B^{\prime}\cong B\oplus B^{\perp}\) for elementary complexes \(A^{\perp}\) and \(B^{\perp}\). Since \(A^{\prime}\) and \(B^{\prime}\) are bounded complexes of free modules, the generalised Whitehead theorem tells us that the quasi-isomorphism \(A^{\prime}\xleftarrow{\;\sim\;}B^{\prime}\) is actually a chain homotopy equivalence, with chain homotopy inverse.26 But this then puts us in the setting of Remark 5.3.2, and so we obtain an isomorphism
\[\widetilde{A}\coloneqq A\oplus\widetilde{A}^{\perp}\cong B\oplus\widetilde{B}^{\perp}=:\widetilde{B}\]
where
\[\widetilde{A}^{\perp}\coloneqq A^{\perp}\oplus\Big(\bigoplus_{i\colon(B^{\prime})^{i}\neq 0}\big((B\oplus B^{\perp})^{i}\xrightarrow{\operatorname{id}}(B\oplus B^{\perp})^{i}\big)\Big)\]
\[\widetilde{B}^{\perp}\coloneqq B^{\perp}\oplus\Big(\bigoplus_{i\colon(A^{\prime})^{i}\neq 0}\big((A\oplus A^{\perp})^{i}\xrightarrow{\operatorname{id}}(A\oplus A^{\perp})^{i}\big)\Big)\]
and where the resulting isomorphism \(\widetilde{f}\colon\widetilde{B}\to\widetilde{A}\) is such that its restriction \(B^{\prime}\to A^{\prime}\) is exactly the initial quasi-isomorphism, but it is non-trivially modified in its other three components.

Footnote 26: To show this, we appeal to the classical abstract argument: we endow \(\mathsf{Free}(U)\) with the projective model structure; since the objects of \(\mathsf{Free}(U)\) are _bounded_ complexes, they are fibrant; since they are complexes of _free_ modules, they are also cofibrant; by the generalised Whitehead theorem [10, SSII, Theorem 1.10], every quasi-isomorphism in \(\mathsf{Free}(U)\) is a homotopy equivalence, and thus has a homotopy inverse. 
By definition, \(A^{\perp}\) is elementary in \(A^{\prime}\), and so \(\widetilde{A}^{\perp}\) is elementary in \(A^{\prime}\) and \(B^{\prime}\), since we simply take the direct sum of \(A^{\perp}\) with \(B^{\prime}\)-elementary complexes. So there are no further conditions to check: we have constructed a \(1\)-simplex in \(\mathcal{G}\mathit{reen}(U)\).

**Corollary 5.3.4**.: _Let \((X,\mathcal{O}_{X})\in\operatorname{RingSp}_{\operatorname{conn}}\) with cover \(\mathcal{U}\). Then_
\[\pi_{0}\operatorname{Tot}\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\cong\pi_{0}\operatorname{Tot}\mathcal{G}\mathit{reen}(\check{\mathcal{N}}\mathcal{U}_{\star})\cong\pi_{0}\operatorname{Tot}s\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star}).\]

Proof.: This is Lemma 2.8.3 applied to Theorem 5.3.1 and Theorem 5.3.3.

Knowing that \(\mathcal{T}\mathit{wist}\), \(\mathcal{G}\mathit{reen}\), and \(s\mathcal{T}\mathit{wist}\) all have equivalent \(\pi_{0}\) leads to the natural question: are all higher \(\pi_{n}\) equivalent as well? In other words, _are these simplicial presheaves globally weakly equivalent_? We do not claim to have an answer to this question, as we are largely hindered by the fact that we are unable to prove global fibrancy of \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\). If, however, one could prove global fibrancy, then it is not too difficult to show by hand that, for example, \(i_{1}\colon\pi_{1}\mathcal{G}\mathit{reen}(U)\to\pi_{1}s\mathcal{T}\mathit{wist}(U)\) is a surjection, and it seems possible that the method of proof could generalise to higher dimensions. We further discuss what the implications of these global weak equivalences would be in Section 6. For now, we provide a single conjecture.

**Conjecture 5.3.5**.: _The inclusion \(i\colon\mathcal{G}\mathit{reen}\hookrightarrow s\mathcal{T}\mathit{wist}\) induces a weak equivalence of simplicial presheaves. Furthermore, this weak equivalence is witnessed by a homotopy inverse, given by the construction of Conjecture 5.4.1._

Note that this conjecture assumes that both \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\) are globally fibrant, and is thus dependent on both Conjecture 4.3.1 and the statement (which we do not separately conjecture) that \(s\mathcal{T}\mathit{wist}\) is a presheaf of Kan complexes. The justification for Conjecture 5.3.5 is that one can try to show by hand that the induced map \(i_{n}\colon\pi_{n}\mathcal{G}\mathit{reen}(U)\to\pi_{n}s\mathcal{T}\mathit{wist}(U)\) is an isomorphism for each \(n\in\mathbb{N}\), and showing surjectivity amounts to applying Green's resolution (specifically the version described in [17, SS3]). This suggests that realising Green's resolution as a morphism \(s\mathcal{T}\mathit{wist}\to\mathcal{G}\mathit{reen}\) of simplicial presheaves would provide an explicit homotopy inverse to the inclusion \(\mathcal{G}\mathit{reen}\hookrightarrow s\mathcal{T}\mathit{wist}\). But the construction of this morphism is not so trivial, and requires certain modifications to the theory that we have developed, as we now explain. 
### Green's resolution We _would like_ to be able to say that Green's resolution as described in [17, SS3] defines a morphism of simplicial presheaves \[\mathcal{A}\mathcal{F}\mathit{wist}\to\mathcal{G}\mathit{eeen}\] and, indeed, this is the content of Conjecture 5.4.1. Although we are _almost_ able to construct this morphism, we will show how there is a technical problem that requires us to enrich our framework with the addition of _cyclic structure_ on our simplices. In the name of brevity and clarity, we are reluctant to introduce the necessary extra details here, and instead give a sketch of the idea and leave the formalism to appear in future work. By design, it is _not_ true that simplices of \(\mathcal{A}\mathcal{F}\mathit{wist}\) are exactly simplicial twisting cochains: to get such a result, we need to apply Cech totalisation, and we then recover simplicial twisting cochains as the \(0\)-simplices (Theorem 4.1.1 (d)). If, however, we apply a suitable change of notation, then the simplices of \(\mathcal{A}\mathcal{F}\mathit{wist}\) behave "sufficiently similar" to simplicial twisting cochains that we could try to directly apply the version of Green's construction given in [17, SS3], since they satisfy conditions (STC 1) to (STC 4) _loc. cit._, which is all that is necessary. Indeed, as we will see, the \(p\)-simplices of \(\mathcal{A}\mathcal{F}\mathit{wist}(U)\) are _local_ and _truncated_ simplicial twisting cochains: they live over a single space \(U\) instead of a cover, and their bidegree-\((k,1-k)\) terms are zero for \(k>p\). This is exactly why we can prove Theorem 4.1.1 (d): pulling back along the opposite of the Cech nerve lets us remove the adjective "local" by resolving \(U\) by a cover, and taking the totalisation (which is a way of computing the homotopy limit) lets us remove the adjective "truncated" by gluing together infinitely many \(p\)-simplices for all \(p\in\mathbb{N}\). However, there is one very important caveat hidden in this adjective "local" which is what prevents us from formally constructing this morphism, namely that our change of notation will only give us _ordered_ and _non-degenerate_ terms, as we will explain. Let us now spell out the formal details. Let \(\tau\in\mathcal{A}\mathcal{F}\mathit{wist}(U)_{p}\) be a \(p\)-simplex, which is exactly a GTT-labelling of \(\Delta[p]\) by \([\mathcal{N}^{\mathrm{dg}}\operatorname{Free}(U)]\). For \(\alpha\in[p]\) and \(\sigma\) a subset of \([p]\) containing \(\alpha\), set \[E^{*}_{\sigma,\alpha}\coloneqq C_{\alpha}(\sigma)\] and, for \(\tau\subseteq\sigma\) a sub-face also containing \(\alpha\), set \[E^{*}_{\sigma,\tau,\alpha}\coloneqq C^{\perp\sigma}_{\alpha}(\tau).\] For \(\sigma=\{\alpha_{0}<\ldots<\alpha_{k}\}\) and any non-empty subset \(J=\{\beta_{0}<\ldots<\beta_{\ell}\}\subseteq\sigma\), set \[{}^{\sigma}\alpha^{\ell,1-\ell}_{\beta_{0}\ldots\beta_{\ell}}\coloneqq\varphi _{J}(\sigma)\] and define \[{}^{\sigma}\alpha\coloneqq\sum_{i=0}^{k}{}^{\sigma}\alpha^{i,1-i}.\] The fact that \(\tau\) is a GTT-labelling (Definition 3.3.4) immediately tells us that this change of notation gives objects that satisfy conditions (STC 3) and (STC 4) in [17, SS3] (which posit, respectively, the existence of elementary orthogonal complements and that higher homotopies be upper triangular, as in Remark 3.3.5). It remains only to check three things: 1. 
that conditions (STC 1) and (STC 2) are also satisfied (where the former asks that each \(E^{*}_{\sigma,\alpha}\) be a free resolution of a given coherent sheaf, and the latter asks that the \({}^{\sigma}\) a satisfy the Maurer-Cartan equation); 2. that the result of the inductive construction defined there gives a \(p\)-simplex of \(\mathcal{G}^{\mathit{c}een}(U)\), i.e. that the codomain of the morphism thus constructed is indeed \(\mathcal{G}^{\mathit{c}een}\); and 3. that the construction is functorial, and thus defines a morphism of simplicial presheaves. The second would follow rather immediately from Remark 3.4.3, and the third would be a lengthy but uninspired explicit calculation; it is the first that poses a technical problem. Condition (STC 1) asks that the \(E^{*}_{\sigma,\alpha}\) be local resolutions of some given coherent sheaf \(S\) restricted to \(U_{\sigma}\). In particular, this implies that the \(E^{*}_{\sigma,\alpha}\) be _exact_ in all but the highest degree. However, this condition is not actually necessary for the construction of Green's resolution: what is important is that the construction does not modify the internal homology of the complexes, i.e. the resulting object still resolves the same coherent sheaf \(S\). Indeed, [10] deals with twisting cochains that are not exact -- in general, these correspond to resolutions of _complexes_ of coherent sheaves. As mentioned in Section 6, a better understanding of complexes of coherent (analytic) sheaves in one of the main motivations for this present work. Condition (STC 2) is where the real problem lies. This condition concerns two types of terms outside of those that arise from our tentative morphism: unordered ones, such as \(\mathfrak{a}^{0,1}_{\beta\alpha}\), and degenerate ones, such as \(\mathfrak{a}^{2,-1}_{\alpha\beta\alpha}\). It seems possible to recover the degenerate terms by applying degeneracy maps, but the unordered ones are simply extra information that is not given in our current framework. However, by replacing \(\Delta[p]\) with some "thicker" structures in our definitions, it seems that we could resolve this problem. Before formalising this, let us first motivate what we mean. If we wish to describe a morphism \(f\) between two objects \(x\) and \(y\), then we might label a 1-simplex accordingly: label \(\{0\}\) with \(x\), \(\{1\}\) with \(y\), and \(\{0<1\}\) with \(f\). Now suppose we wish to record the fact that \(f\) is actually invertible up to homotopy, such as in the case of a quasi-isomorphism between complexes of free modules, or a chain homotopy in general. Then it makes sense to actually introduce _another_ 1-cell connecting \(\{0\}\) and \(\{1\}\) with the opposite orientation, and to label it with \(f^{-1}\). But now we have a non-contractible space: we have constructed a copy of the circle \(S^{1}\). So to ensure that we have something homotopically equivalent to our original description, we might try introducing a 2-cell bounded by the two 1-simplices, filling in \(S^{1}\) with a disc to obtain \(D^{2}\). This is indeed now contractible, but what information does this 2-cell describe? Since it goes from one 1-cell to the other, it seems justifiable to label it with the data of the homotopy \(f^{-1}f\Rightarrow\mathrm{id}_{x}\). But now we have the same asymmetry as we did at the start, and we should introduce another 2-cell with the opposite orientation, labelled with \(\mathrm{id}_{y}\Rightarrow ff^{-1}\). 
Again, however, we have created a sphere, namely \(S^{2}\), and want to fill this with a 3-cell in order for it to remain contractible. Repeating this process indefinitely, labelling with higher and higher homotopical data, we see that to "really" describe the data of a morphism with homotopy inverse, we want to replace \(\Delta[1]\) by \(S^{\infty}\). More generally, this leads to the idea of replacing \(\Delta[p]\) by the nerve of the groupoid with \((p+1)\)-many objects and a unique (invertible) morphism between any two objects.

**Conjecture 5.4.1**.: _If we define \(\mathcal{G}\mathit{reen}^{\prime}\) and \(s\mathcal{T}\mathit{wist}^{\prime}\) by replacing \(\Delta[1]\) with \(S^{\infty}\), as described above (and analogously for higher simplices), after taking the pair subdivision in the construction of \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\) (respectively), then the construction of Green's resolution described in [17] defines a morphism of simplicial presheaves \(s\mathcal{T}\mathit{wist}\simeq s\mathcal{T}\mathit{wist}^{\prime}\to\mathcal{G}\mathit{reen}^{\prime}\simeq\mathcal{G}\mathit{reen}\)._

## 6 Future work

In this paper we have shown how to construct generalisations of \([\mathcal{N}\operatorname{Free}(U)]\), as motivated in Section 3.1 -- one version using the dg-nerve and one using a simplicial labelling construction. The objects resulting from the former can be thought of as abstract twisting cochains, and from the latter as abstract Green complexes; this is justified by Theorem 4.1.1, which tells us that we do indeed recover the corresponding classical objects when passing to the Čech totalisation in the holomorphic setting. We have shown that the connected components of these two simplicial presheaves, \(\mathcal{T}\mathit{wist}\) and \(\mathcal{G}\mathit{reen}\), are in bijection (Theorem 5.3.1 and Theorem 5.3.3), via the construction of their common generalisation \(s\mathcal{T}\mathit{wist}\), which can be thought of as the abstraction of simplicial twisting cochains. This leads us to possibly the largest unanswered question that we have raised: _What is the full description of the relationship between \(\mathcal{T}\mathit{wist}\), \(\mathcal{G}\mathit{reen}\), and \(s\mathcal{T}\mathit{wist}\) in terms of the homotopy theory of simplicial presheaves?_ We hope, as described in Conjecture 5.3.5, that \(\mathcal{G}\mathit{reen}\) and \(s\mathcal{T}\mathit{wist}\) can be shown to be weakly equivalent, using Green's resolution, as described in Conjecture 5.4.1. However, the question of whether or not \(\mathcal{T}\mathit{wist}\) and \(s\mathcal{T}\mathit{wist}\) are weakly equivalent is much more open; if it is indeed the case that \(\mathcal{G}\mathit{reen}\simeq s\mathcal{T}\mathit{wist}\), then a weak equivalence \(\mathcal{T}\mathit{wist}\simeq s\mathcal{T}\mathit{wist}\) would give us a weak equivalence between \(\mathcal{T}\mathit{wist}\) and \(\mathcal{G}\mathit{reen}\), and this would seem to have rather far-reaching implications, as we now explain. We know that weak equivalence of globally fibrant simplicial presheaves is preserved by Čech totalisation (Lemma 2.8.3), and so if \(\mathcal{T}\mathit{wist}\), \(\mathcal{G}\mathit{reen}\), and \(s\mathcal{T}\mathit{wist}\) are indeed all weakly equivalent _and also all globally fibrant_, then so too are their Čech totalisations. 
Although we do not prove (nor even claim) that this is indeed the case, we have seen a very weak version of this statement, namely Corollary 5.3.4, which says that \[\pi_{0}\operatorname{Tot}\mathcal{S}\)wist(\(\mathcal{S}\mathcal{U}_{\star}) \cong\pi_{0}\operatorname{Tot}\mathcal{G}\)reen(\(\mathcal{S}\mathcal{U}_{\star})\cong\pi_{0} \operatorname{Tot}\mathcal{S}\)wist(\(\mathcal{S}\mathcal{U}_{\star}).\] In the case where \((X,\mathcal{O}_{X})\) is a complex-analytic manifold with Stein cover \(\mathcal{U}\), this first equivalence tells us that twisting cochains up to weak equivalence are "the same as" Green complexes up to weak equivalence (whatever this might mean for Green complexes, cf. Section 4.3). The relation between twisting cochains and perfect complexes is rather well studied, with [16] showing that the dg-category of the former gives a dg-enhancement of the latter (see also [13]), and it is a classical fact that perfect complexes allow access to the derived category of coherent sheaves: if \((X,\mathcal{O}_{X})\) is smooth, then there is an equivalence of triangulated categories between that of perfect complexes on \(X\) and the bounded derived category of coherent sheaves on \(X\) ([13, Expose I, Corollaire 5.10 and Exemples 5.11]). On the other hand, Green complexes allow us to resolve not just coherent sheaves, but actually complexes of sheaves of \(\mathcal{O}_{X}\)-modules whose cohomology consists of coherent modules: if \((X,\mathcal{O}_{X})\) is a complex-analytic manifold, then there is an equivalence of \((\infty,1)\)-categories between Green complexes on \(X\) (after taking a homotopy colimit over refinements of covers) and complexes of sheaves of \(\mathcal{O}_{X}\)-modules with coherent cohomology ([14, Corollary 3.21 and Lemma 4.36]). The question of whether or not the derived category of coherent sheaves is equivalent to the category of complexes of sheaves with coherent cohomology is a long-standing problem in the complex-analytic setting, and it seems as though this framework describing the simplicial presheaves \(\mathcal{T}\mathit{wist}\) and \(\mathcal{G}\mathit{reen}\) (and their common generalisation \(\mathcal{T}\mathit{wist}\)) might provide another way of approaching this question. One specific aspect of \(\mathcal{G}\mathit{reen}\) that first needs to be better understood is whether or not it is globally fibrant (or, at least, for which specific spaces its Cech totalisation is fibrant). Then one can try to relate this simplicial presheaf to the \((\infty,1)\)-category of Green complexes described in [14], with the hope being that there is an equivalence between the cores of the two. At the moment, however, one can still start to phrase statements of this flavour: for example, [14, Corollary 3.21 and Lemma 4.36] tell us that, given a complex with coherent cohomology, we can resolve it (up to quasi-isomorphism) by a Green complex, i.e. there is a surjection \(\mathit{of}\mathit{sets}\) from \(\mathrm{Tot}^{0}\mathcal{G}\mathit{reen}(\mathcal{N}\mathcal{U}_{\star})\) to the set of quasi-isomorphism classes of complexes with coherent cohomology. In other words, it would be useful to construct the two horizontal morphisms in the diagram and study their properties. Finally, this machinery should allow us to study equivariant theories. 
Given a group \(G\) and a \(G\)-space \(X\) with suitable cover \(\mathcal{U}\), we can consider the bisimplicial space given by the natural combination of the Cech nerve of \(\mathcal{U}\) and the bar construction of the \(G\)-action; the diagonal of this should give a suitable simplicial space on which to define simplicial presheaves analogous to those considered in this present work. However, in order to provide the full technical details, one first needs to understand, for example, what cofibrant replacements look like in this setting.

## Appendix A Motivation: principal bundles

In this appendix we show how the abstract machinery of Section 2.8 can be used to recover principal \(\operatorname{GL}_{n}(\mathbb{R})\)-bundles. Using the notation of Section 2.8, take \(G=\operatorname{GL}_{n}(\mathbb{R})\in\operatorname{LieGroup}\) (which we write as \(\operatorname{GL}_{n}\) for brevity), and consider the presheaf of simplicial sets \(\mathbf{N}(\operatorname{GL}_{n})\), along with its Cech totalisation \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\).

### Points in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\)

A point (i.e. \(0\)-simplex) \(v\) of this totalisation consists of components \(v^{p}\in\big(\mathbf{N}(\operatorname{GL}_{n})(\underline{X})\big)_{p}^{p}\) for each \(p\in\mathbb{N}\), subject to face and degeneracy conditions.

1. Since the groupoid \(\mathbf{B}\operatorname{Man}(U_{\alpha},\operatorname{GL}_{n})\) has a single object, the degree-\(0\) component \(v^{0}\) consists simply of the point \(*_{\alpha}\) for each \(U_{\alpha}\).
2. Again, unravelling the definition of \(\mathbf{N}\), we see that \(v^{1}\) is an element of \[\big(\mathbf{N}(\operatorname{GL}_{n})(\underline{X})\big)_{1}^{1}=\big(\mathcal{N}\mathbf{B}\operatorname{Man}(\textstyle\sqcup_{\alpha\beta}U_{\alpha\beta},\operatorname{GL}_{n})\big)_{1}\cong\textstyle\prod_{\alpha\beta}\operatorname{Man}(U_{\alpha\beta},\operatorname{GL}_{n}),\] i.e. the data of a smooth map \(g_{\alpha\beta}\colon U_{\alpha\beta}\to\operatorname{GL}_{n}\) for each intersection \(U_{\alpha\beta}\), and analogously for the higher components \(v^{p}\). The degeneracy condition coming from \(v^{0}\) asks that \(g_{\alpha\alpha}=\operatorname{id}_{*_{\alpha}}\), where \(\operatorname{id}_{*_{\alpha}}\) is the constant map to the identity, i.e. \(\operatorname{id}_{*_{\alpha}}\colon x\mapsto\operatorname{id}\) for all \(x\in U_{\alpha}\).

The remaining degeneracy and face conditions of the totalisation impose the following.
* The degeneracy map \(s_{0}^{1}\) acts via \(\alpha\beta\mapsto\alpha\alpha\beta\); the map \(s_{1}^{1}\) via \(\alpha\beta\mapsto\alpha\beta\beta\). The corresponding conditions thus reduce to asking that \[g_{\alpha\alpha\beta}=g_{\alpha\beta\beta}=g_{\alpha\beta}\] for all \(\alpha,\beta\).
* More generally, the degeneracy maps \(s_{i}^{p}\) for \(p\geq 2\) will give analogous conditions to the \(p=1\) case: any \(g_{\alpha_{0}\ldots\alpha_{i}\alpha_{i}\ldots\alpha_{p}}\) is equal to \(g_{\alpha_{0}\ldots\alpha_{i}\ldots\alpha_{p}}\), i.e. we can always remove repeated indices without changing the map.

Now for the face maps \(f_{p}^{i}\).

* The face map \(f_{1}^{0}\) acts via \(\alpha\beta\mapsto\beta\); the map \(f_{1}^{1}\) via \(\alpha\beta\mapsto\alpha\). The condition given by these face maps is that \(f_{1}^{0}(v^{0})\) and \(f_{1}^{1}(v^{0})\) should be the endpoints of \(v^{1}\), i.e. that the line labelled by \(g_{\alpha\beta}\) should go from \(*_{\alpha}\) to \(*_{\beta}\).
* The face map \(f_{2}^{0}\) acts via \(\alpha\beta\gamma\mapsto\beta\gamma\); the map \(f_{2}^{1}\) via \(\alpha\beta\gamma\mapsto\alpha\gamma\); and the map \(f_{2}^{2}\) via \(\alpha\beta\gamma\mapsto\alpha\beta\). Asking for the three edges of \(v^{2}\) to be given by the images of \(v^{1}\) under these face maps thus reduces to asking for \[g_{\alpha\beta\gamma}^{(1)}=f_{2}^{2}(g_{\alpha\beta})\coloneqq g_{\alpha\beta}|U_{\alpha\beta\gamma}\] \[g_{\alpha\beta\gamma}^{(2)}=f_{2}^{0}(g_{\beta\gamma})\coloneqq g_{\beta\gamma}|U_{\alpha\beta\gamma}\] \[g_{\alpha\beta\gamma}^{(2)}\circ g_{\alpha\beta\gamma}^{(1)}=f_{2}^{1}(g_{\alpha\gamma})\coloneqq g_{\alpha\gamma}|U_{\alpha\beta\gamma}.\] But note that composition in \(\mathbf{B}\operatorname{Man}(U_{\alpha\beta\gamma},\operatorname{GL}_{n})\) is given by multiplication in \(\operatorname{GL}_{n}\): \[(h\circ g)(x)\coloneqq h(x)\cdot g(x)\] and so the above conditions simplify to \[g_{\beta\gamma}g_{\alpha\beta}=g_{\alpha\gamma}\] (where we omit the restrictions from our notation).
* Since the nerve is generated by \(1\)-simplices, and since the multiplication in \(\operatorname{GL}_{n}\) is (strictly) associative, the higher face maps give us no further conditions.

In summary, a point in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\) consists of smooth maps \(g_{\alpha\beta}\colon U_{\alpha\beta}\to\operatorname{GL}_{n}\) that satisfy the identity condition \(g_{\alpha\alpha}=\operatorname{id}_{*_{\alpha}}\) and the cocycle condition \(g_{\beta\gamma}g_{\alpha\beta}=g_{\alpha\gamma}\). More concisely,

_a point in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\) is exactly the data of a principal \(\operatorname{GL}_{n}\)-bundle on \(X\)._
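To see the identity and cocycle conditions in a concrete (if rather prosaic) form, here is a small numerical sanity check, a sketch that is not part of the paper's formalism: we pretend that each \(g_{\alpha\beta}\) arises from "local frames" \(f_{\alpha}\) evaluated at a single point of the overlap, an illustrative assumption, and verify the two conditions with random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, indices = 3, ["a", "b", "c"]

# Pretend f[i] is the value, at a fixed point x, of a local frame over U_i.
# Adding n*I makes the random matrices comfortably invertible.
f = {i: rng.normal(size=(n, n)) + n * np.eye(n) for i in indices}

# Transition functions g_{ij} := f_j f_i^{-1}, evaluated at x.
g = {(i, j): f[j] @ np.linalg.inv(f[i]) for i in indices for j in indices}

# Identity condition g_{ii} = id ...
for i in indices:
    assert np.allclose(g[(i, i)], np.eye(n))
# ... and cocycle condition g_{jk} g_{ij} = g_{ik}.
for i in indices:
    for j in indices:
        for k in indices:
            assert np.allclose(g[(j, k)] @ g[(i, j)], g[(i, k)])
print("identity and cocycle conditions hold")
```

Any family of transition functions built from local trivialisations in this way automatically satisfies both conditions, which is exactly the content of the summary above.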
### Edges in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\)

We now study the \(1\)-simplices (lines, or edges) of the totalisation, whose two endpoints are points as described above, say with transition maps \(g_{\alpha\beta}\) and \(h_{\alpha\beta}\).

* The degree-\(0\) component of a line consists, for each \(U_{\alpha}\), of a smooth map \(\lambda_{\alpha}\colon U_{\alpha}\to\operatorname{GL}_{n}\), i.e. an edge in the nerve over \(U_{\alpha}\).
* The degree-\(1\) component consists, for each \(U_{\alpha\beta}\), of a labelling of the square \(\Delta[1]\times\Delta[1]\) (with its diagonal) whose horizontal edges are the transition maps \(g_{\alpha\beta}\) and \(h_{\alpha\beta}\) of the two endpoints, and whose vertical edges are \(\lambda_{\alpha}\) and \(\lambda_{\beta}\); this is exactly the sense in which a 1-simplex should describe a morphism between two bundles \(g_{\alpha\beta}\) and \(h_{\alpha\beta}\). Since this diagram takes values in the nerve, the diagonal is simply labelled with the common composite \(h_{\alpha\beta}\circ\lambda_{\alpha}=\lambda_{\beta}\circ g_{\alpha\beta}\), where composition is given by multiplication in \(\operatorname{GL}_{n}\), so that \[h_{\alpha\beta}\lambda_{\alpha}=\lambda_{\beta}g_{\alpha\beta}.\] Note that defining \(\lambda_{\alpha}^{-1}(x)\) to be \(\lambda_{\alpha}(x)^{-1}\) shows that \(\lambda_{\alpha}\) is invertible. The degeneracy map gives no extra information here, since it simply says that \(\operatorname{id}_{\alpha}\circ\lambda_{\alpha}=\lambda_{\alpha}\circ\operatorname{id}_{\alpha}\).
* For \(p\geq 2\), there are no extra non-trivial conditions or data. This is entirely analogous to how we only needed to study \(p\leq 2\) in the case of vertices: since the nerve is generated by its 1-simplices, all the interesting information is concentrated in degrees 0, 1, and 2; since here the degrees shift by 1 (i.e. \(l^{0}\) consists of a line, not just a point), all the interesting information is concentrated in degrees 0 and 1.

In summary then, a line in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\) consists of smooth maps \(U_{\alpha}\to\operatorname{GL}_{n}\) that are invertible and commute with the transition maps \(g_{\alpha\beta},h_{\alpha\beta}\) given by its endpoints. More concisely,

_a line in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\) is exactly the data of a morphism (which is necessarily an isomorphism) between two principal \(\operatorname{GL}_{n}\)-bundles on \(\underline{X}\)._

**Remark A.2.1**.: If we did not take the maximal Kan complex in the definition of \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}\), then the objects would be exactly the same: the fact that the \(g_{\alpha\beta}\) are isomorphisms is implied by the cocycle condition \(g_{\beta\gamma}g_{\alpha\beta}=g_{\alpha\gamma}\) and the degeneracy condition \(g_{\alpha\alpha}=\operatorname{id}\). The morphisms, however, would then simply be morphisms of bundles, not isomorphisms. This would give us a _quasi-category_ of principal \(\operatorname{GL}_{n}\)-bundles instead of a space, which is perfectly usable, but for which the simplicial homotopy groups (Section 2.7) are not a priori well defined: one would have to first pass to a fibrant replacement. \(\lrcorner\)

### Higher simplices in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\)

Since the nerve of a groupoid is 2-coskeletal, we only need to study the \(p\)-simplices for \(p\leq 2\). But, as already mentioned, the 2-simplices are uniquely determined by the 1-simplices on their boundary, which means that \[\pi_{1}(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X}))\cong\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})_{1}\] and so we can simply ignore all higher simplices in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\).
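The intertwining condition \(h_{\alpha\beta}\lambda_{\alpha}=\lambda_{\beta}g_{\alpha\beta}\) describing a line can also be checked in the same toy numerical setting as above; the following sketch (again an illustration with hypothetical "frames", not the paper's construction) builds two cocycles from two families of frames and verifies that the obvious candidate \(\lambda_{\alpha}\) really does intertwine them.

```python
import numpy as np

rng = np.random.default_rng(1)
n, indices = 3, ["a", "b", "c"]

frame_g = {i: rng.normal(size=(n, n)) + n * np.eye(n) for i in indices}
frame_h = {i: rng.normal(size=(n, n)) + n * np.eye(n) for i in indices}

# Two cocycles g_{ij} and h_{ij}, built as before.
g = {(i, j): frame_g[j] @ np.linalg.inv(frame_g[i]) for i in indices for j in indices}
h = {(i, j): frame_h[j] @ np.linalg.inv(frame_h[i]) for i in indices for j in indices}

# Candidate morphism lambda_i : U_i -> GL_n (evaluated at the same point).
lam = {i: frame_h[i] @ np.linalg.inv(frame_g[i]) for i in indices}

# Intertwining condition h_{ij} lambda_i = lambda_j g_{ij}.
for i in indices:
    for j in indices:
        assert np.allclose(h[(i, j)] @ lam[i], lam[j] @ g[(i, j)])
print("lambda defines a morphism (indeed an isomorphism) of bundles")
```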
### Homotopy groups of \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\)

In summary, points in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\) are principal \(\operatorname{GL}_{n}\)-bundles on \(X\), and lines in \(\mathcal{B}un_{\operatorname{GL}_{n}(\mathbb{R})}(\underline{X})\) are bundle isomorphisms. Thus (making implicit use of the fact that, for Kan complexes, it suffices to work with _simplicial_ homotopy groups instead of taking the geometric realisation, as explained in Section 2.7):

_\(\pi_{0}(\mathcal{B}un_{\mathrm{GL}_{n}(\mathbb{R})}(\underline{X}))\) consists of isomorphism classes of principal \(\mathrm{GL}_{n}\)-bundles on \(X\). Furthermore, for any principal \(\mathrm{GL}_{n}\)-bundle \(E\) on \(X\), \(\pi_{1}(\mathcal{B}un_{\mathrm{GL}_{n}(\mathbb{R})}(\underline{X}),E)\) is the gauge group \(\mathrm{Aut}(E)\) of \(E\)._

Finally, since \(\mathcal{B}un_{\mathrm{GL}_{n}(\mathbb{R})}(\underline{X})\) is \(2\)-coskeletal, we know that all higher \(\pi_{n}\) (i.e. for \(n\geq 2\)) are zero.

## Appendix B Technical proofs

Some of the proofs in this paper are so laden with notation (multiple indices, lower-dimensional faces of simplices, etc.) that they obfuscate the actual ideas from which they were constructed. Because of this, we have decided to place them in an appendix; the reader is invited to carefully check all details, and to verify that we have not misled them in claiming that these proofs are more technical than they are interesting.

### Proof of Theorem 4.1.1 (d) -- \(\operatorname{Tot}^{0}\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\) recovers simplicial twisting cochains

The data of a \(0\)-simplex \(v\) in \(\operatorname{Tot}\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\) is the data of \(v^{p}\in\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})_{p}^{p}\) for \(p\in\mathbb{N}\), subject to face and degeneracy conditions. By definition, \(v^{0}\in\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})_{0}^{0}=\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{0})_{0}\), which is the set of GTT-labellings of \(\Delta[0]\) by \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(\sqcup_{\alpha}U_{\alpha})]\). But, as in the proof of Theorem 4.1.1 (b), since \(\mathcal{O}(\sqcup_{\alpha}U_{\alpha})\cong\prod_{\alpha}\mathcal{O}(U_{\alpha})\), a free \(\mathcal{O}(\sqcup_{\alpha}U_{\alpha})\)-module is exactly the data of a free \(\mathcal{O}(U_{\alpha})\)-module for all \(\alpha\). Since both the dg-nerve and the core functor are right adjoints, their composition preserves all limits. In summary then, \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(\sqcup_{\alpha}U_{\alpha})]\cong\prod_{\alpha}[\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U_{\alpha})]\) and so \(v^{0}\) is an element of the set of GTT-labellings of \(\Delta[0]\) by \(\prod_{\alpha}[\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U_{\alpha})]\), which simply means a choice of complex \(C_{\alpha}\in\mathsf{Free}(U_{\alpha})\) for each \(U_{\alpha}\).
Next, \(v^{1}\in\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})_{1}^{1}\), i.e. a GTT-labelling of \(\Delta[1]_{\mathrm{pair}}\) by \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(\sqcup_{\alpha\beta}U_{\alpha\beta})]\cong\prod_{\alpha\beta}[\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U_{\alpha\beta})]\). Such a labelling consists, for each \(U_{\alpha\beta}\), of labels of the three \(0\)-simplices \(\{0\}\), \(\{1\}\), and \(\{0<1\}\), and of the two \(1\)-simplices \(\{0\}\subset\{0<1\}\) and \(\{1\}\subset\{0<1\}\), of \(\Delta[1]_{\mathrm{pair}}\), all such that the conditions of Definition 3.3.4 are satisfied.
The face conditions of the totalisation tell us that, for each \(U_{\alpha\beta}\), the two \(0\)-simplices \(\{0\}\) and \(\{1\}\) are labelled with the complexes \(C_{0}(\alpha)\coloneqq C_{\alpha}\) and \(C_{1}(\beta)\coloneqq C_{\beta}\) (respectively) from the \(v^{0}\) above; the remaining \(0\)-simplex \(\{0<1\}\) is labelled with a quasi-isomorphism between \(C_{0}(\alpha\beta)\) and \(C_{1}(\alpha\beta)\), of complexes in \(\mathsf{Free}(U_{\alpha\beta})\). The two \(1\)-simplices \(\{0\}\subset\{0<1\}\) and \(\{1\}\subset\{0<1\}\) are labelled with elementary complexes \(C_{0}^{\perp\alpha\beta}(\alpha)\) and \(C_{1}^{\perp\alpha\beta}(\beta)\) (respectively) such that \[C_{0}(\alpha\beta)\cong C_{0}(\alpha)\oplus C_{0}^{\perp\alpha\beta}(\alpha)\] \[C_{1}(\alpha\beta)\cong C_{1}(\beta)\oplus C_{1}^{\perp\alpha\beta}(\beta).\] Note that the final condition of Definition 3.3.4 is irrelevant here, since \(k=1\). The degeneracy map \(\alpha\mapsto\alpha\alpha\) imposes the condition that the quasi-isomorphism between \(C_{0}(\alpha\alpha)\) and \(C_{1}(\alpha\alpha)\) be the identity, and that \(C_{0}^{\perp\alpha\alpha}(\alpha)=C_{1}^{\perp\alpha\alpha}(\alpha)=0\).

We now pass immediately to \(v^{p}\in\mathcal{S}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})_{p}^{p}\) for arbitrary \(p\geq 2\). The face conditions will tell us that all of the \((p-1)\)-dimensional data of \(v^{p}\) coincides with that already given by \(v^{0},\ldots,v^{p-1}\); the degeneracy conditions will tell us that it suffices to consider non-degenerate intersections \(U_{\alpha_{0}\ldots\alpha_{p}}\), since if \(\alpha_{i}=\alpha_{i+1}\) for some \(0\leq i<p\) then the corresponding edge (containing a \(0\)-simplex and two \(1\)-simplices) will be trivially labelled.

As a notational shorthand, given a simplex \(I=\{i_{0}<\ldots<i_{k}\}\) of indices, we write \(\alpha_{I}\coloneqq\alpha_{i_{0}}\ldots\alpha_{i_{k}}\). Since we are evaluating on the Cech nerve, rather than labelling the simplices of \(\Delta[p]_{\mathrm{pair}}\) by subsets of \([p]\), it makes sense to label them with subsets of \(\{\alpha_{0}<\ldots<\alpha_{p}\}\) for each fixed \(U_{\alpha_{0}\ldots\alpha_{p}}\in\check{\mathcal{N}}\mathcal{U}_{p}\). Following Definition 3.3.4, each \(0\)-simplex \(\alpha_{i_{0}}\ldots\alpha_{i_{k}}\leq\alpha_{0}\ldots\alpha_{p}\) is labelled with a \(k\)-simplex of \([\mathcal{N}^{\mathrm{dg}}\mathsf{Free}(U_{\alpha_{I}})]\), i.e.
by complexes \[C_{i_{0}}(\alpha_{I}),\ldots,C_{i_{k}}(\alpha_{I})\] along with, for all non-empty subsets \(J=\{j_{0}<\ldots<j_{\ell}\}\subseteq\{i_{0}<\ldots<i_{k}\}\), morphisms \[\varphi_{J}(\alpha_{I})\in\mathrm{Hom}_{\mathsf{Free}(U_{\alpha_{I}})}^{1-\ell}\big(C_{j_{\ell}}(\alpha_{I}),C_{j_{0}}(\alpha_{I})\big)\] such that, for all \(J\) with \(|J|\geq 3\), \[\partial\varphi_{J}(\alpha_{I})=\sum_{m=1}^{\ell-1}(-1)^{m-1}\varphi_{J\setminus\{j_{m}\}}(\alpha_{I})+\sum_{m=1}^{\ell-1}(-1)^{\ell(m-1)+1}\varphi_{j_{0}<\ldots<j_{m}}(\alpha_{I})\circ\varphi_{j_{m}<\ldots<j_{\ell}}(\alpha_{I}).\] Next, setting \(I=\{i_{0}<\ldots<i_{k}\}\) and \(J=\{j_{0}<\ldots<j_{\ell}\}\subset I\), each \((k-\ell)\)-cell \(\alpha_{J}<\alpha_{I}\) in \(\Delta[p]_{\mathrm{pair}}\) is labelled with an \((\ell+1)\)-tuple of objects \[\big(C_{j_{m}}^{\perp\alpha_{I}}(\alpha_{J})\in\mathsf{Free}(U_{\alpha_{I}})\big)_{0\leq m\leq\ell}\] where each \(C_{j_{m}}^{\perp\alpha_{I}}(\alpha_{J})\) is elementary, and such that we obtain a direct-sum decomposition \[C_{j_{m}}(\alpha_{I})\cong C_{j_{m}}(\alpha_{J})\oplus C_{j_{m}}^{\perp\alpha_{I}}(\alpha_{J})\] for all \(0\leq m\leq\ell\).

Now we introduce the change of notation that will recover the definition of simplicial twisting cochain from [17, §3], which we have already seen in Section 5.4. Given \(U_{\alpha_{0}\ldots\alpha_{p}}\), some specific \(\alpha\coloneqq\alpha_{i}\), and some subset \(I\subseteq[p]\) containing \(i\), let \(\sigma=\alpha_{I}\); for \(J\subseteq I\) also containing \(i\), let \(\tau=\alpha_{J}\), so that \[\{\alpha_{i}\}=\{\alpha\}\subseteq\tau\subseteq\sigma\subseteq\alpha_{[p]}=\alpha_{0}\ldots\alpha_{p}.\] We then define \[\mathbf{E}^{\star}_{\sigma,\alpha}\coloneqq C_{i}(\sigma)\qquad\mathbf{E}^{\star}_{\sigma,\tau,\alpha}\coloneqq C_{i}^{\perp\sigma}(\tau)\qquad{}^{\sigma}\mathfrak{a}_{\alpha_{J}}^{\ell,1-\ell}\coloneqq\varphi_{J}(\sigma)\qquad{}^{\sigma}\mathfrak{a}\coloneqq\sum_{i=0}^{k-1}{}^{\sigma}\mathfrak{a}^{i,1-i}\] where \(k=|\sigma|-1\) and \(\ell=|\tau|-1\). It remains to show that conditions (STC 1) to (STC 4) in [17, §3] are satisfied. As mentioned in Section 5.4, conditions (STC 3) and (STC 4) are satisfied by construction, following the definition of a GTT-labelling, and condition (STC 1) is not really necessary, if we consider simplicial twisting cochains that resolve _complexes_ of coherent sheaves. What remains to show is that (STC 2) is satisfied, but this is exactly the content of Theorem 4.1.1 (b), i.e. that the \(\varphi_{J}(\sigma)\) (which constitute the \({}^{\sigma}\mathfrak{a}\)) satisfy the Maurer-Cartan equation.

### Proof of Theorem 4.2.2 -- \(\operatorname{Tot}^{1}\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\) recovers weak equivalences

We start by unravelling the definition of a weak equivalence of twisting cochains ([16, Definition 2.27]) and spelling out the explicit conditions in the first three degrees; we then do the same for the definition of a 1-simplex in the totalisation, and show that the conditions agree with those of a weak equivalence. Finally, we give a general combinatorial argument that applies in arbitrary degree.
Let \((\mathbf{E}^{\star},\varphi)\) be a twisting cochain: the data of \(\mathbf{E}^{\star}_{\alpha}\in\mathsf{Free}(U_{\alpha})\) for all \(U_{\alpha}\in\mathcal{U}\), and \(\varphi=\sum_{p\in\mathbb{N}}\varphi^{p,1-p}\) satisfying the Maurer-Cartan equation, where \[\varphi_{\alpha_{0}\ldots\alpha_{p}}^{p,1-p}\colon\mathbf{E}^{\star}_{\alpha_{p}}\to\mathbf{E}^{\star}_{\alpha_{0}}[p-1]\] and with \(\varphi_{\alpha}^{0,1}\) being exactly the differential of \(\mathbf{E}_{\alpha}\). If \((\mathbf{F}^{\star},\psi)\) is another twisting cochain, then a degree-0 morphism \(\Lambda\colon(\mathbf{F}^{\star},\psi)\to(\mathbf{E}^{\star},\varphi)\) is, following [16, Definition 2.12], the data of maps (that do not necessarily commute with the differentials) \[\Lambda_{\alpha_{0}\ldots\alpha_{p}}^{p,-p}\colon\mathbf{F}^{\star}_{\alpha_{p}}\to\mathbf{E}^{\star}_{\alpha_{0}}[p].\] This morphism is a _weak equivalence_ ([16, Definition 2.27]) if the \(\Lambda_{\alpha}^{0,0}\) are quasi-isomorphisms, and if the \(\Lambda^{p,-p}\) satisfy \[\hat{\delta}\Lambda+\varphi\cdot\Lambda-\Lambda\cdot\psi=0.\] (Morally, this is asking that \(\operatorname{D}\Lambda\coloneqq(\hat{\delta}+\partial)\Lambda=0\), where we think of \(\partial\) as an analogue to the differential in the category of chain complexes, given by the difference between pre- and post-composition with the two "differentials", which are now the twisting cochains \(\varphi\) and \(\psi\).)

All the terms in this equation are of total degree \(1\), and so we can consider what happens in each different bidegree. As is often the case, ensuring that the signs are correct is the majority of the work, and there are two things to be aware of: the composition of a \((p,q)\)-term with an \((r,s)\)-term has a sign of \((-1)^{qr}\) ([16, §2.2, Equation 3]); and the differential of a morphism \(f\colon A\to B\) of degree \(n\) in a dg-category of chain complexes is given by \(\partial f=f\circ\mathrm{d}_{A}+(-1)^{n+1}\mathrm{d}_{B}\circ f\).

Footnote 28: We omit explicitly writing the degrees of the \(\varphi\), \(\psi\), and \(\Lambda\) terms from now on, since they can be deduced from the degree of the simplex \(\alpha_{0}\ldots\alpha_{p}\) in the subscript, knowing that \(\deg\varphi=\deg\psi=1\) and \(\deg\Lambda=0\).

#### b.2.1 The first three terms

To simplify notation, we will write \(E_{\alpha_{0}\ldots\alpha_{p}}\) (resp. \(F_{\alpha_{0}\ldots\alpha_{p}}\)) instead of \(\varphi_{\alpha_{0}\ldots\alpha_{p}}\) (resp. \(\psi_{\alpha_{0}\ldots\alpha_{p}}\)). For the sake of clarity, we usually denote the complex by \(E_{\alpha_{i}}^{\star}\) so as not to confuse it with its differential \(E_{\alpha_{i}}\).

* The \((0,1)\)-terms tell us that \[E_{\alpha}\circ\Lambda_{\alpha}-\Lambda_{\alpha}\circ F_{\alpha}=0\tag{$\star_{0}$}\] which simply says that \(\Lambda_{\alpha}\) is a chain map from \(F_{\alpha}^{\star}\) to \(E_{\alpha}^{\star}\).
* The \((1,0)\)-terms tell us that \[0=E_{\alpha\beta}\Lambda_{\beta}-\Lambda_{\alpha}F_{\alpha\beta}-E_{\alpha}\Lambda_{\alpha\beta}-\Lambda_{\alpha\beta}F_{\beta}\] which we can rewrite as \[\partial\Lambda_{\alpha\beta}=E_{\alpha\beta}\Lambda_{\beta}-\Lambda_{\alpha}F_{\alpha\beta}.\tag{$\star_{1}$}\]
* The \((2,-1)\)-terms tell us that \[0=E_{\alpha\beta\gamma}\Lambda_{\gamma}-\Lambda_{\alpha}F_{\alpha\beta\gamma}+E_{\alpha\beta}\Lambda_{\beta\gamma}+\Lambda_{\alpha\beta}F_{\beta\gamma}+E_{\alpha}\Lambda_{\alpha\beta\gamma}-\Lambda_{\alpha\beta\gamma}F_{\gamma}+\Lambda_{\alpha\gamma}\] which we can rewrite as \[\partial\Lambda_{\alpha\beta\gamma}=E_{\alpha\beta\gamma}\Lambda_{\gamma}-\Lambda_{\alpha}F_{\alpha\beta\gamma}+E_{\alpha\beta}\Lambda_{\beta\gamma}+\Lambda_{\alpha\beta}F_{\beta\gamma}+\Lambda_{\alpha\gamma}.\tag{$\star_{2}$}\]

Now we look at the first three terms of a \(1\)-simplex in \(\operatorname{Tot}\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\). To do this, we need to understand the simplicial structure of \(\Delta[p]\times\Delta[1]\) for \(p\in\mathbb{N}\). Fortunately there is a simple combinatorial description of all sub-simplices, given by considering certain paths in a two-dimensional grid: a non-degenerate \(q\)-simplex in \(\Delta[p]\times\Delta[1]\) is given by a strictly increasing sequence of \((q+1)\) pairs \((i,j)\), for \(0\leqslant i\leqslant p\) and \(0\leqslant j\leqslant 1\), where "strictly increasing" means that at least one of \(i\) and \(j\) increases with each successive element. We write such a sequence as \(\left[\begin{smallmatrix}i_{0}&i_{1}&\ldots&i_{q}\\ j_{0}&j_{1}&\ldots&j_{q}\end{smallmatrix}\right]\). For example, in \(\Delta[1]\times\Delta[1]\),

* the \(0\)-simplices are \(\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\), and \(\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]\);
* the \(1\)-simplices are \(\left[\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}0&1\\1&1\end{smallmatrix}\right]\), and \(\left[\begin{smallmatrix}0&1\\0&1\end{smallmatrix}\right]\);
* the \(2\)-simplices are \(\left[\begin{smallmatrix}0&1&1\\0&0&1\end{smallmatrix}\right]\) and \(\left[\begin{smallmatrix}0&0&1\\0&1&1\end{smallmatrix}\right]\).

Thinking of \(\left[\begin{smallmatrix}i\\j\end{smallmatrix}\right]\) as a coordinate, \(\left[\begin{smallmatrix}i_{0}&i_{1}\\j_{0}&j_{1}\end{smallmatrix}\right]\) as the path from \(\left[\begin{smallmatrix}i_{0}\\j_{0}\end{smallmatrix}\right]\) to \(\left[\begin{smallmatrix}i_{1}\\j_{1}\end{smallmatrix}\right]\), and \(\left[\begin{smallmatrix}i_{0}&i_{1}&i_{2}\\j_{0}&j_{1}&j_{2}\end{smallmatrix}\right]\) as the triangle given by the vertices \(\left[\begin{smallmatrix}i_{0}\\j_{0}\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}i_{1}\\j_{1}\end{smallmatrix}\right]\), and \(\left[\begin{smallmatrix}i_{2}\\j_{2}\end{smallmatrix}\right]\), we can draw \(\Delta[1]\times\Delta[1]\) as a square with diagonal, as in Figure B.2.i.
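The combinatorial description of the non-degenerate simplices of \(\Delta[p]\times\Delta[1]\) is easy to implement directly; the following short Python sketch (purely illustrative, not part of the paper) enumerates them as strictly increasing sequences of pairs and reproduces the counts 4, 5, 2 listed above for \(\Delta[1]\times\Delta[1]\).

```python
from itertools import combinations

def nondegenerate_simplices(p, q):
    """Non-degenerate q-simplices of Delta[p] x Delta[1]: sequences of (q+1)
    pairs (i, j), 0 <= i <= p and 0 <= j <= 1, in which each step increases
    at least one coordinate and decreases neither."""
    vertices = sorted((i, j) for j in (0, 1) for i in range(p + 1))
    simplices = []
    for chain in combinations(vertices, q + 1):
        ok = all(a[0] <= b[0] and a[1] <= b[1] and a != b
                 for a, b in zip(chain, chain[1:]))
        if ok:
            simplices.append(chain)
    return simplices

# Delta[1] x Delta[1]: 4 vertices, 5 edges (including the diagonal), 2 triangles.
print([len(nondegenerate_simplices(1, q)) for q in range(3)])  # [4, 5, 2]
```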
The story is entirely analogous in higher dimensions: see Figure B.2.ii for the case of \(\Delta[2]\times\Delta[1]\), and [11, Appendix B.2] for the general description of \(\Delta[p]\times\Delta[q]\) (though here we are only concerned with \(\Delta[p]\times\Delta[1]\)).

* Since \(\Delta[0]\times\Delta[1]\cong\Delta[1]\), the degree-\(0\) component of a \(1\)-simplex in \(\operatorname{Tot}\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{\star})\) is given, for each \(U_{\alpha}\), by an edge between the two complexes \(E^{\star}_{\alpha}\) and \(F^{\star}_{\alpha}\) labelling its endpoints, i.e. by a morphism \(\lambda_{\alpha}\colon F^{\star}_{\alpha}\to E^{\star}_{\alpha}\) over \(U_{\alpha}\).
* As described above (and in Figure B.2.i), the product \(\Delta[1]\times\Delta[1]\) is a square with diagonal. This means that the degree-\(1\) component is given, for each \(U_{\alpha\beta}\), by a labelling of this square: the vertices by the complexes \(E^{\star}_{\alpha}\), \(E^{\star}_{\beta}\), \(F^{\star}_{\alpha}\), and \(F^{\star}_{\beta}\); the horizontal edges by \(E_{\alpha\beta}\) and \(F_{\alpha\beta}\); the vertical edges by \(\lambda_{\alpha}\) and \(\lambda_{\beta}\); and the two triangles by homotopies, denoted \(h^{E}_{\alpha\beta}\) and \(h^{F}_{\alpha\beta}\) (as in Figure B.2.ii). Defining \(\lambda_{\alpha\beta}\coloneqq h^{F}_{\alpha\beta}-h^{E}_{\alpha\beta}\), we find that \(\lambda_{\alpha\beta}\) satisfies \[\partial\lambda_{\alpha\beta}=E_{\alpha\beta}\lambda_{\beta}-\lambda_{\alpha}F_{\alpha\beta}\tag{$\ast_{1}$}\] by linearity of \(\partial\).
* The degree-\(2\) component is given by a labelling of the canonical triangulation of \(\Delta[2]\times\Delta[1]\), as shown in Figure B.2.ii. There are three 3-simplices (i.e. tetrahedra) that make up this triangular prism, each one labelled with a 3-simplex in the dg-nerve: the equations that these three homotopies satisfy are shown in Figure B.2.iii. Applying the linearity of \(\partial\) to the equations in Figure B.2.iii, we see that \[\begin{split}\partial\left(f_{\left[\begin{smallmatrix}0&1&2&2\\0&0&0&1\end{smallmatrix}\right]}-f_{\left[\begin{smallmatrix}0&1&1&2\\0&0&1&1\end{smallmatrix}\right]}+f_{\left[\begin{smallmatrix}0&0&1&2\\0&1&1&1\end{smallmatrix}\right]}\right)=&\;f_{\left[\begin{smallmatrix}0&2&2\\0&0&1\end{smallmatrix}\right]}-f_{\left[\begin{smallmatrix}0&0&2\\0&1&1\end{smallmatrix}\right]}\\&+f_{\left[\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right]}\left(f_{\left[\begin{smallmatrix}1&1&2\\0&1&1\end{smallmatrix}\right]}-f_{\left[\begin{smallmatrix}1&2&2\\0&0&1\end{smallmatrix}\right]}\right)\\&+\left(f_{\left[\begin{smallmatrix}0&0&1\\0&1&1\end{smallmatrix}\right]}-f_{\left[\begin{smallmatrix}0&1&1\\0&0&1\end{smallmatrix}\right]}\right)f_{\left[\begin{smallmatrix}1&2\\1&1\end{smallmatrix}\right]}\\&-f_{\left[\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right]}f_{\left[\begin{smallmatrix}0&1&2\\1&1&1\end{smallmatrix}\right]}+f_{\left[\begin{smallmatrix}0&1&2\\0&0&0\end{smallmatrix}\right]}f_{\left[\begin{smallmatrix}2&2\\0&1\end{smallmatrix}\right]}.\end{split}\] This means that, if we define \(\lambda_{\alpha\beta\gamma}\) to be the alternating sum of these three homotopies, \[\lambda_{\alpha\beta\gamma}\coloneqq f_{\left[\begin{smallmatrix}0&1&2&2\\0&0&0&1\end{smallmatrix}\right]}-f_{\left[\begin{smallmatrix}0&1&1&2\\0&0&1&1\end{smallmatrix}\right]}+f_{\left[\begin{smallmatrix}0&0&1&2\\0&1&1&1\end{smallmatrix}\right]},\] then, using Figure B.2.ii, the above translates to \[\begin{split}\partial\lambda_{\alpha\beta\gamma}=&\;h_{\alpha\gamma}^{F}-h_{\alpha\gamma}^{E}\\&+E_{\alpha\beta}\left(h_{\beta\gamma}^{F}-h_{\beta\gamma}^{E}\right)+\left(h_{\alpha\beta}^{F}-h_{\alpha\beta}^{E}\right)F_{\beta\gamma}\\&-\lambda_{\alpha}F_{\alpha\beta\gamma}+E_{\alpha\beta\gamma}\lambda_{\gamma}\end{split}\]
which rearranges to give \[\partial\lambda_{\alpha\beta\gamma}=E_{\alpha\beta\gamma}\lambda_{\gamma}-\lambda_{\alpha}F_{\alpha\beta\gamma}+E_{\alpha\beta}\lambda_{\beta\gamma}+\lambda_{\alpha\beta}F_{\beta\gamma}+\lambda_{\alpha\gamma}.\tag{$\ast_{2}$}\]

But then equations \((\star_{i})\) and \((\ast_{i})\) are identical for \(i=0,1,2\). In other words, _up to degree \(2\)_, a \(1\)-simplex \((\lambda_{\alpha},\lambda_{\alpha\beta},\lambda_{\alpha\beta\gamma})\) in the totalisation defines exactly a weak equivalence \((\Lambda_{\alpha},\Lambda_{\alpha\beta},\Lambda_{\alpha\beta\gamma})\) of twisting cochains. It remains only to give a general argument for arbitrary degree.

#### b.2.2 Full proof

Now we give the argument for arbitrary degree. For \(0\leq m\leq p\), we define (cf. Figure B.2.iv) \[\Delta_{m}^{p+1}\coloneqq\left[\begin{smallmatrix}0&1&\ldots&m&m&m+1&\ldots&p\\0&0&\ldots&0&1&1&\ldots&1\end{smallmatrix}\right]\] i.e. the non-degenerate \((p+1)\)-simplex of \(\Delta[p]\times\Delta[1]\) given by travelling along the bottom \(p\)-simplex (corresponding to \(\Delta[p]\times\{0\}\)) for \(m\) steps (between the first \(m+1\) vertices), then travelling straight up to the top \(p\)-simplex (corresponding to \(\Delta[p]\times\{1\}\)), before continuing on along the remaining vertices. Note that these are _all_ of the non-degenerate \((p+1)\)-simplices of \(\Delta[p]\times\Delta[1]\).

We can think of a morphism \(\Delta[p]\times\Delta[1]\to\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{p})\) as a labelling of the "prism" \(\Delta[p]\times\Delta[1]\), where the vertices are labelled with objects \(x_{i_{j}}\in\mathcal{T}\mathit{wist}(\check{\mathcal{N}}\mathcal{U}_{p})^{0}\), and each simplex \(I=\{i_{0}<\ldots<i_{k}\}\) with \(k\geq 1\) is labelled by \(f_{I}\in\mathrm{Hom}^{k-1}(x_{i_{k}},x_{i_{0}})\) satisfying the equation (2.5.2.1) defining the dg-nerve, which here is exactly \[\partial f_{I}+\sum_{j=1}^{k-1}(-1)^{j}f_{I\setminus\{i_{j}\}}+\sum_{j=1}^{k-1}(-1)^{k(j-1)}f_{\{i_{0}<\ldots<i_{j}\}}f_{\{i_{j}<\ldots<i_{k}\}}=0.\tag{$\star$}\] Generalising the notation of Figure B.2.ii, we write \(E^{\star}_{\alpha_{i}}\) for the object labelling the vertex \(\left[\begin{smallmatrix}i\\0\end{smallmatrix}\right]\), and \(F^{\star}_{\alpha_{i}}\) for the object labelling the vertex \(\left[\begin{smallmatrix}i\\1\end{smallmatrix}\right]\); given \(I=\{i_{0}<\ldots<i_{k}\}\subseteq[p]\) with \(k\geq 1\), we write \(E_{\alpha_{i_{0}}\ldots\alpha_{i_{k}}}\) for the morphism \(f_{\left[\begin{smallmatrix}I\\0\end{smallmatrix}\right]}\), and \(F_{\alpha_{i_{0}}\ldots\alpha_{i_{k}}}\) for the morphism \(f_{\left[\begin{smallmatrix}I\\1\end{smallmatrix}\right]}\), where \(\left[\begin{smallmatrix}I\\j\end{smallmatrix}\right]=\left[\begin{smallmatrix}i_{0}&i_{1}&\ldots&i_{k}\\j&j&\ldots&j\end{smallmatrix}\right]\).
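The claim that the \(\Delta_{m}^{p+1}\) exhaust the non-degenerate \((p+1)\)-simplices of \(\Delta[p]\times\Delta[1]\) can also be checked mechanically; the following small Python sketch (an illustration under the same combinatorial encoding as before, not part of the proof) compares the two descriptions for small \(p\).

```python
from itertools import combinations

def prism_simplices(p, q):
    # Non-degenerate q-simplices of Delta[p] x Delta[1], encoded as strictly
    # increasing sequences of pairs (i, j).
    vertices = sorted((i, j) for j in (0, 1) for i in range(p + 1))
    return [c for c in combinations(vertices, q + 1)
            if all(a[0] <= b[0] and a[1] <= b[1] and a != b
                   for a, b in zip(c, c[1:]))]

def delta(m, p):
    # Delta_m^{p+1}: along the bottom for m steps, straight up, then along the top.
    return tuple([(i, 0) for i in range(m + 1)] + [(i, 1) for i in range(m, p + 1)])

for p in range(1, 6):
    assert sorted(delta(m, p) for m in range(p + 1)) == sorted(prism_simplices(p, p + 1))
print("the Delta_m^{p+1} are exactly the non-degenerate (p+1)-simplices")
```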
Given any simplex \((\alpha_{0}\ldots\alpha_{p})\), we define \[\lambda_{\alpha_{0}\ldots\alpha_{p}}\coloneqq\sum_{m=0}^{p}(-1)^{m}f_{\Delta_{m}^{p+1}}.\] By \((\star)\), we know that \[\sum_{m=0}^{p}(-1)^{m}\partial f_{\Delta_{m}^{p+1}}\tag{$\circ_{\partial}$}\] \[+\sum_{m=0}^{p}\sum_{j=1}^{p}(-1)^{m}(-1)^{j}f_{\Delta_{m}^{p+1}\setminus\operatorname{ver}_{j}}\tag{$\circ_{\delta}$}\] \[+\sum_{m=0}^{p}\sum_{j=1}^{p}(-1)^{m}(-1)^{(p+1)(j-1)}f_{\Delta_{m}^{p+1}(0,j)}f_{\Delta_{m}^{p+1}(j,p+1)}\tag{$\circ_{\circ}$}\] \[=0\] where we introduce two notational shorthands: given a simplex \(\sigma\) we write \(\sigma\setminus\operatorname{ver}_{j}\) to mean \(\sigma\setminus\operatorname{ver}_{j}\sigma\), and \(\sigma(i,j)\) to mean \(\{\operatorname{ver}_{i}\sigma<\ldots<\operatorname{ver}_{j}\sigma\}\). We will now examine each of \((\circ_{\partial})\), \((\circ_{\delta})\), and \((\circ_{\circ})\) in turn.

Figure B.2.iii. _Left:_ The three 3-simplices of \(\Delta[2]\times\Delta[1]\). _Right:_ The equations satisfied by the corresponding homotopy in the dg-nerve, where \(f_{[-]}\) is the morphism labelling the simplex \([-]\).

Firstly, \((\circ_{\partial})\): by linearity of \(\partial\), this is exactly \(\partial\lambda_{\alpha_{0}\ldots\alpha_{p}}\), which consists of two terms (the compositions of \(\lambda_{\alpha_{0}\ldots\alpha_{p}}\) with the differentials \(E_{\alpha_{0}}\) and \(F_{\alpha_{p}}\)); we denote this pair of terms by \((\boxtimes_{\partial})\).
Next, \((\circ_{\delta})\). We split this double sum into two parts, according to whether or not the omitted vertex \(\operatorname{ver}_{j}\) of \(\Delta_{m}^{p+1}\) is one of the two vertices lying over the \(m\)-th vertex of \(\Delta[p]\) (i.e. whether or not \(j\in\{m,m+1\}\)), writing \[(\circ_{\delta})=\sum_{\substack{0\leq m\leq p\\ j\in\{m,m+1\}}}(-1)^{m}(-1)^{j}f_{\Delta_{m}^{p+1}\setminus\operatorname{ver}_{j}}+\sum_{\substack{0\leq m\leq p\\ 1\leq j\leq p,\ j\notin\{m,m+1\}}}(-1)^{m}(-1)^{j}f_{\Delta_{m}^{p+1}\setminus\operatorname{ver}_{j}};\]
but the first sum on the right vanishes (since \(\Delta_{m}^{p+1}\setminus\operatorname{ver}_{m}=\Delta_{m-1}^{p+1}\setminus\operatorname{ver}_{m}\), so that its terms cancel in pairs) and the second, whose terms are exactly the maximal non-degenerate simplices of the prisms over the faces \(\alpha_{0}\ldots\widehat{\alpha}_{j}\ldots\alpha_{p}\), simplifies (following the pattern of Figure B.2.v) to \[-\sum_{j=1}^{p-1}(-1)^{j}\lambda_{\alpha_{0}\ldots\widehat{\alpha}_{j}\ldots\alpha_{p}}\tag{$\boxtimes_{\hat{\delta}}$}\] precisely by our definition of \(\lambda\).

Finally, \((\circ_{\circ})\). A table showing all terms in the case \(p=3\) is given in Figure B.2.vi. To start, we split up the sum in \((\circ_{\circ})\) into an "upper triangular" part and a "lower triangular" part: \[\begin{split}\sum_{m=0}^{p}\sum_{j=1}^{p}(-1)^{m}(-1)^{(p+1)(j-1)}f_{\Delta_{m}^{p+1}(0,j)}f_{\Delta_{m}^{p+1}(j,p+1)}&=\sum_{j=1}^{p}\sum_{m=0}^{j-1}(-1)^{m}(-1)^{(p+1)(j-1)}f_{\Delta_{m}^{p+1}(0,j)}f_{\Delta_{m}^{p+1}(j,p+1)}\\&+\sum_{j=1}^{p}\sum_{m=j}^{p}(-1)^{m}(-1)^{(p+1)(j-1)}f_{\Delta_{m}^{p+1}(0,j)}f_{\Delta_{m}^{p+1}(j,p+1)}.\end{split}\] Now, in the first sum on the right-hand side, the \(f_{\Delta_{m}^{p+1}(j,p+1)}\) term will be constant over all values of \(m\), since if \(m<j\) then \(\Delta_{m}^{p+1}(j,p+1)\) is simply \(\left[\begin{smallmatrix}j-1&j&\ldots&p\\1&1&\ldots&1\end{smallmatrix}\right]\), and similarly for the \(f_{\Delta_{m}^{p+1}(0,j)}\) term in the second sum. In other words, we can write \((\circ_{\circ})\) as a sum of composites of the form \(E_{\alpha_{0}\ldots\alpha_{j}}\lambda_{\alpha_{j}\ldots\alpha_{p}}\) and \(\lambda_{\alpha_{0}\ldots\alpha_{j}}F_{\alpha_{j}\ldots\alpha_{p}}\); we denote this rewriting of \((\circ_{\circ})\) by \((\boxtimes_{\circ})\). But the first sum is exactly \((\hat{\delta}\lambda)_{\alpha_{0}\ldots\alpha_{p}}\), so it remains only to show that \(-(\boxtimes_{\partial})-(\boxtimes_{\circ})=E\lambda-\lambda F\), since then \(\lambda\) satisfies \[\hat{\delta}\lambda+E\lambda-\lambda F=0\] which is exactly the condition necessary in order for \(\lambda\) to be a weak equivalence (since the \(\lambda_{\alpha}\) terms live in the maximal Kan complex of the dg-nerve, and are thus quasi-isomorphisms by Lemma 2.5.6).
We can merge the two terms of \((\boxtimes_{\partial})\) into the two sums of \((\boxtimes_{\circ})\) to obtain \[\begin{split}-(\boxtimes_{\partial})-(\boxtimes_{\circ})&=-\sum_{j=0}^{p}(-1)^{(p+1)(j-1)}(-1)^{j}E_{\alpha_{0}\ldots\alpha_{j}}\lambda_{\alpha_{j}\ldots\alpha_{p}}-\sum_{j=0}^{p}(-1)^{(p+1)j}\lambda_{\alpha_{0}\ldots\alpha_{j}}F_{\alpha_{j}\ldots\alpha_{p}}\\&=\sum_{j=0}^{p}\Big[(-1)^{(p+1)(j-1)+j+1}E_{\alpha_{0}\ldots\alpha_{j}}\lambda_{\alpha_{j}\ldots\alpha_{p}}-(-1)^{(p+1)j}\lambda_{\alpha_{0}\ldots\alpha_{j}}F_{\alpha_{j}\ldots\alpha_{p}}\Big].\end{split}\] But \((p+1)(j-1)+j+1\equiv(1-j)(p-j)\mod 2\) and \((p+1)j\equiv-j(p-j)\mod 2\), so we can write the above as \[\sum_{j=0}^{p}\left[(-1)^{(1-j)(p-j)}E_{\alpha_{0}\ldots\alpha_{j}}\lambda_{\alpha_{j}\ldots\alpha_{p}}-(-1)^{-j(p-j)}\lambda_{\alpha_{0}\ldots\alpha_{j}}F_{\alpha_{j}\ldots\alpha_{p}}\right]\] and this is exactly \(E\lambda-\lambda F\).
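The parity manipulations in this last step are easy to get wrong, so here is a throwaway Python check (not part of the proof) confirming the two congruences used above.

```python
# Verify (p+1)(j-1) + j + 1 == (1-j)(p-j)  (mod 2)
# and    (p+1)j         == -j(p-j)        (mod 2)
# for a range of 0 <= j <= p.
for p in range(0, 20):
    for j in range(0, p + 1):
        assert ((p + 1) * (j - 1) + j + 1) % 2 == ((1 - j) * (p - j)) % 2
        assert ((p + 1) * j) % 2 == (-j * (p - j)) % 2
print("sign congruences verified for 0 <= j <= p < 20")
```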
2310.13435
Dynamically assisted pair production in subcritical potential step and particle--anti-particle interpretations
Particle--anti-particle interpretation under spatially inhomogeneous external fields within the framework of quantum field theory is a nontrivial problem. In this paper, we focus on the two interpretations established in [Phys. Rev. D 93, 045002 (2016)] and [Prog. Theor. Exp. Phys. 2022, 073B02 (2022)], both of which give consistent results of vacuum instability and pair production. To shed light on their differences, a pair production under a potential step assisted by a weak and oscillating electric field is discussed. It is shown that the potential step and the oscillating field, each insufficient for vacuum decay, can produce pairs when combined. In addition, the two pictures give rise to quantitative differences in the number of created pairs at the second-order perturbation of the oscillating field. It might provide a clue to investigate the correct particle--anti-particle interpretation by comparing the result with numerical simulations or experiments.
Makoto Ochiai
2023-10-20T11:48:02Z
http://arxiv.org/abs/2310.13435v1
Dynamically assisted pair production in subcritical potential step and particle-anti-particle interpretations ###### Abstract Particle-anti-particle interpretation under spatially inhomogeneous external fields within the framework of quantum field theory is a nontrivial problem. In this paper, we focus on the two interpretations established in [Phys. Rev. D **93**, 045002 (2016)] and [Prog. Theor. Exp. Phys. **2022**, 073B02 (2022)], both of which give consistent results of vacuum instability and pair production. To shed light on their differences, a pair production under a potential step assisted by a weak and oscillating electric field is discussed. It is shown that the potential step and the oscillating field, each insufficient for vacuum decay, can produce pairs when combined. In addition, the two pictures give rise to quantitative differences in the number of created pairs at the second-order perturbation of the oscillating field. It might provide a clue to investigate the correct particle-anti-particle interpretation by comparing the result with numerical simulations or experiments. ## I Introduction The particle-anti-particle pair production from the vacuum under external fields has been discussed in a wide variety of areas such as particle physics, nuclear physics, cosmology, and astrophysics [1; 2]. In particular, in the case of strong electric fields, it is known as the Schwinger effect. It was first predicted by Sauter's observation [3] of exact solutions of the Dirac equation under a constant homogeneous electric field, in association with the Klein paradox [4]. In later years, many physicists, including Heisenberg, Euler, and Schwinger [5; 6] have revealed its non-perturbative aspects in quantum field theory. Nowadays, it is naively understood as a kind of dielectric breakdown of a quantum vacuum filled with virtual pairs of particles and anti-particles. The pair production accompanied by the vacuum instability is exponentially suppressed in many cases, and its direct detection needs electric fields with incredibly high intensity, given by the Schwinger limit. In recent years, a situation in which a strong electric field is superimposed on a weak and oscillating electromagnetic field has attracted physicists' attention. Perturbative contribution is combined with the non-perturbative one to yield a new interplay effect, drastically enhancing the pair production. The process is called the dynamically assisted Schwinger effect [7], and its experimental verification with intense laser facilities is expected to be within reach in the near future [8]. One of the simplest and most powerful tools when discussing the pair-creating phenomena is the canonical quantization of fields using mode functions, known as the Furry picture [9; 10]. One solves the Dirac equation under external fields to expand a field operator on the basis of a complete orthonormal set of its solutions (mode functions). Creation/annihilation operators are introduced as the expansion coefficients. The key ingredient is how to define physically appropriate particles and anti-particles under the influence of external fields. The particle-anti-particle concept can be well-posed when a strong electric field depends only on time and switches off (asymptotically) in the distant past and future \(t=\pm\infty\). In this case, the Dirac equation reduces to a first-order linear ordinary differential equation in terms of time. 
Its non-stationary solutions that asymptotically behave as single-mode plane waves in the distant past/future are called "in/out" mode functions, providing a physical vacuum and corresponding particle and anti-particle states, which can be interpreted in the distant past/future. Thus, various transition amplitudes between the two asymptotic times and expectation values of physical quantities can be calculated. Previous studies have evaluated vacuum persistence probability, the pair production number, etc., under time-dependent strong electric fields, using exact solutions [11] or approximated solutions by the WKB method [12; 13; 14; 15]. Some studies [16; 17] have discussed the dynamically assisted Schwinger effect by incorporating the perturbation theory into the mode functions method, where strong electric fields are treated non-perturbatively, while weak fields perturbatively (Furry-picture perturbation theory). Spatial inhomogeneity in the gauge backgrounds makes the discussion much more complicated than the time-dependent case because one must solve the Dirac equation as a partial differential equation. Even if we assume that the external fields are formally time-independent by neglecting their switching-on/off effect, the equation reduces to a stationary Dirac equation, and its stationary solutions never satisfy the boundary conditions for the "in/out" mode functions. Thus, defining the physical vacua and particle-anti-particle pictures in the spatially inhomogeneous backgrounds is nontrivial. The problem is related to the Klein paradox or Klein tunneling [18], where scattering of a relativistic particle off a high potential step is considered. For the numerical simulations of these phenomena, see [19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. We first remark on Nikishov's work from the viewpoint of gauge invariance in relativistic quantum mechanics [29], where Green's functions under the constant homogeneous electric field are introduced by using mode functions in two gauges. This electric field is brought not only from a time-dependent vector potential (a temporal gauge) but also from a position-dependent scalar potential (a spatial gauge). A vacuum decay rate and a pair-production number are evaluated in both gauges to confirm the coincidence. The mode functions in a spatial gauge are characterized by the boundary conditions at spatial infinity instead of those in the distant past and future. The criteria for the mode functions are based on the assumption that particles and anti-particles, if appropriately defined, should be in the spatial infinity at initial and final times. Nikishov applied the criteria to scalar potentials with one-dimensional spatial inhomogeneity, which cannot be deformed into temporal gauges, such as a hyperbolic tangent potential (Sauter potential) and a step potential [30]. The vacuum decay rate for the Sauter potential is in good agreement with the one calculated using a path-integral based technique called the worldline method [31; 32; 33]. Gavrilov and Gitman incorporate Nikishov's particle-anti-particle picture into a framework of quantum field theory to investigate various physical quantities such as an electric current, an energy-momentum tensor, etc., in a vacuum state or a one-particle states [34]. They confirm that the results are consistent with the conventional hole picture, where particles in the so-called Dirac sea spontaneously tunnel into the positive-frequency area, yielding a current of particle-anti-particle pairs. 
There is another attempt to develop the quantum field theory under the potential step on the basis of different characterizations of asymptotic states [35]. In this work, one does not choose "in/out" mode functions from the start; instead, one tries to observe the asymptotic behavior of the field operator and determine the mode functions in an actual calculation. The basic idea is that asymptotic creation/annihilation operators equipped with appropriate particle-anti-particle interpretation are accompanied by monochromatic plane waves included in the field operator at asymptotic times. Thus, one calculates Dirac inner products of plane waves and the field operator in the limit \(t\to\pm\infty\) to obtain those creation/annihilation operators. To do this, the field operator is expanded on the basis of a particular complete orthonormal set with formal creation/annihilation operators and quantized. Here, the formal operators just play the role of parameters connecting the physical creation/annihilation operators. Eliminating the parameters leads to the so-called Bogoliubov transformation, which precisely agrees with Gavrilov and Gitman's formula [34]. This result is also consistent with the other relevant works [36; 37; 38; 39; 40]. The two frameworks [34] and [35] are, in fact, partially inconsistent with each other in terms of the choice of the mode functions, and thus of the particle-anti-particle interpretation. However, they give the same Bogoliubov transformation, and the inconsistency does not cause any quantitative differences in the discussion of vacuum decay and pair production under stationary external fields with spatial inhomogeneities. In this paper, we superimpose a fluctuating field on the one-dimensional potential step and evaluate the dynamically assisted pair production. It is shown that the two frameworks yield different particle numbers at the second-order perturbation of the fluctuating field. Although there is no guarantee that either of them characterizes the correct particle-anti-particle picture, the result implies that the dynamical assistance effect might depend on the definition of particles and anti-particles in the pair-creation phenomenon. The paper is organized as follows: in the next section, we review the two frameworks adopted in [35] and [34] under a position-dependent strong electric field produced by a scalar potential. We call them the pictures (A) and (B), respectively, and see their differences in the context of quantum field theory. In Sec. III, a weak and oscillating electric field, given as a vector potential, is incorporated as a perturbation. The particle numbers created from the vacuum are calculated in each framework and displayed in the subsequent sections: in Sec. IV, several features in the momentum distribution of the particle number and their underlying physics are discussed, and in Sec. V, the dependence of the results on the different particle-anti-particle pictures is shown. Sec. VI is devoted to the conclusion and future work. Comments on spinors and mode functions are added in Appendices A and B, respectively. ## II Two frameworks with different particle and anti-particle interpretations First, the field-theoretical frameworks under a strong electric field, (A) in [35] and (B) in [34], are reviewed. Natural units \(\hbar=c=1\) are adopted throughout the paper. 
A field of a relativistic fermion with mass \(m\) under the influence of a step potential alone is evolved by the Dirac equation \[[i\gamma^{0}(\partial_{t}-ieA_{0}(z))+i\gamma^{3}\partial_{z}-m]\Psi^{(0)}=0, \tag{1}\] where \(\gamma^{0},\gamma^{3}\) are \(4\times 4\) gamma matrices and \(e>0\) is the magnitude of the electron's charge. Here, the dependence on spatial coordinates other than \(z\) is neglected for simplicity. The scalar potential \(A_{0}(z)\) stands for a one-dimensional step potential along the \(z\)-direction: denoting \(\theta(z)\) as a step function, \[V(z)=-eA_{0}(z)=V_{0}\theta(z), \tag{2}\] with the potential height \(V_{0}\). We consider exclusively the subcritical case \(V_{0}<2m\), where the potential does not induce vacuum instability and pair production. The superscript on the field implies that the solution of (1) will be used as the unperturbed one when an oscillating field is added as a perturbation. The step potential gives a time-independent electric field localized at \(z=0\). The equation of motion (1) can be written as the Schrödinger-like equation \[i\partial_{t}\Psi^{(0)}=H\Psi^{(0)} \tag{3}\] with \[H=-i\alpha_{z}\partial_{z}+\beta m+V(z), \tag{4}\] (\(\alpha_{z}=\gamma^{0}\gamma^{3},\beta=\gamma^{0}\)) and the mode functions adopted in (A) and (B) are both stationary solutions of (3). Their explicit forms are described in the next two subsections. The energy spectra of the Dirac Hamiltonian (4) are classified into four regions: for an energy eigenvalue \(E\), (i) \(E>V_{0}+m\), (ii) \(m<E\leq V_{0}+m\), (iii) \(-m\leq E<V_{0}-m\), and (iv) \(E<-m\). Since the eigenfunctions in regions (ii) and (iii) are uniquely determined due to the mass gap, the mode functions in (A) and (B) are the same up to their normalization factors. In the other energy regions, however, they are doubly degenerate, and thus there remains a possibility of choosing different mode functions and the different particle-anti-particle pictures (A) and (B). ### Particle-anti-particle picture (A) In [35], scattering wave functions are adopted as an expansion basis of the Dirac field \(\Psi^{(0)}\). A left-incident scattering wave function \(\psi_{s}^{(E)}\) with an energy eigenvalue \(E\in\) (i) and a spin \(s\) is composed of the incident and transmitted waves moving to the right (positive \(z\)-direction) and a reflected wave moving to the left (negative \(z\)-direction). The momenta of these waves are determined by the energy-momentum relation \(E=E_{p}=V_{0}+E_{q}\) (\(E_{p}=\sqrt{p^{2}+m^{2}}\)), where \(p\) and \(q\) denote the magnitudes of momenta of the initial and transmitted waves, respectively. \(\psi_{s}^{(E)}\) is expressed by using a Dirac spinor of positive frequency \(u\) (see Appendix A) as \[\psi_{s}^{(E)}(z,t)=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E}}e^{-iEt}\Big{\{} \theta(-z)\big{[}u(p,s)e^{ipz}+R_{\psi}(p)u(-p,s)e^{-ipz}\big{]}+\theta(z)T_{ \psi}(p)u(q,s)e^{iqz}\Big{\}}, \tag{5}\] with reflection and transmission coefficients \[R_{\psi}(p)=\frac{\sqrt{\frac{E-V_{0}+m}{E+m}}-\sqrt{\frac{E-V_{0}-m}{E-m}}}{ \sqrt{\frac{E-V_{0}+m}{E+m}}+\sqrt{\frac{E-V_{0}-m}{E-m}}},\quad T_{\psi}(p)= \frac{2}{\sqrt{\frac{E-V_{0}+m}{E+m}}+\sqrt{\frac{E-V_{0}-m}{E-m}}}. \tag{6}\] They are determined by the continuity condition for the scattering wave function at the discontinuous point of the potential \(z=0\). 
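As a brief illustration (not spelled out in the original text), the coefficients (6) follow from matching the upper and lower spinor components of (5) at \(z=0\). Using the explicit spinors of Appendix A and cancelling the common factor \(\boldsymbol{\xi}(s)\), the matching condition gives \[\sqrt{\frac{E+m}{2m}}\,\big(1+R_{\psi}(p)\big)=T_{\psi}(p)\sqrt{\frac{E-V_{0}+m}{2m}},\qquad \frac{p}{E+m}\sqrt{\frac{E+m}{2m}}\,\big(1-R_{\psi}(p)\big)=\frac{q}{E-V_{0}+m}\,T_{\psi}(p)\sqrt{\frac{E-V_{0}+m}{2m}}.\] Taking the ratio of the two relations and using \(p=\sqrt{(E+m)(E-m)}\) and \(q=\sqrt{(E-V_{0}+m)(E-V_{0}-m)}\) reproduces \(R_{\psi}(p)\) in (6); the first relation then gives \(T_{\psi}(p)\).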
The continuity of a vector current along \(z\)-axis \(j=\bar{\psi}_{s}^{(E)}\gamma^{3}\psi_{s}^{(E)}=\psi_{s}^{(E)\dagger}\alpha_{z }\psi_{s}^{(E)}\) at \(z=0\) gives the so-called probability conservation: \[P_{\text{ref}}+P_{\text{trans}}=|R_{\psi}(p)|^{2}+\frac{q}{p}|T_{\psi}(p)|^{2}=1, \tag{7}\] where the reflection and transmission probabilities are defined as \(P_{\rm refl}=|j_{\rm refl}/j_{\rm inc}|,P_{\rm trans}=|j_{\rm trans}/j_{\rm inc}|\) with incident, reflected and transmitted currents \(j_{\rm inc},j_{\rm refl},j_{\rm trans}\). The above function (5) describes an over-the-barrier scattering of a positive-frequency wave. Note that the directions of each current coincide with those of phase velocity and group velocity. The energy-momentum relation also holds for the right-incident case in the same energy region. The right-incident scattering wave function \(\phi_{s}^{(E)}\) is expressed as \[\phi_{s}^{(E)}(z,t)=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E-V_{0}}}e^{-iEt}\Big{\{} \theta(-z)T_{\phi}(q)u(-p,s)e^{-ipz}+\theta(z)\big{[}u(-q,s)e^{-iqz}+R_{\phi}(q )u(q,s)e^{iqz}\big{]}\Big{\}}, \tag{8}\] where \(-p\) refers to the transmitted wave and \(-q\) to the incident wave. The reflection and transmission coefficients are written as \[R_{\phi}(q)=\frac{\sqrt{\frac{E+m}{E-V_{0}+m}}-\sqrt{\frac{E-m}{E-V_{0}-m}}}{ \sqrt{\frac{E+m}{E-V_{0}+m}}+\sqrt{\frac{E-m}{E-V_{0}-m}}},\quad T_{\phi}(q)= \frac{2}{\sqrt{\frac{E+m}{E-V_{0}+m}}+\sqrt{\frac{E-m}{E-V_{0}-m}}}, \tag{9}\] which are related to those for the left-incident case through the reciprocal relations: \[R_{\phi}(q)=-R_{\psi}(p),\quad T_{\phi}(q)=\frac{q}{p}T_{\psi}(p). \tag{10}\] \(\phi_{s}^{(E)}\) also describes an over-the-barrier scattering of a positive-frequency wave. For scattering wave functions in the other energy regions, see [41]. All the scattering behavior of waves described by the left- and right-incident scattering wave functions under the subcritical potential step (2) is displayed in Fig. 1. The incident waves in regions (ii) and (iii) penetrate the mass gap with exponential suppression and experience a total reflection. \(\psi_{s}^{(E)}\) and \(\phi_{s}^{(E)}\) in the region (iv) both exhibit the over-the-barrier scattering of negative-frequency waves. Note that for an overcritical potential step (with height \(V_{0}>2m\)), another energy region \(m<E\leq V_{0}-m\) (often called the Klein region) is present, where incident waves can transmit the barrier without exponential suppression. The tunneling differs from the usual one in non-relativistic quantum mechanics in that it occurs even in the rigid-wall limit \(V_{0}\rightarrow\infty\). This counter-intuitive effect is known as the Klein tunneling [3; 18]. The scattering wave functions in every region (i)-(iv) are normalized as follows: \[\int_{-\infty}^{\infty}dz\psi_{s}^{(E)\dagger}(z,t)\psi_{s^{ \prime}}^{(E^{\prime})}(z,t) =\theta(EE^{\prime})\delta(p-p^{\prime})\delta_{s,s^{\prime}}, \tag{11}\] \[\int_{-\infty}^{\infty}dz\phi_{s}^{(E)\dagger}(z,t)\phi_{s^{ \prime}}^{(E^{\prime})}(z,t) =\theta((E-V_{0})(E^{\prime}-V_{0}))\delta(q-q^{\prime})\delta_{s,s^{\prime}}, \tag{12}\] where \(p\) and \(q\) are the absolute values of momenta of the initial waves in \(\psi_{s}^{(E)}\) and \(\phi_{s}^{(E)}\), respectively. 
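As a quick numerical sanity check of (6), (7), (9), and (10) (an illustration added here, not part of the original text), the following minimal Python sketch evaluates the coefficients in the over-the-barrier region (i) for a subcritical step; the parameter values are chosen only for demonstration.

```python
import numpy as np

m, V0 = 1.0, 1.5                                  # natural units; subcritical step height V0 < 2m
E = np.linspace(V0 + m + 1e-6, 10 * m, 5)         # sample energies in region (i)
p = np.sqrt(E**2 - m**2)
q = np.sqrt((E - V0)**2 - m**2)

a = np.sqrt((E - V0 + m) / (E + m))
b = np.sqrt((E - V0 - m) / (E - m))
R_psi, T_psi = (a - b) / (a + b), 2 / (a + b)                 # eq. (6)
R_phi, T_phi = (1/a - 1/b) / (1/a + 1/b), 2 / (1/a + 1/b)     # eq. (9)

print(np.allclose(R_psi**2 + (q / p) * T_psi**2, 1.0))        # probability conservation (7)
print(np.allclose(R_phi, -R_psi), np.allclose(T_phi, (q / p) * T_psi))  # reciprocal relations (10)
```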
The step functions on the right-hand side of (11) and (12) stand for the orthogonality between eigenfunctions belonging to different eigenvalues of a Hermitian operator (4).

Figure 1: The scattering wave functions belonging to the energy regions (i)–(iv) under the subcritical potential step (with height \(V_{0}<2m\)). Blue arrows represent directions of incident, reflected, and transmitted waves in the scattering wave functions. Gray-shaded regions are a mass gap, where oscillating solutions do not exist. The left half corresponds to the left-incident case, while the right half corresponds to the right-incident case.

The orthogonality between the degenerate eigenfunctions \(\psi_{s}^{(E)}\) and \(\phi_{s}^{(E)}\) seems nontrivial, but can be shown explicitly. Bound states do not exist, and all the scattering wave functions \(\psi_{s}^{(E)}\) and \(\phi_{s}^{(E)}\) form a complete set [41; 42]. The orthogonality and completeness are essential properties in the field quantization discussed below. We expand the Dirac field \(\Psi^{(0)}\) in the Heisenberg picture on the basis of the scattering wave functions as \[\begin{split}\Psi^{(0)}(z,t)&=\sum_{s}\int_{0}^{\infty}dp\Big{(}a_{L}(p,s)\psi_{s}^{(E_{p})}(z,t)+b_{L}^{\dagger}(p,s)\psi_{s}^{(-E_{p})}(z,t)\Big{)}\\ &\quad+\sum_{s}\int_{0}^{\infty}dq\Big{(}a_{R}(q,s)\phi_{s}^{(V_{0}+E_{q})}(z,t)+b_{R}^{\dagger}(q,s)\phi_{s}^{(V_{0}-E_{q})}(z,t)\Big{)},\end{split} \tag{13}\] with equal-time canonical anti-commutation relations: \[\{\Psi_{\alpha}^{(0)}(z,t),\Psi_{\beta}^{(0)\dagger}(z^{\prime},t)\}=\delta(z-z^{\prime})\delta_{\alpha,\beta}, \tag{14}\] where \(\alpha,\beta\) are spinor indices, and the other anti-commutators vanish. On the right-hand side of (13), annihilation operators \(a_{L}(p,s)\) and \(a_{R}(q,s)\) are introduced as coefficients of \(\psi_{s}^{(E_{p})}\) in the energy regions (i)-(ii) and \(\phi_{s}^{(V_{0}+E_{q})}\) in (i), whereas creation operators \(b_{L}^{\dagger}(p,s)\) and \(b_{R}^{\dagger}(q,s)\) are introduced in accordance with \(\psi_{s}^{(-E_{p})}\) in (iv) and \(\phi_{s}^{(V_{0}-E_{q})}\) in (iii)-(iv). Because of the orthonormality and completeness of the expansion basis, the anti-commutation relations for the Dirac field are equivalent to those for the creation/annihilation operators: for example, for the annihilation operator \(a_{L}(p,s)\), only the anti-commutator with itself is nonvanishing, giving a delta function \[\{a_{L}(p,s),a_{L}^{\dagger}(p^{\prime},s^{\prime})\}=\delta(p-p^{\prime})\delta_{s,s^{\prime}}. \tag{15}\] It should be noted that the creation/annihilation operators are just formal ones and are not equipped with a particle-anti-particle interpretation. Aside from these operators, asymptotic creation/annihilation operators with physical meaning can be introduced by extracting wave modes characterizing particles and anti-particles from the field operator in the distant past and future \(t=\pm\infty\). Particles and anti-particles in the area \(z\neq 0\), if properly defined, are not subject to an electric force from the potential step. Consider a scattering process; in the distant past and future, they should exist at spatial infinity \(|z|=\infty\) and be characterized by a monochromatic plane wave with positive and negative frequency, respectively. 
Since the field operator in (13) satisfies the equation of motion (1) at any time, the particle and anti-particle modes, along with the asymptotic creation/annihilation operators as their coefficients, should be included in the limit \(t\to-\infty\) of the field operator. For instance, an "in" annihilation operator of particle at the left infinity \(z=-\infty\) is defined as \[a_{L,\text{in}}^{(0)}(p,s)=\lim_{t\to-\infty}\int_{-\infty}^{\infty}dzu_{p,s}^ {\dagger}(z,t)\Psi^{(0)}(z,t), \tag{16}\] where \(u_{p,s}(z,t)\) is a positive-frequency plane wave on the left of the step \[u_{p,s}(z,t)=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E_{p}}}u(p,s)e^{-iE_{p}t+ipz}. \tag{17}\] After substituting the field decomposition (13), one has to evaluate the limit \(t\to-\infty\) of the inner products of the plane wave and the scattering wave functions. Notice that almost all of them do not contribute to the limit because they indefinitely oscillate in time and thus vanish due to Riemann-Lebesgue's lemma. The candidates to remain are the singular terms without the oscillation factors, i.e., those composed of the plane wave and the scattering wave functions belonging to the energy eigenvalue \(E_{p}\). The concrete calculation yields a simple expression of the annihilation operator of the "in" particle for \(p>0\), as \[a_{L,\text{in}}^{(0)}(p,s)=a_{L}(p,s). \tag{18}\] The result provides the formal operator \(a_{L}(p,s)\) with the physical meaning of a particle incoming from the left in the distant past. In other words, the left-incident scattering wave function \(\psi_{s}^{(E_{p})}\) plays a role of the "in" mode function in this framework (A). The same relations for the other creation/annihilation operators \(b_{L}^{\dagger}(p,s),a_{R}(q,s),b_{R}^{\dagger}(q,s)\) and their counterparts \(b_{L,\text{in}}^{\dagger}(p,s),a_{R,\text{in}}(q,s),b_{R,\text{in}}^{\dagger}(q,s)\) are obtained. One finds that the corresponding scattering wave functions are the "in" mode functions in (A) which characterize a particle and an anti-particle incoming from the spatial infinity \(\left|z\right|=\infty\). Thus, (13) can be understood as the field decomposition on the basis of the "in" mode functions in (A), where the subscripts "in" on the creation/annihilation operators should be added. Vacuum at the distant past, or "in" vacuum \(\left|0\right\rangle_{\text{in}}\), is defined as an eigenstate which is annihilated by any "in" annihilation operators: \[a_{L,\text{in}}^{(0)}(p,s)\left|0\right\rangle_{\text{in}}=a_{R,\text{in}}^{(0) }(q,s)\left|0\right\rangle_{\text{in}}=b_{L,\text{in}}^{(0)}(p,s)\left|0\right \rangle_{\text{in}}=b_{R,\text{in}}^{(0)}(q,s)\left|0\right\rangle_{\text{in}}=0. \tag{19}\] Any "in" states of the picture (A) are created by applying the creation operators on the "in" vacuum. Note that the right-hand side of (16) for \(-p\) disappears. All information of the "in" particle and anti-particle approaching the center \(z=0\) are included in the field operator \(\Psi^{(0)}\), but those going away from the center are not, at \(t=-\infty\) limit. The "out" creation/annihilation operators are introduced in the same way as the "in" creation/annihilation operators, but in the opposite limit \(t\rightarrow\infty\). The field operator in the distant future \(t=\infty\) contains the information of an outgoing particle and anti-particle away from the center. 
The explicit forms of the "out" creation/annihilation operators are obtained as a sum of the "in" creation/annihilation operators, satisfying the canonical anti-commutation relations; see Appendix B. It should be mentioned that the "in/out" transformations hold among the asymptotic creation/annihilation operators in the same energy eigenvalue. It implies that particle-anti-particle mode mixing occurs only in the Klein region \(m<E\leq V_{0}-m\), where the positive-frequency spectrum of \(\psi_{s}^{(E)}\) and the negative-frequency spectrum of \(\phi_{s}^{(E)}\) overlap. In this case, the "in" vacuum becomes unstable to induce spontaneous pair production. The overcritical potential step supplies energy \(V_{0}\) to a virtual particle-anti-particle pair by \[E_{p}+E_{q}=V_{0}, \tag{20}\] making it a real pair. Note that "out" mode functions in (A), which are unnecessary in the above framework, are derived from the "in/out" transformation among the asymptotic creation/annihilation operators. ### Particle-anti-particle picture (B) In the other particle-anti-particle picture (B), two complete orthonormal sets are prepared in the field decomposition. Although the previous work [34] treated a scalar potential with more general configurations, we restrict to the case of the step potential for comparison with the picture (A). Two degenerated stationary solutions of the Dirac equation \(\varphi_{s}^{(E)}\) and \(\chi_{s}^{(E)}\) in the energy region (i) are expressed as \[\varphi_{s}^{(E)}(z,t) =\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E}}\sqrt{\frac{p}{q}}e^{-iEt }\Big{\{}\theta(-z)T_{\phi}(q)u(p,s)e^{ipz}+\theta(z)\left[u(q,s)e^{iqz}+R_{ \phi}(q)u(-q,s)e^{-iqz}\right]\Big{\}}, \tag{21}\] \[\chi_{s}^{(E)}(z,t) =\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E-V_{0}}}\sqrt{\frac{q}{p}} e^{-iEt}\Big{\{}\theta(-z)\left[u(-p,s)e^{-ipz}+R_{\psi}(p)u(p,s)e^{ipz} \right]+\theta(z)T_{\psi}(p)u(-q,s)e^{-iqz}\Big{\}}, \tag{22}\] where \(R_{\psi}(p),T_{\psi}(p),R_{\phi}(q),T_{\phi}(q)\) are the same as in (6) and (9). The above solutions are normalized by the conditions (11) and (12), respectively. The normalizations are different from those in [34], where the delta functions of momenta in (11) and (12) are replaced with \(\delta(E-E^{\prime})\), but it does not change the particle-anti-particle notion provided the solutions satisfy the orthogonality and completeness relations. On the left of the step, \(\varphi_{s}^{(E)}\) is composed of a single-mode plane wave moving to the right. Nikishov, and in later years, Gavrilov and Gitman, have interpreted this solution as particle mode ingoing from the left infinity in the distant past. The particle-anti-particle concept is brought about from the boundary conditions that the "in" mode functions behave asymptotically as single-mode plane waves approaching the center at the left infinity. The boundary conditions in terms of spatial infinity are in accordance with the case of time-dependent backgrounds as if the labels of space and time are interchanged. \(\chi_{s}^{(E)}\) consists of a single-mode plane wave moving to the center on the right of the step, which is interpreted as particle mode incoming from the right infinity in the distant past. The behaviors of the "in" mode functions in (B) in every energy range are drawn in Fig. 2. The "in" mode functions (B) in the regions (ii) and (iii), shown in blue arrows, are the same as the "in" mode functions (A), i.e., the scattering wave functions describing the total reflection. 
The mode functions (B) in the other regions, drawn in red arrows, differ from (A). The red arrows in the left (right) panel of Fig. 2 correspond to the blue arrows in the right (left) panel of Fig. 1, with their directions reversed. One finds that \(\varphi_{s}^{(E)}\) and \(\chi_{s}^{(E)}\) in (21) and (22) coincide with the "out" mode functions (A) up to their normalizations (see Appendix B). Thus, the "in/out" separation and whether an (anti-)particle is on the left or right of the step are defined conversely. Note that the mode functions (A) and (B) with the same energy eigenvalue are related to each other by linear transformations: \(\varphi_{s}^{(E)}\) and \(\chi_{s}^{(E)}\) in the region (i) can be rewritten as a linear combination of \(\psi_{s}^{(E)}\) and \(\phi_{s}^{(E)}\), i.e., \[\begin{pmatrix}\varphi_{s}^{(E)}(z,t)\\ \chi_{s}^{(E)}(z,t)\end{pmatrix}=\begin{pmatrix}\sqrt{\frac{q}{p}}T_{\psi}(p)&\sqrt{\frac{p}{q}\frac{E_{q}}{E_{p}}}R_{\phi}(q)\\ \sqrt{\frac{q}{p}\frac{E_{p}}{E_{q}}}R_{\psi}(p)&\sqrt{\frac{p}{q}}T_{\phi}(q)\end{pmatrix}\begin{pmatrix}\psi_{s}^{(E)}(z,t)\\ \phi_{s}^{(E)}(z,t)\end{pmatrix}. \tag{23}\] The field \(\Psi^{(0)}\) is expanded on the basis of the "in" mode functions (B) as \[\begin{split}\Psi^{(0)}(z,t)&=\sum_{s}\int_{0}^{\infty}dp\Big{(}c_{L,\text{in}}^{(0)}(p,s)\varphi_{s}^{(E_{p})}(z,t)+d_{L,\text{in}}^{(0)\dagger}(p,s)\varphi_{s}^{(-E_{p})}(z,t)\Big{)}\\ &\quad+\sum_{s}\int_{0}^{\infty}dq\Big{(}c_{R,\text{in}}^{(0)}(q,s)\chi_{s}^{(V_{0}+E_{q})}(z,t)+d_{R,\text{in}}^{(0)\dagger}(q,s)\chi_{s}^{(V_{0}-E_{q})}(z,t)\Big{)},\end{split} \tag{24}\] where other "in" creation/annihilation operators are introduced as their coefficients. The anti-commutation relations for these operators are equivalent to the equal-time canonical anti-commutation relations for the field operator due to the completeness and orthonormality of the "in" mode functions (B). We denote the creation/annihilation operators with the subscript "in" since they have already been equipped with the physical meaning of particle-anti-particle by the above mode functions. One finds that the "in" vacuum within the picture (B), introduced as a zero eigenstate for any annihilation operators in (24), is the same as \(\ket{0}_{\text{in}}\) in (19). It is confirmed by the relations among the "in" creation/annihilation operators in both pictures (A) and (B). To see this, let us write the "in" annihilation operator \(c_{L,\text{in}}^{(0)}(p,s)\), for example, as the inner product of the corresponding mode function and the field operator: \[c_{L,\text{in}}^{(0)}(p,s)=\lim_{t\rightarrow-\infty}\int_{-\infty}^{\infty}dz\varphi_{s}^{(E_{p})\dagger}(z,t)\Psi^{(0)}(z,t), \tag{25}\] where the limit \(t\rightarrow-\infty\) is applied to the inner product in accordance with (16), though it is unnecessary for an actual calculation. By substituting the field decomposition (13) and the linear transformation for \(\varphi_{s}^{(E_{p})}\) in (23), the annihilation operator for the energy region (i) is expressed as \[c_{L,\text{in}}^{(0)}(p,s)=\sqrt{\frac{q}{p}}T_{\psi}(p)a_{L}(p,s)+\sqrt{\frac{p}{q}\frac{E_{q}}{E_{p}}}R_{\phi}(q)a_{R}(q,s). \tag{26}\] As shown in the "in/out" transformation among the annihilation operators in (B4), \(c_{L,\text{in}}^{(0)}(p,s)\) is proportional to the "out" annihilation operator \(a_{R,\text{out}}^{(0)}(q,s)\). 
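The linear relation (23) can also be verified numerically. The sketch below (an illustration added here, not from the original paper) builds the spatial parts of (5), (8), and (21) with the spinors of Appendix A, taking \(\boldsymbol{\xi}(s)\) as the standard basis spinors and sample values of \(E\), \(V_{0}\), and \(z\), and checks the first row of (23).

```python
import numpy as np

m, V0 = 1.0, 1.5                      # natural units; subcritical step height V0 < 2m
E = 3.2                               # an energy eigenvalue in region (i): E > V0 + m
Ep, Eq = E, E - V0
p = np.sqrt(Ep**2 - m**2)
q = np.sqrt(Eq**2 - m**2)
sz = np.diag([1.0, -1.0])
xi = np.eye(2)                        # assumption: standard basis for the two-component spinors xi(s)

def u(k, s):
    """Positive-frequency spinor of Appendix A for momentum k and spin s."""
    Ek = np.sqrt(k**2 + m**2)
    return np.sqrt((Ek + m) / (2 * m)) * np.concatenate([xi[s], (k / (Ek + m)) * (sz @ xi[s])])

a = np.sqrt((E - V0 + m) / (E + m))
b = np.sqrt((E - V0 - m) / (E - m))
R_psi, T_psi = (a - b) / (a + b), 2 / (a + b)    # eq. (6)
R_phi, T_phi = -R_psi, (q / p) * T_psi           # reciprocity (10)

def psi(z, s):                                   # spatial part of the left-incident solution (5)
    pre = np.sqrt(m / Ep) / np.sqrt(2 * np.pi)
    if z < 0:
        return pre * (u(p, s) * np.exp(1j * p * z) + R_psi * u(-p, s) * np.exp(-1j * p * z))
    return pre * T_psi * u(q, s) * np.exp(1j * q * z)

def phi(z, s):                                   # spatial part of the right-incident solution (8)
    pre = np.sqrt(m / Eq) / np.sqrt(2 * np.pi)
    if z < 0:
        return pre * T_phi * u(-p, s) * np.exp(-1j * p * z)
    return pre * (u(-q, s) * np.exp(-1j * q * z) + R_phi * u(q, s) * np.exp(1j * q * z))

def varphi(z, s):                                # spatial part of the mode function (21)
    pre = np.sqrt(m / Ep) * np.sqrt(p / q) / np.sqrt(2 * np.pi)
    if z < 0:
        return pre * T_phi * u(p, s) * np.exp(1j * p * z)
    return pre * (u(q, s) * np.exp(1j * q * z) + R_phi * u(-q, s) * np.exp(-1j * q * z))

# first row of (23): varphi = sqrt(q/p) T_psi * psi + sqrt((p/q)(Eq/Ep)) R_phi * phi
for z in (-0.7, 0.9):
    lhs = varphi(z, 0)
    rhs = np.sqrt(q / p) * T_psi * psi(z, 0) + np.sqrt((p / q) * (Eq / Ep)) * R_phi * phi(z, 0)
    print(np.allclose(lhs, rhs))                 # True, True
```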
The field operator \(\Psi^{(0)}\) is also expanded on the basis of the "out" mode functions (B) with "out" creation/annihilation operators equipped with the particle-anti-particle notion, and a similar correspondence between (A) and (B) can be found.

Figure 2: The "in" mode functions (B) in the energy regions (i)–(iv) under the same potential step. The left half of the figures shows \(\varphi_{s}^{(E)}\), while the right half shows \(\chi_{s}^{(E)}\). The directions of plane waves included in the mode functions are expressed in blue or red arrows. The blue arrows in the regions (ii) and (iii) are the same as those in Fig. 1.

The "out" mode functions \(\varphi^{(E)}_{s,\text{out}}\) and \(\chi^{(E)}_{s,\text{out}}\) in the region (i) are identical to the "in" mode functions (A) up to their normalization factors: \[\varphi^{(E)}_{s,\text{out}}(z,t)=\sqrt{\frac{p}{q}\frac{E_{q}}{E_{p}}}\phi^{(E)}_{s}(z,t),\quad\chi^{(E)}_{s,\text{out}}(z,t)=\sqrt{\frac{q}{p}\frac{E_{p}}{E_{q}}}\psi^{(E)}_{s}(z,t). \tag{27}\] They are normalized by (11) and (12), respectively. The relations (27) are reflected in the creation/annihilation operators, such as \[c^{(0)}_{L,\text{out}}(p,s)=\lim_{t\to\infty}\int_{-\infty}^{\infty}dz\varphi^{(E_{p})\dagger}_{s,\text{out}}(z,t)\Psi^{(0)}(z,t)=\sqrt{\frac{p}{q}\frac{E_{q}}{E_{p}}}a_{R}(q,s), \tag{28}\] where the limit \(t\to\infty\) in the middle is actually unnecessary in the same way as (25). It should be stressed again that particle-anti-particle mode mixing among the creation/annihilation operators occurs only in the Klein region. This is guaranteed by the orthogonality of the mode functions; see (25) and (28). Comments on the particle-anti-particle picture (B) for the overcritical case are in order. The "in/out" mode functions in the Klein region are the same as (A). The criteria for choosing the two sets of mode functions (B) are discussed by Gavrilov and Gitman [34] within the quantum field theory by calculating various physical quantities, such as an energy-momentum tensor in a one-(anti-)particle state. As discussed in the previous subsection, vacuum instability and pair production are attributed only to particles and anti-particles in the Klein region. Thus, quantitative differences between the two pictures do not emerge in the vacuum decay rate, the number of created pairs, etc. ## III Furry-picture perturbation theory In the following discussion, a weak and oscillating electric field is incorporated into the quantum field theory developed in Sec. II as a perturbation. We consider an oscillating gauge field with a single frequency \(\omega\): \[A_{3}(z,t)=\frac{\mathcal{E}_{z}}{\omega}\sin(\omega t)e^{-(z/l)^{2}}, \tag{29}\] which is localized in space with a width \(l\) along the same direction as the potential step. It gives a spatially localized and oscillating electric field with a maximum field strength \(\mathcal{E}_{z}\), which is assumed to be much smaller than the Schwinger limit \(m^{2}/e\). The time-evolution equation to be considered is now \[[i\gamma^{0}(\partial_{t}-ieA_{0}(z))+i\gamma^{3}\partial_{z}-m]\Psi=-e\gamma^{3}A_{3}(z,t)\Psi, \tag{30}\] and its right-hand side is treated as a perturbation term. 
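As a small aside (an illustrative check, not from the original text), a one-line computation confirms that the gauge field (29) corresponds to a spatially localized oscillating electric field whose peak strength is \(\mathcal{E}_{z}\); the overall sign depends on the convention chosen for the vector potential.

```python
import sympy as sp

z, t = sp.symbols('z t', real=True)
Ez, w, l = sp.symbols('E_z omega l', positive=True)

A3 = (Ez / w) * sp.sin(w * t) * sp.exp(-(z / l)**2)   # gauge field (29)
E_field = -sp.diff(A3, t)                             # electric field from the vector potential, up to sign convention
print(E_field)              # proportional to E_z * cos(omega*t) * exp(-(z/l)**2): localized in z, oscillating in t
print(E_field.subs(z, 0))   # amplitude E_z at z = 0, i.e. the maximum field strength quoted below (29)
```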
The fermionic field \(\Psi\) under the total external fields is expanded in a series of the perturbative gauge field \(A_{3}\) up to the first order, as \[\Psi(z,t)=\sqrt{Z}\Big{[}\Psi^{(0)}(z,t)-e\int_{-\infty}^{\infty}dz^{\prime} dt^{\prime}S_{\text{ret}}(z,t;z^{\prime},t^{\prime})\gamma^{3}A_{3}(z^{\prime},t ^{\prime})\Psi^{(0)}(z^{\prime},t^{\prime})+\mathcal{O}\big{(}(A_{3})^{2} \big{)}\Big{]}, \tag{31}\] with an overall factor \(\sqrt{Z}\) and the zeroth-order field \(\Psi^{(0)}\) in the previous section. \(S_{\text{ret}}\) is a retarded Green's function, which obeys an equation \[[i\gamma^{0}(\partial_{t}-ieA_{0}(z))+i\gamma^{3}\partial_{z}-m]S_{\text{ret} }(z,t;z^{\prime},t^{\prime})=\delta(z-z^{\prime})\delta(t-t^{\prime}), \tag{32}\] where the spinor indices are omitted. It is easy to show that the following form of \(S_{\text{ret}}\) \[S_{\text{ret}}(z,t;z^{\prime},t^{\prime})=-i\theta(t-t^{\prime})\sum_{\epsilon,s=\pm}\biggl{[}\int_{0}^{\infty}dp\,\psi^{(\epsilon E_{p})}_{s}(z,t)\bar{ \psi}^{(\epsilon E_{p})}_{s}(z^{\prime},t^{\prime})+\int_{0}^{\infty}dq\,\phi ^{(V_{0}+\epsilon E_{q})}_{s}(z,t)\bar{\phi}^{(V_{0}+\epsilon E_{q})}_{s}(z^{ \prime},t^{\prime})\biggr{]} \tag{33}\] satisfies (32) along with the retarded boundary condition. Note that the right-hand side of \(\Psi\) in (31) is not a perturbation series of the coupling constant \(e\) because the scalar potential \(A_{0}\) with its coupling constant is incorporated non-perturbatively to the zeroth-order field and the retarded Green's function. The non-perturbative contributions from \(A_{0}\) as well as the perturbative ones from \(A_{3}\) are combined in the perturbation expansion (31), in which the interplay between the non-perturbative and perturbative mechanisms in [16; 17] is expected to occur. The "in" creation/annihilation operators are not subjected to the perturbation because they are extracted from the asymptotic field \(\Psi^{(0)}\), which is related to the field \(\Psi\) through \(\Psi\to\sqrt{Z}\Psi^{(0)}\) in the limit \(t\to-\infty\). It means that the "in" vacuum \(\left|0\right\rangle_{\text{in}}\) in (19) is unperturbed by the additional electric field. The "out" creation/annihilation operators, on the other hand, should be extracted from another asymptotic field, denoted by \(\Psi_{\text{out}}\) with a relation \(\Psi\rightarrow\sqrt{Z}\Psi_{\text{out}}\) in the limit \(t\rightarrow\infty\), which is dressed by the Furry-picture perturbation. For instance, the "out" annihilation operator of particle leaving from the center in the left of the step is defined in the picture (A) as \[a_{L,\text{out}}(p,s)=\lim_{t\rightarrow\infty}\frac{1}{\sqrt{Z}}\int_{- \infty}^{\infty}dzu_{-p,s}^{\dagger}(z,t)\Psi(z,t), \tag{34}\] where \[u_{-p,s}(z,t)=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E_{p}}}u(-p,s)e^{-iE_{p}t-ipz}. \tag{35}\] The anti-commutation relations for the "out" creation/annihilation operators are guaranteed by assuming the equal-time canonical anti-commutation relations for the asymptotic field \(\Psi_{\text{out}}\). By the substitution of (31), it is decomposed to the series in terms of \(A_{3}\), i.e., \(a_{L,\text{out}}(p,s)=\sum_{n\geq 0}a_{L,\text{out}}^{(n)}(p,s)\). In the energy region (i), the zeroth-order term is calculated in Appendix B to be a linear combination of the "in" annihilation operators of particles. 
The limit evaluation yields the next order as \[a_{L,\text{out}}^{(1)}(p,s)=ie\sqrt{\frac{p}{q}\frac{E_{q}}{E_{p}}}\int_{- \infty}^{\infty}dzdt\bar{\chi}_{s}^{(V_{0}+E_{q})}(z,t)\gamma^{3}A_{3}(z,t) \Psi^{(0)}(z,t), \tag{36}\] where the transformation for \(\chi_{s}^{(E)}\) in (23) is used. \(\chi_{s}^{(V_{0}+E_{q})}\) with the square-root factor in the right-hand side of (36) is equivalent to the "out" mode function in (A) which characterizes a particle leaving from the center on the left of the step (see (B6)). Observe that the integration over \(t\) gives delta functions of energy, implying the energy conservation relations. In particular, the energy eigenvalues of the "in" mode functions belonging to the negative-frequency spectra (iii) and (iv) can reach the energy eigenvalue \(E_{p}=V_{0}+E_{q}\) of the "out" mode functions \(\sqrt{pE_{q}/qE_{p}}\,\chi_{s}^{(V_{0}+E_{q})}\) with an energy assistance \(\omega\) of the perturbative electric field, which leads to the particle-anti-particle mode mixing. For the "in" mode functions \(\phi_{s^{\prime}}^{(V_{0}-E_{q^{\prime}})}\), for example, the energy conservation relation is \[E_{p}=V_{0}-E_{q^{\prime}}+\omega, \tag{37}\] implying the energy balance between the particle-anti-particle pair and the external fields (compare it with (20) under the potential step alone). The same relation can be derived for \(E_{p}\leq V_{0}+m\). Therefore, it holds for any \(p>0\). Note that for given \(p\), the existence of \(q^{\prime}\) satisfying (37) requires a condition \[V_{0}+\omega-E_{p}>m. \tag{38}\] When we denote its solution as \(q^{\prime}=q_{1}(>0)\), the delta function of the energy is rewritten as \[\delta(E_{p}-V_{0}+E_{q^{\prime}}-\omega)=\frac{E_{q_{1}}}{q_{1}}\delta(q^{ \prime}-q_{1}), \tag{39}\] and \(\delta(q^{\prime}-q_{1})\) cancels the integration over \(q^{\prime}\) in the field operator \(\Psi^{(0)}\). The criticality condition for the vacuum instability is derived from the existence condition (38) as \[V_{0}+\omega>2m. \tag{40}\] This indicates that the assisted pair production can occur even when the potential energy \(V_{0}\) as well as the assistance energy \(\omega\) are below the threshold \(2m\). Under the condition (40), the number of created particles moving to the left per unit momentum in the distant future \[\left\langle\frac{dN}{dp}\right\rangle=\left.{}_{\text{in}}\langle 0|a_{L,\text{ out}}^{\dagger}(p,s)a_{L,\text{out}}(p,s)|0\rangle_{\text{in}}=\left.{}_{\text{in}} \langle 0|a_{L,\text{out}}^{(1)\dagger}(p,s)a_{L,\text{out}}^{(1)}(p,s)|0\rangle_{ \text{in}}+\mathcal{O}\big{(}(A_{3})^{3}\big{)}\right. \tag{41}\] becomes finite at the second order of the perturbation. Zeroth- and first-order contributions do not exist because \(a_{L,\text{out}}^{(0)}(p,s)\) does not include "in" creation/annihilation operators of anti-particle. One can also calculate the quantity up to the same order for right-moving particles. Particle creation under the perturbation within the picture (B) is also discussed. A creation/annihilation operator of a particle moving left on the left of the step, for example, is introduced by an analogy with (25), as \[c_{L,\text{out}}(p,s)=\lim_{t\to\infty}\frac{1}{\sqrt{Z}}\int_{-\infty}^{\infty} dz\varphi_{s,\text{out}}^{(E_{p})\dagger}(z,t)\Psi(z,t), \tag{42}\] where the "out" mode function \(\varphi_{s,\text{out}}^{(E_{p})}\) in (27) is used. 
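To make the kinematics concrete, the following sketch (an illustration added here, not part of the original text) implements the energy-conservation relations (37) and (46), the existence condition (38), and the criticality condition (40); the numerical values are simply those quoted later for Fig. 3, and the labels `q1` and `p2` follow the notation of the text.

```python
import numpy as np

m = 1.0
V0, w = 1.5 * m, 1.5 * m          # parameters of Fig. 3: both subcritical, yet V0 + w > 2m

def channels(p):
    """Anti-particle channels opened by one quantum w for a left-moving particle of momentum p."""
    Ep = np.sqrt(p**2 + m**2)
    out = {}
    Eq1 = V0 + w - Ep             # eq. (37): negative-frequency channel on the right of the step
    if Eq1 > m:                   # existence condition (38)
        out["q1"] = np.sqrt(Eq1**2 - m**2)
    Ep2 = w - Ep                  # eq. (46): negative-frequency channel on the left of the step
    if Ep2 > m:
        out["p2"] = np.sqrt(Ep2**2 - m**2)
    return out

print(V0 + w > 2 * m)             # criticality condition (40): True for these parameters
print(channels(0.5 * m))          # only the q1 channel opens here
p_max = np.sqrt((V0 + w - m)**2 - m**2)   # support m < E_p < V0 + w - m of the created spectrum
print(p_max)                      # about 1.73 m, consistent with the finite support discussed in Sec. IV
```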
One can calculate the right-hand side of the above equation in the same way and understand that the first-order term of the perturbation, which causes the particle-anti-particle mode mixing, is different from the "out" annihilation operator \(a_{L,\text{out}}^{(1)}(p,s)\) in (36). Therefore, the "out" vacuum in the picture (B) differs from that of (A). Since the "in" vacuum is unchanged by the perturbation (as explained in the previous paragraph), a differential particle number in the picture (B) is evaluated as \[{}_{\text{in}}\langle 0|c_{L,\text{out}}^{\dagger}(p,s)c_{L,\text{out}}(p,s)|0 \rangle_{\text{in}}\,. \tag{43}\] It has a nontrivial value at the second-order perturbation and, interestingly, shows a different momentum distribution from that of (41), as will be shown in Sec. V. ## IV Dynamically Assisted Pair Production We investigate the momentum-dependence of the differential particle number within the particle-anti-particle picture (A) up to the second order. A differential number density of left-moving particle, denoted by \(n(-p,s)\), is introduced from the differential particle number (41) divided by a volume factor, i.e., \[\Big{\langle}\frac{dN}{dp}\Big{\rangle}=n(-p,s)\delta(p-p), \tag{44}\] where the delta function \(\delta(p-p)\) is proportional to an infinite spatial length along \(z\)-direction. This is brought about from the factors \({}_{\text{in}}\langle 0|bb^{\dagger}|0\rangle_{\text{in}}\), where the operators \(b,b^{\dagger}\), written in a shorthand notation, are included in the "out" creation/annihilation operators (36). By denoting the mode functions as separated forms to space and time, such as \(\psi_{s}^{(E)}(z,t)=\psi_{s}^{(E)}(z)e^{-iEt}\), the number density for \(E_{p}=V_{0}+E_{q}>V_{0}+m\) is analytically expressed as \[\begin{split} n(-p,s)=\Big{(}\frac{e\mathcal{E}_{z}\pi}{\omega} \Big{)}^{2}\frac{E_{q}}{q}&\bigg{\{}\theta(E_{q_{1}}-m)\frac{E_ {q_{1}}}{q_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\chi}_{s }^{(V_{0}+E_{q})}(z)\gamma^{3}\phi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z)e^{-(z/l )^{2}}\bigg{|}^{2}\\ &+\theta(E_{p_{2}}-m)\frac{E_{p_{2}}}{p_{2}}\bigg{|}\sum_{s^{ \prime}}\int_{-\infty}^{\infty}dz\bar{\chi}_{s}^{(V_{0}+E_{q})}(z)\gamma^{3} \psi_{s^{\prime}}^{(-E_{p_{2}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\bigg{\}}+ \mathcal{O}(\mathcal{E}_{z}^{3}),\end{split} \tag{45}\] where \(q_{1}>0\) is the solution \(q^{\prime}=q_{1}\) of the equation (37), and \(p_{2}>0\) is determined by another equation of \(p^{\prime}\): \[E_{p}=-E_{p^{\prime}}+\omega. \tag{46}\] The step functions in (45) stem from the exsistence conditions for \(q_{1}\) and \(p_{2}\), and the momenta suffice \(E_{p_{2}}=V_{0}+E_{q_{1}}\) if both of them exist, implying \(p_{1}=p_{2}\) and \(q_{1}=q_{2}\). The first term in the curly brackets of (45) is a contribution from \(\phi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}\), i.e., negative-frequency electrons on the right of the step, while the second term is one from negative-frequency electrons on the left of the step. By substituting the functional forms of the mode functions, one can perform the integrations over \(z\) to obtain expressions with the error function and find that the spin dependence does not appear. For \(E_{p}\leq V_{0}+m\), the result (45) is modified by a following replacement: \[\sqrt{\frac{E_{q}}{q}}\chi_{s}^{(V_{0}+E_{q})}(z)\xrightarrow{E_{p}\leq V_{0} +m}\sqrt{\frac{E_{p}}{p}}\psi_{s}^{(E_{p})}(z). 
\tag{47}\] \(\psi_{s}^{(E_{p})}\) on the right-hand side of (47) is the "out" mode function (A) of a particle moving to the left in the energy region (ii) up to the normalization. Observe that the above replacement of the functions is continuous in terms of the energy eigenvalue because the left-hand side is written, with the aid of the transformation among the mode functions (23) and the reciprocity (10), as \[\sqrt{\frac{E_{p}}{p}}\Big{(}R_{\psi}(p)\psi_{s}^{(E_{p})}(z,t)+\sqrt{\frac{E_{q}}{E_{p}}}T_{\psi}(p)\phi_{s}^{(V_{0}+E_{q})}(z,t)\Big{)}. \tag{48}\] At the energy eigenvalue \(E_{p}=V_{0}+m\), where the scattering behavior is switched from the over-the-barrier scattering to the total reflection, the reflection and transmission coefficients become \(R_{\psi}(p)=1\) and \(T_{\psi}(p)=0\). This assures that \(n(-p,s)\) is also continuous in terms of the energy eigenvalue \(E_{p}\), and therefore, the momentum \(p\). The differential number density of a right-moving particle, which is defined by factoring \(\delta(q-q)\) out of the differential particle number with momentum \(q\), is expressed as \[\begin{split} n(q,s)=\Big{(}\frac{e{\cal E}_{z}\pi}{\omega}\Big{)}^{2}\frac{E_{p}}{p}\bigg{\{}&\theta(E_{q_{1}}-m)\frac{E_{q_{1}}}{q_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\varphi}_{s}^{(E_{p})}(z)\gamma^{3}\phi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\\ &+\theta(E_{p_{2}}-m)\frac{E_{p_{2}}}{p_{2}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\varphi}_{s}^{(E_{p})}(z)\gamma^{3}\psi_{s^{\prime}}^{(-E_{p_{2}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\bigg{\}}+\mathcal{O}(\mathcal{E}_{z}^{3}),\end{split} \tag{49}\] with the same notation \(q_{1}\) and \(p_{2}\). Note that the volume factors \(\delta(p-p)\) and \(\delta(q-q)\) are the same quantity since the momenta on the left and right of the step are written as \(\pm p\) and \(\pm q\), respectively. Note also that \(\varphi_{s}^{(E_{p})}\) on the right-hand side of (49) is proportional to the "out" mode function of a right-moving particle, see (B6). The left panel of Fig. 3 shows the momentum distribution of \(n(k,s)\), with \(k=-p\) or \(q\), when the energy of the oscillating electric field \(\omega\) is below the threshold \(\omega<2m\), specifically when \(\omega=1.5m\). The height of the potential step is also set to the subcritical value \(V_{0}=1.5m\), although the criticality condition (40) is satisfied. For this case, only \(n(-p,s)\) for \(E_{p}\leq V_{0}+m\) can be nonvanishing. The field strength of the oscillating electric field is \({\cal E}_{z}=0.01m^{2}/e\), and its spatial width varies from \(l=2m^{-1}\) to \(\infty m^{-1}\). The figure shows that the pair production from the vacuum can occur even when the two external fields are individually insufficient for the vacuum decay. Its mechanism can be understood by looking at the right panel of Fig. 3, where electrons in the Dirac sea in \(z>0\) are excited with the help of \(\omega\) and penetrate to the positive-frequency area in \(z<0\), yielding the electron-hole pairs. This is the combined process of the perturbative and non-perturbative contributions, which has already been explained in the original paper of the dynamically assisted Schwinger effect [7]. It is worth mentioning that the dynamically assisted Schwinger effect under time-dependent backgrounds has been known as the drastic enhancement of the usual Schwinger pair production, in which the produced number is exponentially suppressed but nonvanishing. 
On the other hand, the result obtained here states that the produced number changes from exactly zero to a finite value. The difference comes from whether the energy is conserved or not; the system under the potential step has time-translational invariance, and the energy conservation relations (37) and (46) exactly hold. These relations determine the finite support of the energy distribution \(m<E_{p}<V_{0}+\omega-m\). The momentum distribution lies only in the negative region since \(E_{q}=E_{p}-V_{0}>m\) cannot be satisfied for any \(q\); physically speaking, the created particles have no choice but to go left because of the forbidden region in \(z>0\). One can also see that the differential number density increases for a wider area of dynamical assistance \(l\) and is maximized for \(l=\infty m^{-1}\). The dependence on the parameter is most sensitive at \(k\sim-0.5m\) and \(k\sim-1.6m\), where two peaks grow with increasing \(l\). At \(k\sim-1.1m\), on the other hand, this dependence hardly exists except for \(l=2m^{-1}\).

Figure 3: (Left panel) The momentum distribution of the differential particle number density for \(V_{0}=1.5m\) and \(\omega=1.5m\). The other parameters are \({\cal E}_{z}=0.01m^{2}/e\) and \(l=2m^{-1},4m^{-1},6m^{-1},8m^{-1}\) and \(\infty m^{-1}\). The results for each \(l\) are drawn in the blue, orange, green, and red solid lines and the purple dashed line, respectively. (Right panel) The schematic picture of particle production from the Dirac sea when \(V_{0}<2m\) and \(\omega<2m\). The created particle moving to the left is drawn as the orange ball with the orange arrow, while the hole in the Dirac sea corresponds to the dashed orange circle. The particle at the site of the hole jumps up and tunnels to the positive-frequency area in \(z<0\) along the blue and red dashed arrows.

The differential number density shows different behavior when the assistance energy exceeds the threshold. The left panel of Fig. 4 shows the case \(\omega=2.5m\) with the other parameters the same as those for the previous one. Now, three peaks are evident for \(l\geq 4m^{-1}\), and in particular the graph for \(l=\infty m^{-1}\) diverges, implying the failure of the perturbative evaluation. This behavior may be attributed to the purely perturbative pair production under the oscillating electric field alone [17; 43] (see also the text [44]), where a particle-anti-particle pair can be produced only by the assistance energy \(\omega\). However, the perturbative calculations for \(l=\infty m^{-1}\) in the previous works give peaks localized in the momentum space by delta functions. The divergences in the current study, on the other hand, are of the type \(1/(k-k_{0}\pm i\varepsilon)\) (\(k_{0}\) is a divergent point and \(\varepsilon\) is a positive infinitesimal), which arise from the integration over \(z\) in the expressions of the differential number density \(n(-p,s)\) and \(n(q,s)\). The position \(k_{0}\) of each peak is explained by the particle production described in the right panel of Fig. 4: the simple excitation in the left end contributes to the peak in the middle of the momentum distribution. The combined process of excitation and tunneling can also occur in the present case (drawn in the middle of the three processes), making the middle peak the highest. Particles created by the simple excitation process in the right end of the figure can move in either direction, generating the peaks in the left and right ends, respectively. 
Specifically, particles created on the right of the step with negative momenta gain the extra kinetic energy \(V_{0}\) when passing through \(z=0\), which brings about the left-most peak in the far left. For the assistance energy under consideration, the particle created on the left of the step cannot transmit over the potential barrier because \(\omega<V_{0}+2m\). If \(\omega\) surpasses the threshold, however, the transmission of the particle by losing its energy by \(V_{0}\) can occur, and a new peak accompanied by the process appears in the positive-momentum region. Note that two points \(k\sim-1.1m\) and \(k\sim-2.3m\), at which the momentum distribution seems not smooth, are found. They correspond to the energy \(E_{p}=V_{0}\) and \(E_{p}=V_{0}+m\) in \(n(-p,s)\), respectively, where the mode functions in (47) are continuous but not smooth in terms of the energy eigenvalue. Note also that the two peaks which have been mentioned on the left-hand side of Fig. 3 are related to those in the negative-momentum region of the distribution in Fig. 4. The peaks of the differential number density grow with the increase of the assistance energy \(1.5m<\omega<2m\), shifting their positions in the negative direction. When \(\omega\) arrives at the threshold \(2m\), the peaks become highest at \(p=0\) and \(E_{p}=V_{0}+m\), especially those for \(l=\infty m^{-1}\) get divergent. Above the threshold energy, the peaks except for \(l=\infty m^{-1}\) get lower with shifting positions (to the left for those in the negative-momentum region and to the right for those in the positive-momentum region). ## V Influence of different particle-anti-particle pictures on pair production In this section, we evaluate the differential particle number based on the particle-anti-particle picture (B) and compare it with that of (A). The differential number density is introduced by factoring out the delta function of momenta, such as \[\underset{\text{in}}{\langle 0|c^{\dagger}_{L,\text{out}}(p,s)c_{L,\text{out}}(p, s)|0\rangle}_{\text{in}}=n^{\prime}(-p,s)\delta(p-p). \tag{50}\] Figure 4: (Left panel) The momentum distribution of the particle number \(V_{0}=1.5m\) and \(\omega=2.5m\). The other parameters are the same as Fig. 3. (Right panel) The schematic picture of particle production from the Dirac sea when \(V_{0}<2m\) and \(2m<\omega<V_{0}+2m\). The differential number density of the created particle with negative momenta is as follows: for \(E_{p}>V_{0}+m\), \[\begin{split} n^{\prime}(-p,s)=\Big{(}\frac{e\mathcal{E}_{z}\pi}{ \omega}\Big{)}^{2}\frac{E_{q}}{q}\bigg{\{}&\theta(E_{q_{1}}-m) \frac{E_{q_{1}}}{q_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{ \phi}_{s}^{(V_{0}+E_{q})}(z)\gamma^{3}\chi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z) e^{-(z/l)^{2}}\bigg{|}^{2}\\ &+\theta(E_{p_{2}}-m)\frac{E_{p_{2}}}{p_{2}}\bigg{|}\sum_{s^{ \prime}}\int_{-\infty}^{\infty}dz\bar{\phi}_{s}^{(V_{0}+E_{q})}(z)\gamma^{3} \varphi_{s^{\prime}}^{(-E_{p_{2}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\bigg{\}}+ \mathcal{O}(\mathcal{E}_{z}^{3}),\end{split} \tag{51}\] and for \(E_{p}\leq V_{0}+m\), the above formula with the replacement \[\sqrt{\frac{E_{q}}{q}}\phi_{s}^{(V_{0}+E_{q})}(z)\xrightarrow{E_{p}\leq V_{0} +m}\sqrt{\frac{E_{p}}{p}}\psi_{s}^{(E_{p})}(z). \tag{52}\] In fact, the equality \(n^{\prime}(-p,s)=n(-p,s)\) holds for \(E_{p}\leq V_{0}+m\) and an arbitrary \(\omega\). 
This is obvious for \(\omega\leq E_{p}+m\), since \(\chi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}\) in the first line of (51) is the same as \(\phi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}\) up to an overall phase factor. It implies that no difference between the differential number densities within the two pictures can be found for subcritical assistance energy. For \(\omega>E_{p}+m\), the equality is ensured by the following relation \[\begin{split}&\frac{E_{q_{1}}}{q_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\psi}_{s}^{(E_{p})}(z)\gamma^{3}\chi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}+\frac{E_{p_{1}}}{p_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\psi}_{s}^{(E_{p})}(z)\gamma^{3}\varphi_{s^{\prime}}^{(-E_{p_{1}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\\ &=\frac{E_{q_{1}}}{q_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\psi}_{s}^{(E_{p})}(z)\gamma^{3}\phi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}+\frac{E_{p_{1}}}{p_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\psi}_{s}^{(E_{p})}(z)\gamma^{3}\psi_{s^{\prime}}^{(-E_{p_{1}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}.\end{split} \tag{53}\] Its validity can be easily checked using the transformation formula among the mode functions (23). The differential number density with positive momenta \(n^{\prime}(q,s)\) is also calculated to be \[\begin{split} n^{\prime}(q,s)=\Big{(}\frac{e\mathcal{E}_{z}\pi}{\omega}\Big{)}^{2}\frac{E_{p}}{p}&\bigg{\{}\theta(E_{q_{1}}-m)\frac{E_{q_{1}}}{q_{1}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\psi}_{s}^{(E_{p})}(z)\gamma^{3}\chi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\\ &+\theta(E_{p_{2}}-m)\frac{E_{p_{2}}}{p_{2}}\bigg{|}\sum_{s^{\prime}}\int_{-\infty}^{\infty}dz\bar{\psi}_{s}^{(E_{p})}(z)\gamma^{3}\varphi_{s^{\prime}}^{(-E_{p_{2}})}(z)e^{-(z/l)^{2}}\bigg{|}^{2}\bigg{\}}+\mathcal{O}(\mathcal{E}_{z}^{3}).\end{split} \tag{54}\] The spin dependence does not appear in this particle-anti-particle picture either.

Figure 5 shows the comparison of the momentum distributions of the differential number density between the pictures (A) and (B). The spatial width of the oscillating electric field is fixed to \(l=6m^{-1}\). The other parameters are the same as Fig. 4 (in particular, the graph drawn in the blue solid line is the same as the green solid line in the last figure).

Figure 5: Comparison of the momentum distributions of the number density of particles within the particle–anti-particle picture (A) and (B), each of which is expressed in the blue solid line and orange dashed line, respectively. The parameters are \(V_{0}=1.5m\), \(\omega=2.5m\), \(l=6m^{-1}\), and \(\mathcal{E}_{z}=0.01m^{2}/e\).

The region between \(k\sim-2.3m\) and \(k=0\) (corresponding to \(E_{p}=V_{0}+m\) and \(p=0\) in \(n(-p,s)\), respectively) shows the equivalence of the number densities in (A) and (B) as explained above. The other region exhibits their quantitative difference in that the heights of the leftmost and rightmost peaks are reversed. This may be reminiscent of the relation, seen in Sec. II.2, that whether a particle is on the left or right of the step is oppositely interpreted in the pictures (A) and (B). Another feature is that the momentum distribution for (B) is discontinuous at \(E_{p}=V_{0}+m\) and \(p=0\). It is confirmed analytically from (51). 
Let us observe its behavior at \(E_{p}=V_{0}+m\) for instance: we only need to evaluate the limit \(q\to 0\) of the first line of (51), i.e., \[\lim_{q\to 0}\sqrt{\frac{E_{q}}{q}}\int_{-\infty}^{\infty}dz\bar{\phi}_{s}^{(V_{0}+E_{q})}(z)\gamma^{3}\chi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(z)e^{-(z/l)^{2}}. \tag{55}\] Since the transmission coefficient in \(\phi_{s}^{(V_{0}+E_{q})}\) is \(T_{\phi}(q)=\mathcal{O}(q)\) for an infinitesimal \(q\), the integration over negative \(z\) included in (55) does not give any contribution in the limit \(q\to 0\). For positive \(z\), the explicit form of \(\phi_{s}^{(V_{0}+E_{q})}\) is \[\sqrt{\frac{E_{q}}{q}}\phi_{s}^{(V_{0}+E_{q})}(z)=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{q}}\big{[}u(-q,s)e^{-iy}+R_{\phi}(q)u(q,s)e^{iy}\big{]}, \tag{56}\] where \(y=qz\) is introduced as a new integration variable instead of \(z\). Notice that the Gaussian convergence factor multiplied by the Jacobian \(dz/dy\) produces a delta function in the limit \(q\to 0\): \[\frac{1}{q}e^{-(y/(ql))^{2}}\stackrel{{ q\to 0}}{{\longrightarrow}}l\sqrt{\pi}\delta(y). \tag{57}\] Thus, the integration (55) reduces to \[\lim_{q\to 0}l\sqrt{\frac{\pi E_{q}}{q}}\bar{\phi}_{s}^{(V_{0}+E_{q})}(0)\gamma^{3}\chi_{s^{\prime}}^{(V_{0}-E_{q_{1}})}(0). \tag{58}\] It is easy to check that (56) at \(z=0\) vanishes for \(q\to 0\) because the reflection coefficient is \(R_{\phi}(q)=-1+\mathcal{O}(q)\), and therefore the differential number density (51) at \(E_{p}=V_{0}+m\) gives zero for arbitrary \(\omega\). The discontinuity of the momentum distribution of the differential number density may be brought about by that of the mode functions in terms of their energy eigenvalues. It should be noted that the characteristics shown in this and the previous sections are valid up to the second-order perturbation of the oscillating electric field. The possibility that higher-order perturbations compensate for the differences in the results between the two pictures cannot be excluded from this analysis. ## VI Conclusion and future work In this paper, we discussed the pair production from the vacuum under the subcritical potential step (2) superimposed with the oscillating electric field (29). One of the results is that vacuum decay by producing particle-anti-particle pairs can occur at the second-order perturbation of the oscillating field when the total energy supplied by the external fields \(V_{0}+\omega\) is larger than twice the mass \(2m\), as shown in Sec. IV. The phenomenon can occur especially when both the potential height \(V_{0}\) and the energy carried by the oscillating field \(\omega\) are smaller than the threshold energy \(2m\), which is nothing but the consequence of the combined effect of the perturbative and non-perturbative pair-creation processes, i.e., the dynamically assisted Schwinger effect. Its underlying mechanism is naively understood with the conventional hole theory: particles in the Dirac sea are excited by the oscillating field and leak out to the positive-frequency area; in that sense, this result is not at all surprising. Note, however, that the criticality condition (40) is the exact condition for the vacuum instability and the pair production, i.e., the dynamical assistance changes the particle number from exactly zero to a finite value. This is in contrast to the case of time-dependent backgrounds, where the vacuum decay rate or the particle number under the strong field alone is exponentially suppressed but remains finite. 
The difference comes from the energy conservation for the systems under consideration, and it is ensured for the current case due to the time-translational invariance of the potential step. When the assistance energy \(\omega\) exceeds the threshold \(2m\), the perturbative pair production by the simple excitation can also occur, resulting in significant changes in the momentum distribution of the differential number density. The distribution peaks emerge according to the pair-creating scenarios: the number of peaks and their positions in the momentum space are related to the places where the particles are created and the directions in which they move. Note that the result is different from the simple perturbative pair production in [43; 17; 44] in terms of their peak structures. Its origin is also considered to be the interplay between the perturbative and non-perturbative effects. Another result is about the influence of the particle-anti-particle pictures on physical quantities. In the quantum field theory under external fields, a fundamental but nontrivial problem of adequately defining particle and anti-particle as well as vacuum arises, especially for external fields with spatial inhomogeneity. The two previous works [34; 35], which we called (A) and (B), have tackled the problem in the context of the Klein paradox with their original definitions of the particle-anti-particle notions to obtain the physical quantities such as the vacuum decay rate, the number of created pairs, etc. In the presence of a one-dimensional strong electric field alone, those works yield the same results for the quantities despite the difference of the particle-anti-particle pictures, as explained in Sec. II. We have demonstrated in Sec. V that the differential number density of the created particles exhibits different momentum dependence between the pictures (A) and (B) at the second-order perturbation of the oscillating electric field for \(\omega>2m\). Of course, the possibility that neither of the particle-anti-particle interpretations (A) and (B) is correct still remains. However, the result obtained here implies the dependence on the definition of particles and anti-particles, and it will motivate further discussions through numerical simulations and laboratory experiments. An application of this work to condensed-matter systems such as graphene [45; 46; 47; 48], which have been of interest as playgrounds of particle physics without high-energy setups, is one of the future works. Although we focused only on the particle number in this paper, other physical quantities, such as electric current and scattering amplitudes, are expected to show nontrivial dependence on particle-anti-particle interpretations, which will be reported elsewhere. Some critical points are left undiscussed. First, the gauge independence of the results is nontrivial since we have fixed the gauge of the background field. One way to address the issue is to replace the scalar potential (2) with a linear function \(A_{0}(z)\propto z\), which produces a constant homogeneous electric field. The dynamically assisted pair production under a different gauge, i.e., a linear vector potential \(A_{3}(t)\propto t\), has been analyzed in [16; 17] using the Furry-picture perturbation theory. Thus, the comparison among them will provide information on the gauge dependence. Another problem is the contributions of higher-order perturbations, as mentioned in the previous section. 
The differences in the differential particle number density between the two pictures of particle-anti-particle shown in Fig. 5 may be attributed to the truncation of the perturbation expansion. Although we have given the field strength of the oscillating electric field as sufficiently weak compared to the Schwinger limit, the validity of the perturbation theory should be checked numerically. In the numerical simulation, adiabatically switching on/off has to be incorporated into the external fields. Note that the previous works [16; 17] have conducted the numerical simulation of the pair production to show that the particle number density estimated up to the second order is in good agreement with the numerical results. The optimal order of the perturbation will also be of interest in the current case. ## Appendix A Dirac spinors Dirac spinors and their basic properties are summarized here. A four-component Dirac field \(\psi\) of a free relativistic fermion of mass \(m\) in the four-dimensional spacetime obeys the free Dirac equation \[(i\not{\partial}-m)\psi=0, \tag{10}\] where \(\not{\partial}=\gamma^{\mu}\partial_{\mu}\)\((\mu=0,1,2,3)\) and \(\gamma^{\mu}\) are \(4\times 4\) gamma matrices. An energy eigenvalue \(E\) of \(\psi\) belongs to two continuous spectra \(|E|\geq m\), split by a mass gap between \(\pm m\). Neglecting transversal directions to \(z\)-axis, the positive- and negative-frequency solutions of (10) for momentum \(k\) and spin \(s\) are written as \[\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E_{k}}}u(k,s)e^{-iE_{k}t+ikz},\quad\frac{1 }{\sqrt{2\pi}}\sqrt{\frac{m}{E_{k}}}v(k,s)e^{iE_{k}t-ikz}, \tag{11}\] (\(E_{k}=\sqrt{k^{2}+m^{2}}\) is the on-shell energy) respectively. The positive- and negative-frequency spinors \(u\) and \(v\) are expressed in the Dirac representation as \[u(k,s)=\sqrt{\frac{E_{k}+m}{2m}}\left(\begin{matrix}\mathbbm{1}\\ \frac{k\sigma_{\alpha}}{E_{k}+m}\end{matrix}\right)\boldsymbol{\xi}(s),\quad v (k,s)=\sqrt{\frac{E_{k}+m}{2m}}\left(\begin{matrix}\frac{k\sigma_{z}}{E_{k}+m} \\ \mathbbm{1}\end{matrix}\right)\boldsymbol{\xi}(s) \tag{12}\] with a two-component spinor \(\boldsymbol{\xi}(s)\), which is factored out of the four-component spinors. \(u\) and \(v\) in (12) satisfy the normalization and orthogonality conditions \[u^{\dagger}(k,s)u(k,s^{\prime})=\delta_{s,s^{\prime}}=v^{\dagger}(k,s)v(k,s^{ \prime}),\quad u^{\dagger}(k,s)v(-k,s^{\prime})=0 \tag{13}\] as well as the completeness relation \[\sum_{s=\pm}\bigl{[}u_{\alpha}(k,s)u_{\beta}^{\dagger}(k,s)+v_{\alpha}(-k,s)v_ {\beta}^{\dagger}(-k,s)\bigr{]}=\delta_{\alpha,\beta}, \tag{14}\] where \(\alpha,\beta=1,2,3,4\) stand for spinor indices. When a constant scalar potential such as \(V(z)=V_{0}\) is imposed on (14), an extra phase factor \(\exp(-iV_{0}t)\) is applied to the plane-wave solutions (15). Thus, the positive-frequency solutions belong to the energy eigenvalue \(E>V_{0}+m\), while the negative-frequency solutions belong to \(E<V_{0}-m\). 
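As a quick numerical cross-check of the spinors (12) and the conditions (13), the following sketch (illustrative Python/NumPy; the mass and momentum values are arbitrary, and the normalization factor is taken exactly as written in (12)) verifies that \(u\) and \(v\) solve the momentum-space Dirac equation and that opposite-momentum solutions are orthogonal:

```python
import numpy as np

# gamma^0 and gamma^3 in the Dirac representation; transverse directions are neglected.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sz = np.diag([1.0, -1.0])                                  # sigma_z
g0 = np.block([[I2, Z2], [Z2, -I2]])
g3 = np.block([[Z2, sz], [-sz, Z2]])

def spinors(k, m=1.0):
    """u(k, s) and v(k, s) as written in (12), for the two spin states."""
    E = np.sqrt(k**2 + m**2)
    N = np.sqrt((E + m) / (2.0 * m))                       # normalization factor of (12)
    xi = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    u = [N * np.concatenate([x, (k / (E + m)) * (sz @ x)]) for x in xi]
    v = [N * np.concatenate([(k / (E + m)) * (sz @ x), x]) for x in xi]
    return E, u, v

m, k = 1.0, 0.7                                            # arbitrary illustrative values
E, u, v = spinors(k, m)
_, _, v_mk = spinors(-k, m)                                # opposite-momentum spinors

for s in range(2):
    # Momentum-space Dirac equation: (E g0 - k g3 - m) u = 0 and (E g0 - k g3 + m) v = 0.
    assert np.allclose((E * g0 - k * g3 - m * np.eye(4)) @ u[s], 0.0)
    assert np.allclose((E * g0 - k * g3 + m * np.eye(4)) @ v[s], 0.0)
    for sp in range(2):
        # Orthogonality of (13): u^dagger(k, s) v(-k, s') = 0.
        assert abs(u[s] @ v_mk[sp]) < 1e-12
print("Dirac equation and orthogonality checks passed.")
```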
## Appendix B "Out" creation/annihilation operators and mode functions within the picture (A) Within the particle-anti-particle picture (A), "out" annihilation operators of particles in the left and right of the step, under the potential step alone, are defined as \[a^{(0)}_{L,\text{out}}(p,s)=\lim_{t\to\infty}\int_{-\infty}^{ \infty}{dzu_{-p,s}^{\dagger}(z,t)\Psi^{(0)}(z,t)}, \tag{16}\] \[a^{(0)}_{R,\text{out}}(q,s)=\lim_{t\to\infty}\int_{-\infty}^{ \infty}{dzu_{q,s}^{\dagger}(z,t)\Psi^{(0)}(z,t)}, \tag{17}\] where \(u_{-p,s}\) is the left-moving monochromatic plane wave (35) and \(u_{q,s}\) is the right-moving one: \[u_{q,s}(z,t)=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{m}{E_{q}}}u(q,s)e^{-i(V_{0}+E_{ q})t+iqz}. \tag{18}\] In (17), the outgoing particle in the right of the step is characterized by the positive-frequency plane wave in the parentheses, with the factor \(\exp(-iV_{0}t)\) mentioned above. The limit evaluation of the right-hand sides of the above equations in the same way as the discussion in (16) yields the "in/out" transformation among the asymptotic creation/annihilation operators: for example, in the energy range (i) \(E_{p}=V_{0}+E_{q}>V_{0}+m\), \[\begin{pmatrix}a^{(0)}_{L,\text{out}}(p,s)\\ a^{(0)}_{R,\text{out}}(q,s)\end{pmatrix}=\begin{pmatrix}R_{\psi}(p)&\frac{p}{q} \sqrt{\frac{E_{q}}{E_{p}}}T_{\phi}(q)\\ \frac{q}{p}\sqrt{\frac{E_{q}}{E_{q}}}T_{\psi}(p)&R_{\phi}(q)\end{pmatrix} \begin{pmatrix}a^{(0)}_{L,\text{in}}(p,s)\\ a^{(0)}_{R,\text{in}}(q,s)\end{pmatrix}. \tag{19}\] The use of the above relation, the probability conservation (7), and the reciprocity (10) confirms the anti-commutation relations for the "out" annihilation operators. Note that (19) is not the unitary transformation because the momenta \(p\) and \(q\) are not well-defined quantum numbers. However, this is essentially the same as the so-called Bogoliubov transformation in the previous works [34], where energy eigenvalues instead of momenta characterize creation/annihilation operators. The "out" mode functions (A) are introduced as the basis functions of the field operator \(\Psi^{(0)}\) when it is decomposed to a sum of the "out" creation/annihilation operators. The "out" mode functions accompanied by \(a^{(0)}_{L,\text{out}}(p,s)\) and \(a^{(0)}_{R,\text{out}}(q,s)\) in the energy region (i), denoted by \(\psi^{(E)}_{s,\text{out}}\) and \(\phi^{(E)}_{s,\text{out}}\), are extracted from the field operator by the following anti-commutators \[\psi^{(E)}_{s,\text{out}}(z,t)=\{a^{(0)\dagger}_{L,\text{out}}(p,s),\Psi^{(0)} (z,t)\},\quad\phi^{(E)}_{s,\text{out}}(z,t)=\{a^{(0)\dagger}_{R,\text{out}}(q, s),\Psi^{(0)}(z,t)\}. \tag{20}\] By using (19) and the field expansion (13), they are calculated to give \[\psi^{(E)}_{s,\text{out}}(z,t)=\sqrt{\frac{p}{q}\frac{E_{q}}{E_{p}}}\chi^{(E) }_{s}(z,t),\quad\phi^{(E)}_{s,\text{out}}=\sqrt{\frac{q}{p}\frac{E_{p}}{E_{q} }}\varphi^{(E)}_{s}(z,t), \tag{21}\] which means that the outgoing particle (A) on the left(right) of the step is interpreted in the other picture as the incoming particle on the right(left) of the step. The correspondence between the two pictures is also seen for the other "out" mode functions in the regions (i) and (iv). Note that the overall factor in the right-hand side of the above equation is determined by the normalization conditions (11), while \(\chi^{(E)}_{s}\) is done by (12). ###### Acknowledgements. The author would like to thank K. Yamashiro for the fruitful discussion at the early stage of this work. The author is grateful to H. 
Nakazato for his critical reading of the manuscript and helpful advice.
2308.11068
Topological Graph Signal Compression
Recently emerged Topological Deep Learning (TDL) methods aim to extend current Graph Neural Networks (GNN) by naturally processing higher-order interactions, going beyond the pairwise relations and local neighborhoods defined by graph representations. In this paper we propose a novel TDL-based method for compressing signals over graphs, consisting of two main steps: first, disjoint sets of higher-order structures are inferred based on the original signal, by clustering $N$ datapoints into $K\ll N$ collections; then, a topology-inspired message passing scheme obtains a compressed representation of the signal within those multi-element sets. Our results show that our framework improves both standard GNN and feed-forward architectures in compressing temporal link-based signals from two real-world Internet Service Provider Networks' datasets, achieving from $30\%$ up to $90\%$ better reconstruction errors across all evaluation scenarios, suggesting that it better captures and exploits spatial and temporal correlations over the whole graph-based network structure.
Guillermo Bernárdez, Lev Telyatnikov, Eduard Alarcón, Albert Cabellos-Aparicio, Pere Barlet-Ros, Pietro Liò
2023-08-21T22:26:21Z
http://arxiv.org/abs/2308.11068v2
# Topological Graph Signal Compression ###### Abstract Recently emerged Topological Deep Learning (TDL) methods aim to extend current Graph Neural Networks (GNN) by naturally processing higher-order interactions, going beyond the pairwise relations and local neighborhoods defined by graph representations. In this paper we propose a novel TDL-based method for compressing signals over graphs, consisting in two main steps: first, disjoint sets of higher-order structures are inferred based on the original signal -by clustering \(N\) datapoints into \(K\ll N\) collections; then, a topological-inspired message passing gets a compressed representation of the signal within those multi-element sets. Our results show that our framework improves both standard GNN and feed-forward architectures in compressing temporal link-based signals from two real-word Internet Service Provider Networks' datasets -from \(30\%\) up to \(90\%\) better reconstruction errors across all evaluation scenarios-, suggesting that it better captures and exploits spatial and temporal correlations over the whole graph-based network structure. ## 1 Motivation Graph Neural Networks (GNNs)[24] have demonstrated remarkable performance in modelling and processing relational data on the graph domain, which naturally encodes binary interactions. Topological Deep Learning (TDL)[14] methods take this a step further by working on domains that can feature higher-order relations. By leveraging (algebraic) topology concepts to encode multi-element relationships (e.g. simplicial[6], cell[5] and combinatorial complexes[9]), Topological Neural Networks (TNNs) allows for a more expressive representation of the complex relational structure at the core of the data. In fact, despite its recent emergence, TDL is already postulated to become a relevant tool in many research areas and applications[9], including complex physical systems[4], signal processing[2], molecular analysis[6] and social interactions[16]. We argue that the task of data compression can hugely benefit from TDL by enabling to exploit multi-way correlations between elements beyond pre-defined local neighborhoods to get the desired lower-dimensional representations. To the best of our knowledge, current Machine Learning (ML) compression approaches mainly rely on Information Theory (IT) and are narrowed to Computer Vision applications[11; 21]. In contrast to that, and inspired by \(\mathcal{I}\!p\)[8] -the current state-of-the-art lossy compression method for floating-point data, more details in A.2.-, we propose in this paper a novel TDL framework to (_a_) first detect higher-order correlated structures over a given data, and (_b_) then directly apply TNNs to obtain compressed representations within those multi-element sets. This work provides evidence supporting that TDL could have great potential in compressing relational data. With the long-term objective of outperforming \(\mathcal{I}\!p\), our current goal is to assess if the proposed framework naturally exploits multi-datapoint interactions -between possibly distant elements- in a way that makes it more suitable for compression than other ML architectures (even if data comes from the graph domain). To do so, we consider the critical problem of traffic storage in today's Internet Service Providers (ISP) networks[1], and set the target to compress the temporal per-link traffic evolution -Figure 4- for two real-world datasets extracted from [13] (more details and motivation of this use case provided in A.1). 
Once the original link-based signal is divided into processable temporal windows, we benchmark our method against a curated set of GNN-based architectures -and a Multi-Layer Perceptron (MLP)- properly designed for compression as well. Obtained results clearly suggest that our topological framework defines the best baseline for _lossy neural compression_. ## 2 Methodology This section describes the proposed Topological (Graph) Signal Compression framework, which is divided into the following three primary modules: 1) Topology Inference Module.The first stage of the proposed model infers the computational topological structure -both pairwise and higher-order relationships- from the data measurements. In general, the framework assumes to have the set of signals \(\mathcal{S}=\{S_{i}\}_{j=1}^{M}\), where \(S_{i}\) consists of \(N\) measurements \(x_{j}\) of a pre-defined length \(d\), i.e. \(S_{i}=\{x_{j}^{i}\}_{j=1}^{N}\), \(x_{j}^{i}\in\mathbb{R}^{d}\). Thus, the pipeline that we describe as follows (also shown in Figure 1) is independently applied to every signal \(S_{i}\). Similarity Matrix:The initial signal \(\{x_{j}\}_{j=1}^{N}\)1 is encoded with a MLP into an embedding space \(h_{j}^{0}=\psi_{\theta_{\mathcal{V}}}(x_{j})\in\mathbb{R}^{d^{\prime}}\), \(\forall j\in\{1,\ldots,N\}\). Next, we compute the pairwise similarity matrix \(M_{S}=(m_{uv})\in\mathbb{R}^{d^{\prime}\times d^{\prime}}\) where \(m_{uv}:=f_{S}\left(h_{u}^{0},h_{v}^{0}\right)\) and \(f_{S}:\mathbb{R}^{d^{\prime}}\times\mathbb{R}^{d^{\prime}}\to\mathbb{R}\) is a similarity function. Footnote 1: For the sake of simplicity, and as abuse of notation, we will avoid writing the superscript \(i\) when referring to the measurements of a generic signal \(S_{i}\in\mathcal{S}\). Higher-order Relationships:We use clustering techniques on the similarity matrix \(M_{S}\) to deduce \(K\) higher-order structures, over which a Topological Message Passing pipeline -see next module 2)-performs the compression. In fact, the idea is to compress the signal within the inferred multi-element sets and encode compressed representations of the data into the final hidden states of these hyperedges. Therefore, the number of higher-order structures \(K\) is desired to be considerably lower than the number \(N\) of datapoints (\(K\ll N\)); we design the following _clustering_ scheme for this purpose: 1. The number of hyperedges are defined as \(K=\lfloor N/p\rceil\), where \(p\) is a hyperparameter that identifies the maximum hyperedge length. 2. For every row in the similarity matrix \(M_{S}\), we extract the top \(p-1\) highest entries and calculate their sum. We then select the row that corresponds to the highest summation value. This chosen row becomes the basis for forming a hyperedge as we gather the indices of the \(p-1\) selected columns along with the index of the row itself. Then the gathered indices are removed from the rows and columns of the similarity matrix \(M_{S}\), obtaining a reduced \(\hat{M}_{S}\in\mathbb{R}^{(d^{\prime}-p)\times(d^{\prime}-p)}\). 3. Previous step 2 is repeated with subsequent \(\hat{M}_{S}\) until \(K\) disjoint hyperedges are obtained.2 Footnote 2: When \(N/p\) is not an even division, at some point of the process the ranking starts considering the row-wise \(p-2\) higher entries to form \(p-1\)-length hyperedges, so that at the end a total of \(K=\lfloor N/p\rceil\) hyperedges of lengths \(p\) and \(p-1\) are obtained; see A.4.2 for further details. 
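A minimal sketch of this clustering scheme (illustrative Python/NumPy; it assumes a precomputed similarity matrix, and the exact balancing of hyperedge sizes for uneven \(N/p\) described in A.4.2 is simplified to allowing a smaller trailing hyperedge):

```python
import numpy as np

def infer_hyperedges(M_S: np.ndarray, p: int):
    """Greedily group N datapoints into disjoint hyperedges of (at most) p elements.

    M_S is an (N, N) pairwise similarity matrix (higher = more similar).
    Returns a list of index lists, one per hyperedge.
    """
    N = M_S.shape[0]
    remaining = list(range(N))                 # datapoints not yet assigned
    M = M_S.astype(float).copy()
    np.fill_diagonal(M, -np.inf)               # never count self-similarity
    hyperedges = []

    while remaining:
        size = min(p, len(remaining))
        if size == 1:
            hyperedges.append([remaining[0]])
            break
        sub = M[np.ix_(remaining, remaining)]
        # Row-wise sum of the top (size - 1) similarities ...
        scores = np.sort(sub, axis=1)[:, -(size - 1):].sum(axis=1)
        # ... the best-scoring row seeds the new hyperedge together with its top partners.
        r = int(np.argmax(scores))
        partners = np.argsort(sub[r])[-(size - 1):]
        members = sorted({remaining[r], *(remaining[c] for c in partners)})
        hyperedges.append(members)
        remaining = [i for i in remaining if i not in members]

    return hyperedges

# Toy usage: 10 datapoints embedded in R^4, hyperedges of (at most) p = 3 elements.
rng = np.random.default_rng(0)
H = rng.normal(size=(10, 4))
M_S = H @ H.T                                  # placeholder similarity; the framework uses an SNR-based metric
print(infer_hyperedges(M_S, p=3))
```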
On the other hand, the choice of the similarity function becomes a crucial aspect for the compression task. Supported by our early experiments (see Section 3), our framework makes use of the **Signal to Noise Ratio (SNR)** distance metric presented in [22], proposed in the context of deep metric learning as it jointly preserves the semantic similarity and the correlations in learned features[22]. Pairwise relationships:Besides higher-order structures, our framework can optionally leverage graph-based relational interactions, either _(i)_ by considering the original graph connectivity if it is known, or _(ii)_ by inferring the edges via the similarity matrix as well -by connecting each element with a subset of top \(k\) row-based entries in \(M_{S}\). In our experiments only intra-hyperedge edges have been considered to keep the inferred higher-order structures completely disjoint from each other. Figure 1: Topology Inference Module. For each subsignal \(S_{i}\in\mathcal{S}\), it outputs a topological object \(\mathcal{T}=(\mathcal{V},\mathcal{E},\mathcal{W})\) determined by \(K\) disjoint hyperedges. 2) Compression Module via Topological Message Passing.We implemented two topological Message Passing (MP) compression pipelines, named **SetMP** and **CombMP**. **SetMP** is a purely set-based architecture that operates only over hyperedges and nodes; more details in A.3. In this section we describe **CombMP**, our most general architecture that leverages the three different structures (nodes, edges, hyperedges) in a hierarchical way,3 and can be seen as a generalisation of **SetMP**. Footnote 3: Edges and hyperedges are distinguished because, analogously to recent Combinatorial Complexes (CCC) models[9], edges can hierarchically communicate with hyperedges if they are contained in them; in fact, the name **CombMP** relates to these general topological constructions (more details in Appendix A.2). For a given signal \(S_{i}\) and its corresponding initial embeddings \(\{h^{0}_{j}\}_{j=1}^{N}\), our model operates over a topological object \(\mathcal{T}=(\mathcal{V},\mathcal{E},\mathcal{W})\) where \(\mathcal{V}\) denotes the set of elements or nodes, \(|\mathcal{V}|=N\); \(\mathcal{E}\in\mathcal{V}\times\mathcal{V}\) represent the set of edges; and \(\mathcal{W}\in(\mathcal{V}\times\cdots\times\mathcal{V})\) the set of hyperedges. The compression pipeline (visualized in Figure 2) can be described as follows: **Initial embeddings:** First, we generate initial embeddings for the three considered topological structures. For nodes, we use the previously computed embeddings \(\{h^{0}_{v}\}_{v=1}^{N}\). For edges and hyperedges, (learnable) permutation invariant functions are applied over the initial embeddings of the nodes they contain; respectively, \(h^{0}_{e}=\phi_{\theta_{\mathcal{E}}}\left(\oplus_{v\in e}h^{0}_{v}\right)\) for each \(e\in\mathcal{E}\), and \(h^{0}_{w}=\phi_{\theta_{\mathcal{W}}}\left(\oplus_{v\in w}h^{0}_{v}\right)\) for each \(w\in\mathcal{W}\). The same dimension \(d^{\prime}\) is used for all initial and intermediate hidden representations. **Edge-Hyperedge Message Passing:** We define a hierarchical propagation of messages between edges and hyperedges. 
First, neighboring edges communicate to each other to update their representations; denoting the edge neighbors of an edge \(e\in\mathcal{E}\) by \(\mathcal{N}^{e}_{e}:=\{e^{\prime}=(u,v)\in\mathcal{E}|e^{\prime}\neq e,u\in e \lor v\in e\}\), its new hidden state becomes \(h^{1}_{e}=\phi_{\theta_{\mathcal{E}}\to e}\left(\oplus_{e^{\prime}\in \mathcal{N}^{e}_{e}}\psi_{\theta_{\mathcal{E}}\to e}\left(h^{0}_{e},h^{0}_{e} \right)\right)\). Next, hyperedges also update their hidden states based on the updated representations according to \(h^{1}_{w}=\phi_{\theta_{\mathcal{E}}\to W}\left(\oplus_{v\in\mathcal{E},e\subset w }\psi_{\theta_{\mathcal{E}}\to w}\left(h^{0}_{w},h^{1}_{e}\right)\right)\), for each \(w\in\mathcal{W}\). Then the idea is to propagate downwards towards the edges, i.e. from hyperedges to edges, \(h^{2}_{e}=\phi_{\theta_{\mathcal{W}\to e}}\left(\oplus_{v\in\mathcal{W},e \subset w}\psi_{\theta_{\mathcal{W}\to e}}\left(h^{1}_{e},h^{1}_{w}\right)\right)\); and only between edges again. This whole communication process can be iterated \(T\) times. **Edge-to-Node Compression:** At this point, we perform a first compression step over the nodes by leveraging the updated edge hidden representations, the initial node embeddings, as well as the original node data as a residual connection. Formally, for each node \(v\in\mathcal{V}\) we get a compressed hidden representation \(h^{c}_{v}=\phi_{\theta_{\mathcal{E}}\to V}\left(\oplus_{e\in\mathcal{E},v\in v }\psi_{\theta_{\mathcal{E}}\to V}\left(x_{v},h^{0}_{v},h^{t}_{e}\right)\right) \in\mathbb{R}^{d^{c}_{v}}\). **Node-to-Hyperedge Compression:** Finally, a second and last compression step is performed over the hypergraph representations, in this case leveraging a residual connection to the original measurements, the previously computed compressed representations of nodes, as well as the updated hidden representations of hyperedges. More in detail, each hyperedge \(w\in\mathcal{W}\) obtains its final compressed hidden representation as \(h^{c}_{w}=\phi_{\theta_{\mathcal{V}}\to W}\left(\oplus_{v\in\mathcal{V},v\in w }\psi_{\theta_{\mathcal{V}}\to W}\left(x_{v},h^{c}_{v},h^{t}_{w}\right)\right) \in\mathbb{R}^{d^{c}_{w}}\). The final node and hyperedge states, \(\{\{h^{c}_{v}\}_{v\in\mathcal{V}},\{h^{c}_{w}\}_{w\in\mathcal{W}}\}\), encode the compressed representation of a signal \(S_{i}=\{x_{j}\}_{j=1}^{N}\). Consequently, the compression factor \(r_{c}\) can be expressed as: \[r_{c}=\frac{N\cdot d^{c}_{\mathcal{V}}+K\cdot d^{c}_{\mathcal{W}}}{N\cdot d} \tag{1}\] Figure 2: Compression Module workflow for the **CombMP** architecture. It is independently applied to each hyperedge \(w\in\mathcal{W}\) of the inferred topological object \(\mathcal{T}=(\mathcal{V},\mathcal{E},\mathcal{W})\). **3) Decompression Module.** This last module learns to reconstruct the original signal of every node through its compressed representation and the final hidden state of the hyperedge it belongs to. More formally, for each \(v\in\mathcal{V}\) and its corresponding hyperedge \(v\in w\in\mathcal{W}\), the reconstructed signal \(\hat{x}_{v}\) is obtained as \(\hat{x}_{v}=\phi_{dec}\left(h_{v}^{c},h_{w}^{c}\right)\), where \(\phi_{dec}\) is implemented as a MLP in our framework. The whole model is trained end-to-end to minimize the (mean squared) reconstruction error. ## 3 Evaluation & Discussion **Experimental Setup:** For the evaluation, we use two public real-world datasets -Abi lene, Geant- from [13]. 
They are pre-processed to generate subsignals \(S_{i}\) of network link-level traffic measurements in temporal windows of length \(d=10\), to which then a random \(60/20/20\) split is performed for training, validation and test, respectively. In this context, our method is compared against: * **GNN**: we implemented several standard GNN architectures (GCN[12], GAT[18], GATv2[7], GraphSAGE[10]) to perform signal compression over the network graph topology; we take the best result among them in each evaluation scenario. * **MPNN**: a custom MP-based GNN scheme -over the original network graph structure as well-whose modules and pipeline are similar to our proposed topological MPs. * **MLP**: a feed-forward auto-encoder architecture with no inductive biases over subsignals \(S_{i}\). **GNN** and **MPNN** baselines implement a decompression module similar to that of our TDL-based methods. More details about the evaluation and all model implementations are provided in A.4. **Experimental Results:** Table 1 shows the reconstruction error (MSE and MAE) obtained by our framework and the baselines in both datasets for two different compression factors (\(1/3\) and \(2/3\)). We can see that **SetMP**, our topological edge-less architecture, clearly performs the best in all scenarios -improving on average by \(75\%\) and \(48\%\), respectively, the best MSE and MAE obtained by baselines-, followed by our most general **CombMP** method. As for the baselines, in overall **MPNN** slightly outperforms **MLP**, and **GNN** performs the worst. A comparison against _%P_ can be found in A.4.5. **Discussion:** These results support our hypothesis that taking into account higher-order interactions could help in designing more expressive ML-based models for (graph) signal compression tasks, specially due to the fact that these higher-order structures can go beyond the (graph) local neighborhood and connect possibly distant datapoints whose signals may be strongly correlated (e.g. generator and sink nodes in ISP Networks). In that regard, TDL can provide us with novel methodologies that naturally encompass and exploits those multi-element relations. Moreover, it is interesting to see how our set-based architecture outperforms the combinatorial-based one in every scenario, suggesting that intermediate binary connections might add noise in the process of distilling compressed representations. Further discussion on future work, focusing on the current limitations of our method and how to possibly address them, can be found in A.5. 
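For concreteness, the windowing, splitting, and reconstruction-error computation used in this evaluation can be sketched as follows (illustrative Python/NumPy; the traffic matrix is synthetic and the reconstruction is a stand-in for a trained model's output):

```python
import numpy as np

rng = np.random.default_rng(0)
traffic = rng.gamma(2.0, size=(5000, 30))      # synthetic (time, links) per-link traffic matrix
d = 10                                         # temporal window length

# Cut the per-link traffic into non-overlapping subsignals S_i of shape (n_links, d).
S = np.stack([traffic[t:t + d].T for t in range(0, traffic.shape[0] - d + 1, d)])

# Random 60/20/20 split into training / validation / test subsignals.
idx = rng.permutation(len(S))
n_tr, n_va = int(0.6 * len(S)), int(0.2 * len(S))
train, val, test = S[idx[:n_tr]], S[idx[n_tr:n_tr + n_va]], S[idx[n_tr + n_va:]]

# Reconstruction error on the test split (here against a dummy "reconstruction").
reconstruction = test + rng.normal(scale=0.05, size=test.shape)
mse = np.mean((reconstruction - test) ** 2)
mae = np.mean(np.abs(reconstruction - test))
print(f"test MSE = {mse:.4g}, test MAE = {mae:.4g}")
```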
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Abilene} & \multicolumn{4}{c}{Geant} \\ \cline{2-9} & \multicolumn{2}{c}{\(r_{c}=1/3\)} & \multicolumn{2}{c}{\(r_{c}=2/3\)} & \multicolumn{2}{c}{\(r_{c}=1/3\)} & \multicolumn{2}{c}{\(r_{c}=2/3\)} \\ \cline{2-9} & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline GNN & \(1.95\cdot 10^{-2}\) & \(1.08\cdot 10^{-1}\) & \(1.95\cdot 10^{-2}\) & \(1.08\cdot 10^{-1}\) & \(2.33\cdot 10^{-2}\) & \(1.21\cdot 10^{-1}\) & \(2.32\cdot 10^{-2}\) & \(1.20\cdot 10^{-1}\) \\ MPNN & \(7.88\cdot 10^{-4}\) & \(1.24\cdot 10^{-2}\) & \(7.92\cdot 10^{-4}\) & \(1.24\cdot 10^{-2}\) & \(8.45\cdot 10^{-3}\) & \(4.13\cdot 10^{-2}\) & \(1.82\cdot 10^{-3}\) & \(2.39\cdot 10^{-2}\) \\ \hline MLP & \(1.04\cdot 10^{-3}\) & \(1.88\cdot 10^{-2}\) & \(9.71\cdot 10^{-4}\) & \(1.80\cdot 10^{-2}\) & \(3.76\cdot 10^{-3}\) & \(3.96\cdot 10^{-2}\) & \(3.62\cdot 10^{-3}\) & \(3.89\cdot 10^{-2}\) \\ \hline SetMP & \(\textbf{3.22}\cdot 10^{-4}\) & \(\textbf{8.75}\cdot 10^{-3}\) & \(\textbf{2.03}\cdot 10^{-4}\) & \(\textbf{6.80}\cdot 10^{-3}\) & \(\textbf{6.93}\cdot 10^{-4}\) & \(\textbf{1.52}\cdot 10^{-2}\) & \(\textbf{2.90}\cdot 10^{-4}\) & \(\textbf{1.05}\cdot 10^{-2}\) \\ CombMP & \(5.81\cdot 10^{-4}\) & \(1.12\cdot 10^{-2}\) & \(3.76\cdot 10^{-4}\) & \(1.06\cdot 10^{-2}\) & \(1.07\cdot 10^{-3}\) & \(1.88\cdot 10^{-2}\) & \(7.04\cdot 10^{-4}\) & \(1.61\cdot 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Reconstruction Mean Squared Error (MSE) and Mean Absolute Error (MAE) over the test set of the considered datasets for two different compression factors. **Top:** Methods that leverage the original graph-based network structure. **Middle:** Methods with no inductive biases. **Bottom:** Methods that leverage higher-order structures (ours). Figure 3: Decompression Module. It is applied over each hyperedge-dependent compressed representation set generated by the Compression Module.
2304.06819
Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction
Integrating whole-slide images (WSIs) and bulk transcriptomics for predicting patient survival can improve our understanding of patient prognosis. However, this multimodal task is particularly challenging due to the different nature of these data: WSIs represent a very high-dimensional spatial description of a tumor, while bulk transcriptomics represent a global description of gene expression levels within that tumor. In this context, our work aims to address two key challenges: (1) how can we tokenize transcriptomics in a semantically meaningful and interpretable way?, and (2) how can we capture dense multimodal interactions between these two modalities? Specifically, we propose to learn biological pathway tokens from transcriptomics that can encode specific cellular functions. Together with histology patch tokens that encode the different morphological patterns in the WSI, we argue that they form appropriate reasoning units for downstream interpretability analyses. We propose fusing both modalities using a memory-efficient multimodal Transformer that can model interactions between pathway and histology patch tokens. Our proposed model, SURVPATH, achieves state-of-the-art performance when evaluated against both unimodal and multimodal baselines on five datasets from The Cancer Genome Atlas. Our interpretability framework identifies key multimodal prognostic factors, and, as such, can provide valuable insights into the interaction between genotype and phenotype, enabling a deeper understanding of the underlying biological mechanisms at play. We make our code public at: https://github.com/ajv012/SurvPath.
Guillaume Jaume, Anurag Vaidya, Richard Chen, Drew Williamson, Paul Liang, Faisal Mahmood
2023-04-13T21:02:32Z
http://arxiv.org/abs/2304.06819v2
Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction ###### Abstract Integrating whole-slide images (WSIs) and bulk transcriptomics for predicting patient survival can improve our understanding of patient prognosis. However, this multimodal task is particularly challenging due to the different nature of these data: WSIs represent a very high-dimensional spatial description of a tumor, while bulk transcriptomics represent a global description of gene expression levels within that tumor. In this context, our work aims to address two key challenges: (1) how can we tokenize transcriptomics in a semantically meaningful and interpretable way?, and (2) how can we capture dense multimodal interactions between these two modalities? Specifically, we propose to learn biological pathway tokens from transcriptomics that can encode specific cellular functions. Together with histology patch tokens that encode the different morphological patterns in the WSI, we argue that they form appropriate reasoning units for downstream interpretability analyses. We propose fusing both modalities using a memory-efficient multimodal Transformer that can model interactions between pathway and histology patch tokens. Our proposed model, SurvPath, achieves state-of-the-art performance when evaluated against both unimodal and multimodal baselines on five datasets from The Cancer Genome Atlas. Our interpretability framework identifies key multimodal prognostic factors, and, as such, can provide valuable insights into the interaction between genotype and phenotype, enabling a deeper understanding of the underlying biological mechanisms at play. We make our code public at [https://github.com/ajv012/SurvPath](https://github.com/ajv012/SurvPath). Computational Pathology; Multimodal learning; Survival Prediction ## I Introduction Predicting patient prognosis is a fundamental task in computational pathology (CPATH) that aims to utilize histology whole slide images (WSIs) for automated risk assessment, patient stratification and triage, and response-to-treatment prediction [1, 2, 3, 4, 5]. Patient prognostication is often framed as a survival analysis task, in which the goal is to learn risk estimates that correctly rank the patient's time-to-event (death in the case of survival prediction) from the primary tissue slide(s) [6, 7, 8, 9, 10]. Slide-level survival prediction can be seen as a fine-grained visual recognition problem [11], in which tiny details (_e.g._ visual concepts such as tumor cells or lymphocytes [12, 13]) need to be modeled for discriminating disease stages and risk groups. As WSIs can be as large as 100,000 \(\times\) 100,000 pixels, weakly supervised methods such as multiple instance learning (MIL) are often Figure 1: Our proposed architecture SurvPath enables visualization of learned interactions via Transformer cross-attention between _biological pathways_ and _morphological patterns_, here exemplified in a high-risk breast cancer patient. The chord thickness denotes attention weight. By explicitly combining _biological pathways_ and _morphological_ patterns, SurvPath outperforms previous unimodal and multimodal prognostication models. employed for addressing slide-level tasks. In MIL, WSIs are tokenized into small patches, from which features are extracted and then fed into pooling networks for downstream classification [14, 15]. 
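For context, the attention-based MIL pooling of [14] that underlies several of the models considered later can be sketched as follows (illustrative PyTorch; this is the standard gated-attention formulation, not necessarily the exact configuration used by the baselines in Sec. IV):

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Gated attention-based MIL pooling in the style of ABMIL.

    The patches of one WSI form a bag of embeddings (N_H, d); the output is a single
    slide-level embedding, a weighted sum with learned, patch-specific attention weights.
    """
    def __init__(self, d, d_attn=256):
        super().__init__()
        self.V = nn.Linear(d, d_attn)     # tanh branch
        self.U = nn.Linear(d, d_attn)     # sigmoid (gating) branch
        self.w = nn.Linear(d_attn, 1)     # attention logits

    def forward(self, h):                 # h: (N_H, d) patch embeddings
        logits = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))  # (N_H, 1)
        a = torch.softmax(logits, dim=0)                                   # attention over patches
        return (a * h).sum(dim=0), a.squeeze(-1)                           # slide embedding, weights

pool = GatedAttentionPooling(d=768)
z, attn = pool(torch.randn(1500, 768))    # 1,500 patch embeddings -> one slide-level embedding
```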
While histology provides phenotypic information about cell types and the organization of these cell types into tissues, alternate modalities can provide complementary signals that may independently be associated with prognosis. For instance, bulk transcriptomics (the measurement of combined gene expression patterns of many cells without any spatial localization), can reveal a richer global landscape of cell types and cell states [16, 17] and has been shown to be a strong predictor of patient survival in and of itself [18, 19, 20]. By combining both modalities, we can integrate the global information provided by bulk transcriptomics with the spatial information from the WSI. While most existing methods adopt _late fusion_ mechanisms [16, 21] (_i.e.,_ fusing modality-level representations), we design an _early fusion_ method that can explicitly model fine-grained cross-modal relationships between local morphological patterns and transcriptomics. In comparison with traditional computer vision and multimodal vision-language tasks [22, 23, 24], multimodal fusion of transcriptomics and histology presents two key technical challenges: 1. _Tokenizing transcriptomics modality:_ While modalities of classical multimodal tasks (_e.g.,_ image, text) can be unequivocally tokenized into object regions and word tokens [24, 25], one challenge lies in tokenizing transcriptomics in a semantically meaningful and interpretable way. As transcriptomics data is already naturally represented as a feature vector, many prior studies ignore tokenization and directly concatenate the entire feature with other modalities, which limits multimodal learning to _late fusion_ operations [16, 17]. Alternatively, genes can be partitioned into coarse functional sets, representing different gene families (_e.g.,_ tumor-suppressor genes and oncogenes), that can be used as tokens [26]. Nevertheless, such sets provide a rudimentary and incomplete depiction of intracellular interactions, as one gene family can be involved in different cellular functions. Consequently, they may lack semantic correspondence with fine-grained pathology features. Instead, we propose tokenizing genes according to established _biological pathways_[27, 28, 29]. Pathways are gene sets with known interactions that relate to _specific_ cellular functions, such as the TGF-\(\beta\) signaling cascade, which contributes to the epithelial-mesenchymal transition in breast cancer [30]. Compared to coarse sets (_e.g.,_\(N_{\mathcal{P}}=6\)[26]), pathway-based gene grouping can yield hundreds to thousands of tokens that represent unique molecular processes (\(N_{\mathcal{P}}=331\) in our work), which we hypothesize are more suitable representations for multimodal fusion and alignment with fine-grained pathology features. In addition, as pathways represent unique cellular functions, they constitute appropriate basic reasoning units suitable for interpretability (see Fig. 1). 2. _Capturing dense multimodal interactions:_ Early fusion of histology and pathway tokens can be done with a Transformer that uses self-attention to capture pairwise similarities between all tokens [31]. However, modeling pairwise interactions between large sets of histology patch tokens (_e.g.,_\(N_{\mathcal{H}}=15,000\)) and pathway tokens (\(N_{\mathcal{P}}=331\)) poses scalability challenges for fusion and alignment. 
Due to the quadratic complexity of the Transformer attention with respect to the number of tokens, modeling all possible interactions imposes substantial computational and memory requirements. To tackle this issue, we introduce a new unified, memory-efficient attention mechanism that can successfully model patch-to-pathway, pathway-to-patch, and pathway-to-pathway interactions. Modeling these three forms of interaction is achieved by the following: (1) designing the queries, keys, and values to share parameters across token types [32, 33], and (2) simplifying the attention layer to ignore patch-to-patch interactions, which we find through experimentation to be not as effective for survival analysis. Adapting Transformer attention in such a manner leads to significant performance increase over prior works, as well as new applications for model interpretability in multimodal CPATH, especially by understanding interactions between pathways and morphological patterns. To summarize, our contributions are (1) a transcriptomics tokenizer that leverages existing knowledge of cellular biology to generate _biological pathway_ tokens; (2) SurvPath, a memory-efficient, multimodal Transformer formulation which integrates transcriptomics and patch tokens for predicting patient survival; (3) a multi-level interpretability framework that enables deriving unimodal and cross-modal insights about the prediction; (4) a series of experiments and ablations showing the predictive power of SurvPath, using five datasets from The Cancer Genome Atlas Program (TCGA) and benchmarked against both unimodal and multimodal fusion methods. ## II Related Work ### _Survival Analysis on WSIs_ Recently, several histology-based survival models utilizing MIL have been proposed [34, 35, 6]. Most contributions have been dedicated to modeling tumor heterogeneity and the tumor microenvironment. To this end, several MIL-based pooling strategies have been proposed, such as using graph neural networks to model local patch interactions [36, 37, 10], accounting for the variance between patch embeddings [38], or adopting multi-magnification patch representations [39]. ### _Multimodal Transformers and Interpretability_ In parallel, the use of Transformers for multimodal fusion has gained significant attention in classification and generative tasks [40, 41, 42]. Multimodal tokens can be concatenated and fed to a regular Transformer [31, 43], a hierarchical Transformer [44], or a cross-attention Transformer [45, 46, 47]. As the number and dimensionality of modalities increase, the typical sequence length can become too large to be fed to vanilla Transformers, hence the need for low-complexity methods. Several models have proposed re-formulations of self-attention to reduce memory requirements [48, 49, 50, 51, 52], for instance, by approximating self-attention with a low-rank decomposition [49, 53], using latent bottleneck distillation [32, 33, 54], or using sparse attention patterns [48, 55]. However, none of these works have been applied to survival analysis. Recently, interpretable multimodal models or post-hoc interpretation methods [56, 57, 58] have also emerged as a critical area of research, especially in sensitive human-AI collaborative decision-making scenarios such as healthcare and human-computer interactions. 
### _Multimodal Survival Analysis_ Multimodal integration is an important objective in cancer prognosis [5], as combining histology and omics data such as genomics or transcriptomics is the current clinical practice for many cancer types. The majority of these works employ _late fusion_ mechanisms [59, 60], and mostly differ in the way modality fusion is operated. Fusion can be based on vector concatenation [61], modality-level alignment [62], bilinear pooling (_i.e.,_ Kronecker product) [60, 17], or factorized bilinear pooling [63, 16]. Differently, _early fusion_ mechanisms can be employed, in which cross-modal interactions between individual constituents of the input are modeled. Our work builds off Figure 2: **Block diagram of SurvPath. (1) We tokenize transcriptomics into _biological pathway_ tokens that are semantically meaningful, interpretable, and end-to-end learnable. (2) We further tokenize the corresponding histology whole-slide image into patch tokens using an SSL pre-trained feature extractor. (3) We combine pathway and patch tokens using a memory-efficient multimodal Transformer used for survival outcome prediction.** MCAT [26], which uses a cross-attention module to model the attention of histology patches (keys, values) toward gene sets (queries). However, MCAT has several limitations: (1) cross-attention being one-sided and models only patch-to-genes interactions, (2) transcriptomics tokenization using coarse sets which do not reflect actual molecular processes, and (3) significant gene overlap between sets which leads to redundant cross-attention heatmaps. Differently, adding more modalities has also been proposed, for instance, by including radiology data [64] or clinical patient information [62]. ## III Method In this section, we present SurvPath, our proposed method for multimodal survival prediction based on histology and transcriptomics. Sec. III-A presents the transcriptomics encoder to build biological pathway tokens, Sec. III-B presents the histology encoder to build patch tokens, Sec. III-C presents our Transformer-based multimodal aggregation, and Sec. III-D presents its application to survival prediction (see Fig. 2). Finally, Sec. III-E introduces our multi-level interpretability framework. ### _Pathway Tokenizer from Transcriptomics_ **Composing pathways:** Selecting the appropriate reasoning unit for transcriptomics analysis is challenging, owing to the intricate and hierarchical nature of cellular processes. Pathways, consisting of a group of genes or subpathways involved in a particular biological process, represent a natural reasoning unit for this analysis. A comparison may be drawn to action recognition, where an action (_i.e.,_ a biological pathway) can be described by a series of movements captured by sensors (_i.e.,_ transcriptomics measurements of a group of genes). **Encoding pathways:** Given a set of transcriptomics measurements of \(N_{\mathcal{G}}\) genes, denoted as \(\mathbf{g}\in\mathbb{R}^{N_{\mathcal{G}}}\), and the composition of each pathway, we aim to build pathway-level tokens \(\mathbf{X}^{(\mathcal{P})}\in\mathbb{R}^{N_{\mathcal{P}}\times d}\), where \(d\) denotes the token dimension. Transcriptomics can be seen as tabular data, which can be efficiently encoded with multilayer perceptrons (MLPs). 
Specifically, we are learning pathway-specific weights \(\phi_{i}\), _i.e.,_\(\mathbf{x}_{i}^{(\mathcal{P})}=\phi_{i}(\mathbf{g}_{\mathcal{P}_{i}})\), where \(\mathbf{g}_{\mathcal{P}_{i}}\) is the gene set present in pathway \(\mathcal{P}_{i}\). This can be viewed as learning a _sparse_ multi-layer perceptron (S-MLP) [65, 66, 67] that maps transcriptomics \(\mathbf{g}\in\mathbb{R}^{N_{\mathcal{G}}}\) to tokens \(\mathbf{x}^{(\mathcal{P})}\in\mathbb{R}^{N_{\mathcal{P}}d}\). The network sparsity is controlled by the gene-to-pathway connectivity embedded in the S-MLP weights. By simply reshaping \(\mathbf{x}^{(\mathcal{P})}\in\mathbb{R}^{N_{\mathcal{P}}d}\) into \(\mathbf{X}^{(\mathcal{P})}\in\mathbb{R}^{N_{\mathcal{P}}\times d}\), we define pathway tokens that can be used by the Transformer. Each pathway token corresponds to a deep representation of the gene-level transcriptomics that comprises it, which is both (1) interpretable as it encodes a specific biological function, and (2) learnable in an end-to-end fashion with respect to the prediction task. ### _Histology Patch Tokenizer from WSIs_ Given an input WSI, our goal is to derive low-dimensional patch-level embeddings defining patch tokens. We start by identifying tissue regions to ensure that the background, which carries no biological meaning, is disregarded. Then, we decompose the identified tissue regions into a set of \(N_{\mathcal{H}}\) non-overlapping patches at 20\(\times\) magnification (or \(\sim 0.5\,\mu m\)/pixel resolution), that we denote as \(\mathbf{H}=\{\mathbf{h}_{1},...,\mathbf{h}_{N_{\mathcal{H}}}\}\). Due to the large number of patches per WSI (_e.g.,_ can be \(>\) 50,000 patches or 78 GB as floats), patch embeddings need to be extracted prior to model training to reduce the overall memory requirements. Formally, we employ a pre-trained feature extractor \(f(\cdot)\) to map each patch \(\mathbf{h}_{i}\) into a patch embedding as \(\mathbf{x}_{i}^{(\mathcal{H})}=f(\mathbf{h}_{i})\). In this work, we use a Swin Transformer encoder that was pretrained via contrastive learning on more than 15 million pan-cancer histopathology patches [68, 69]. The resulting patch embeddings represent compressed representations of the patches (compression ratio of 256), that we further pass through a learnable linear transform to match the token dimension \(d\), yielding patch tokens \(\mathbf{X}^{(\mathcal{H})}\in\mathbb{R}^{N_{\mathcal{H}}\times d}\). ### _Multimodal Fusion_ We aim to design an early fusion mechanism that can model dense multimodal interactions between pathway and patch tokens. We employ Transformer attention [31] that can measure and aggregate pair-wise interactions between multimodal tokens. Specifically, we define a multimodal sequence by concatenating the pathway and patch tokens resulting in \((N_{\mathcal{H}}+N_{\mathcal{P}})\) tokens of dimensions \(d\), and denoted as \(\mathbf{X}\in\mathbb{R}^{(N_{\mathcal{P}}+N_{\mathcal{H}})\times d}\). Following the self-attention terminology [31], we define three linear projections of the tokens using learnable matrices, denoted as \(\mathbf{W}_{Q}\in\mathbb{R}^{d\times d_{q}}\), \(\mathbf{W}_{K}\in\mathbb{R}^{d\times d_{k}}\), and \(\mathbf{W}_{V}\in\mathbb{R}^{d\times d_{v}}\) to extract the queries (**Q**), keys (**K**), values (**V**), and self-attention \(\mathbf{A}\). We set \(d=d_{k}=d_{q}=d_{v}\). 
Transformer attention is then defined as: \[\mathbf{X}_{\text{Att}}=\sigma\Big{(}\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d }}\Big{)}\mathbf{V}=\begin{pmatrix}\mathbf{A}_{\mathcal{P}\rightarrow\mathcal{ P}}&\mathbf{A}_{\mathcal{P}\rightarrow\mathcal{H}}\\ \mathbf{A}_{\mathcal{H}\rightarrow\mathcal{P}}&\mathbf{A}_{\mathcal{H} \rightarrow\mathcal{H}}\end{pmatrix}\begin{pmatrix}\mathbf{V}_{\mathcal{P}} \\ \mathbf{V}_{\mathcal{H}}\end{pmatrix} \tag{1}\] where \(\sigma\) is the row-wise softmax. The term \(\mathbf{Q}\mathbf{K}^{T}\) has memory requirements \(\mathcal{O}\big{(}(N_{\mathcal{H}}+N_{\mathcal{P}})^{2}\big{)}\), which for long sequences becomes prohibitively expensive to compute. This constitutes a major bottleneck as a WSI can include more than \(50,000\) patches making this computation challenging on most hardware, _e.g.,_ storing a \(50,000\times 50,000\) matrix requires 20 GB of RAM. Instead, we propose to decompose the multimodal Transformer attention into four intra- and cross-modality terms: (1) the intra-modal pathway self-attention encoding pathway-to-pathway interactions \(\mathbf{A}_{\mathcal{P}\rightarrow\mathcal{P}}\in\mathbb{R}^{N_{\mathcal{P}} \times N_{\mathcal{P}}}\), (2) the cross-modal pathway-guided cross-attention encoding pathway-to-patch interactions \(\mathbf{A}_{\mathcal{P}\rightarrow\mathcal{H}}\in\mathbb{R}^{N_{\mathcal{P}} \times N_{\mathcal{H}}}\), (3) the cross-modal histology-guided cross attention encoding patch-to-pathway interac tions \(\mathbf{A}_{\mathcal{H}\rightarrow\mathcal{P}}\in\mathbb{R}^{N_{\mathcal{H}}\times N _{\mathcal{P}}}\), and (4) the intra-modal full histology self-attention encoding patch-to-patch interactions \(\mathbf{A}_{\mathcal{H}\rightarrow\mathcal{H}}\in\mathbb{R}^{N_{\mathcal{H}} \times N_{\mathcal{H}}}\). As the number of patch tokens is much larger than the number of pathways, _i.e._, \(N_{\mathcal{H}}>>N_{\mathcal{P}}\), most memory requirements come from computing and storing \(\mathbf{A}_{\mathcal{H}\rightarrow\mathcal{H}}\). To address this bottleneck, we approximate Transformer attention as: \[\hat{\mathbf{X}}_{\text{Att}}=\begin{pmatrix}\mathbf{X}_{\text{Att}}^{( \mathcal{P})}\\ \hat{\mathbf{X}}_{\text{Att}}^{(\mathcal{H})}\end{pmatrix}=\sigma\left[\frac{1 }{\sqrt{d}}\begin{pmatrix}\mathbf{Q}_{\mathcal{P}}\mathbf{K}_{\mathcal{P}}^{ T}&\mathbf{Q}_{\mathcal{P}}\mathbf{K}_{\mathcal{H}}^{T}\\ \mathbf{Q}_{\mathcal{H}}\mathbf{K}_{\mathcal{P}}^{T}&-\infty\end{pmatrix} \right]\mathbf{V} \tag{2}\] where \(\mathbf{Q}_{\mathcal{P}}\) (respectively \(\mathbf{K}_{\mathcal{P}}\)) and \(\mathbf{Q}_{\mathcal{H}}\) (respectively \(\mathbf{K}_{\mathcal{H}}\)) denotes the subset of pathway and histology queries and keys. Setting pre-softmax patch-to-patch interactions to \(-\infty\) is equivalent to ignore these interactions. Expanding Eq. 2, we obtain that \(\mathbf{X}_{\text{Att}}^{(\mathcal{P})}=\sigma\left(\frac{\mathbf{Q}_{\mathcal{ P}}\mathbf{K}^{T}}{\sqrt{d}}\right)\mathbf{V}_{\mathcal{P}}\), and \(\hat{\mathbf{X}}_{\text{Att}}^{(\mathcal{H})}=\sigma\left(\frac{\mathbf{Q}_{ \mathcal{H}}\mathbf{K}_{\mathcal{P}}^{T}}{\sqrt{d}}\right)\mathbf{V}_{\mathcal{ H}}\). The number of interactions becomes drastically smaller, enabling computing \(\hat{\mathbf{A}}\) with limited memory. This formulation can be seen as a sparse attention pattern [48] on a multimodal sequence, where sparsity is imposed between patch tokens. 
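A sketch of this attention pattern (illustrative PyTorch; single head, a single set of projections shared by both modalities, and the subsequent feed-forward layer and layer normalization are omitted). Pathway queries attend to all keys, whereas patch queries attend only to pathway keys and values, as implied by the \(-\infty\) masking of the patch-to-patch block in Eq. (2), so the largest matrix ever materialized is \(N_{\mathcal{H}}\times N_{\mathcal{P}}\):

```python
import torch
import torch.nn as nn

class MaskedMultimodalAttention(nn.Module):
    """Multimodal Transformer attention with patch-to-patch interactions dropped."""

    def __init__(self, d):
        super().__init__()
        self.q = nn.Linear(d, d, bias=False)
        self.k = nn.Linear(d, d, bias=False)
        self.v = nn.Linear(d, d, bias=False)   # shared projections for both token types
        self.scale = d ** -0.5

    def forward(self, X_P, X_H):
        # X_P: (N_P, d) pathway tokens (from the pathway tokenizer of Sec. III-A),
        # X_H: (N_H, d) patch tokens.
        Q_P, Q_H = self.q(X_P), self.q(X_H)
        K_P, K_H = self.k(X_P), self.k(X_H)
        V_P, V_H = self.v(X_P), self.v(X_H)
        K_all = torch.cat([K_P, K_H], dim=0)
        V_all = torch.cat([V_P, V_H], dim=0)
        # Pathway tokens: full attention over pathway and patch keys/values.
        A_P = torch.softmax((Q_P @ K_all.T) * self.scale, dim=-1)   # (N_P, N_P + N_H)
        X_P_att = A_P @ V_all
        # Patch tokens: attention restricted to pathway keys/values (patch-to-patch masked out).
        A_H = torch.softmax((Q_H @ K_P.T) * self.scale, dim=-1)     # (N_H, N_P)
        X_H_att = A_H @ V_P
        # Mean-pool each modality and concatenate into the joint embedding used for risk prediction.
        return torch.cat([X_P_att.mean(dim=0), X_H_att.mean(dim=0)], dim=0)   # (2d,)

fusion = MaskedMultimodalAttention(d=256)
z = fusion(torch.randn(331, 256), torch.randn(15000, 256))   # pathway + patch tokens -> (512,)
```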
This formulation is parameter-efficient as a unique set of keys, queries, and values is learned for encoding both modalities. After passing \(\hat{\mathbf{X}}_{\text{Att}}\) through a feed-forward layer with layer normalization, we take the mean representation of the post-attention pathway and patch tokens denoted as \(\bar{\mathbf{x}}_{\text{Att}}^{\mathcal{P}}\) and \(\bar{\mathbf{x}}_{\text{Att}}^{\mathcal{H}}\), respectively. The final representation \(\bar{\mathbf{x}}_{\text{Att}}\), is then defined by the concatenation of \(\bar{\mathbf{x}}_{\text{Att}}^{\mathcal{P}}\) and \(\bar{\mathbf{x}}_{\text{Att}}^{\mathcal{H}}\). ### _Survival Prediction_ Using the multimodal embedding \(\bar{\mathbf{x}}_{\text{Att}}\in\mathbb{R}^{2d}\), our objective is to predict patient survival. Following previous work [70], we define the patient's survival state by: (1) censorship status \(c\), where \(c=0\) represents an observed patient death and \(c=1\) corresponds to the patient's last known follow-up, and (2) a time-to-event \(t_{i}\), which corresponds to the time between the patient's diagnostic and observed death if \(c=0\), or the last follow-up if \(c=1\). Instead of directly predicting the observed time of event \(t\), we approximate it by defining non-overlapping time intervals \((t_{j-1},t_{j}),\ j\in[1,...,n]\) based on the quartiles of survival time values, and denoted as \(y_{j}\). The problem simplifies to classification with censorship information, where each patient is now defined by \((\bar{\mathbf{x}}_{\text{Att}},y_{j},c)\). We build a network classifier such that each output logit predicted by the network \(\hat{y}_{j}\) correspond to a time interval. From there, we define the discrete hazard function \(f_{\text{hazard}}(y_{j}|\bar{\mathbf{x}}_{\text{Att}})=S(\hat{y}_{j})\) where \(S\) is the sigmoid activation. Intuitively, \(f_{\text{hazard}}(y_{j}|\bar{\mathbf{x}}_{\text{Att}})\) represents the probability that the patient dies during time interval \((t_{j-1},t_{j})\). Additionally, we define the discrete survival function \(f_{\text{surv}}(y_{j}|\bar{\mathbf{x}}_{\text{Att}})=\prod_{k=1}^{j}\left(1-f_ {\text{hazard}}(y_{k}|\bar{\mathbf{x}}_{\text{Att}})\right)\) that represents the probability that the patient survives up to time interval \((t_{j-1},t_{j})\). These enable us to define the negative log-likelihood (NLL) survival loss [70], which generalizes NLL to data with censorship. Formally, we express it as: \[\mathcal{L} \Big{(}\{\bar{\mathbf{x}}_{\text{Att}}^{(i)},y_{j}^{(i)},c^{(i)}\}_ {i=1}^{N_{\mathcal{D}}}\Big{)}= \tag{3}\] \[\sum_{i=1}^{N_{\mathcal{D}}}-c^{(i)}\log(f_{\text{surv}}(y_{j}^{(i )}|\bar{\mathbf{x}}_{\text{Att}}^{(i)}))\] (4) \[+(1-c^{(i)})\log(f_{\text{surv}}(y_{j}^{(i)}-1|\bar{\mathbf{x}}_{ \text{Att}}^{(i)}))\] (5) \[+(1-c^{(i)})\log(f_{\text{hazard}}(y_{j}^{(i)}|\bar{\mathbf{x}}_{ \text{Att}}^{(i)})) \tag{6}\] where \(N_{\mathcal{D}}\) is the number of samples in the dataset. Intuitively, Eq. 4 enforces a high survival probability for patients who remain alive after the final follow-up, Eq. 5 enforces that patients that died have high survival up to the time stamp where death was observed, and Eq. 6 ensures that the correct timestamp is predicted for patients for whom death is observed. A thorough mathematical description can be found in [70]. Finally, by taking the negative of the sum of all logits, we can define a patient-level risk used to identify different risk groups and stratify patients. 
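A sketch of this discrete survival objective (illustrative PyTorch; hazards from sigmoid logits, survival as a cumulative product, and the standard sign conventions of the NLL survival loss of [70]; variable names are made up):

```python
import torch

def nll_survival_loss(logits, label, censorship, eps=1e-7):
    """Discrete-time negative log-likelihood survival loss.

    logits:     (batch, n_bins) raw network outputs, one per time interval
    label:      (batch,) index j of the interval containing the event / last follow-up
    censorship: (batch,) 1 if censored (alive at last follow-up), 0 if death observed
    """
    hazards = torch.sigmoid(logits)                                   # f_hazard(y_j | x)
    surv = torch.cumprod(1.0 - hazards, dim=1)                        # f_surv(y_j | x)
    surv_padded = torch.cat([torch.ones_like(surv[:, :1]), surv], 1)  # so that f_surv before y_1 is 1

    idx = label.unsqueeze(1)
    c = censorship.float().unsqueeze(1)
    # Censored patients: high survival probability through their last follow-up interval.
    censored = c * torch.log(surv_padded.gather(1, idx + 1).clamp(min=eps))
    # Observed deaths: survive up to the previous interval, then a high hazard in interval j.
    uncensored = (1.0 - c) * (
        torch.log(surv_padded.gather(1, idx).clamp(min=eps))
        + torch.log(hazards.gather(1, idx).clamp(min=eps))
    )
    return -(censored + uncensored).mean()

# Toy usage: 3 patients, 4 time bins.
loss = nll_survival_loss(torch.randn(3, 4), torch.tensor([1, 3, 0]), torch.tensor([0, 1, 0]))
```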
### _Multi-Level Interpretability_ We propose an interpretability framework that operates across multiple levels, to derive transcriptomics, histology, and cross-modal interpretability (see Fig. 3). **Transcriptomics:** We employ Integrated Gradient (IG) [71] to identify the influence of _pathways_ and _genes_, resulting in a score describing the degree to which each pathway, respectively gene, contributes to predicting the risk. A negative IG score corresponds to a pathway/gene being associated with a lower risk, while a positive IG score indicates an association with a higher risk. A very small score denotes negligible influence. Such interpretability Figure 3: Proposed multi-level interpretability framework. analysis serves two purposes: (1) validation of known genes and pathways associated with prognosis and (2) identification of novel gene and pathway candidates that could be predictive of prognosis. **Histology:** We process analogously to derive _patch-level_ influence that enables studying the morphology of low and high-risk-associated patches. **Cross-modal interactions:** Finally, we can study _pathway-to-patch_ and _patch-to-pathway_ interactions using the learned Transformer attention matrix \(\hat{\mathbf{A}}\). Specifically, we define the importance of patch \(j\) (respectively pathway) with respect to pathway \(i\) (respectively patch) as \(\hat{\mathbf{A}}_{ij}\) (respectively \(\hat{\mathbf{A}}_{ji}\)). This enables building heatmaps correlating a pathway and corresponding morphological features. This interpretability property is unique to our framework and enables studying how specific cellular functions described by a pathway interact with histology. ## IV Experiments ### _Dataset and Implementation_ We evaluate SurvPath on five datasets from TCGA: Bladder Urothelial Carcinoma (BLCA) (n=359), Breast Invasive Carcinoma (BRCA) (n=869), Stomach Adenocarcinoma (STAD) (n=317), Colon and Rectum Adenocarcinoma (COADREAD) (n=296), and Head and Neck Squamous Cell Carcinoma (HNSC) (n=392). Prior studies have focused on predicting overall survival (OS) [60], however, this approach risks overestimating the proportion of cancer-related deaths as patients may have succumbed to other causes. Instead, we predict disease-specific survival (DSS) as a more accurate representation of the patient's disease status. **Pathway collection:** We used the Xena database [73] to access raw transcriptomics from TCGA (\(N_{\mathcal{G}}=60,499\) in total) along with DSS labels. We extracted pathways from two resources: Reactome [29] and the Human Molecular Signatures Database (MSigDB) - Hallmarks [27, 28]. Reactome and MSigDB-Hallmarks comprise 1,281 and 50 human biological pathways, respectively. We further selected pathways for which at least 90% of the transcriptomics are available, resulting in 331 pathways derived from 4,999 different genes (281 Reactome pathways from 1,577 genes and 50 Hallmarks pathways from 4,241 genes). **Histology collection:** We collected all diagnostic WSIs used for primary diagnosis, resulting in 2,407 WSIs with an average of 14,509 patches per WSI at 20\(\times\) (assuming \(256\times 256\) patches). In total, we collected over 2.86 TB of raw image data, comprising around 33.8 million patches. **Implementation:** We employed 5-fold cross-validation to train all models. Each split was stratified according sample collection site to mitigate potential batch artifacts [74]. Models were implemented in PyTorch, interpretability was derived with Captum [75]. 
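As an illustration of how such attributions can be obtained, the sketch below uses Captum's IntegratedGradients (illustrative Python; `RiskModel` is a stand-in for a trained SurvPath-like network, the all-zero expression baseline and the pathway membership lists are placeholder assumptions, and gene scores are simply summed into pathway scores rather than attributing the pathway tokens directly):

```python
import torch
from captum.attr import IntegratedGradients

class RiskModel(torch.nn.Module):
    """Placeholder network mapping (transcriptomics, patch embeddings) to a scalar risk."""
    def __init__(self, n_genes=4999, d=256):
        super().__init__()
        self.gene_head = torch.nn.Linear(n_genes, 1)
        self.patch_head = torch.nn.Linear(d, 1)
    def forward(self, genes, patches):
        return (self.gene_head(genes) + self.patch_head(patches.mean(dim=1))).squeeze(-1)

model = RiskModel().eval()
genes = torch.randn(1, 4999)                               # one patient's bulk transcriptomics
patches = torch.randn(1, 512, 256)                         # its pre-extracted patch embeddings

ig = IntegratedGradients(model)
# Attribute the predicted risk to each gene, using an all-zero expression baseline.
gene_attr = ig.attribute(genes, baselines=torch.zeros_like(genes),
                         additional_forward_args=(patches,), n_steps=32)

# Aggregate gene scores into pathway scores: negative = protective, positive = higher risk.
pathway_gene_ids = [torch.randperm(4999)[:20] for _ in range(331)]   # illustrative membership
pathway_attr = torch.stack([gene_attr[0, ids].sum() for ids in pathway_gene_ids])
print(pathway_attr.topk(5))
```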
To increase variability during training, we randomly sampled 4,096 patches from the WSI. At test time, all patches were used to yield the final prediction. SurvPath, baselines and ablations were optimized using the RAdam optimizer [76], a batch size of 1, a learning rate of \(5\times 10^{-4}\), and \(10^{-3}\) weight decay. The patch encoder yields 768-dimensional embeddings that are projected to \(d=256\), the token dimension. The transcriptomics encoder is composed of 2-layer feed-forward networks with alpha dropout [72] to yield pathway tokens. The Transformer is implemented with a single head and layer, without class (CLS) token. The transformer is followed by a layernorm, a feed-forward layer, and a 2-layer classification head. ### _Baselines and Metrics_ We group baselines into three categories: (1) unimodal histology methods, (2) unimodal transcriptomics methods, and (3) multimodal methods that are further sub-categorized into early _vs._ late fusion methods. **Histology baselines:** All baselines use the same pre-trained feature extractor as SurvPath[69]. We compare with _ABMIL_[14], which uses a gated-attention pooling, _AMISL_[6], which first clusters patch embeddings using K-means before attention, and _TransMIL_[15], that approximates patch self-attention with Nystrom method [49]. **Transcriptomics baselines:** All baselines use the same input defined by aggregating Reactome and Hallmarks transcriptomics. (a) _MLP_[77] uses a 4-layer MLP, (b) _SNN_[60, 77] supplements _MLP_ with additional alpha dropout layers, and (c) _S-MLP_[66, 67] uses a 2-layer sparse pathway-aware MLP followed by a dense 2-layer MLP. This baseline shares similarities with our transcriptomics encoder. **Multimodal baselines:** (a) **Late fusion:** We combine ABMIL [14], AMISL [6], and TransMIL [15] with an S-MLP using concatenation [61], denoted as _ABMIL (Cat)_, _AMISL (Cat)_, and _TransMIL (Cat)_, and Kronecker product [60, 78, 79, 80], denoted as _ABMIL (KP)_, _AMISL (KP)_, and _TransMIL (KP)_. (b) **Early fusion:**_MCAT_[26] uses genomic-guided cross-attention followed by modality-specific self-attention blocks. **Metrics:** The models are evaluated using (1) the concordance index (c-index, higher is better), which measures the proportion of all possible pairs of observations where the model's predicted values correctly predict the ordering of the actual survival (ranges from 0.5 (random prediction) to 1 (perfect prediction)), and (2) Kaplan-Meier (KM) curves that enable visualizing the probability of survival of patients of different risk groups over a certain period of time. We apply the logrank statistical significance test to determine if the separation between low and high-risk groups is statistically significant (\(\mathrm{p}\)-value \(<0.05\)). ### _Survival Prediction Results_ Table I and Table II present results of SurvPath and baselines evaluated at 20\(\times\) and 10\(\times\) magnification, respectively. SurvPath reaches best overall performance, outperforming unimodal and multimodal baselines at both 20\(\times\) and 10\(\times\). At 20\(\times\), SurvPath reaches +7.3\(\%\) compared to TransMIL, \(+3.0\%\) compared to MLP, and \(+3.5\) compared to MCAT. We attribute the high performance of SurvPath to (1) the use of both modalities, (2) a unified, simple, and parameter-efficient fusion model, and (3) a semantically meaningful transcriptomics tokenizer. **Transcriptomics _vs._ Histology _vs._ Multimodal: Multimodal baselines significantly outperform histology baselines. 
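The metrics just described can be computed for a held-out fold along the following lines (illustrative Python with lifelines on synthetic predictions; the risk score is the negative sum of the logits as defined above, and the median split into low- and high-risk groups mirrors the KM analysis):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

# Placeholder predictions for a held-out fold: higher risk should mean shorter survival.
rng = np.random.default_rng(0)
risk = rng.normal(size=200)                              # e.g. minus the sum of the survival logits
times = rng.exponential(scale=np.exp(-risk), size=200)   # synthetic times correlated with risk
event = rng.random(200) < 0.7                            # 1 = death observed, 0 = censored

# Concordance index: fraction of comparable pairs ordered correctly by the predicted risk.
cindex = concordance_index(times, -risk, event_observed=event)
print(f"c-index: {cindex:.3f}")

# Kaplan-Meier curves and log-rank test for low- vs. high-risk groups (median split).
high = risk > np.median(risk)
km_low, km_high = KaplanMeierFitter(), KaplanMeierFitter()
km_low.fit(times[~high], event_observed=event[~high], label="low risk")
km_high.fit(times[high], event_observed=event[high], label="high risk")
p = logrank_test(times[~high], times[high], event[~high], event[high]).p_value
print(f"log-rank p-value: {p:.4f}")
```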
Interestingly, a simple MLP trained on our set of transcriptomics constitutes a strong baseline that outperforms several multimodal methods. This highlights the challenge of performing robust feature selection and integrating heterogeneous, high-dimensional data modalities. In addition, the relatively small dataset size further complicates the learning of complex models. **Context _vs._ No context:** ABMIL and TransMIL perform similarly, despite TransMIL modeling patch-to-patch interactions using Nyström attention. This observation supports our design choice of disregarding patch-to-patch interactions. In addition, SurvPath performance is similar across magnifications (\(0.629\) overall c-index in both cases). This observation also holds for most histology and multimodal baselines. **Sparse _vs._ dense transcriptomics encoders:** A dense MLP yields better performance than a sparse pathway-aware MLP. However, sparse networks have been shown to be particularly parameter-efficient when the number of genes considered drastically increases, and they are more interpretable than regular MLPs [67]. [Table of c-index results for SurvPath and baselines on BRCA, BLCA, COADREAD, HNSC, STAD, and Overall: the data rows of this table could not be recovered from the source.] As the number of genes increases,
this trend might evolve. **Early _vs._ Late fusion:** _Early fusion_ methods (MCAT [26] and SurvPath) outperform late fusion multimodal baselines at 20\(\times\) (\(+5.2\%\) for SurvPath _vs._ TransMIL (KP)). We attribute this observation to the creation of a joint feature space that can model fine-grained interactions between transcriptomics and histology tokens. Overall, these findings justify the need for (1) modeling dense interactions between pathway and patch tokens and (2) unifying fusion in a single Transformer attention. **Kaplan-Meier analysis:** Fig. 4 shows Kaplan-Meier survival curves of predicted high-risk and low-risk groups at 20\(\times\). All patients with a risk higher than the median of the entire cohort are assigned to the high-risk group (red), and patients with a risk lower than the median to the low-risk group (blue). For all five diseases, SurvPath provides statistically better discrimination of the two risk groups than the best histology baseline (TransMIL), transcriptomics baseline (MLP), and multimodal baseline (MCAT). ### _Ablation Study_ To evaluate our design choices, we performed a series of ablations studying different _Tokenizer_ and _Fusion_ schemes. **Tokenizer:** SurvPath employs the Reactome and Hallmarks databases as sources of biological pathways. We assess the model performance when using each database in isolation, as well as when assigning all genes to one token (_Single_) and when using the gene families of [26]. With increased granularity of the transcriptomics tokens, the overall performance increases, showing that building semantic tokens not only brings interpretability properties but also improves performance. We attribute this observation to the fact that each token encodes increasingly specific biological functions, enabling better cross-modal modeling. **Fusion:** We ablate SurvPath by further simplifying the Transformer attention to its left part, considering \(A_{\mathcal{P}\rightarrow\mathcal{P}}\) and \(A_{\mathcal{H}\rightarrow\mathcal{P}}\), and to its top part, \(A_{\mathcal{P}\rightarrow\mathcal{P}}\) and \(A_{\mathcal{P}\rightarrow\mathcal{H}}\) (this design resembles MCAT [26], where a single, shared multimodal attention layer is learned). Both branches bring complementary information (observed decreases of \(-5.6\%\) and \(-7.5\%\) in c-index), justifying the need to model both pathway-to-patch and patch-to-pathway interactions.
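To make the fusion ablation concrete, the sketch below builds the sparse attention pattern discussed above, in which pathway tokens attend to all tokens while patch tokens attend only to pathway tokens (no patch-to-patch attention). It is an illustrative reconstruction rather than the released SurvPath code; the token counts, dimensions, and final pooling are assumptions, and a memory-efficient implementation would compute only the allowed blocks instead of masking a full attention matrix.

```python
# Illustrative sketch of the sparse multimodal attention pattern
# (A_{P->P}, A_{P->H}, A_{H->P} kept; patch-to-patch attention dropped). Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_path, n_patch, d = 331, 1024, 256                 # pathway tokens, patch tokens, token dim
tokens = torch.randn(1, n_path + n_patch, d)        # [pathway tokens ; patch tokens]

q_proj, k_proj, v_proj = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
q, k, v = q_proj(tokens), k_proj(tokens), v_proj(tokens)

n = n_path + n_patch
allowed = torch.zeros(n, n, dtype=torch.bool)
allowed[:n_path, :] = True        # pathways attend to pathways and patches (A_{P->P}, A_{P->H})
allowed[n_path:, :n_path] = True  # patches attend to pathway tokens only (A_{H->P})

scores = (q @ k.transpose(-2, -1)) / d ** 0.5       # naive version: materializes the full matrix
scores = scores.masked_fill(~allowed, float("-inf"))
fused = F.softmax(scores, dim=-1) @ v               # fused pathway and patch tokens

risk_embedding = fused[:, :n_path].mean(dim=1)      # e.g., pool pathway tokens for a risk head
print(fused.shape, risk_embedding.shape)            # (1, 1355, 256) (1, 256)
```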
We further adapt \begin{table} \begin{tabular}{l l|c c c c c c} \hline \hline & Model/Study & BRCA (\(\uparrow\)) & BLCA (\(\uparrow\)) & COADREAD (\(\uparrow\)) & HNSC (\(\uparrow\)) & STAD (\(\uparrow\)) & Overall (\(\uparrow\)) \\ \hline \multirow{4}{*}{**Fusion**} & Single & 0.617\(\pm\)0.147 & 0.599\(\pm\)0.077 & 0.533\(\pm\)0.07 & 0.544\(\pm\)0.077 & 0.524\(\pm\)0.117 & 0.563 \\ & Families & 0.534\(\pm\)0.156 & 0.588\(\pm\)0.060 & **0.686\(\pm\)**0.156 & 0.543\(\pm\)0.077 & 0.457\(\pm\)0.077 & 0.562 \\ & Hallmarks & 0.609\(\pm\)0.087 & 0.632\(\pm\)0.090 & 0.659\(\pm\)0.117 & 0.601\(\pm\)0.031 & 0.580\(\pm\)0.052 & 0.616 \\ & Reactome & **0.665\(\pm\)**0.086 & **0.634\(\pm\)**0.077 & 0.626\(\pm\)0.157 & **0.611\(\pm\)**0.067 & **0.603\(\pm\)**0.033 & 0.628 \\ & React.+Hallmarks & 0.640\(\pm\)0.093 & 0.628\(\pm\)0.073 & 0.675\(\pm\)0.175 & 0.605\(\pm\)0.068 & 0.598\(\pm\)0.081 & **0.629** \\ \hline \multirow{4}{*}{**Fusion**} & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{P}\rightarrow\mathcal{H}}\)** & 0.589\(\pm\)0.077 & 0.570\(\pm\)0.099 & 0.594\(\pm\)0.124 & 0.568\(\pm\)0.067 & 0.546\(\pm\)0.135 & 0.573 \\ & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{H}\rightarrow\mathcal{P}}\)** & 0.573\(\pm\)0.085 & 0.577\(\pm\)0.118 & 0.531\(\pm\)0.221 & 0.566\(\pm\)0.064 & 0.521\(\pm\)0.056 & 0.554 \\ \cline{1-1} & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{H}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{P}\rightarrow\mathcal{H}}\)** & 0.640\(\pm\)0.093 & 0.628\(\pm\)0.073 & 0.675\(\pm\)0.175 & 0.605\(\pm\)0.068 & 0.598\(\pm\)0.081 & **0.629** \\ \cline{1-1} & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{H}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{P}\rightarrow\mathcal{H}}\)** & 0.495\(\pm\)0.177 & 0.591\(\pm\)0.068 & 0.600\(\pm\)0.190 & 0.508\(\pm\)0.066 & 0.605\(\pm\)0.075 & 0.560 \\ \hline \hline \end{tabular} \end{table} Table IV: Studying design choices for tokenization (top) and fusion (bottom) in SurvPathat 10\(\times\) magnification. **Top:**_Single_ refers to no tokenization, using tabular transcriptomics features as a single token. _Families_ refers to the set of six gene families in MutSigDB, as used in [26]. _React.+Hallmarks_ refers to the main SurvPath model reported in Table II. **Bottom:**\(A_{\mathcal{P}\rightarrow\mathcal{P}}\) and \(A_{\mathcal{P}\leftrightarrow\mathcal{H}}\) refers to pathway-to-pathway, pathway-to-patch, and patch-to-pathway interactions, which is the main SurvPath model reported in Table II. \(\mathbf{\hat{A}}\) refers to using Nyström attention to approximate **A**. 
\begin{table} \begin{tabular}{l l|c c c c c c} \hline \hline & Model/Study & BRCA (\(\uparrow\)) & BLCA (\(\uparrow\)) & COADREAD (\(\uparrow\)) & HNSC (\(\uparrow\)) & STAD (\(\uparrow\)) & Overall (\(\uparrow\)) \\ \hline \multirow{5}{*}{**Tokenizer**} & Single & 0.625\(\pm\)0.149 & 0.560\(\pm\)0.086 & 0.604\(\pm\)0.176 & 0.580\(\pm\)0.075 & 0.563\(\pm\)0.140 & 0.586 \\ & Families & 0.620\(\pm\)0.094 & 0.613\(\pm\)0.061 & 0.671\(\pm\)0.111 & 0.600\(\pm\)0.076 & 0.540\(\pm\)0.071 & 0.609 \\ & Hallmarks & 0.645\(\pm\)0.039 & **0.635\(\pm\)**0.093 & 0.633\(\pm\)0.151 & 0.589\(\pm\)0.076 & 0.581\(\pm\)0.039 & 0.615 \\ & Reactome & 0.579\(\pm\)0.044 & 0.604\(\pm\)0.080 & 0.639\(\pm\)0.200 & 0.574\(\pm\)0.061 & **0.619\(\pm\)**0.047 & 0.602 \\ & React+Hallmarks & **0.655\(\pm\)**0.089 & 0.625\(\pm\)0.056 & **0.673\(\pm\)**0.170 & **0.600\(\pm\)**0.061 & 0.592\(\pm\)0.047 & **0.629** \\ \hline \multirow{3}{*}{**Fusion**} & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{P}\rightarrow\mathcal{H}}\)** & 0.446\(\pm\)0.116 & 0.603\(\pm\)0.038 & 0.565\(\pm\)0.166 & 0.526\(\pm\)0.030 & 0.582\(\pm\)0.053 & 0.544 \\ & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{H}\rightarrow\mathcal{P}}\)** & 0.546\(\pm\)0.118 & 0.589\(\pm\)0.037 & 0.633\(\pm\)0.130 & 0.498\(\pm\)0.037 & 0.480\(\pm\)0.083 & 0.549 \\ \cline{1-1} & **A\({}_{\mathcal{P}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{H}\rightarrow\mathcal{P}}\)**, **A\({}_{\mathcal{P}\rightarrow\mathcal{H}}\)** & **0.565\(\pm\)**0.089 & 0.625\(\pm\)0.056 & **0.673\(\pm\)**0.170 & **0.600\(\pm\)**0.061 & 0.592\(\pm\)0.047 & **0.629** \\ \hline \hline \end{tabular} \end{table} Table: Design choices for tokenization (top) and fusion (bottom) in SurvPath; the final row and the original caption were truncated in the source, so only the recoverable rows are shown (see Table IV for the 10\(\times\) counterpart of this ablation).

SurvPath with Nyström attention, which enables training on very long sequences by simplifying self-attention with a low-rank approximation. This yields significantly worse performance (\(-6.9\%\)). We hypothesize that the "true full attention" has low entropy, making it more challenging to approximate with low-rank methods [81], and that sparse attention patterns offer better approximations. ### _Interpretability_ Examination of the multi-level interpretability can lead to novel biological insight regarding the interplay between pathways and histology in determining a patient's risk. Here, we compare a low (top) and high (bottom) risk case of breast invasive carcinoma (BRCA) (Fig. 5) and bladder urothelial carcinoma (BLCA) (Fig. 6). **BRCA interpretability analysis:** Several pathways have high absolute importance scores in the low and high-risk cases, most notably the Hallmark Epithelial-Mesenchymal Transition (EMT) [82] and COX Reactions pathways [83], both of which are known to be involved in breast cancer. EMT is thought to underlie tumor cells' ability to invade and metastasize [84], and the inverse importance of this pathway for the low- and high-risk cases is compatible with this analysis. This finding is reinforced by the cross-modal interpretability, which highlights the association of EMT with nests of tumor cells invading stroma. Members of the COX family of cyclooxygenases, especially COX-2, have also been implicated in breast carcinogenesis and are being investigated as a component of therapeutic regimens [85]. Cross-modal interpretability demonstrates stromal and immune cells in both cases. Though there is some overlap between important pathways in the two cases in Fig. 5, the majority differ between the two.

Figure 4: Kaplan-Meier curves of SurvPath, compared against histology, transcriptomics, and multimodal baselines. High (red) and low-risk (blue) groups are identified by using the median predicted risk as cut-off. The logrank test was used to determine statistical significance (\(\alpha=0.05\)).

For instance, in the high-risk case, a pathway relating to iron metabolism (a known contributor to breast carcinogenesis and prognosis [86]) was identified, with patches showing small nests of tumor cells invading through a dense stroma. In the low-risk case, a pathway relating to the cellular response to estrogen was found to be important, with corresponding patches demonstrating lower-grade invasive carcinoma or carcinoma in situ morphologies, consistent with others' observation that hormone-positive breast cancers tend to be lower grade and have longer survival times [87]. By further studying cases of estrogen receptor (ER)-positive patients in the BRCA cohort, we found that in 263 out of the 541 ER-positive patients (59%), estrogen response pathways (HALLMARK_ESTROGEN_RESPONSE_LATE and HALLMARK_ESTROGEN_RESPONSE_EARLY) are important (IG score larger than \(2\%\) of the total transcriptomics attributions). In addition, in 151 of the 263 ER-positive cases (\(57\%\)), ER pathways contribute to lowering patient risk. In contrast, we find that ER pathways are important in only 42 of the 157 ER-negative patients (\(26\%\)). Interestingly, the Hallmark Myogenesis pathway is assigned relatively high positive importance for both cases in Fig. 5. Myogenesis has not been extensively studied in breast cancer, but it is plausible that tumor cells either themselves express genes involved in this pathway as part of their epithelial-mesenchymal transition or induce stromal cells to do so. This highlights the ability of our method to drive novel biological insight for subsequent investigation. **BLCA interpretability analysis:** Histology interpretation of the low and high-risk cases indicates that the presence of healthy bladder muscle reduces risk, while pleomorphic tumor cells with foamy cytoplasm increase risk. The majority of important pathways relate to cell cycle control (e.g., G2M checkpoint, SCF \(\beta\)TrCP degradation of Emi1), metabolism (e.g., fatty acid metabolism), and immune-related function (allograft rejection and IL2 STAT5 signaling). Previous pathway expression analyses have found the G2M checkpoint and immune-related pathways to be significant in predicting bladder cancer prognosis [88]. The contributions of pathways to overall risk are also in line with previous literature. For example, allograft rejection in the low-risk case, which attends highly to tumor-infiltrating lymphocytes, reduces the risk. The allograft rejection pathway consists of multiple genes that are activated in the immune response to allografts and cancer. In the low-risk case, allograft rejection attends highly to tumor-infiltrating lymphocytes and collections of lymphocytes within and near
the muscular wall of the bladder.

Figure 5: Multi-level interpretability visualization for breast invasive carcinoma (BRCA), including the pathways with the top integrated gradient scores for two representative cases and selected pathways from each case. **Top:** Low-risk patient. **Bottom:** High-risk patient. Genes and pathways in red increase risk and those in blue decrease risk. Heatmap colors indicate importance, with red indicating high importance and blue indicating low importance. The pathways and morphologies identified as important in these cases generally correspond well with patterns that have been previously described in invasive breast cancer (e.g., Estrogen Response Late).

In the higher-risk case, this pathway again attends to collections of inflammatory cells that are interspersed within the muscular wall. The SCF \(\beta\)TrCP degradation of Emi1 pathway is important in controlling cell division by mitosis. In the low-risk case, this pathway attends to uninvolved bladder muscle, whereas in the high-risk case, the same pathway attends to tumor cells invading the bladder muscle. While there is an overlap between pathways for the low and high-risk cases, SurvPath also identifies pathways unique to each case. For example, in the low-risk case, SurvPath finds the protein secretion pathway to attend highly to tumor cells and not to the healthy bladder muscle cells. In both cases, the G2M checkpoint pathway (critical for the healthy progression of the cell cycle) is found to be important. In the high-risk case, we see this pathway contributing largely to increasing risk. Interestingly, we also find that this pathway attends to large areas of necrosis, which is reasonable given that aberrations in cell cycle regulation lead to cell death. The flexibility of our approach in providing unimodal and cross-modal interpretability allows us to uncover novel multimodal biomarkers of prognosis that could conceivably be used to design better cancer therapies. As our understanding of the molecular underpinnings of disease grows, the interpretability of SurvPath may spur research into the possibility of targeting specific combinations of morphologies and pathways. ## V Conclusion **Summary:** This paper addresses two major challenges posed by the multimodal fusion of transcriptomics and histology for survival prediction: (1) we address the challenge of transcriptomics tokenization by defining _biological pathway_ tokens that encode semantically meaningful and interpretable cellular functions, and (2) we overcome the computational challenge of integrating long multimodal sequences by designing a multimodal Transformer with sparse modality-specific attention patterns. Our model, SurvPath, achieves state-of-the-art survival performance when tested on five datasets from TCGA. In addition, our proposed multi-level interpretability framework reveals known and candidate prognostic features.

Figure 6: Multi-level interpretability visualization for bladder urothelial carcinoma (BLCA), including the pathways with the top integrated gradient scores for two representative cases and selected pathways from each case. **Top:** Low-risk patient. **Bottom:** High-risk patient. Genes and pathways in red increase risk, and those in blue decrease risk. Heatmap colors indicate importance, with red indicating high importance and blue indicating low importance. The pathways and morphologies identified as important in these cases generally correspond well with patterns that have been previously described in bladder urothelial carcinoma (e.g., the G2M checkpoint).

**Limitations and future work:** While our interpretability framework enables identifying prognostic features, these findings remain qualitative. Future work could be dedicated to developing interpretability metrics to generalize findings at the dataset level, _e.g.,_ with quantitative morphological characterizations of specific pathway influence.
In addition, our findings suggest that incorporating patch-to-patch interactions does not lead to improved performance. Nonetheless, the absence of a performance boost should not be construed as evidence that patch-to-patch interactions are unnecessary; rather, efficiently modeling such interactions is a challenging problem that remains to be solved. Though the influence of batch artifacts was mitigated to the best of our ability via site-stratified evaluation, survival analysis in TCGA remains limited by small sample sizes, which necessitates a community-wide effort to develop additional large survival datasets. Finally, this method is based on bulk transcriptomics, which cannot encode tumor heterogeneity. Emerging spatially-resolved technologies such as Spatial Transcriptomics [89] can be used not only to validate the histology-pathway interactions proposed by SurvPath, but also to open exciting new methodological opportunities for early fusion with histopathology.
2308.06514
Joe Vinen and transverse force on vortex
The paper gives a glimpse on the problem of transverse force on a vortex, its proper determination in general and the role in the process of quantum vortex nucleation. Investigation of this problem is an essential part of the scientific heritage of Joe Vinen as an experimentalist and a theoretician.
Edouard Sonin
2023-08-12T09:37:53Z
http://arxiv.org/abs/2308.06514v1
# Joe Vinen and transverse force on vortex ###### Abstract The paper gives a glimpse on the problem of transverse force on a vortex, its proper determination in general and the role in the process of quantum vortex nucleation. Investigation of this problem is an essential part of the scientific heritage of Joe Vinen as an experimentalist and a theoretician. superfluid vortex, Magnus force, vortex mass, vortex quantum nucleation ## 1 Introduction The Magnus force on a vortex has long been known in classical hydrodynamics [1]. This force appears if the vortex moves with respect to a liquid. The force is normal to the relative vortex velocity and therefore is reactive and does not produce a work. In general, such a force arises always when a body with a flow circulation around it moves through a liquid or a gas (the Kutta-Joukowski theorem). The most important example is the lift force on a wing of an airplane which keeps the airplane in the air [2]. The key role of the Magnus force was clear from the seminal paper of Hall and Vinen [3], which marked the beginning of investigations of superfluid vortex dynamics and the emergence of the theory, which now is called the Hall-Vinen-Bekarevich-Khalatnikov (HVBK) theory. The Magnus force is proportional to the quantum of the velocity circulation around the vortex. This was used by Joe Vinen in his classical experiment [4] on the first detection of the circulation quantum in observations of vibrations of a fine wire with a trapped vortex line in superfluid \({}^{4}\)He. The existence of the Magnus force in type II superconductors was demonstrated theoretically by Nozieres and Vinen [5]. The superfluid Magnus force was defined as a force between a vortex and a superfluid and was proportional to the superfluid density. But in the two-fluid hydrodynamics the Magnus force is not the only transverse force on the vortex: there was also a transverse force produced by quasiparticles moving past the vortex. The transverse force from rotons was found by Lifshitz and Pitaevskii [6] from the semiclassical scattering theory. Later Iordanskii [7] revealed the transverse force from phonons which was equal in magnitude and opposite in sign with the force calculated by Lifshitz and Pitaevskii. From the very beginning the Iordanskii force was a subject of controversy. Iordanskii suggested that his force and the Lifshitz-Pitaevskii force were of different origins and for rotons they should be summed. As a result, he concluded that the transverse force from rotons vanished. Later it was demonstrated [8] that the Iordanskii force for rotons is identical to the Lifshitz- Pitaevskii force and they must not be added. In addition, the Lifshitz-Pitaevskii force from rotons was calculated in the original paper [6] with a wrong sign. After its correction the transverse force on the vortex had the same sign and magnitude both for rotons (the Lifshitz-Pitaevskii force) and for phonons (the Iordanskii force) [8; 9; 10]. The transverse force from quasiparticles results from interference between quasiparticles which move past the vortex on the left and on the right sides with different phase shifts, like in the Aharanov-Bohm effect [11]. In clean superconductors the BCS quasiparticles produce an additional transverse force on the vortex [12; 13] analogous to the transverse force from quasiparticles in superfluids. The controversy around the transverse force on the vortex was revived after the paper of Ao and Thouless [14]. 
They came to the conclusion that there is a universal exact expression for the total transverse force on the vortex, derived from the concept of the geometrical phase (the Berry phase). The total transverse force coincided with the superfluid Magnus force and was proportional to the superfluid density \(\rho_{s}\). According to Ao and Thouless, there was no transverse force on the vortex from quasiparticles and impurities. The Ao-Thouless theory was in evident disagreement with the previous calculations of the transverse force on the vortex in superfluids and superconductors. It attracted great attention and launched a lively discussion [15; 16; 17; 18]. Joe Vinen had a great interest in this dispute. In fact, he was a moderator and discussed the issue with those involved in the dispute. Eventually, with his participation, a consensus was reached that the original calculation of the Berry phase missed the contribution from the normal-fluid circulation [19; 20]. This rehabilitated the transverse forces from quasiparticles and impurities. Considering the problem of critical velocities in superfluids, Vinen [21; 22] drew attention to the fundamental problem of quantum nucleation of vortices. It was known from classical hydrodynamics that the energy of a vortex ring in a moving ideal fluid is not monotonic. At small ring radius the energy grows, but at large ring radius it decreases and becomes negative. According to the Landau criterion for superfluids, this should mean that the superfluid flow is unstable. Vinen stressed an essential difference between elementary excitations (phonons, rotons), for which the Landau criterion was suggested, and the vortex ring. The latter is a macroscopic excitation: its generation is accompanied by changing the states of a macroscopic number of particles. As a result, although vortex nucleation is not forbidden by the energy conservation law, its probability is expected to be very low and requires an estimate. At zero temperature the vortex nucleation is a process of quantum tunneling through the potential barrier separating vortex rings of small and large radius. The transverse force on a vortex is important for this process. The physical picture of quantum nucleation crucially depends on whether the dynamics of the vortex is governed by the inertia force, as in Newton's law, or by the transverse force. ## 2 Equation of motion for the vortex In the hydrodynamics of ideal classical fluids the equation of motion of the vortex with the circular velocity field around it, \[\mathbf{v}_{0}=\frac{\kappa[\mathbf{z}\times\mathbf{r}]}{2\pi r^{2}}, \tag{1}\] is \[-\rho\kappa[\mathbf{z}\times(\mathbf{v}_{L}-\mathbf{v})]=\mathbf{f}. \tag{2}\] Here \(\mathbf{v}_{L}\) is the velocity of the vortex, \(\rho\) is the fluid mass density, \(\mathbf{v}\) is the velocity of the fluid flow past the vortex, \(\mathbf{f}\) is an external force per unit length of the vortex, \(\kappa\) is the circulation of the velocity \(\mathbf{v}_{0}\), and \(\mathbf{z}\) is the unit vector along the \(z\) axis (the axis of the vortex). All other vectors lie in the plane normal to the \(z\) axis. In the absence of external forces the vortex moves with the transport fluid velocity \(\mathbf{v}\) (Helmholtz's theorem).
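A tiny numerical sketch of the force balance in Eq. (2): with no external force the massless vortex simply drifts with the flow (Helmholtz's theorem), while a constant external force produces a drift perpendicular to that force. The parameter values below are illustrative assumptions, not values from the paper.

```python
# Massless vortex dynamics from Eq. (2): -rho*kappa [z x (v_L - v)] = f.
# Solving for v_L gives v_L = v + [z x f] / (rho*kappa); in 2D, z x (a, b) = (-b, a).
# All numbers here are illustrative assumptions.
import numpy as np

rho = 145.0          # fluid mass density, kg/m^3 (roughly liquid 4He; assumption)
kappa = 9.97e-8      # circulation quantum h/m for 4He, m^2/s

def z_cross(a):
    return np.array([-a[1], a[0]])

def vortex_velocity(v_flow, f_ext):
    # The transverse (Magnus-type) balance makes the response to a force
    # perpendicular to the force itself, not parallel to it.
    return v_flow + z_cross(f_ext) / (rho * kappa)

v_flow = np.array([1.0e-2, 0.0])                       # background flow, m/s
print(vortex_velocity(v_flow, np.zeros(2)))            # drifts with the flow
print(vortex_velocity(v_flow, np.array([1e-6, 0.0])))  # force along x -> extra drift along y
```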
In the two-fluid hydrodynamics of superfluids \(\kappa=h/m\) is the circulation quantum, and the stationary vortex motion is described by the equation \[-\rho_{s}\kappa[\mathbf{z}\times(\mathbf{v}_{L}-\mathbf{v}_{s})]-\mathbf{f}_{fr}=\mathbf{f}, \tag{3}\] where \[\mathbf{f}_{fr}=-D^{\prime}[\mathbf{z}\times(\mathbf{v}_{L}-\mathbf{v}_{nl})]-D(\mathbf{v}_{L}-\bm {v}_{nl}) \tag{4}\] is the mutual friction force, \(\rho_{s}\) and \(\rho_{n}\) are the mass densities of the superfluid and normal components respectively, \(\mathbf{v}_{s}\) is the superfluid velocity, and \(\mathbf{v}_{nl}\) is the local normal velocity, i.e., the velocity close to the vortex. The forces proportional to \(D\) and \(D^{\prime}\) arise from scattering of quasiparticles by the vortex. The force \(\propto D^{\prime}\) is the transverse Iordanskii force. The theory of scattering of non-interacting phonons and rotons yields that \(D^{\prime}=-\rho_{n}\kappa\)[8; 23]. Then in the absence of the external force \(\mathbf{f}\) and neglecting the dissipative force \(\propto D\) the vortex moves not with the superfluid velocity but with the center-of-mass velocity \(\frac{\rho_{s}}{\rho}\mathbf{v}_{s}+\frac{\rho_{n}}{\rho}\mathbf{v}_{n}\). This is a generalization of Helmholtz's theorem for superfluids. But the value \(D^{\prime}=-\rho_{n}\kappa\) is not universal and can be invalid at temperatures close to critical or in dirty Fermi superfluids [23; 24]. Note that while in classical hydrodynamics and sometimes in superfluid hydrodynamics the term "Magnus force" refers to the whole first term in Eq. (3) proportional to the relative velocity \(\mathbf{v}_{L}-\mathbf{v}_{s}\), in the theory of superconductivity only the part of this term proportional to the vortex velocity \(\mathbf{v}_{L}\) is called Magnus force. The other part proportional to the superfluid velocity \(\mathbf{v}_{s}\) is called the Lorentz force [24]. In this paper we use the latter definition of the Magnus and Lorentz force. The difference between the local normal velocity \(\mathbf{v}_{nl}\) close to the vortex and the normal velocity \(\mathbf{v}_{n}\) very far from the vortex, \[\mathbf{v}_{n}-\mathbf{v}_{nl}=\frac{\ln(r_{m}/r_{l})}{4\pi\rho_{n}\nu}\mathbf{f}_{fr}, \tag{5}\] was known in the hydrodynamics of classical viscous fluids [1]. Here \(\nu\) is the kinematic viscosity. Hall and Vinen [3] took it into account in their pioneer paper and called this effect viscous drag. The lower cut-off \(r_{l}\) in the logarithm is usually chosen to be of the order of the mean free path of quasiparticles. The upper cut-off \(r_{m}\) for a single vortex will be defined below. Excluding \(\mathbf{v}_{nl}\) from Eqs. (4) and (5) one obtains that the mutual friction force is \[\mathbf{f}_{fr}=-\mathcal{D}^{\prime}[\mathbf{z}\times(\mathbf{v}_{L}-\mathbf{v}_{n})]- \mathcal{D}(\mathbf{v}_{L}-\mathbf{v}_{n}), \tag{6}\] where coefficients \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) are connected with coefficients \(D\) and \(D^{\prime}\) by the complex relation \[\frac{1}{\mathcal{D}+i\mathcal{D}^{\prime}}=\frac{\ln(r_{m}/r_{l})}{4\pi\rho_ {n}\nu}+\frac{1}{D+iD^{\prime}}. \tag{7}\] One can rewrite the equation of vortex motion Eq. 
(3) as \[-\rho_{M}\kappa[\mathbf{z}\times\mathbf{v}_{L}]+\mathcal{D}\mathbf{v}_{L}=\tilde{\mathbf{f}}, \tag{8}\] where \(\rho_{M}=\rho_{s}-\mathcal{D}^{\prime}/\kappa\) is the effective density, which determines the total transverse force on the vortex, and the effective force \[\tilde{\mathbf{f}}=\mathbf{f}-\rho_{s}\kappa[\mathbf{z}\times\mathbf{v}_{s}]+\mathcal{D}^{ \prime}[\mathbf{z}\times\mathbf{v}_{n})]+\mathcal{D}\mathbf{v}_{n} \tag{9}\] includes not only the external force \(\mathbf{f}\) but also forces produced by motion of the superfluid (the Lorentz force) and the normal components past the vortex. Equations (8) and (9) are valid for Galilean invariant superfluids when the force balance can be determined only by the relative velocities \(\mathbf{v}_{L}-\mathbf{v}_{s}\) and \(\mathbf{v}_{L}-\mathbf{v}_{n}\). At zero temperature there is no quasiparticles, and \(\rho_{M}\) does not differ from \(\rho_{s}=\rho\), since \(\mathcal{D}^{\prime}=0\). In dirty superconductors interaction of Andreev bound states in the vortex core with impurities breaks Galilean invariance and produces the transverse force, which depends only on \(\mathbf{v}_{L}\) (Kopnin-Kravtsov force [13; 23; 24]). The Kopnin-Kravtsov force has a direction opposite to the superfluid Magnus force, and can essentially decrease the total transverse force \(\propto\rho_{M}\). The transverse force can be suppressed not only by disorder in dirty superconductors but also in superfluids put into a periodic potential. For example, in the Josephson-junction array the transverse force on a vortex vanishes, and \(\rho_{M}=0\). This follows from the particle-hole symmetry [9]. The transverse force is also suppressed in BEC of cold atoms in an optical lattice [25]. In accordance with Newton's third law, the force on the vortex is accompanied by a force \(-\mathbf{f}_{fr}\) of opposite direction on the normal fluid. The latter force produces the momentum flux in the normal component, which transports the momentum transferred to the normal component by the force to large distances from the vortex. The total momentum flux through a cylindric surface \(S\) around the vortex equal to \(-\mathbf{f}_{fr}\) is: \[-f_{fr\,i}=\oint\left[-\frac{\rho_{n}|\mathbf{v}_{n}-\mathbf{v}_{L}|^{2}}{2}\delta_{ij }+\rho_{n}v(v_{ni}-v_{Li})(v_{nj}-v_{Lj})-\tau_{ij}\right]dS_{j}. \tag{10}\] Here \[\tau_{ij}=\rho_{n}\nu(\nabla_{i}v_{nj}+\nabla_{j}v_{ni}) \tag{11}\] is the viscosity tensor, and \(dS_{j}\) are components of the vector normal to the surface \(S\). Its modulus is the differential \(dS\) of the surface area. The momentum flux in Eq. (10) includes only contributions connected with the motion of the normal component in the coordinate frame moving with the vortex velocity \(\mathbf{v}_{L}\). At distances from the vortex smaller than the Oseen length [1; 23] \[r_{O}\sim\nu/|\mathbf{v}_{n}-\mathbf{v}_{L}|, \tag{12}\] nonlinear terms quadratic in \(\mathbf{v}_{n}-\mathbf{v}_{L}\) can be neglected compared with the viscosity tensor. The Oseen length determines the upper cut-off \(r_{m}\) in Eq. (7) for a single vortex moving with constant velocity. At distances larger than the Oseen length the nonlinear term becomes a dominant term everywhere excepting the narrow area inside the laminar wake formed behind the vortex moving through the viscous normal fluid. The laminar wake and the viscosity-governed momentum flux inside it are important only for the dissipative longitudinal force proportional to \(\mathcal{D}\)[23]. 
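As a numerical illustration of the complex relation (7) above, the following sketch combines the bare quasiparticle-scattering coefficients \(D\), \(D^{\prime}\) with the viscous-drag term to obtain the effective coefficients \(\mathcal{D}\), \(\mathcal{D}^{\prime}\) entering Eq. (6). Every parameter value is made up for illustration only.

```python
# Illustration of Eq. (7): 1/(D_eff + i*D'_eff) = ln(r_m/r_l)/(4*pi*rho_n*nu) + 1/(D + i*D').
# Every number below is an arbitrary illustrative value, not data from the paper.
import numpy as np

rho_n = 20.0             # normal-component mass density, kg/m^3 (assumption)
nu = 2.0e-8              # kinematic viscosity of the normal component, m^2/s (assumption)
kappa = 9.97e-8          # circulation quantum, m^2/s
r_m, r_l = 1e-4, 1e-7    # upper/lower cut-offs of the viscous-drag logarithm (assumptions)

D = 5.0e-7               # longitudinal scattering coefficient (assumption)
D_prime = -rho_n * kappa # transverse (Iordanskii) value quoted in the text

drag = np.log(r_m / r_l) / (4 * np.pi * rho_n * nu)
D_eff_complex = 1.0 / (drag + 1.0 / (D + 1j * D_prime))
D_eff, D_prime_eff = D_eff_complex.real, D_eff_complex.imag
print(D_eff, D_prime_eff)  # effective friction coefficients of Eq. (6)
```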
Outside the laminar flow the momentum flux does not differ from that in a perfect fluid without viscosity, and the transverse force on the normal liquid inevitably requires the existence of the circulation of the normal velocity: \[\kappa_{n}=\oint d\mathbf{l}\cdot\mathbf{v}_{n}=-\frac{\mathcal{D}^{\prime}}{\rho_{n}}. \tag{13}\] Ao and Thouless [14] connected the total transverse force with the Berry phase proportional the circulation \(\oint(d\mathbf{l}\cdot\mathbf{j})\) of the total current \(\mathbf{j}=\rho_{s}\mathbf{v}_{s}+\rho_{n}\mathbf{v}_{n}\) past the vortex at large distance from the vortex [26].This means that the parameter \(\rho_{M}\) in Eq. (8) must be proportional to the total current circulation: \[\rho_{M}\propto\oint(d\mathbf{l}\cdot\mathbf{j}). \tag{14}\] In the original version of the theory it was assumed that the velocity circulation is possible only in the superfluid component, while \(\kappa_{n}=0\). This assumption led to the conclusion that the transverse force on the vortex reduces to only the superfluid Magnus force \(\propto\rho_{s}\) and all other forces including the Iordanskii force and the Kopnin-Kravtsov force are ruled out. Later Thouless _et al._[19] accepted the existence of the circulation of the normal velocity at large distances from the vortex. This revision has eliminated the disagreement between the Berry phase approach and the previous theories based on the momentum (force) balance investigation. According to Eq. (7), at very large Oseen length (\(r_{m}\sim r_{O}\rightarrow\infty\)) the force on the vortex ceases to depend on forces produced by quasiparticle scattering at the vortex. This refers both to the dissipative longitudinal force (\(\propto D\)) and the reactive transverse Iordanskii force (\(\propto D^{\prime}\)). Then the only transverse force is the Magnus force, the vortex velocity \(\mathbf{v}_{L}\) does not differ from the local normal velocity \(\mathbf{v}_{nl}\), and according to Eq. (6), the friction force is nothing else but the Stokes force on a cylinder moving through a viscous fluid [1]: \[\mathbf{f}_{fr}=-\frac{4\pi\rho_{n}\nu}{\ln(r_{m}/r_{l})}(\mathbf{v}_{L}-\mathbf{v}_{n}), \tag{15}\] This regime of vortex motion was assumed by Matheiu and Simon [27] to interpret experimental data on mutual friction parameters at intermediate temperatures. Since the Oseen length depends on the relative velocity \(\mathbf{v}_{L}-\mathbf{v}_{n}\) and diverges at \(\mathbf{v}_{L}-\mathbf{v}_{n}\to 0\), there is no friction force in this limit. This is the Stokes paradox [1; 23]. But the upper limit \(r_{m}\) in the logarithm determining the viscous drag is to be the Oseen length only for strictly stationary motion of a single vortex. In the oscillatory motion with the finite frequency \(\omega\)\(r_{m}\) is the viscous penetration depth \(\sqrt{\nu/\omega}\), and in the case of vortex lattice \(r_{m}\) is the intervortex distance, if these scales are less than the Oseen length (see further details in the book [23]). ## 3 Equation of motion for the vortex with mass Up to now we neglected the inertia force proportional to the vortex acceleration. Taking into account this force the equation of vortex motion Eq. (8) becomes \[\mu_{v}\frac{d\mathbf{v}_{L}}{dt}-\rho_{M}\kappa[\mathbf{z}\times\mathbf{v}_{L}]=\tilde{ \mathbf{f}}, \tag{16}\] where \(\mu_{v}\) is the mass per unit length of the vortex. This equation is analogous to the equation of motion of a charged particle in a magnetic field [28]. Here we neglect dissipative forces. 
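The analogy with a charged particle in a magnetic field can be made explicit by integrating Eq. (16) numerically: a constant effective force produces a steady drift (the massless-vortex result) superposed with circular, cyclotron-like motion at frequency \(\kappa\rho_{M}/\mu_{v}\). The sketch below is illustrative only; all numerical values are assumptions.

```python
# Integrating Eq. (16), mu_v dv_L/dt - rho_M*kappa [z x v_L] = f, with a constant force:
# the solution is a steady drift plus circular motion at omega_c = rho_M*kappa/mu_v.
# All parameter values are illustrative assumptions.
import numpy as np

mu_v, rho_M, kappa = 1.0e-11, 145.0, 9.97e-8   # vortex mass per length, density, circulation (assumed)
f = np.array([1.0e-6, 0.0])                    # constant effective force per unit length (assumed)
omega_c = rho_M * kappa / mu_v                 # cyclotron-like frequency

def rotate(v, theta):                          # exact rotation generated by the transverse force
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

dt, steps = 2.0e-8, 5000
v = np.zeros(2)
for _ in range(steps):                         # kick-rotate-kick splitting (stable for rotation)
    v = v + 0.5 * dt * f / mu_v
    v = rotate(v, omega_c * dt)
    v = v + 0.5 * dt * f / mu_v

drift = np.array([-f[1], f[0]]) / (rho_M * kappa)   # steady drift, identical to the massless case
print(omega_c, drift, v)                            # v circles around the drift velocity
```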
Within the framework of hydrodynamics the effect of the vortex mass is normally very weak, especially in superfluid \({}^{4}\)He, where the core radius \(r_{c}\) does not exceed a few angstroms.1 But an important exception from this rule is a vortex line trapped by a flexible wire. This case was realized in Vinen's famous experiment [4], which provided the first experimental confirmation of quantization of circulation of a single vortex line. In the experiment oscillations of a thin wire with a trapped vortex line were observed. The wire was coaxial with a cylindric container filled by superfluid \({}^{4}\)He. The wire together with the trapped vortex line can be considered as a complex vortex line. The radius of the core is now the radius of the wire \(r_{w}\), which even for a thin wire is by many orders larger than the microscopically small core radius \(r_{c}\) of a free vortex. The mass per unit length \(\mu_{v}=\pi r_{w}^{2}(\rho_{w}+\rho)\) of the solid core includes the mass of the wire itself with mass density \(\rho_{w}\) and the associated mass of the fluid dragged by the wire (the term \(\propto\rho\)). Footnote 1: See discussion of vortex masses of various origins in the book [23]. In Vinen's experiment the external force was a line tension force \(\tilde{\mathbf{f}}=-\mathcal{K}\mathbf{u}\) restoring the original axial location of the wire, where \(\mathcal{K}\) is the elastic constant, \(\mathbf{u}\) is the two-dimensional vector of displacement in the middle of the wire, and \(\mathbf{v}_{L}=d\mathbf{u}/dt\). If there is no suppression of the Magnus force at zero temperature, i.e., \(\rho_{M}=\rho\), one can rewrite the equation of motion Eq. (16) as two equations for Cartesian variables \(u_{x}\) and \(u_{y}\) for a monochromatic oscillation mode \(\mathbf{u}\propto e^{-i\omega t}\): \[-\omega^{2}\mu_{v}u_{x}-i\omega\rho\kappa u_{y}=-\mathcal{K}u_{x},\] \[-\omega^{2}\mu_{v}u_{y}+i\omega\rho\kappa u_{x}=-\mathcal{K}u_{y}. \tag{17}\] The dispersion relation for oscillation of the wire is \[(\omega^{2}-\omega_{0}^{2})^{2}-\frac{\rho^{2}\kappa^{2}}{\mu_{v}^{2}}\omega^ {2}=0, \tag{18}\] where \(\omega_{0}=\sqrt{\mathcal{K}/\mu_{v}}\) is the oscillation frequency of the wire without the Magnus force, when the wire has two degenerate modes, which can be chosen as either two linearly or two circularly polarized modes. In Vinen's experiment there was a weak Magnus force, which lifted the degeneracy, and the dispersion relation yielded two circularly polarized modes with two close but different frequencies: \[\omega=\omega_{0}\pm\frac{\rho\kappa}{2\mu_{v}}. \tag{19}\] The presence of two close modes means that oscillations of the wire are accompanied by beats with frequencies \(\Delta\omega=\rho\kappa/\mu_{v}\). Measurements of the beat frequency yielded the value of the velocity circulation quantum \(\kappa\). Vinen's experiment on circulation measurement in superfluid \({}^{4}\)He was later repeated in superfluid \({}^{3}\)He-\(B\)[29]. Equation (17) describing oscillations of the wire with quantum circulation around it illustrates the crossover from dynamics governed by the inertia force to dynamics governed by the Magnus force. If the vortex mass \(\mu_{v}\) decreases and the ratio \(\rho\kappa/\mu_{v}\) becomes very large, the dispersion relation (18) yields two frequencies: \[\omega_{1}=\frac{\mathcal{K}}{\rho\kappa},\qquad\omega_{2}=\frac{\rho\kappa}{ \mu_{v}}. 
\tag{20}\] At \(\mu_{v}\to 0\) the frequency of the second mode grows to infinity and cannot be treated within the hydrodynamical approach. Only a single circularly polarized mode remains, which is governed by the Magnus force. This illustrates the crossover from dynamics of a particle governed by Newton's second law without transverse forces to dynamics of a massless vortex. In the equation of motion Eq. (8) of a massless vortex the force determines not an acceleration but a velocity. Thus, the velocity cannot be an independent variable determined by initial conditions as in the case of a particle. Therefore, a particle in the two-dimensional space has twice a number of degrees of freedom of a massless vortex performing two-dimensional motion. ## 4 Quantum nucleation of a massless vortex Vortex nucleation is possible due to thermal or quantum fluctuations. Approaching zero temperature, thermal nucleation of vortices is more and more improbable, and quantum vortex nucleation becomes predominant. The vortex is a macroscopic perturbation of a fluid, and its quantum nucleation is a process of _macroscopic quantum tunneling_, which changes states of a huge number of particles. The central assumption of the macroscopic quantum tunneling concept is that this many-body process can be reduced to dynamics of one or a few macroscopic degree of freedom. Here we restrict ourselves to an elementary theory of macroscopic tunneling, putting aside such an interesting topic as the effect of dissipation [30]. The semiclassical quantum tunneling theory considers motion of a particle in a classically forbidden area under the barrier by transition to imaginary time or coordinate [31]. One needs to calculate the action \(\mathcal{S}=\int\mathcal{L}\,dt\) along the trajectory crossing the classically inaccessible underbarrier region. Here \(\mathcal{L}\) is the Lagrangian. The exponent \(\Gamma\) of the probability of tunneling \(W\sim e^{-\Gamma}\) is determined by the imaginary part of the action: \(\Gamma=2\mathrm{Im}\mathcal{S}/\hbar\). The action along the trajectory follows from the Hamilton-Jacobi theory: \[\mathcal{S}=\sum_{i}\int_{L}P_{i}dx_{i}, \tag{21}\] where summation is over all pairs of conjugate variables \((x_{i},P_{i})\) and integration is over the trajectory \(L\) determined from the equations of motion. For application of this procedure to a massless vortex, it is important that the dynamics of this vortex is essentially different from that of a particle governed by Newton's second law: any external force on the vortex is opposed not by the inertia force, but by the superfluid Magnus force. The Magnus force is responsible for the Hall effect in superconductors, and this type of tunneling is sometimes called "Hall tunneling" [32]. Since we deal with neutral superfluids we shall call tunneling of a massless vortex Magnus tunneling. The first analysis of Magnus tunneling for a superfluid vortex was done by Volovik [33]. He considered nucleation of a circular vortex half-loop near a plane boundary. Here we consider a straight vortex in a thin film with the superfluid moving along the film edge parallel to the axis \(x\)[23; 34]. The vector equation of motion Eq. (8) may be considered as the Hamiltonian equations for the pair of conjugate variables "coordinate \(x\) - momentum \(P_{x}\)" \[\frac{dx}{dt}=\frac{\partial H}{\partial P_{x}},\ \ \frac{dP_{x}}{dt}=-\frac{ \partial H}{\partial x}, \tag{22}\] where the momentum of the vortex at the distance \(y\) from the edge is \[P_{x}=\rho\kappa y. 
\tag{23}\] The Hamiltonian for the superfluid moving with the velocity \(v\) is \[H=E_{v}+V(x,y),\ \ E_{v}(y)=\frac{\rho\kappa^{2}}{4\pi}\ln\frac{y}{r_{c}}-vP_{x}, \tag{24}\] where \(V(x,y)\) is the potential energy produced by a defect on the film edge and \(E_{v}\) is the energy of the vortex at the distance \(y\) from the film edge parallel to the axis \(x\). In contrast to previous sections, in this and next sections density is a mass per unit area of the film, and forces and energies are values for the whole vortex, but not its unit length. In this section we consider Galilean invariant superfluids at zero temperature. Thus, there is no difference between the superfluid velocity \(v_{s}\) and the center-of-mass velocity \(v\) and between the superfluid density \(\rho_{s}\) and the total mass density \(\rho\). The effective density \(\rho_{M}\), which determines the total transverse force, does not differ from \(\rho=\rho_{s}\). The Lagrangian in our case is \[\mathcal{L}=\dot{x}P_{x}-H=\dot{x}\rho\kappa y-E_{v}(y)-V(x,y). \tag{25}\] Ignoring for a while the defect potential, the original state with a microscopic vortex nucleus corresponds to \(y\approx r_{c}\approx 0\) and an energy close to zero. After nucleation there is a vortex with zero energy and the coordinate \(y\) equal to (see Fig. 1) \[y_{f}=\frac{\kappa}{4\pi v}\ln\frac{\kappa}{vr_{c}}. \tag{26}\] In the classically inaccessible area (underbarrier region) \(0<y<y_{f}\) the energy is positive. Without defects there is no trajectory which crosses the classically inaccessible underbarrier region. This is a consequence of translational invariance along the axis \(x\). The momentum \(P_{x}\) is a constant, and the coordinate \(y\) does not vary along any trajectory, even on the complex plane. There is no quantum process which would be able to change its value. So the presence of the defect, which breaks translational invariance, is crucial. The shape of the defect is not so essential. It can affect only a pre-exponential factor, which is difficult to calculate anyway. One can choose the simplest singular \(\delta\)-function potential \(V(x,y)=-g\delta(x^{2}+y^{2})\). The nucleation process must start from creation of the vortex nucleus near the defect at the edge of the film located a \(x=y=0\) in Fig. 1(a). The energy of the nucleus is on the order of the vortex core energy, which is small compared to logarithmically large vortex energies at finite distance from the film edge. Thus, one can ignore the stage of initial vortex nucleation and consider only the quantum tunneling of the vortex nucleated near the defect through the classically forbidden area. Figure 1(b) shows the trajectory of quantum tunneling in the plane with axes \(X=ix\) (imaginary coordinate of the vortex) and \(y\) (proportional to the vortex momentum \(P_{x}\)). The trajectory starts at the defect and goes along the line \(X=y\) at which the vortex remains at the defect and interacts with it. When the trajectory reaches the point \(y=X=y_{f}\) it crosses the trajectory with constant \(y=y_{f}\) along which there is no interaction with the defect. The path continues along this new segment of the trajectory until the point \(y=y_{f}\), \(x=X=0\). Here the path returns from the complex \(x\) plane to the real \(x\) axis. 
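To get a feel for the scales in Eq. (26), the short computation below evaluates \(y_{f}\) for a few illustrative thin-film values of the core radius and flow velocity; these numbers are assumptions, not taken from the paper.

```python
# Numerical feel for Eq. (26): y_f = (kappa / (4*pi*v)) * ln(kappa / (v * r_c)).
# Core radius and flow velocities below are illustrative assumptions.
import numpy as np

kappa = 9.97e-8     # circulation quantum for 4He, m^2/s
r_c = 3.0e-10       # vortex core radius, m (a few angstroms; assumption)

for v in (1.0, 5.0, 20.0):   # superfluid velocities in m/s (assumptions)
    y_f = kappa / (4 * np.pi * v) * np.log(kappa / (v * r_c))
    print(f"v = {v:5.1f} m/s  ->  y_f = {y_f:.3e} m  (y_f/r_c = {y_f / r_c:.1f})")
```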
The tunneling exponent for such a trajectory is \[\Gamma=\frac{2\text{Im}\mathcal{S}}{\hbar}=-\frac{2}{\hbar}\text{Im}\left\{\int\limits_{0}^{y_{f}}P_{x}(X)\,dX+\int\limits_{y_{f}}^{0}P_{x}(y_{f})\,dX\right\}=2\pi ny_{f}^{2}=\frac{\kappa^{2}n}{8\pi v_{s}^{2}}\left(\ln\frac{\kappa}{v_{s}r_{c}}\right)^{2}, \tag{27}\] where \(n=\rho/m\) is the particle density. The probability logarithm \(\Gamma\) is roughly equal to the number of particles in the area \(y_{f}^{2}\) occupied by the velocity field induced by the vortex after nucleation.

Figure 1: Nucleation of a vortex near a film edge with a defect. (a) The vortex nucleated at the defect located at \(x=y=0\). After quantum tunneling the vortex appears at \(x=0\), \(y=y_{f}\). (b) The trajectory of quantum tunneling in the plane with axes \(X=ix\) and \(y\). The trajectory starts at \(x=X=y=0\), goes along the line \(y=X\) until \(X=y_{f}\), and continues along the line \(y=y_{f}\), reaching the final vortex position at \(x=0\), \(y=y_{f}\).

## 5 Quantum nucleation of a vortex with mass It is interesting to discuss how vortex mass can affect quantum vortex nucleation via Magnus tunneling. Muirhead _et al._ [28] addressed this issue for a two-dimensional vortex near a film edge on the basis of estimates using a simplified version of the real vortex energy. Later, the semiclassical theory for this case, taking into account the real energy given by Eq. (24), was suggested [23]. The theory allows one to consider the crossover from a massive vortex to a massless vortex nucleated via Magnus tunneling. For a vortex with mass, the Lagrangian (25) must be modified to \[\mathcal{L}=\frac{\mu_{v}\dot{x}^{2}}{2}+\frac{\mu_{v}\dot{y}^{2}}{2}+\kappa\rho_{M}\dot{x}y-E_{v}(y), \tag{28}\] where \(\mu_{v}\) is the mass of the whole vortex (not of its unit length) and the energy \(E_{v}(y)\) is given by Eq. (24) as before. We have two degrees of freedom, in contrast to one degree of freedom for a massless vortex. Correspondingly, we have two momenta canonically conjugate to the coordinates \(x\) and \(y\): \[P_{x}=\frac{\partial\mathcal{L}}{\partial\dot{x}}=\mu_{v}\dot{x}+\kappa\rho_{M}y,\qquad P_{y}=\frac{\partial\mathcal{L}}{\partial\dot{y}}=\mu_{v}\dot{y}. \tag{29}\] The Hamiltonian for the Lagrangian (28) is \[\mathcal{H}=\frac{\partial\mathcal{L}}{\partial\dot{x}}\dot{x}+\frac{\partial\mathcal{L}}{\partial\dot{y}}\dot{y}-\mathcal{L}=\frac{\mu_{v}\dot{x}^{2}}{2}+\frac{\mu_{v}\dot{y}^{2}}{2}+E_{v}(y)=\frac{(P_{x}-\kappa\rho_{M}y)^{2}}{2\mu_{v}}+\frac{P_{y}^{2}}{2\mu_{v}}+E_{v}(y). \tag{30}\] The classical equations of motion in vortex Cartesian coordinates are \[\mu_{v}\ddot{x}=-\kappa\rho_{M}\dot{y},\qquad\mu_{v}\ddot{y}=\kappa\rho_{M}\dot{x}-\frac{\partial E_{v}(y)}{\partial y}. \tag{31}\] The equations have two integrals. The first integral is the momentum \(P_{x}\), which is conserved because of translational invariance along the axis \(x\). The second integral is the energy. Now it is possible to find a trajectory for quantum tunneling without defects breaking translational invariance, and one need not retain the energy of interaction with a defect \(V(x,y)\). The constant \(P_{x}\) does not mean that the coordinate \(y\) cannot vary, since \(P_{x}\) depends not only on \(y\) but also on \(\dot{x}\). The relevant trajectory starts at \(y=r_{c}\approx 0\) and nearly zero energy.
Equation (30) shows that the zero energy condition yields the relation connecting \(P_{x}\) with the initial value of \(\dot{y}(0)\) at \(y=0\) where \(E_{v}(y)\) also vanishes: \[\frac{P_{x}^{2}}{2\mu_{v}}+\frac{\mu_{v}\dot{y}(0)^{2}}{2}=0. \tag{32}\] One can satisfy this condition only at imaginary \(\dot{y}(0)\), and we shall introduce the imaginary time \(t=-i\tau\) looking for an underbarrier trajectory. The first integration of the equations of motion yields \[\frac{dy}{d\tau}=\frac{1}{\mu_{v}}\sqrt{2\mu_{v}E_{v}(y)+(P_{x}-\kappa\rho_{M}y )^{2}},\qquad\frac{dx}{d\tau}=\frac{P_{x}-\kappa\rho_{M}y}{i\mu_{v}}. \tag{33}\] The trajectory starting at \(y\approx 0\) ends at the point \(y=y_{f}\) where \(E_{v}(y)=0\) and the classically accessible area begins. Eventually the tunneling exponent is: \[\Gamma=\frac{2}{\hbar}\mathrm{Im}\left\{\begin{array}{l} \int\limits_{0}^{x(y_{f})}P_{x}\,dx+\int\limits_{0}^{y_{f}}P_{y}\,dy\\ \end{array}\right\}=\frac{2}{\hbar}\mathrm{Im}\left\{\int\limits_{0}^{y_{f}} \left(P_{x}\frac{dx}{dy}+P_{y}\right)\,dy\right\}\\ =\frac{2}{\hbar}\int\limits_{0}^{y_{f}}\frac{2\mu_{v}E_{v}(y)+ \kappa\rho_{M}y(\kappa\rho_{M}y-P_{x})}{\sqrt{2\mu_{v}E_{v}(y)+(\kappa\rho_{M} y-P_{x})^{2}}}\,dy. \tag{34}\] The coordinate \(x\) is imaginary along the trajectory, but at the end of the tunneling trajectory it must become real again. This imposes a condition on the value of the momentum \(P_{x}\): \[x(y_{f})=\int\limits_{0}^{y_{f}}\frac{dx}{dy}dy=i\int\limits_{0}^{y_{f}}\frac{ \kappa\rho_{M}y-P_{x}}{\sqrt{2\mu_{v}E_{v}(y)+(\kappa\rho_{M}y-P_{x})^{2}}}dy =0. \tag{35}\] For a massless vortex (\(\mu_{v}=0\)) and \(\rho_{M}=\rho\) Eq. (34) yields exactly the same probability exponent as Eq. (27), which was obtained using the complex coordinate but not complex time. In this limit the momentum \(P_{x}\) is cancelled out in Eq. (34). In the opposite limit of very large mass \[\Gamma=\frac{2}{\hbar}\int\limits_{0}^{y_{f}}\sqrt{2\mu_{v}E_{v}(y)}\,dy= \frac{2}{\hbar}\int\limits_{0}^{y_{f}}\sqrt{2\mu_{v}\left(\frac{\rho\kappa^{2 }}{4\pi}\ln\frac{y}{r_{c}}-v\kappa\rho y\right)}\,dy. \tag{36}\] This is the standard expression for tunneling of a particle of mass \(\mu_{v}\) through the potential barrier described by the energy \(E_{v}(y)\). With logarithmic accuracy, i.e., replacing the logarithm in the integrand by a large constant \(\ln\frac{y_{f}}{r_{c}}\), one obtains \[\Gamma=\frac{4y_{f}}{3\hbar}\sqrt{\frac{\mu_{v}\rho\kappa^{2}}{2\pi}\ln\frac{y_{ f}}{r_{c}}}. \tag{37}\] While at Magnus tunneling the probability logarithm is proportional to \(y_{f}^{2}\), i.e., the area occupied by the velocity field induced by the vortex, for a massive vortex not subject to the Magnus force it is proportional to \(y_{f}\). Estimating the vortex mass as the core mass \(\mu_{v}=\pi\rho r_{c}^{2}\) the Magnus tunneling transforms into particle-like tunneling at \[\frac{r_{c}}{y_{f}}\sqrt{\ln\frac{y_{f}}{r_{c}}}>\frac{\rho_{M}}{\rho}. \tag{38}\] According to Eq. (26), the ratio \(y_{f}/r_{c}\) is on the order of \(v_{cr}/v\) (ignoring the logarithmic factor) and large compared to unity for velocities \(v\) small compared to the Landau critical velocity \(v_{cr}\sim\kappa/r_{c}\). Moreover, our estimation of the probability exponent is valid as far as the \(y_{f}/r_{c}\) is large. Thus, the left-hand side of the inequality Eq. 
(38) is small, and the vortex mass is important for quantum tunneling only at very small \(\rho_{M}/\rho\), i.e., at very strong suppression of the Magnus force in superfluids with broken Galilean invariance. Above we considered the traditional approach to macroscopic quantum tunneling based on the semiclassical theory for one or two macroscopic degrees of freedom. It is possible to address the problem within a more general many-body theoretical framework [23; 35]. This affected the logarithmic factors in the expressions for the probability exponent \(\Gamma\).

## 6 Conclusions

The Magnus force (one of the contributions to the transverse force) on a vortex appeared in the pioneering work on the hydrodynamics of rotating superfluids written by Joe Vinen together with Henry Hall nearly 70 years ago. This force played a crucial role in Vinen's famous experiment demonstrating quantization of the velocity circulation in superfluids. Vinen, together with Nozieres [5], connected the Magnus force with the Hall effect in superconductors. He followed the dispute about the physical nature of the transverse force and facilitated its resolution. The transverse force is an important factor in the process of quantum nucleation of vortices, investigations of which were pioneered by Joe Vinen. The force is the momentum transferred from one subsystem to another. The most reliable way to determine it is to investigate the momentum balance. There were numerous attempts to determine various components of this force from some general principle referring only to the fluid behavior very far from the vortex. The attempt to derive the transverse force from the Berry phase is an example. The circulation \(\kappa_{n}\) of the normal velocity, on which the Berry phase depends, is not a topological charge and depends on the details of the interaction of quasiparticles with the vortex at small distances from it. Information about this interaction is transported to large distances by the momentum flux. The Berry-phase analysis at large distances cannot by itself provide the value of the transverse force without an analysis of processes at small distances. The transverse force must be determined not from the Berry phase, but vice versa: calculation of the transverse force from the momentum balance is necessary for determination of the Berry phase.

**Acknowledgments.** My numerous interactions and discussions with Joe Vinen during the Royal Society Kapitza fellowship in Birmingham and on various other occasions had a great impact on my research work.
2309.01152
Local connectivity of boundaries of tame Fatou components of meromorphic functions
We prove local connectivity of the boundaries of invariant simply connected attracting basins for a class of transcendental meromorphic maps. The maps within this class need not be geometrically finite or in class $\mathcal B$, and the boundaries of the basins (possibly unbounded) are allowed to contain an infinite number of post-singular values, as well as the essential singularity at infinity. A basic assumption is that the unbounded parts of the basins are contained in regions which we call `repelling petals at infinity', where the map exhibits a kind of `parabolic' behaviour. In particular, our results apply to a wide class of Newton's methods for transcendental entire maps. As an application, we prove local connectivity of the Julia set of Newton's method for $\sin z$, providing the first non-trivial example of a locally connected Julia set of a transcendental map outside class $\mathcal B$, with an infinite number of unbounded Fatou components.
Krzysztof Barański, Núria Fagella, Xavier Jarque, Bogusława Karpińska
2023-09-03T12:05:48Z
http://arxiv.org/abs/2309.01152v2
# Local connectivity of boundaries of tame Fatou components of meromorphic functions

###### Abstract.

We prove local connectivity of the boundaries of invariant simply connected attracting basins for a class of transcendental meromorphic maps. The maps within this class need not be geometrically finite or in class \(\mathcal{B}\), and the boundaries of the basins (possibly unbounded) are allowed to contain an infinite number of post-singular values, as well as the essential singularity at infinity. A basic assumption is that the unbounded parts of the basins are contained in regions which we call 'repelling petals at infinity', where the map exhibits a kind of 'parabolic' behaviour. In particular, our results apply to a wide class of Newton's methods for transcendental entire maps. As an application, we prove local connectivity of the Julia set of Newton's method for \(\sin z\), providing the first non-trivial example of a locally connected Julia set of a transcendental map outside class \(\mathcal{B}\), with an infinite number of unbounded Fatou components.

2010 Mathematics Subject Classification: Primary 37F10, 37F20, 30D05, 30D30

The first and fourth authors are supported by the National Science Centre, Poland, grant no 2018/31/B/ST1/02495. The second and third authors are partially supported by grants PID2020-118281GB-C32 and CEX2020-001084-M (Maria de Maeztu Excellence program) from the Spanish state research agency. The third author is additionally supported by ICREA Academia 2020 from the Catalan government.

## 1. Introduction
[...] on \(U\) [11, 12]. Roughly speaking, this is due to the fact that \(\infty\) is not only an essential singularity but also an omitted value. Hence, preimages of unbounded paths in \(U\) are also unbounded and in many cases induce a 'comb-like' structure which prevents the boundary of \(U\) and \(J(f)\) from being locally connected.
A well-known example is given by the exponential map \(z\mapsto\lambda e^{z}\) for \(0<\lambda<1/e\), which has an attracting basin of infinite degree with a dense set of accesses to infinity [10]; a finite-degree example is provided by the map \(z\mapsto ze^{z+1}\), which has an unbounded superattracting basin of degree \(2\) with the same property and which conjecturally contains indecomposable continua of escaping points as part of its boundary. Compare [11] for a related example. On the other hand, it is known that for hyperbolic transcendental entire maps (where in the definition of hyperbolicity we additionally assume that the post-singular set \(\mathcal{P}(f)\) is bounded), the boundary of a bounded Fatou component is a Jordan curve and even a quasicircle (see [14, 15]). Note that for entire maps, locally connected boundaries of bounded simply connected Fatou components are always Jordan curves by the maximum principle. More generally, one considers a transcendental analogue of geometrically finite maps, consisting of maps \(f\) for which \(\operatorname{Sing}(f)\cap\mathcal{F}(f)\) is compact, while \(\overline{\mathcal{P}(f)}\cap J(f)\) is finite. Note that the Fatou set of a transcendental entire geometrically finite map consists of a finite number of attracting or parabolic periodic basins, as shown in [13]. In [1], it was proved that for geometrically finite transcendental entire maps \(f\) satisfying the additional condition that \(J(f)\) does not contain asymptotic values of \(f\) and the local degree of \(f\) at the points of \(J(f)\) is uniformly bounded (_strongly geometrically finite maps_), the boundaries of all bounded periodic Fatou components are Jordan curves. The same holds for all Fatou components of \(f\) if, additionally, every Fatou component contains at most finitely many critical points. The local connectivity of the whole Julia set of a transcendental entire map was proved for hyperbolic and, more generally, strongly geometrically finite maps with only bounded Fatou components and no asymptotic values, satisfying a uniform bound on the number of critical points (with multiplicities) contained in a Fatou component, see [14, 1, 1]. Note that all the results mentioned above consider only transcendental entire maps from the Eremenko-Lyubich class \(\mathcal{B}\), where \[\mathcal{B}=\{f:\operatorname{Sing}(f)\text{ is bounded}\}.\] Paradoxically, the situation changes when we consider a more general (and a priori more complicated) class of transcendental meromorphic maps, for which the essential singularity is not an omitted value. A trivial example occurs within the tangent family \(z\mapsto\lambda\tan z\) for \(\lambda>1\), where the Fatou set consists of two completely invariant attracting basins (the upper and lower half-planes), and hence the Julia set is equal to the real line together with the point at infinity (i.e. a circle in \(\widehat{\mathbb{C}}\)). Numerical simulations suggest that also in the case of more complicated dynamics, many unbounded Fatou components of meromorphic maps may have locally connected boundaries, despite the presence of the essential singularity. In some examples, it looks plausible that the whole Julia set also has the same property (see the left part of Figure 1). Nevertheless, computer pictures indicate that non-locally connected boundaries do exist also for meromorphic maps, similarly to the entire case (see the right part of Figure 1).
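Returning to the tangent family mentioned above, the statement that the two half-planes are completely invariant attracting basins is easy to probe numerically. The following minimal sketch is ours and not part of the paper; the value \(\lambda=2\) and the sample starting points are arbitrary choices made only for illustration.

```python
import cmath

# Illustrative sketch (not from the paper): the tangent family f(z) = lambda * tan(z)
# for lambda = 2 > 1.  The upper and lower half-planes are completely invariant
# attracting basins, so J(f) is the extended real line.
lam = 2.0

def f(z):
    return lam * cmath.tan(z)

# Orbits of points in the upper half-plane should converge to the attracting fixed
# point i*y_star, where y_star solves lam * tanh(y) = y (y_star is about 1.915 for lam = 2).
for z0 in (0.3 + 0.2j, -5.0 + 1.0j, 10.0 + 0.5j):
    z = z0
    for _ in range(100):
        z = f(z)
    print(z0, "->", z)
```

Since \(\tan\) maps the upper half-plane into itself and \(\lambda>0\), the imaginary part never changes sign along such orbits, which is the numerical shadow of the complete invariance of the two half-planes.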
In this paper we show that for many transcendental meromorphic maps \(f\) outside class \(\mathcal{B}\), even with asymptotic values, the presence of (possibly infinitely many) post-singular orbits and the essential singularity in the boundary of a simply connected invariant Fatou component \(U\) poses no insurmountable obstacle to local connectivity, as long as \(f\) acts 'geometrically finitely' on a compact part of the closure \(\overline{U}\) in \(\mathbb{C}\), and the (possible) unbounded parts of \(\overline{U}\) are contained in a finite number of regions, where \(f\) is univalent and exhibits a 'tame' dynamical behaviour, similar to the one within a repelling petal of a parabolic fixed point. We call these regions _repelling petals at infinity_, and their formal definition is given in Section 4. Although the definition allows for a quite general behaviour (e.g. spiralling petals), its simple model is given by Newton's method for the map \(z\mapsto z+e^{z}\), which behaves like the translation \(z\mapsto z-1\) for \(\operatorname{Re}(z)\to+\infty\) in any sector symmetric with respect to \(\mathbb{R}^{+}\) of angle less than \(\pi/2\) (see the left part of Figure 1). In this work we assume that \(U\) is an invariant attracting basin, leaving other cases for a forthcoming paper. Note that for transcendental meromorphic maps, periodic components of period larger than \(1\) require a separate treatment, since considering an iterate of the map takes us beyond the meromorphic class.

**Definition 1.1**.: An invariant attracting basin \(U\) of a transcendental meromorphic map \(f\) is _tame at infinity_ if there exists a disc \(D\subset\mathbb{C}\) such that \(\overline{U}\setminus D\) is contained in the union of a finite number of repelling petals \(P_{i}\) at infinity for \(f\), \(i\in\mathcal{I}_{\infty}\), such that \(f(P_{i})\cap P_{i^{\prime}}=\emptyset\) for \(i\neq i^{\prime}\).

To formulate our result, define the _post-critical_ and _post-asymptotic set_ as \[\mathcal{P}_{crit}(f)=\{f^{n}(v):\text{$v$ is a critical value of $f$, $n\geq 0$}\},\] \[\mathcal{P}_{asym}(f)=\{f^{n}(v):\text{$v$ is an asymptotic value of $f$, $n\geq 0$}\}\] and write \(\overline{A}\), \(\partial A\) for the closure and boundary in \(\mathbb{C}\) of a set \(A\subset\mathbb{C}\), and \(\operatorname{Acc}(A)\) for the set of its accumulation points. We also denote the local degree of a map \(f\) at a point \(w\) by \(\deg_{w}f\). Our main result is the following.

**Theorem A**.: _Let \(U\) be a simply connected invariant attracting basin of a meromorphic map \(f\colon\mathbb{C}\to\widehat{\mathbb{C}}\). Assume that the following conditions are satisfied._ 1. _The set_ \(\big{(}\overline{\mathcal{P}_{asym}(f)}\cup\operatorname{Acc}(\mathcal{P}_{crit}(f))\big{)}\cap\overline{U}\) _is contained in the union of a compact subset of_ \(U\) _and a finite set of parabolic fixed points of_ \(f\) _in_ \(\partial U\)_._ 2. _There exists a compact set_ \(L\subset U\) _such that for every_ \(z\in\mathcal{P}_{crit}(f)\cap\overline{U}\setminus L\) _we have_ \(\sup\{\deg_{w}f^{n}:w\in f^{-n}(z),\,n>0\}<\infty\)_._ 3. \(U\) _is tame at infinity._ _Then the boundary of \(U\) in \(\widehat{\mathbb{C}}\) is locally connected._

Theorem A immediately implies the following corollary.

**Corollary A'**.: _Let \(U\) be a simply connected invariant attracting basin of a strongly geometrically finite meromorphic map, such that \(U\) is tame at infinity.
Then the boundary of \(U\) in \(\widehat{\mathbb{C}}\) is locally connected._

Indeed, the definition of strongly geometrically finite maps implies that \(\operatorname{Sing}(f)\) intersects only a finite number of Fatou components, which implies that \(\overline{\mathcal{P}(f)}\cap U\) is compact, providing the conditions (a)-(b) of Theorem A.

**Remark 1.2**.: Note that in Definition 1.1 we do not assume that the set of repelling petals of \(f\) at infinity intersecting \(U\) is non-empty. Therefore, the result holds also for all bounded simply connected invariant attracting basins of transcendental entire or meromorphic maps satisfying the conditions (a)-(b). In particular, for bounded periodic attracting basins of transcendental entire maps we obtain a generalization of the above-mentioned result from [1]. However, the case of unbounded basins is our primary area of interest.

We remark that an attracting basin \(U\) satisfying the hypotheses of Theorem A necessarily satisfies certain properties, as described in the following proposition.

**Proposition 1.3**.: _Under the hypotheses of Theorem A, the following hold._ 1. _The degree of_ \(f\) _on_ \(U\) _is finite._ 2. \(\overline{U}\) _contains only a finite number of critical points of_ \(f\)_._ 3. _Every post-critical point of_ \(f\) _in_ \(\partial U\) _has a finite orbit._ 4. _Every asymptotic curve of an asymptotic value_ \(v\in\overline{U}\) _is eventually contained in_ \(\mathbb{C}\setminus\overline{U}\)_._ 5. \(f\) _maps_ \(\overline{U}\) _onto the closure of_ \(U\) _in_ \(\widehat{\mathbb{C}}\)_._

Following classical ideas explained above, used for proving local connectivity for rational and entire maps (see [1, 1, 10, 1]), the proof of Theorem A consists of constructing a suitable conformal metric \(d\varsigma\) with some expanding properties on a part of \(U\) neighbouring \(\partial U\). This is described in the following theorem.

**Theorem B**.: _Let \(U\) be an invariant simply connected attracting basin of a meromorphic map \(f\colon\mathbb{C}\to\widehat{\mathbb{C}}\) satisfying the assumptions of Theorem A. Then there exists a simply connected domain \(A\subset U\) with \(\overline{A}\subset U\), such that for every compact set \(K\subset U\) one can find a conformal metric \(d\varsigma=\varsigma|dz|\) on \(U\setminus\overline{A}\) and numbers \(b_{n}\in(0,1)\), \(n\in\mathbb{N}\), such that \(\sum_{n=1}^{\infty}b_{n}<\infty\) and_ \[|(f^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{b_{n}}\] _for every \(z\in U\setminus\overline{A}\) with \(f(z),\ldots,f^{n-1}(z)\in U\setminus\overline{A}\), \(f^{n}(z)\in K\setminus\overline{A}\)._

The domain \(A\) in Theorem B is defined as a sufficiently large absorbing domain in \(U\), such that \(\overline{f(A)}\subset A\). As in the references mentioned above, the metric \(d\varsigma\) is constructed by 'patching up' the orbifold metric on a compact part of \(\partial U\) with a 'parabolic' metric \(d\sigma_{p,\alpha}=\frac{|dz|}{|z-p|^{\alpha}}\) for a suitable \(\alpha\in[0,1)\) on repelling petals of a parabolic fixed point \(p\in\partial U\). In our setting, however, we need to add a new element to this puzzle, namely a _petal metric_ which we use in the unbounded parts of \(\overline{U}\). To this end, we prove that the map on a repelling petal at infinity has suitable expanding properties with respect to the metric \(d\sigma_{\alpha}=\frac{|dz|}{|z|^{\alpha}}\) for \(\alpha>1\). This result (Theorem 4.7) can be of independent interest and we hope it may be used in a much wider setting.
The metric \(d\varsigma\) on the unbounded parts of \(\overline{U}\) is defined as a suitable modification of the metric \(d\sigma_{\alpha}\). Theorem B follows from a much more general result, Theorem 5.1, formulated in an abstract setting. The reason for this generality, which undoubtedly increases the technical difficulty of the proof, is the goal of further applications to other types of Fatou components (parabolic basins and Baker domains). Theorem A already has a number of applications, mostly among transcendental Newton maps (for which all the Fatou components are simply connected, as proved in [1]). Just to give some examples, most of Newton's methods for trigonometric polynomials studied in [1] have infinitely many unbounded attracting basins satisfying the hypotheses of Theorem A. We believe that in the setting of meromorphic maps, where infinity is no longer an omitted value, the local connectivity of Julia sets is a much more common phenomenon, even in the presence of unbounded Fatou components, as long as they satisfy the hypotheses of Theorem A. To show some evidence for this statement, we present an example of a transcendental meromorphic map with infinitely many unbounded basins of attraction, whose Julia set is locally connected (see Figure 2).

Figure 2. Left: The dynamical plane of the map \(f(z)=z-\tan z\), showing the invariant attracting basins \(U_{k}\). Right: A zoom of the dynamical plane near a pole \(p_{k}\).

**Theorem C**.: _Let \(f(z)=z-\tan z\), Newton's method for \(F(z)=\sin z\). Then \(J(f)\) is locally connected._

This provides the first non-trivial example of a locally connected Julia set of a transcendental meromorphic map \(f\) outside class \(\mathcal{B}\), with an infinite number of unbounded Fatou components. The structure of the paper is as follows. After preliminaries in Section 2 and a description of the dynamics of \(f\) on attracting and repelling petals of a parabolic fixed point, presented in Section 3, in the subsequent Section 4 we define attracting/repelling petals of \(f\) at infinity (Definition 4.1) and prove their contracting/expanding properties (Theorem 4.7). The metric \(d\varsigma\) is constructed in Section 5 (Theorem 5.1). Since the construction is quite involved, we split it into several parts, presented in Subsections 5.1-5.4, providing a short summary of the proof at the beginning of the section. Proposition 1.3 and Theorem B are proved in Section 6, while the proof of Theorem A is presented in Section 7. Finally, in Section 8 we prove Theorem C.

## 2. Preliminaries

### Notation

By \(\overline{A}\), \(\operatorname{int}A\) and \(\partial A\) we denote, respectively, the closure, interior and boundary in \(\mathbb{C}\) of a set \(A\subset\mathbb{C}\). By \(\operatorname{conv}A\) we denote the convex hull of a set \(A\). For \(A\subset\mathbb{C}\) and \(z\in\mathbb{C}\) we write \(A+z=\{a+z:a\in A\}\). We write \(\operatorname{Acc}(A)\) for the set of accumulation points of \(A\). By \(\widehat{\mathbb{C}}=\mathbb{C}\cup\{\infty\}\) we denote the Riemann sphere with the standard topology. We write \(\mathbb{D}(z,r)\) for the Euclidean disc in \(\mathbb{C}\) of radius \(r\) and center \(z\), while \(\mathbb{D}\) is the open unit disc in \(\mathbb{C}\). The Euclidean diameter of a set \(A\subset\mathbb{C}\) is denoted by \(\operatorname{diam}A\), and the area of a measurable set \(A\subset\mathbb{C}\) by \(\operatorname{area}A\). We set \(\mathbb{N}=\{1,2,\ldots\}\).
Let \(F\colon V^{\prime}\to V\) be a meromorphic map on a domain \(V^{\prime}\subset\mathbb{C}\) into \(V\subset\widehat{\mathbb{C}}\). By \(\operatorname{Crit}(F)\) we denote the set of critical points of \(F\) (we do not treat multiple poles as critical points). The points \(F(z)\) for \(z\in\operatorname{Crit}(F)\) are _critical values_ of \(F\). A point \(v\in V\) is an _asymptotic value_ of \(F\) if there exists a curve \(\gamma\colon[0,+\infty)\to V^{\prime}\) such that \(\gamma(t)\xrightarrow[t\to+\infty]{}\partial V^{\prime}\) and \(F(\gamma(t))\xrightarrow[t\to+\infty]{}v\). We denote by \(\operatorname{Sing}(F)\) the _singular set_ of \(F\), i.e. the set of finite singularities of the inverse function \(F^{-1}\), i.e. critical and asymptotic values and their accumulation points in \(V\). The _post-singular set_ of \(F\) is defined as \[\mathcal{P}(F)=\bigcup_{n=0}^{\infty}F^{n}(\operatorname{Sing}(F)),\] neglecting the cases when \(F^{n}\) is not defined. We also define the _post-critical_ and _post-asymptotic set_ of \(F\) as \[\mathcal{P}_{crit}(F) =\{F^{n}(v):v\text{ is a critical value of }F,\,n\geq 0\},\] \[\mathcal{P}_{asym}(F) =\{F^{n}(v):v\text{ is an asymptotic value of }F,\,n\geq 0\},\] again neglecting the cases when \(F^{n}\) is not defined. For \(z_{0}\in V^{\prime}\) we denote by \(\deg_{z_{0}}F\) the local degree of \(F\) at \(z_{0}\), i.e. the positive integer \(d\) such that \(F(z)=F(z_{0})+a(z-z_{0})^{d}+\cdots\) for \(a\neq 0\) if \(F(z_{0})\in\mathbb{C}\), and \(F(z)=a(z-z_{0})^{-d}+\cdots\) for \(a\neq 0\) if \(F(z_{0})=\infty\). ### Conformal metrics By a _conformal metric_ on an open set \(V\subset\mathbb{C}\) we mean a Riemannian metric of the form \(d\rho(z)=\rho(z)|dz|\) for a positive continuous function \(\rho\) on \(V\), where \(|dz|\) denotes the standard (Euclidean) metric in \(\mathbb{C}\). On each component \(\tilde{V}\) of \(V\), the distance between points \(z_{1},z_{2}\) with respect to this metric, denoted \(\operatorname{dist}_{\rho}(z_{1},z_{2})\), is defined as the infimum of the lengths of piecewise \(C^{1}\)-curves \(\gamma\) joining \(z_{1}\) and \(z_{2}\) within this component, counted with respect to the metric \(d\rho\) and denoted by \(\operatorname{length}_{\rho}(\gamma)\), where \[\operatorname{length}_{\rho}\gamma=\int_{\gamma}\rho(z)|dz|.\] The diameter of a set \(A\subset\tilde{V}\) with respect to \(d\rho\) is defined as \[\operatorname{diam}_{\rho}A=\sup\{\operatorname{dist}_{\rho}(z_{1},z_{2}):z_{1 },z_{2}\in A\},\] while \(\mathcal{D}_{\rho}(z,r)\) denotes the disc of center \(z\in\tilde{V}\) and radius \(r>0\) with respect to \(d\rho\). The area of a measurable set \(A\subset V\) with respect to \(d\rho\) is denoted by \[\operatorname{area}_{\rho}A=\int_{A}\rho^{2}(z)|dz|^{2}.\] If \(V\subset\mathbb{C}\) is a hyperbolic domain, then we denote by \(d\varrho_{V}\) the _hyperbolic metric_ in \(V\). The standard _spherical metric_ is defined as \[d\sigma_{sph}=\sigma_{sph}(z)|dz|=\frac{2|dz|}{1+|z|^{2}}.\] for \(z\in\mathbb{C}\). Note that \(d\sigma_{sph}\) extends to a Riemannian metric on the Riemann sphere \(\widehat{\mathbb{C}}\) by the use of the coordinates \(z\mapsto 1/z\) near infinity. For simplicity, we write \(\operatorname{dist}_{sph}\), \(\operatorname{diam}_{sph}\), \(\operatorname{area}_{sph}\) and \(\mathcal{D}_{sph}(z,r)\) for the spherical distance, diameter, area and disc, respectively. 
The derivative of a holomorphic map \(F\) with respect to the metric \(d\rho\) on \(V\) is equal to \[|F^{\prime}(z)|_{\rho}=\frac{\rho(F(z))}{\rho(z)}|F^{\prime}(z)|,\] provided \(F\) is defined in a neighbourhood of a point \(z\in V\cap F^{-1}(V)\). For the spherical metric we use the symbol \(|F^{\prime}(z)|_{sph}\) instead of \(|F^{\prime}(z)|_{\sigma_{sph}}\). We say that a holomorphic map \(F\) is _locally contracting_ (resp. _locally expanding_) with respect to the metric \(d\rho\) on a set \(V^{\prime}\subset V\cap F^{-1}(V)\) if \(|F^{\prime}(z)|_{\rho}<1\) (resp. \(|F^{\prime}(z)|_{\rho}>1\)) for every \(z\in V^{\prime}\). We also say that in this case the metric \(d\rho\) is locally contracting/expanding with respect to \(F\). We will use the following version of the Koebe Distortion Theorem for the spherical metric (for the proof see [1, p. 1170]). **Theorem 2.1** (**Spherical Koebe distortion theorem)**.: _Let \(0<r_{1},r_{2}<\operatorname{diam}_{sph}\widehat{\mathbb{C}}\). Then there exists a constant \(c>0\) depending only on \(r_{1},r_{2}\), such that for every spherical disc \(D=\mathcal{D}_{sph}(z,r)\) and every univalent holomorphic map \(F\colon D\to\widehat{\mathbb{C}}\) with \(z\in\widehat{\mathbb{C}}\), \(r>0\), \(\operatorname{diam}_{sph}D<r_{1}\) and \(\operatorname{diam}_{sph}(\widehat{\mathbb{C}}\setminus F(D))>r_{2}\), if \(z_{1},z_{2}\in\mathcal{D}_{sph}(z,\lambda r)\) for some \(0<\lambda<1\), then_ \[\frac{|F^{\prime}(z_{1})|_{sph}}{|F^{\prime}(z_{2})|_{sph}}\leq\frac{c}{(1- \lambda)^{4}}.\] ### Orbifolds We recall some facts about hyperbolic orbifolds (for details, see [10, SS19]). Let \(V\subset\mathbb{C}\) be a hyperbolic domain. A _hyperbolic orbifold_ over \(V\) is a pair \((V,\nu)\), where \(\nu\colon V\to\mathbb{N}\) is a function such that the set of points \(z\in V\) with \(\nu(z)>1\) has no accumulation points in \(V\). Every orbifold \((V,\nu)\) has a universal branched covering, i.e. a holomorphic map \(\pi\colon\mathbb{D}\to V\) onto \(V\), such that \[\deg_{w}\pi=\nu(\pi(u))\qquad\text{for $u\in\mathbb{D}$}. \tag{1}\] (see e.g. [14, Theorem E.1]). This implies that the Riemannian metric \[d\rho=\rho|dz|=\pi_{*}(d\varrho_{\mathbb{D}}),\] which is the push-forward under \(\pi\) of the hyperbolic metric \(d\varrho_{\mathbb{D}}\) on \(\mathbb{D}\), is well-defined on \(V\setminus\{z:\nu(z)>1\}\) (if \(\nu(z)=1\), then different points in \(\pi^{-1}(z)\) are related via automorphisms of \(\mathbb{D}\), so the metric is independent of the choice of a point in the fiber). The metric \(d\rho\) is called the _orbifold metric_. Note that if \(\nu(z)=1\) for every \(z\in V\), then \(\pi\) is univalent in a neighbourhood of every point \(w\in\pi^{-1}(z)\), the map \(\pi\) is a universal covering of \(V\) and \(d\rho\) is equal to the hyperbolic metric \(d\varrho_{V}\) on \(V\). If \(\nu(z_{0})>1\) for some \(z_{0}\in V\), then \(d\rho\) has a singularity at \(z_{0}\). More precisely, there exist constants \(c_{1},c_{2}>0\) such that \[\frac{c_{1}}{|z-z_{0}|^{1-1/\nu(z_{0})}}<\rho(z)<\frac{c_{2}}{|z-z_{0}|^{1-1/ \nu(z_{0})}} \tag{2}\] for \(z\) in some punctured neighbourhood of \(z_{0}\) (see [13, p.183, Appendix A]). Note also that we have \[\rho\geq\varrho_{V}\qquad\text{on}\quad V\setminus\{z:\nu(z)>1\}. 
\tag{3}\] This holds due to the fact that the identity map \(V\to V\) defines a holomorphic orbifold map1 from \((V,\nu)\) to the orbifold \((V,\tilde{\nu})\) with \(\tilde{\nu}\equiv 1\) and the orbifold metric equal to \(d\varrho_{V}\), so (3) follows from the Schwarz-Pick Orbifold Lemma (see e.g. [13, Theorem A.3]). Footnote 1: A _holomorphic orbifold map_\(\phi\) between orbifolds \((V,\nu)\) and \((\tilde{V},\tilde{\nu})\) is a holomorphic map \(\phi\colon V\to\tilde{V}\), for which \(\nu(z)\deg_{z}\phi\) is divisible by \(\tilde{\nu}(\phi(z))\) for \(z\in V\). ### Logarithmically convex functions Recall that a function \(g\colon(a,b)\to\mathbb{R}_{+}\), for \(a,b\in\mathbb{R}\), is _logarithmically convex_, if \(\ln g\) is convex. We will use the following facts on non-increasing logarithmically convex functions. **Lemma 2.2**.: _Let \(g\colon(t_{0},+\infty)\to\mathbb{R}_{+}\) for some \(t_{0}\in\mathbb{R}\) be a non-increasing logarithmically convex function. Define inductively a sequence \(t_{n}\in(t_{0},+\infty)\), \(n\in\mathbb{N}\), by choosing some \(t_{1}\in(t_{0},+\infty)\) and setting_ \[t_{n+1}=t_{n}+g(t_{n})\] _for \(n\in\mathbb{N}\). Then:_ 1. \(t_{n}\nearrow+\infty\) _as_ \(n\to\infty\) _and the sequence_ \(\frac{t_{n+1}}{t_{n}}\) _is decreasing for sufficiently large_ \(n\)_,_ 2. _the sequence_ \(\frac{g(t_{n+1})}{g(t_{n})}\) _is non-decreasing and converges to_ \(1\) _as_ \(n\to\infty\)_._ Proof.: Since \(g\) is positive, we have \(t_{n+1}>t_{n}\). Note also that \(g\) is convex and hence continuous on \((t_{0},+\infty)\). This implies \(t_{n}\to+\infty\) as \(n\to\infty\), because otherwise \(t_{n}\to\overline{t}\) for some \(\overline{t}\in\mathbb{R}\) and \(g(\overline{t})=\lim_{n\to\infty}g(t_{n})=\lim_{n\to\infty}(t_{n+1}-t_{n})=0\), which contradicts the fact that \(g\) is positive. As \(t_{n}\) increases to \(+\infty\), we have \(t_{n}>0\) for sufficiently large \(n\). For such \(n\), since \(g\) is positive and non-increasing, the sequence \(\frac{g(t_{n})}{t_{n}}\) is decreasing, so the sequence \[\frac{t_{n+1}}{t_{n}}=1+\frac{g(t_{n})}{t_{n}}\] is decreasing. This proves (a). To show (b), note that as \(g\) is non-increasing, we have \[t_{n+1}-t_{n}=g(t_{n})\leq g(t_{n-1})=t_{n}-t_{n-1},\] which gives \(\frac{t_{n-1}+t_{n+1}}{2}\leq t_{n}\). Consequently, setting \(h=\ln g\), we obtain \[\frac{h(t_{n-1})+h(t_{n+1})}{2}\geq h\Big{(}\frac{t_{n-1}+t_{n+1}}{2}\Big{)}\geq h (t_{n}),\] since \(h\) is convex and non-increasing. This implies \[h(t_{n+1})-h(t_{n})\geq h(t_{n})-h(t_{n-1}),\] so the sequence \[\frac{g(t_{n+1})}{g(t_{n})}=e^{h(t_{n+1})-h(t_{n})}\] is non-decreasing. Since \(\frac{g(t_{n+1})}{g(t_{n})}<1\), as remarked above, it follows that \(\frac{g(t_{n+1})}{g(t_{n})}\to q\) for some \(0<q\leq 1\). If \(q<1\), then for large \(n\) we have \(\frac{g(t_{n+1})}{g(t_{n})}<q^{\prime}\) for some constant \(q^{\prime}<1\), so \(\sum_{n=1}^{\infty}(t_{n+1}-t_{n})=\sum_{n=1}^{\infty}g(t_{n})<\infty\) and, consequently, \(t_{n}\to\overline{t}\) for some \(\overline{t}\in\mathbb{R}\), which is impossible. Hence, \[\frac{g(t_{n+1})}{g(t_{n})}\to 1.\] This ends the proof of (b). ## 3. Attracting and repelling petals at a parabolic fixed point Let \(p\in\mathbb{C}\) be a fixed parabolic point of order \(d\) of a holomorphic map of the form \[G(z)=z+a(z-p)^{d+1}+\cdots\] for \(z\) near \(p\), where \(a\in\mathbb{C}\setminus\{0\}\), \(d\in\mathbb{N}\). 
**Definition 3.1** (**Attracting/repelling petal of a parabolic fixed point**).: By an _attracting petal_ of the map \(G\) at \(p\) we will mean a simply connected domain \(P\) contained in a small neighbourhood of \(p\), such that \(p\in\partial P\), \(\overline{G(P)}\subset P\cup\{p\}\) and \(\bigcap_{n=0}^{\infty}G^{n}(P)=\emptyset\). A simply connected domain \(P\subset\mathbb{C}\) is a _repelling petal_ of the map \(G\) at \(p\), if \(G(P)\) is an attracting petal of a branch \(F\) of \(G^{-1}\) fixing \(p\) (which is well-defined near \(p\)) at \(p\). **Remark 3.2**.: There are several variants of definitions of attracting and repelling petals at parabolic fixed points, which differ in details (see e.g. [10, Definition 10.6] and the discussion afterwards). For \(\varepsilon,\delta>0\) and \(j\in\{0,\ldots,2d-1\}\) let \[U_{j}(\varepsilon,\delta)=\left\{z\in\mathbb{C}\setminus\{p\}:\operatorname{ Arg}(z-p)\in\left(\theta_{j}-\delta,\theta_{j}+\delta\right),\,|z-p|< \varepsilon\right\},\] where \[\theta_{j}=\frac{-\operatorname{Arg}(a)}{d}+\frac{\pi j}{d}\] for odd (resp. even) integers \(j\) are the arguments of _attracting_ (resp. _repelling_) _directions_ of \(p\). The facts described in the following proposition are well-known, see e.g. [1, Chapter II.5], [10, SS10]. **Proposition 3.3**.: 1. _For every_ \(\varepsilon,\delta>0\) _and a compact set_ \(K\) _contained in an attracting petal of_ \(G\) _at_ \(p\)_, there exist an odd integer_ \(j\in\{0,\ldots,2d-1\}\) _and_ \(n_{0}\in\mathbb{N}\) _such that_ \(G^{n}(K)\subset U_{j}(\varepsilon,\delta)\) _for every_ \(n\geq n_{0}\) _._ 2. _There exist_ \(d\) _attracting_ \((\)_resp. repelling_\()\) _petals_ \(P_{j}\) _of_ \(G\) _at_ \(p\)_, for odd_ \((\)_resp. even_\()\) _integers_ \(j\in\{0,\ldots,2d-1\}\)_, with Jordan boundaries, such that_ \(P_{j}\)__\((\)_resp._ \(F(P_{j})\)\()\) _are pairwise disjoint and for every_ \(\delta\in(0,\frac{\pi}{d})\) _one can find_ \(\varepsilon>0\) _such that_ \(P_{j}\supset U_{j}(\varepsilon,\delta)\)_._ See Figure 3. **Proposition 3.4**.: _Let \(P\) be an attracting petal of \(G\) at a parabolic fixed point \(p\) of order \(d\) and let \(\alpha\in[0,1)\). Then the following hold._ 1. \(G\) _is univalent on_ \(P\)_._ 2. \(G^{n}(z)\to p\) _as_ \(n\to\infty\) _for_ \(z\in P\)_._ 3. _For every compact set_ \(K\subset P\) _and_ \(\delta>0\) _there exist_ \(c_{1},c_{2}>0\) _and_ \(n_{0}\geq 0\) _such that for every_ \(z\in\bigcup_{n=n_{0}}^{\infty}G^{n}(K)\)_,_ \[c_{1}|z-p|^{d+1}<|G(z)-z|<c_{2}|z-p|^{d+1},\quad|\operatorname{Arg}(G(z)-z)- \operatorname{Arg}(p-z)|<\delta.\] Proof.: The assertions (a)-(b) follow directly from the definition of an attracting petal at a parabolic fixed point and the Schwarz-Pick lemma. To show (c), note that by Proposition 3.3(c), for sufficiently large \(n_{0}\) we have \(G^{n}(K)\subset U_{j}(\varepsilon,\delta)\) for every \(n\geq n_{0}\), where \(j\in\{0,\ldots,2d-1\}\) is an odd integer and \(\varepsilon,\delta>0\) can be chosen to be arbitrarily small. Hence, the assertion follows directly from the properties of \(G\) and \(U_{j}(\varepsilon,\delta)\). **Remark 3.5**.: Note that by elementary geometry, Proposition 3.4 implies that for every compact set \(K\subset P\) there exist \(c>0\) and \(n_{0}\geq 0\) such that \[|G(z)-p|<|z-p|,\qquad|z-G(z)|\leq c(|z-p|-|G(z)-p|)\] for every \(z\in\bigcup_{n=n_{0}}^{\infty}G^{n}(K)\). 
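As a concrete illustration of Propositions 3.3 and 3.4 (a toy example of ours, not taken from the paper), one can iterate the model map \(G(z)=z+z^{2}\), which has a parabolic fixed point at \(p=0\) with \(a=1\) and \(d=1\); the attracting direction is \(\operatorname{Arg}(z-p)=\pi\), and along an orbit in the attracting petal one expects \(n\,|G^{n}(z)-p|^{d}\to 1/(|a|d)=1\), the asymptotics invoked in the proof of Proposition 3.6 below.

```python
# Toy numerical check (not from the paper) for the model map G(z) = z + z^2,
# which has a parabolic fixed point p = 0 with a = 1, d = 1.
def G(z):
    return z + z**2

z = -0.3 + 0.05j          # a point near the attracting direction Arg(z) = pi
for n in range(1, 20001):
    z = G(z)
    if n in (10, 100, 1000, 10000, 20000):
        # |z| decreases to 0 along the negative real axis,
        # and n*|z| slowly approaches 1/(|a|*d) = 1
        print(n, abs(z), n * abs(z))
```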
Near a parabolic fixed point \(p\in\mathbb{C}\) we will consider a family of conformal metrics in \(\mathbb{C}\setminus\{p\}\) given by \[d\sigma_{p,\alpha}=\sigma_{p,\alpha}(z)|dz|=\frac{|dz|}{|z-p|^{\alpha}},\qquad \alpha\in[0,1).\] Note that for \(\alpha=0\) the metric coincides with the Euclidean one. Now we show that these metrics are locally contracting (resp. expanding) in attracting (resp. repelling) petals at a parabolic fixed point. Figure 3. Attracting and repelling petals of a holomorphic map at a parabolic fixed point \(p\), with \(a=1\) and \(d=3\). **Proposition 3.6** (**Contraction properties in attracting petals at parabolic fixed points)**.: _Let \(P\) be an attracting petal of a map \(G\) at a parabolic fixed point \(p\), let \(K\subset P\) be a compact set and let \(\alpha\in[0,1)\). Then there exists \(n_{0}\in\mathbb{N}\) such that for every \(m\geq n_{0}\) there is a decreasing sequence \((a_{m,n})_{n=1}^{\infty}\), such that \(0<a_{m,n}<1\), \(\sum_{n=1}^{\infty}a_{m,n}<\infty\), \(\frac{a_{m,n+1}}{a_{m,n}}>\frac{a_{m,n}}{a_{m,n-1}}\) for \(n>1\) and_ \[|(G^{n})^{\prime}(z)|_{\sigma_{p,\alpha}}<a_{m,n}\quad\text{for every }z\in G^{m}(K),\;n\in\mathbb{N}.\] Proof.: Suppose \(G(z)=z+a(z-p)^{d+1}+\cdots\) for \(z\) near \(p\), where \(a\in\mathbb{C}\setminus\{0\}\), \(d\in\mathbb{N}\). Let \(P\) be an attracting petal of \(G\) at \(p\). Take a compact set \(K\subset P\). By Proposition 3.3(c), for sufficiently large \(n_{0}\) we have \(G^{k}(K)\subset U_{j}(\varepsilon,\frac{\delta}{d})\) for every \(k\geq n_{0}\), where \(j\in\{0,\ldots,2d-1\}\) is an odd integer and \(\varepsilon,\delta>0\) can be chosen to be arbitrarily small. Then for \(w\in K\) and \(k\geq n_{0}\), we have \(\operatorname{Arg}(a(G^{k}(w)-p)^{d})\in(\pi-\delta,\pi+\delta)\), so \[|G^{\prime}(G^{k}(w))|_{\sigma_{p,\alpha}} =\frac{|G^{k}(w)-p|^{\alpha}|1+(d+1)a(G^{k}(w)-p)^{d}+\cdots|}{|G^ {k}(w)-p+a(G^{k}(w)-p)^{d+1}+\cdots|^{\alpha}}\] \[=|1+(d+1-\alpha)a(G^{k}(w)-p)^{d}+\cdots|\] \[<1-\beta|a||G^{k}(w)-p|^{d},\] where \(\beta\in(d,d+1-\alpha)\) is a fixed number, \(n_{0}\) is chosen sufficiently large and \(\varepsilon,\delta>0\) are chosen sufficiently small. Fix a number \(b\in(1,\frac{\beta}{d})\). By [20, Lemma 10.1], the sequence \(k|G^{k}-p|^{d}\) converges uniformly on \(K\) to \(\frac{1}{|a|d}\) as \(k\to\infty\), so \[|G^{k}(w)-p|^{d}>\frac{b}{\beta|a|k},\] if \(n_{0}\) is chosen sufficiently large. Hence, \[|G^{\prime}(G^{k}(w))|_{\sigma_{p,\alpha}}<1-\frac{b}{k}\] for \(w\in K\) and \(k\geq n_{0}\). For \(m\geq n_{0}\) define \[a_{m,n}=\prod_{k=m}^{n+m-1}\Big{(}1-\frac{b}{k}\Big{)}.\] Then for \(z\in G^{m}(K)\) and \(n\in\mathbb{N}\), taking \(w\in G^{-m}(z)\cap K\) we have \[|(G^{n})^{\prime}(z)|_{\sigma_{p,\alpha}}=\prod_{k=m}^{n+m-1}|(G^{\prime}(G^{ k}(w))|_{\sigma_{p,\alpha}}<a_{m,n}.\] By definition, \(0<a_{m,n}<1\) and \((a_{m,n})_{n=1}^{\infty}\) is decreasing. Moreover, \[a_{m,n}<e^{-b\sum_{k=m}^{n+m-1}1/k}<\frac{2m^{b}}{(n+m)^{b}}\] if \(n_{0}\) is chosen sufficiently large, so the series \(\sum_{n=1}^{\infty}a_{m,n}\) is convergent since \(b>1\). Furthermore, \[\frac{a_{m,n+1}}{a_{m,n}}=1-\frac{b}{n}>1-\frac{b}{n-1}=\frac{a_{m,n}}{a_{m,n- 1}}\] for \(n>1\). ## 4. Attracting and repelling petals at infinity In analogy to the properties of attracting/repelling petals at parabolic fixed points described in Proposition 3.4, we introduce a notion of attracting/repelling petals at infinity of holomorphic maps defined on unbounded domains. 
**Definition 4.1** (**Attracting/repelling petal at infinity**).: Let \(P\subset\mathbb{C}\) be an unbounded simply connected domain and let \(G\colon P\to\mathbb{C}\) be a holomorphic map extending to a continuous map from \(\overline{P}\) into \(\mathbb{C}\). We call the domain \(P\) an _attracting petal of \(G\) at infinity_, if 1. \(\overline{G(P)}\subset P\), 2. \(\bigcap_{n=0}^{\infty}G^{n}(P)=\emptyset\), 3. for every compact set \(K\subset P\) there exist \(c_{1},c_{2}>0\), \(0<\delta<\frac{\pi}{2}\), \(n_{0}\in\mathbb{N}\) and a non-increasing logarithmically convex function \(g\colon(t_{0},+\infty)\to\mathbb{R}_{+}\), \(t_{0}>0\), with \(\{|z|:z\in\bigcup_{n=n_{0}}^{\infty}G^{n}(K)\}\subset(t_{0},+\infty)\), such that \[c_{1}g(|z|)<|G(z)-z|<c_{2}g(|z|),\qquad|\operatorname{Arg}(G(z)-z)- \operatorname{Arg}(z)|<\delta\] for \(z\in\bigcup_{n=n_{0}}^{\infty}G^{n}(K)\). See Figure 4. An unbounded simply connected domain \(P\subset\mathbb{C}\) is a _repelling petal at infinity_ of a holomorphic map \(F\colon P\to\mathbb{C}\), if \(F\) is univalent and \(F(P)\) is an attracting petal at infinity of the map \(G=F^{-1}\). We also say that \(P\) is an attracting/repelling petal at the point \(p=\infty\). **Remark 4.2**.: Typical examples of logarithmically convex functions \(g\) which can be used in Definition 4.1(c) are: \[g(t) =1,\] \[g(t) =\frac{1}{t^{a}},\qquad a>0,\] \[g(t) =e^{a/t^{b}},\quad a,b>0,\] \[g(t) =e^{-at^{b}},\quad a>0,\ 0<b\leq 1.\] Analogously to the properties described in Remark 3.5, the following hold. Figure 4. Location of \(G(z)\) with respect to \(z\) in an attracting petal of \(G\) at infinity. **Proposition 4.3**.: _Let \(P\subset\mathbb{C}\) be an attracting petal at infinity of a map \(G\). Then the following hold._ 1. \(G^{n}(z)\to\infty\) _as_ \(n\to\infty\) _for_ \(z\in P\)_._ 2. _For every compact set_ \(K\subset P\) _there exist_ \(c>0\) _and_ \(n_{0}\geq 0\) _such that_ \[|z|<|G(z)|\leq|z|+c,\qquad|G(z)-z|\leq c(|G(z)|-|z|).\] _for every_ \(z\in\bigcup_{n=n_{0}}^{\infty}G^{n}(K)\)_._ Proof.: The statement (a) and the second estimate in (b) follow directly from Definition 4.1 and elementary geometry. Together with the fact that \(g\) is non-increasing, this implies \(|z|<|G(z)|\leq|z|+c\) for a suitable \(c>0\). Examples of attracting and repelling petals at infinity are presented in the following proposition. **Proposition 4.4**.: _Let_ \[V_{j}(r,\delta,d,a)=\left\{z\in\mathbb{C}:\operatorname{Arg}(z)\in(\theta_{j} -\delta,\theta_{j}+\delta),\,|z|>r\right\},\] _where \(r,\delta>0\), \(j\in\{0,\ldots,2d-1\}\), \(d\in\mathbb{N}\), and_ \[\theta_{j}=\frac{\operatorname{Arg}(a)}{d}+\frac{\pi j}{d}\] _for \(a\in\mathbb{C}\setminus\{0\}\). 
Suppose \(P\subset\mathbb{C}\) is an unbounded simply connected domain and \(G\colon P\to P\) is a holomorphic map extending continuously to \(\overline{P}\), such that \(\overline{G(P)}\subset P\subset V_{j}(r,\delta,d,a)\) for some \(0<\delta<\frac{\pi}{d}\), a large number \(r>0\) and an even integer \(j\in\{0,\ldots,2d-1\}\), where_ \[G(z)=z+\frac{a}{z^{d-1}}+o\left(\frac{1}{|z|^{d-1}}\right)\qquad\text{for }\;z\in P\quad\text{as }|z|\to\infty.\] _Then \(P\) is an attracting petal of \(G\) at infinity._ _Analogously, if \(F\colon P\to\mathbb{C}\) is a univalent map such that \(F^{-1}\) extends continuously to \(\overline{F(P)}\) and \(\overline{P}\subset F(P)\subset V_{j}(r,\delta,d,a)\) for some \(0<\delta<\frac{\pi}{d}\), a large number \(r>0\) and an odd integer \(j\in\{0,\ldots,2d-1\}\), where_ \[F(z)=z+\frac{a}{z^{d-1}}+o\left(\frac{1}{|z|^{d-1}}\right)\qquad\text{for }\;z\in P\quad\text{as }|z|\to\infty,\] _then \(P\) is a repelling petal of \(F\) at infinity._ Proof.: We proceed as in the case of parabolic fixed points (see e.g. [1, Chapter II.5]). Consider first the case \(d=a=1\). Then, given a compact set \(K\subset P\), we have \(K\subset V_{0}(r,\delta,d,a)\), where \(0<\delta<\pi\) and \[G(z)=z+1+o(1)\qquad\text{for }z\in P\quad\text{as }|z|\to\infty.\] Assuming \(r\) sufficiently large, we see that \(K\subset V^{\prime}\), where \[V^{\prime}=\{z\in\mathbb{C}\setminus\{r^{\prime}\}:\operatorname{Arg}(z-r^{ \prime})\in(-\delta^{\prime},\delta^{\prime})\}\] for a large \(r^{\prime}>0\) and \(\delta<\delta^{\prime}<\pi\). Then \(\operatorname{Re}(G(z))>\operatorname{Re}(z)+\frac{1}{2}\) and \(|\operatorname{Arg}(G(z)-z)|<\min(\delta^{\prime},\frac{\pi}{5})\) for \(z\in P\cap V^{\prime}\). This implies that \(G(P\cap V^{\prime})\subset P\cap V^{\prime}\) and, consequently, there exists \(n_{0}\in\mathbb{N}\) such that \(G^{n}(K)\subset V_{0}(r,\frac{\pi}{4})\) for every \(n\geq n_{0}\). Consequently, \[|\operatorname{Arg}(G(z)-z)-\operatorname{Arg}(z)|<\frac{\pi}{3}\qquad\text{ for }z\in G^{n}(K)\] for every \(n\geq n_{0}\) by the definition of \(V_{0}(r,\frac{\pi}{4})\). This proves the second condition from Definition 4.1(c). Taking \(g(t)=\frac{1}{t^{d-1}}\), we immediately obtain the first condition. The case \(a\neq 1\), \(d=1\) can be reduced to the previous one by a linear change of coordinates \(w=\frac{z}{a}\). In the case \(d>1\), a change of coordinates \(w=bz^{d}\) on \(V_{j}(r,\delta,d,a)\) for a suitable \(b\in\mathbb{C}\) reduces it to the case \(d=1\). To deal with the case of a repelling petal, it is sufficient to note that if \(F\) is univalent with \[F(z)=z+\frac{a}{z^{d-1}}+o\left(\frac{1}{|z|^{d-1}}\right)\] on \(P\subset\overline{P}\subset F(P)\subset V_{j}(r,\delta,d,a)\) for an odd integer \(j\in\{0,\dots,2d-1\}\), then \[G(z)=F^{-1}(z)=z-\frac{a}{z^{d-1}}+o\left(\frac{1}{|z|^{d-1}}\right)\] on \(F(P)\subset V_{j}(r,\delta,d,a)=V_{j^{\prime}}(r,\delta,d,-a)\), where \(j^{\prime}=j\pm 1\) is an even integer in \(\{0,\dots,2d-1\}\). In particular, Proposition 4.4 provides the following example, which will be considered in detail in Section 8. **Example 4.5**.: Let \(f(z)=z-\tan z\), Newton's method for \(\sin z\). Then the half-planes \(P_{\pm}=\{z\in\mathbb{C}:\pm\mathrm{Im}(z)>M\}\) for sufficiently large \(M>0\) are repelling petals of \(f\) at infinity. In the further considerations, we will use the following lemma. **Lemma 4.6**.: _Let \(P\subset\mathbb{C}\) be an attracting petal at infinity of a map \(G\). 
Consider a function \(g\) from Definition 4.1 for a compact set \(K\subset P\) and let \((t_{n})_{n=1}^{\infty}\) be the sequence defined in Lemma 2.2. Then there exists \(M\in\mathbb{N}\) such that for every \(z\in K\) and \(n\in\mathbb{N}\),_ 1. _the interval_ \([t_{n},t_{n+1}]\) _contains less than_ \(M\) _numbers_ \(|G^{k}(z)|\)_,_ \(k\geq 0\)_,_ 2. _the interval_ \(\mathrm{conv}\{|G^{n}(z)|,|G^{n+1}(z)|\}\) _contains less than_ \(M\) _numbers_ \(t_{k}\)_,_ \(k\geq 1\)_._ Proof.: Take \(n_{0}\) satisfying the conditions of Definition 4.1 and Proposition 4.3, chosen for the set \(K\). To show (a), suppose \([t_{n},t_{n+1}]\) contains \(N\) numbers \(|G^{k}(z)|\), \(k\geq 0\), for some \(z\in K\) and \(N>n_{0}+1\). Then \([t_{n},t_{n+1}]\) contains at least \(N-n_{0}\) numbers \(|G^{k}(z)|\), \(k\geq n_{0}\), so by Definition 4.1 and Proposition 4.3, there exist \(k_{0}\geq n_{0}\) and \(c>0\) such that \[t_{n}\leq|G^{k_{0}}(z)|\leq\dots\leq|G^{k_{0}+N-n_{0}-1}(z)|\leq t_{n+1}\] and \[g(t_{n}) =t_{n+1}-t_{n}\geq|G^{k_{0}+N-n_{0}-1}(z)|-|G^{k_{0}}(z)|\] \[=|G^{k_{0}+N-n_{0}-1}(z)|-|G^{k_{0}+N-n_{0}-2}(z)|+\dots+|G^{k_{0} +1}(z)|-|G^{k_{0}}(z)|\] \[\geq c(g(|G^{k_{0}+N-n_{0}-2}(z)|)+\dots+g(|G^{k_{0}}(z)|))\geq c (N-n_{0}-1)g(t_{n+1}),\] so \(\frac{g(t_{n+1})}{g(t_{n})}\leq\frac{1}{c(N-n_{0}-1)}\). By Lemma 2.2(b), the sequence \(\frac{g(t_{n+1})}{g(t_{n})}\) is bounded from below by a positive constant, which implies that \(N\) is bounded from above by some \(M>0\) independent of \(n\in\mathbb{N}\). To show (b), note that by Definition 4.1 and Proposition 4.3, there exists \(c>0\) such that \(g(|G^{n}(z)|)\geq c(|G^{n+1}(z)|-|G^{n}(z)|)\) for every \(z\in K\) and \(n\geq n_{0}\). Let \[q=\frac{1}{1+c/2}.\] Since \(q<1\) and the sequence \(\frac{g(t_{k+1})}{g(t_{k})}\) converges to \(1\) by Lemma 2.2, we can find \(k_{0}\in\mathbb{N}\) such \[\frac{g(t_{k+1})}{g(t_{k})}\geq q\qquad\text{for }k\geq k_{0}.\] Suppose \(\operatorname{conv}\{|G^{n}(z)|,|G^{n+1}(z)|\}\) contains \(N\) numbers \(t_{k}\), \(k\geq 1\), for some \(z\in K\) and a large \(N>n_{0}\). Since \(G^{n}(K)\) is bounded and \(t_{k}\to\infty\) as \(k\to\infty\), we can assume \(n\geq n_{0}\) and \(|G^{n}(z)|\geq t_{k_{0}}\). Then by Definition 4.1, Proposition 4.3 and the fact that the sequence \(\frac{g(t_{n+1})}{g(t_{n})}\) is non-decreasing by Lemma 2.2, \[t_{k_{0}}\leq|G^{n}(z)|\leq t_{k_{0}+1}\leq\ldots\leq t_{k_{0}+N}\leq|G^{n+1}(z)|\] and \[g(t_{k_{0}}) \geq g(|G^{n}(z)|)\] \[\geq c(|G^{n+1}(z)|-|G^{n}(z)|)\geq c(t_{k_{0}+N}-t_{k_{0}+1})\] \[=c(t_{k_{0}+N}-t_{k_{0}+N-1}+\cdots+t_{k_{0}+2}-t_{k_{0}+1})\] \[=c(g(t_{k_{0}+1})+\cdots+g(t_{k_{0}+N-1}))\] Consequently, we have \[\frac{q}{2(1-q)} =\frac{1}{c}\geq\frac{g(t_{k_{0}+1})}{g(t_{k_{0}})}+\cdots+\frac {g(t_{k_{0}+N-1})}{g(t_{k_{0}})}\] \[\qquad=\frac{g(t_{k_{0}+1})}{g(t_{k_{0}})}+\cdots+\frac{g(t_{k_{0 }+1})}{g(t_{k_{0}})}\cdots\frac{g(t_{k_{0}+N-1})}{g(t_{k_{0}+N-2})}\] \[\qquad\geq\frac{g(t_{k_{0}+1})}{g(t_{k_{0}})}+\cdots+\Big{(}\frac {g(t_{k_{0}+1})}{g(t_{k_{0}})}\Big{)}^{N-1}\] \[\qquad\geq q+\cdots+q^{N-1}=q\frac{1-q^{N-1}}{1-q}.\] Hence, \[q^{N-1}\geq\frac{1}{2},\] so \[N\leq-\frac{\ln 2}{\ln q}+1.\] Analogously to the case of petals at parabolic fixed points, we consider a family of conformal metrics given by \[d\sigma_{\alpha}=\sigma_{\alpha}(z)|dz|=\frac{|dz|}{|z|^{\alpha}},\qquad \alpha>1,\] for \(z\in\mathbb{C}\) near infinity, which have some contracting (resp. expanding) property inside attracting (resp. repelling) petals at infinity. 
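As a quick sanity check on the simplest model (this computation is ours and is not part of the paper's proofs), consider the attracting petal of Proposition 4.4 with \(d=1\), \(a=1\), where the dynamics is essentially the translation \(G(z)=z+1\) on a sector around \(\mathbb{R}^{+}\). For the pure translation, \((G^{n})^{\prime}\equiv 1\) and \(G^{n}(z)=z+n\), so, since \(\operatorname{Re}(z)>0\) in the sector, \[|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}=\frac{\sigma_{\alpha}(G^{n}(z))}{\sigma_{\alpha}(z)}=\frac{|z|^{\alpha}}{|z+n|^{\alpha}}\leq\frac{|z|^{\alpha}}{n^{\alpha}},\] and \(\sum_{n=1}^{\infty}|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}<\infty\) precisely because \(\alpha>1\). The sequences \(a_{m,n}\) in Theorem 4.7 below provide summable bounds of this kind in the general case.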
The following theorem, showing the contracting/expanding properties of the metric \(\sigma_{\alpha}\) on attracting/repelling petals at infinity, is one of the main tools used to prove the local connectivity of the boundaries of the Fatou components considered in this paper. **Theorem 4.7** (**Contraction properties in attracting petals at infinity**).: _Let \(P\) be an attracting petal at infinity of a map \(G\), let \(K\subset P\) be a compact set and let \(\alpha>1\). Then there exist \(A>0\) and \(n_{0}\in\mathbb{N}\) such that for every \(m\geq n_{0}\) there is a decreasing sequence \((a_{m,n})_{n=1}^{\infty}\) such that \(0<a_{m,n}\leq A\), \(\sum_{n=1}^{\infty}a_{m,n}<\infty\), \(\frac{a_{m,n+1}}{a_{m,n}}\geq\frac{a_{m,n}}{a_{m,n-1}}\) for \(n>1\) and_ \[|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}<a_{m,n}\quad\text{for every $z\in G^{m}(K)$},\;n\in \mathbb{N}.\] Proof.: Take a compact set \(K\subset P\). By Cowen's Theorem [13, Theorem 3.2] (see also [13, Theorem 2.7]), there exist a holomorphic map \(\psi\colon P\to\Omega\) (where \(\Omega\subset\mathbb{C}\) is an open horizontal strip, an open upper half-plane or the plane) and a domain \(V\subset P\), such that: 1. \(\psi(G(z))=\psi(z)+1\) for \(z\in P\), 2. \(\psi\) is univalent on \(V\), 3. for every compact set \(K_{1}\subset P\) there exists \(n_{1}\in\mathbb{N}\) such that \(G^{n}(K_{1})\subset V\) for \(n\geq n_{1}\), 4. for every compact set \(K_{2}\subset\Omega\) there exists \(n_{2}\in\mathbb{N}\) such that \(K_{2}+n\subset\psi(V)\) for \(n\geq n_{2}\). By (iii), we can choose \(n_{1}\in\mathbb{N}\) such that \(G^{m}(K)\subset V\) for \(m\geq n_{1}\). Note that by (i) and the definition of \(\Omega\), \[L=\operatorname{conv}(\psi(G^{n_{1}}(K)\cup G^{n_{1}+1}(K)))=\operatorname{ conv}(\psi(G^{n_{1}}(K))\cup(\psi(G^{n_{1}}(K))+1))\] is a compact subset of \(\Omega\), and so is the set \(K_{2}=\overline{\bigcup_{w\in L}\mathbb{D}(w,\varepsilon)}\) for a small \(\varepsilon>0\). Therefore, by (i) and (iv), there exists \(n_{0}>n_{1}\) such that \[\bigcup\{\mathbb{D}(w,\varepsilon):w\in\operatorname{conv}(\psi(G^{m+n}(K) \cup G^{m+n+1}(K)))\}=K_{2}+m+n-n_{1}\subset\psi(V) \tag{4}\] for \(m\geq n_{0}\) and \(n\geq 0\). Enlarging \(n_{0}\), we can assume that the properties listed in Definition 4.1 and Proposition 4.3 hold for every \(z\in\bigcup_{m=n_{0}}^{\infty}G^{m}(K)\). Choose a point \(z_{0}\in K\) and let \[z_{l}=G^{l}(z_{0})\in G^{l}(K)\] for \(l\geq n_{0}\). Consider \[z\in G^{m}(K)\qquad\text{for $m\geq n_{0}$}.\] This notation will be used throughout the subsequent part of the proof. **Convention**.: By \(c_{0},c_{1},\ldots\) we denote constants independent of \(z\in G^{m}(K)\), \(m\geq n_{0}\) and \(n\geq 0\). Moreover, we write \(g_{m,n}(z)\asymp h_{m,n}(z)\) if \[\frac{1}{c}<\frac{g_{m,n}(z)}{h_{m,n}(z)}<c\] for a constant \(c>0\) independent of \(z\in G^{m}(K)\), \(m\geq n_{0}\) and \(n\geq 0\). By (4) and the Koebe Distortion Theorem, \(\psi^{-1}\) is defined on \(\operatorname{conv}(\psi(G^{m+n}(K)\cup G^{m+n+1}(K)))\) with distortion bounded by a constant independent of \(m\geq n_{0},n\geq 0\). In particular, \[|\psi^{\prime}(G^{n}(z))|\asymp|\psi^{\prime}(z_{m+n})| \tag{5}\] for \(n\geq 0\). Moreover, by (4), the bounded distortion of \(\psi^{-1}\) and the fact \(\psi(G^{n+1}(z))-\psi(G^{n}(z))=1\), we obtain \[|G^{n+1}(z)-G^{n}(z)|\asymp|z_{m+n+1}-z_{m+n}|\asymp\frac{1}{|\psi^{\prime}(z_ {m+n})|}. \tag{6}\] Let \(w\in G^{n_{0}}(K)\) be such that \(z=G^{m-n_{0}}(w)\). 
By (6), Proposition 4.3 and the compactness of \(G^{n_{0}}(K)\), \[|z_{m+n}| \leq|z_{m+n}-z_{m+n-1}|+\cdots+|z_{n_{0}+1}-z_{n_{0}}|+|z_{n_{0}}|\] \[\leq c_{1}(|G^{m-n_{0}+n}(w)-G^{m-n_{0}+n-1}(w)|+\cdots+|G(w)-w|+| w|)\] \[\leq c_{2}(|G^{m-n_{0}+n}(w)|-|G^{m-n_{0}+n-1}(w)|+\cdots+|G(w)|-| w|+|w|)\] \[=c_{2}|G^{m-n_{0}+n}(w)|=c_{3}|G^{n}(z)|\] for constants \(c_{1},c_{2}>0\). Analogously, \[|G^{n}(z)| =|G^{m-n_{0}+n}(w)|\] \[\leq|G^{m-n_{0}+n}(w)-G^{m-n_{0}+n-1}(w)|+\cdots+|G(w)-w|+|w|\] \[\leq c_{3}(|z_{m+n}-z_{m+n-1}|+\cdots+|z_{n_{0}+1}-z_{n_{0}}|+|z_ {n_{0}}|)\] \[\leq c_{4}(|z_{m+n}|-|z_{m+n-1}|+\cdots+|z_{n_{0}+1}|-|z_{n_{0}}|+ |z_{n_{0}}|)=c_{4}|z_{m+n}|\] for constants \(c_{3},c_{4}>0\). We conclude \[|G^{n}(z)|\asymp|z_{m+n}|. \tag{7}\] Furthermore, by (i), \[|(G^{n})^{\prime}(z)| =|\psi^{\prime}(z)||(\psi^{-1})^{\prime}(\psi(z)+n)|=\frac{|\psi^ {\prime}(z)|}{|\psi^{\prime}(G^{n}(z))|},\] \[|(G^{n})^{\prime}(z_{m})| =|\psi^{\prime}(z_{m})||(\psi^{-1})^{\prime}(\psi(z_{m})+n)|=\frac {|\psi^{\prime}(z_{m})|}{|\psi^{\prime}(z_{m+n}))|},\] which together with (5) and (6) gives \[|(G^{n})^{\prime}(z)|\asymp|(G^{n})^{\prime}(z_{m})|\asymp\frac{|G^{n+1}(z)-G^{ n}(z)|}{|G(z)-z|}\asymp\frac{|z_{m+n+1}-z_{m+n}|}{|z_{m+1}-z_{m}|}. \tag{8}\] Fix \(\alpha>1\). Using (7) and (8) we obtain \[|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}=\frac{|z|^{\alpha}|(G^{n})^{\prime}(z) |}{|G^{n}(z)|^{\alpha}}\leq c_{5}\frac{|z_{m}|^{\alpha}|z_{m+n+1}-z_{m+n}|}{|z _{m+n}|^{\alpha}|z_{m+1}-z_{m}|} \tag{9}\] for \(n\in\mathbb{N}\) and some constant \(c_{5}>0\) (note that the metric \(\sigma_{\alpha}\) is well-defined at \(z\) and \(G^{n}(z)\) if \(n_{0}\) is chosen sufficiently large). Consider now the function \(g\) from Definition 4.1 for the set \(K\), and the sequence \((t_{j})_{j=1}^{\infty}\) defined in Lemma 2.2. Enlarging \(n_{0}\) if necessary, we can assume \(t_{1}\leq|z_{n_{0}}|\). Note that the sequence \(|z_{l}|\), \(l\geq n_{0}\), is increasing by Proposition 4.3. Let \[j_{l}=\max\{j\in\mathbb{N}:t_{j}\leq|z_{l}|\},\qquad l\geq n_{0}.\] By definition, \[t_{j_{l}}\leq|z_{l}|<t_{j_{l}+1} \tag{10}\] and the sequence \((j_{l})_{l=n_{0}}^{\infty}\) (and also \((t_{j_{l}})_{l=n_{0}}^{\infty}\)) is non-decreasing. Choosing \(n_{0}\) sufficiently large, we can assume \(t_{j_{n_{0}}}>0\) and, by Lemma 2.2, \[1>\frac{t_{j}}{t_{j+1}}>\frac{t_{j-1}}{t_{j}}>q,\qquad 1\geq\frac{g(t_{j+1})}{g(t_ {j})}\geq\frac{g(t_{j})}{g(t_{j-1})}>\frac{1}{2} \tag{11}\] for \(j\geq j_{n_{0}}-1\) and some constant \(q>0\). Let \(m\geq n_{0}\), \(n\in\mathbb{N}\). Note that Lemma 4.6 implies that there exists an integer \(M>1\), such that \[j_{m+n}\geq j_{m}+\frac{n+1}{M}-1\geq j_{m}+\left\lfloor\frac{n-2}{M}\right\rfloor -1. 
\tag{12}\] Using consecutively (9), Definition 4.1, (10), (11) and (12), we obtain \[\begin{split}|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}& \leq c_{6}\frac{|z_{m}|^{\alpha}g(|z_{m+n}|)}{|z_{m+n}|^{\alpha}g(|z_ {m}|)}<c_{6}\frac{t_{j_{m}+1}^{\alpha}g(t_{j_{m+n}})}{t_{j_{m+n}}^{\alpha}g(t_{ j_{m}+1})}\\ &<c_{7}\frac{t_{j_{m}}^{\alpha}g(t_{j_{m+n}})}{t_{j_{m+n}}^{ \alpha}g(t_{j_{m}})}\leq c_{7}\frac{t_{j_{m}}^{\alpha}g\big{(}t_{j_{m}+\lfloor \frac{n-2}{M}\rfloor-1}\big{)}}{t_{j_{m}+\lfloor\frac{n-2}{M}\rfloor-1}^{ \alpha}g(t_{j_{m}})}\end{split} \tag{13}\] for some constants \(c_{6},c_{7}>0\), so that \[|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}<\tilde{a}_{m,n} \tag{14}\] for \(n\in\mathbb{N}\), where \[\tilde{a}_{m,n}=c_{7}\frac{t_{j_{m}}^{\alpha}g\big{(}t_{j_{m}+\lfloor\frac{n-2 }{M}\rfloor-1}\big{)}}{t_{j_{m}+\lfloor\frac{n-2}{M}\rfloor-1}^{\alpha}g(t_{ j_{m}})}.\] Note that by definition, \[\begin{split}\tilde{a}_{m,1}&=c_{7}\frac{t_{j_{m}}^ {\alpha}g(t_{j_{m}-2})}{t_{j_{m}-2}^{\alpha}g(t_{j_{m}})},\\ \tilde{a}_{m,kM+r}&=\tilde{a}_{m,kM+1}& =c_{7}\frac{t_{j_{m}}^{\alpha}g(t_{j_{m}+k-1})}{t_{j_{m}+k-1}^{ \alpha}g(t_{j_{m}})}\qquad\text{for }k\geq 0,\ r\in\{2,\ldots,M+1\},\end{split} \tag{15}\] so by (11), \[\tilde{a}_{m,1}>\tilde{a}_{m,2}=\cdots=\tilde{a}_{m,M+1}>\tilde{a}_{m,M+2}= \cdots=\tilde{a}_{m,2M+1}>\cdots\] and \[0<\tilde{a}_{m,n}\leq\tilde{a}_{m,1}\leq A\] for \[A=\frac{4c_{7}}{q^{2\alpha}}.\] Moreover, (15) and (11) imply \[1>\frac{\tilde{a}_{m,(k+1)M+1}}{\tilde{a}_{m,kM+1}}\geq\frac{\tilde{a}_{m,kM+1 }}{\tilde{a}_{m,(k-1)M+1}} \tag{16}\] and \[\tilde{a}_{m,1}+\cdots+\tilde{a}_{m,(k+1)M+1} =c_{7}+c_{7}M\frac{t_{j_{m}}^{\alpha}}{g(t_{j_{m}})}\left(\frac{g(t _{j_{m}})}{t_{j_{m}}^{\alpha}}+\cdots+\frac{g(t_{j_{m}+k-1})}{t_{j_{m}+k-1}^{ \alpha}}\right)\] \[<c_{7}+\frac{c_{7}M}{q^{\alpha}}\frac{t_{j_{m}}^{\alpha}}{g(t_{j _{m}})}\left(\frac{g(t_{j_{m}})}{t_{j_{m}+1}^{\alpha}}+\cdots+\frac{g(t_{j_{m}+ k-1})}{t_{j_{m}+k}^{\alpha}}\right)\] \[=c_{7}+\frac{c_{7}M}{q^{\alpha}}\frac{t_{j_{m}}^{\alpha}}{g(t_{j _{m}})}\left(\frac{t_{j_{m}+1}-t_{j_{m}}}{t_{j_{m}+1}^{\alpha}}+\cdots+\frac{t_ {j_{m}+k}-t_{j_{m}+k-1}}{t_{j_{m}+k}^{\alpha}}\right)\] \[<c_{7}+\frac{c_{7}M}{q^{\alpha}}\frac{t_{j_{m}}^{\alpha}}{g(t_{j _{m}})}\left(\int_{t_{j_{m}}}^{t_{j_{m}+1}}\frac{dt}{t^{\alpha}}+\cdots+\int_{ t_{j_{m}+k-1}}^{t_{j_{m}+k}}\frac{dt}{t^{\alpha}}\right)\] \[=c_{7}+\frac{c_{7}M}{q^{\alpha}}\frac{t_{j_{m}}^{\alpha}}{g(t_{j _{m}})}\int_{t_{j_{m}}}^{t_{j_{m}+k}}\frac{dt}{t^{\alpha}}<c_{7}+\frac{c_{7}M} {q^{\alpha}}\frac{t_{j_{m}}^{\alpha}}{g(t_{j_{m}})}\int_{t_{j_{m}}}^{\infty} \frac{dt}{t^{\alpha}}<\infty\] for \(k\in\mathbb{N}\). Hence, \[\sum_{n=1}^{\infty}\tilde{a}_{m,n}<\infty.\] For \(m\geq n_{0}\) define a sequence \((a_{m,n})_{n=1}^{\infty}\) setting \[a_{m,kM+s}=\tilde{a}_{m,kM+1}\left(\frac{\tilde{a}_{m,(k+1)M+1}}{\tilde{a}_{m,kM+1}}\right)^{\frac{s-1}{M}}\] for \(k\geq 0\), \(s\in\{1,\ldots,M\}\). By (16), \[\tilde{a}_{m,M(k+1)+1}=a_{m,M(k+1)+1}<a_{m,kM+s}\leq a_{m,Mk+1}=\tilde{a}_{m,Mk +1},\] which implies \[\sum_{n=1}^{\infty}a_{m,n}=\sum_{k=0}^{\infty}\sum_{s=1}^{M}a_{m,kM+s}\leq M \sum_{k=0}^{\infty}\tilde{a}_{m,Mk+1}<\infty.\] and (together with (15)) \[0<\tilde{a}_{m,n}\leq a_{m,n}\leq A.\] Note that this and (14) imply \[|(G^{n})^{\prime}(z)|_{\sigma_{\alpha}}<a_{m,n}\] for \(z\in G^{m}(K)\), \(m\geq n_{0}\), \(n\in\mathbb{N}\). 
Furthermore, \[\frac{a_{m,kM+s+1}}{a_{m,kM+s}}=\left(\frac{\tilde{a}_{m,(k+1)M+1}}{\tilde{a}_{m,kM+1}}\right)^{\frac{1}{M}},\] so by (16), \[1>\frac{a_{m,n+1}}{a_{m,n}}\geq\frac{a_{m,n}}{a_{m,n-1}}\] for \(n>1\). This ends the proof.

## 5. Construction of an expanding metric

The goal of this section is to prove the following result. **Theorem 5.1** (**Existence of an expanding metric - general version**).: _Let \(F\colon V^{\prime}\to V\) be a holomorphic map onto \(V\), where \(V\subset\mathbb{C}\) is a hyperbolic domain and \(V^{\prime}\) is a domain such that \(V^{\prime}\subset V\), \(V^{\prime}\neq V\). Let \(W\subset V\) be an open set such that \(\bigcap_{n=0}^{\infty}F^{-n}(W)=\emptyset\). Assume that the following hold._ (a) \(F\) _has no asymptotic values._ (b) _For every_ \(z\in V\)_, we have_ \(\sup\{\deg_{w}F^{n}:w\in F^{-n}(z),\,n\in\mathbb{N}\}<\infty\)_._ (c) \(\mathcal{P}_{crit}(F)\) _has no accumulation points in_ \(V\)_._ (d) \(F\) _extends meromorphically to a neighbourhood_ (_in_ \(\mathbb{C}\)_) _of_ \(\overline{W\cap F^{-1}(W)}\)_._ (e) _There exist a finite number of repelling petals_ \(P_{i}\)_,_ \(i\in\mathcal{I}=\mathcal{I}_{par}\cup\mathcal{I}_{\infty}\)_, of the map_ \(F\) (_or its holomorphic extension_) _such that:_ * \(P_{i}\) _for_ \(i\in\mathcal{I}_{par}\) _is a repelling petal at a parabolic fixed point_ \(p_{i}\) _of_ \(F\) _in_ \(\partial W\)_,_ * \(P_{i}\) _for_ \(i\in\mathcal{I}_{\infty}\) _is a repelling petal at infinity,_ * \(F(P_{i})\cap P_{i^{\prime}}=\emptyset\) _for_ \(i,i^{\prime}\in\mathcal{I}\)_,_ \(i\neq i^{\prime}\)_,_ * \(\overline{W}\subset V\cup\{p_{i}\}_{i\in\mathcal{I}_{par}}\)_,_ * \(\overline{W}\setminus\Big{(}\bigcup_{i\in\mathcal{I}_{par}}(P_{i}\cup\{p_{i}\})\cup\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\Big{)}\) _is compact._ * \(F(W\cap P_{i})\subset W\) _for_ \(i\in\mathcal{I}\)_._ _Then one can find \(N\in\mathbb{N}\) such that for every compact set \(K\subset\overline{W}\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\) there exist a conformal metric \(d\varsigma=\varsigma|dz|\) on \(W\cap F^{-1}(W)\cap\ldots\cap F^{-N}(W)\) and a decreasing sequence \((b_{n})_{n=1}^{\infty}\) of numbers \(b_{n}\in(0,1)\) with \(\sum_{n=1}^{\infty}b_{n}<\infty\), satisfying_ \[|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{b_{n}}\] _for every \(z\in W\cap F^{-1}(W)\cap\ldots\cap F^{-(n+N)}(W)\cap F^{-n}(K)\), \(n\in\mathbb{N}\)._ The construction of the metric \(d\varsigma\) follows the ideas of [1, 1, 1, 2] in the case of polynomials and rational maps and [1, 2, 3] in a transcendental context. However, due to the lack of compactness of the unbounded parts of the set \(W\), to estimate \(|(F^{n})^{\prime}|_{\varsigma}\) we must take a different approach than the ones used in the cited references. Since the proof is rather involved, we first present its general description. The main idea is to construct a metric \(\varsigma\) which is uniformly expanding on some compact set in \(V^{\prime}\) and 'glue' it with suitable metrics in the petals \(P_{i}\), such that the derivative of the iterates of \(F\) along a block of length \(n\) within \(P_{i}\) with respect to this metric is larger than \(1/\beta_{n}\), where \(\beta_{n}\) is a term of a convergent series satisfying \(\frac{\beta_{n+1}}{\beta_{n}}\geq\frac{\beta_{n}}{\beta_{n-1}}\). This is described precisely in the following proposition.
For simplicity, here and in the sequel we write \[W_{n}=W\cap F^{-1}(W)\cap\ldots\cap F^{-n}(W),\qquad n\geq 0.\] **Proposition 5.2**.: _Under the assumptions of Theorem 5.1, one can find \(N\in\mathbb{N}\) such that for every compact set \(K\subset\overline{W}\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\) there exist a conformal metric \(d\varsigma=\varsigma|dz|\) on \(W_{N}\), a compact set \(\widehat{K}\subset V\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\) containing \(K\), a number \(Q>1\) and a decreasing sequence \((\beta_{n})_{n=1}^{\infty}\) of numbers \(\beta_{n}\in(0,1)\), satisfying the following properties:_ 1. \(\sum_{n=1}^{\infty}\beta_{n}<\infty\)_,_ 2. \(\frac{\beta_{n+1}}{\beta_{n}}\geq\frac{\beta_{n}}{\beta_{n-1}}\) _for every_ \(n>1\) _._ 3. \(|F^{\prime}|_{\varsigma}>Q\) _on_ \(W_{N}\cap\widehat{K}\)_,_ 4. \(|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{\beta_{n}}\) _for every_ \(z\in(W_{N}\setminus\widehat{K})\cap F^{-1}(W_{N}\setminus\widehat{K})\cap \ldots\cap F^{-(n-1)}(W_{N}\setminus\widehat{K})\cap F^{-n}(W_{N}\cap\widehat{ K})\)_,_ \(n\in\mathbb{N}\)_._ First, we show how to prove Theorem 5.1 using this proposition. _Proof of Theorem 5.1 assuming Proposition 5.2._ Set \[b_{n}=\max\Big{(}\frac{1}{Q^{n/2}},\beta_{\lceil n/2\rceil}\Big{)}.\] Note that \(b_{n}\in(0,1)\), the sequence \((b_{n})_{n=1}^{\infty}\) is decreasing, and \(\sum_{n=1}^{\infty}\beta_{\lceil n/2\rceil}\leq 2\sum_{n=1}^{\infty}\beta_{n}<\infty\), so \(\sum_{n=1}^{\infty}b_{n}<\infty\). Take a point \(z\in W_{n+N}\) for some \(n\in\mathbb{N}\) such that \(F^{n}(z)\in K\) (and hence \(F^{n}(z)\in\widehat{K}\)). We can divide the set \([0,\ldots,n-1]\) into consecutive disjoint blocks \(A_{1},B_{1},\ldots,A_{l},B_{l}\) of (maximal) lengths (i.e. numbers of elements), respectively, \(k_{1},m_{1},\ldots,k_{l},m_{l}\) for some \(l\in\mathbb{N}\), such that if \(s\in A_{j}\) for some \(j\), then \(F^{s}(z)\in W_{N}\setminus\widehat{K}\) and if \(s\in B_{j}\) for some \(j\), then \(F^{s}(z)\in W_{N}\cap\widehat{K}\). We have \(k_{1}+\cdots+k_{l}+m_{1}+\cdots+m_{l}=n\). We have \(k_{j},m_{j}>0\) except for \(k_{1}\) and \(m_{l}\), which can be equal to \(0\). Let \[\Delta(A_{j}) =|(F^{k_{j}})^{\prime}(F^{k_{1}+m_{1}+\cdots+k_{j-1}+m_{j-1}})(z) |_{\varsigma},\] \[\Delta(B_{j}) =|(F^{m_{j}})^{\prime}(F^{k_{1}+m_{1}+\cdots+k_{j-1}+m_{j-1}+k_{j }})(z))|_{\varsigma},\] where we set \(\Delta(\emptyset)=1\) for an empty block. Then \[|(F^{n})^{\prime}(z)|_{\varsigma}=\prod_{j=1}^{l}\Delta(A_{j})\Delta(B_{j}).\] By Proposition 5.2, \[\Delta(A_{j})>\frac{1}{\beta_{k_{j}}},\qquad\Delta(B_{j})>Q^{m_{j}}.\] Hence, \[|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{Q^{m_{1}+\cdots+m_{l}}}{\beta_{k_{1}} \cdots\beta_{k_{l}}}.\] If \(m_{1}+\cdots+m_{l}>\frac{n}{2}\), then, since \(\beta_{k_{j}}<1\), we have \[|(F^{n})^{\prime}(z)|_{\varsigma}>Q^{m_{1}+\cdots+m_{l}}=Q^{n/2}\geq\frac{1}{ b_{n}}.\] On the other hand, if \(m_{1}+\cdots+m_{l}\leq\frac{n}{2}\), then \(k_{1}+\cdots+k_{l}>\frac{n}{2}\) and \[|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{\beta_{k_{1}}\cdots\beta_{k_{l}}}.\] Set \(\beta_{0}=1\). Then, by Proposition 5.2, we have \(\frac{\beta_{n+1}}{\beta_{n}}\geq\frac{\beta_{n}}{\beta_{n-1}}\) for every \(n\in\mathbb{N}\). 
Consequently, for every \(p,q\in\mathbb{N}\), \[\beta_{p+q}=\frac{\beta_{p+q}}{\beta_{p+q-1}}\frac{\beta_{p+q-1}}{\beta_{p+q-2 }}\cdots\frac{\beta_{q+1}}{\beta_{q}}\beta_{q}\geq\frac{\beta_{p}}{\beta_{p-1 }}\frac{\beta_{p-1}}{\beta_{p-2}}\cdots\frac{\beta_{1}}{\beta_{0}}\beta_{q}= \beta_{p}\beta_{q}.\] Applying this inductively, we obtain \[|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{\beta_{k_{1}}\cdots\beta_{k_{l}}}\geq \frac{1}{\beta_{k_{1}+\cdots+k_{l}}}\geq\frac{1}{\beta_{\lceil n/2\rceil}}\geq \frac{1}{b_{n}}.\] We conclude that in both cases \(|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{b_{n}}\), which ends the proof. The plan to prove Proposition 5.2 is as follows. First, in Subsection 5.1, we introduce notation and describe the dynamics of the map \(F\) within the repelling petals \(P_{i}\). Then, in Subsection 5.2, we show that \(V\) has an orbifold structure (in the sense of Subsection 2.3) and the orbifold metric \(d\rho\) is strictly expanding on compact sets in \(V^{\prime}\) outside the singularities of the metric \(d\rho\) (Lemma 5.5). This part follows a classical reasoning described e.g. in [10]. In Subsection 5.3 we construct a suitable metric within the repelling petals \(P_{i}\). For a petals at a parabolic fixed point \(p_{i}\) we use the locally expanding metric \(d\sigma_{p_{i},\alpha_{par}}\) for a suitable \(\alpha_{par}\in(0,1)\), defined in Section 3 and estimates provided by Proposition 3.6. In the case of a petal \(P_{i}\) at infinity we use the metric \(d\sigma_{\alpha_{\infty}}\) for a suitable \(\alpha_{\infty}>1\), introduced in Section 4 and estimates given by Theorem 4.7. However, to obtain local expansion of \(d\sigma_{\alpha_{\infty}}\) on suitable parts of \(P_{i}\) (Lemma 5.8), we must first correct the metric by multiplying it by an appropriate real function \(h_{i}\). Finally, in Subsection 5.4, we define a metric \(\varsigma\) by gluing previously constructed metrics and show that it has required properties (Lemmas 5.10 and 5.11). Now we provide a detailed proof of Proposition 5.2 along the lines presented above. ### Petal dynamics For convenience, set \[p_{i}=\infty\qquad\text{for }\ i\in\mathcal{I}_{\infty}.\] By the assumption (e), the set \[Y=\overline{W}\setminus\bigcup_{i\in\mathcal{I}}(P_{i}\cup\{p_{i}\})\] is a compact subset of \(V\setminus\{p_{i}\}_{i\in\mathcal{I}}\). Consider \(i\in\mathcal{I}\). By the definition of repelling petals, denoting a possible holomorphic extension of \(F\) to \(P_{i}\) by the same symbol, we have \(p_{i}\notin P_{i}\), \(F\) is univalent on \(P_{i}\) and \(F(P_{i})\) is an attracting petal at \(p_{i}\) of the map \[G_{i}\colon F(P_{i})\to P_{i},\qquad G_{i}=(F|_{P_{i}})^{-1},\] where \(G_{i}^{n}\to p_{i}\) as \(n\to\infty\), \(\bigcap_{n=0}^{\infty}G_{i}^{n}(P_{i})=\emptyset\) and \(G_{i}\) extends continuously to \(\overline{F(P_{i})}\) (see Section 3 and Proposition 4.3). Note that in the case \(i\in\mathcal{I}_{par}\) we have \(p_{i}\in\overline{P_{i}}\) and \(G_{i}\) extends holomorphically to a neighbourhood of \(p_{i}\). Let \[K_{i}=G_{i}(Y\cap\overline{F(P_{i})}).\] See Figure 5. By Definition 4.1, the set \(K_{i}\) is a compact subset of \(F(P_{i})\). Furthermore, \[K_{i}\subset\overline{P_{i}}\setminus(G_{i}(P_{i})\cup\{p_{i}\}),\] so \(G_{i}(K_{i})\subset\overline{G_{i}(P_{i})}\setminus(G_{i}^{2}(P_{i})\cup\{p_{ i}\})\subset P_{i}\setminus G_{i}^{2}(P_{i})\), which implies \[G_{i}^{n_{1}}(K_{i})\cap G_{i}^{n_{2}}(K_{i})=\emptyset\quad\text{for }\ n_{1},n_{2}\geq 0,\,|n_{1}-n_{2}|>1. 
\tag{17}\] We will use frequently the following fact. **Lemma 5.3**.: \[\overline{W}\cap\overline{P_{i}}\setminus\{p_{i}\}\subset\bigcup_{n=0}^{\infty}G_ {i}^{n}(K_{i}).\] Proof.: We show inductively \[\overline{W}\cap\overline{P_{i}}\subset\bigcup_{k=0}^{n-1}G_{i}^{k}(K_{i}) \cup\overline{G_{i}^{n+1}(W\cap F(P_{i}))} \tag{18}\] for \(n\geq 0\) (with the convention that a union over an empty set is empty). To do it, note first that the last property of the assumption (e) gives \(\overline{W}\cap\overline{P_{i}}\subset\overline{G_{i}(W\cap F(P_{i}))}\), which shows (18) for \(n=0\). Suppose now (18) holds for some \(n\geq 0\). To prove (18) for \(n+1\), it is enough to verify \[\overline{G_{i}^{n+1}(W\cap F(P_{i}))}\subset G_{i}^{n}(K_{i})\cup\overline{G _{i}^{n+2}(W\cap F(P_{i}))}. \tag{19}\] To show (19), note that by the definition of \(Y\), we have \[W\setminus Y\subset\bigcup_{i\in\mathcal{I}}P_{i},\] so, as \(F(P_{i})\cap P_{i^{\prime}}=\emptyset\) for \(i,i^{\prime}\in\mathcal{I}\), \(i\neq i^{\prime}\) by the assumption (e), \[(W\setminus Y)\cap F(P_{i})\subset W\cap P_{i}.\] This together with the last property of the assumption (e) implies \[(W\setminus Y)\cap F(P_{i})\subset G_{i}(W\cap F(P_{i})),\] which gives \[W\cap F(P_{i})\subset(W\cap Y\cap F(P_{i}))\cup G_{i}(W\cap F(P_{i}))\subset F (K_{i})\cup G_{i}(W\cap F(P_{i}))\] and, consequently, \[G_{i}^{n+1}(W\cap F(P_{i}))\subset G_{i}^{n}(K_{i})\cup G_{i}^{n+2}(W\cap F(P _{i})).\] This shows (19), completing the inductive proof of (18). Figure 5. The set \(K_{i}\). Using (18), we obtain \[\overline{W}\cap\overline{P_{i}}\subset\bigcap_{n=0}^{\infty}\Big{(}\bigcup_{k=0}^ {n-1}G_{i}^{k}(K_{i})\cup\overline{G_{i}^{n+1}(W\cap F(P_{i}))}\Big{)}\subset \bigcup_{n=0}^{\infty}G_{i}^{k}(K_{i})\cup\bigcap_{n=0}^{\infty}\overline{G_{i}^ {n+1}(W\cap F(P_{i}))}. \tag{20}\] By the definition of attracting petals, we have \(\overline{G_{i}(P_{i})}\subset P_{i}\cup\{p_{i}\}\) and \(\bigcap_{n=0}^{\infty}G_{i}^{n}(P_{i})=\emptyset\), which implies \[\bigcap_{n=0}^{\infty}\overline{G_{i}^{n+1}(W\cap F(P_{i}))}=\overline{ \bigcap_{n=0}^{\infty}G_{i}^{n+1}(W\cap F(P_{i}))}\subset\overline{\bigcap_{n= 0}^{\infty}G_{i}^{n}(P_{i})}\subset\{p_{i}\}.\] This together with (20) proves the lemma. Fix a number \(n_{0}\in\mathbb{N}\), which is larger than all the numbers \(n_{0}\) appearing in Proposition 3.4, Proposition 3.6, Definition 4.1, Proposition 4.3 and Theorem 4.7, suited for the petals \(P_{i}\) and compact sets \(K_{i}\), \(i\in\mathcal{I}\) (some other requirements for \(n_{0}\) will be specified later). Let \[\tilde{K}_{0}=Y\cup\bigcup_{i\in\mathcal{I}}\bigcup_{n=0}^{n_{0}-1}(\overline {W}\cap G_{i}^{n}(K_{i})).\] The set \(\tilde{K}_{0}\) is a compact subset of \(\overline{W}\subset V\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\), and so is \(\partial\tilde{K}_{0}\). Hence, by the assumption (c), the set \(\partial\tilde{K}_{0}\cap\mathcal{P}_{crit}(F)\) consists of at most a finite number of points, which are isolated in \(\mathcal{P}_{crit}(F)\). Consequently, for a sufficiently small \(\varepsilon_{0}>0\), the set \[\tilde{K}=\tilde{K}_{0}\cup\bigcup_{z\in\partial\tilde{K}_{0}\cap\mathcal{P}_ {crit}(F)}\overline{\mathbb{D}(z,\varepsilon_{0})}\] is a compact subset of \(V\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\) satisfying \[\partial\tilde{K}\cap\mathcal{P}_{crit}(F)=\emptyset. 
\tag{21}\] By Lemma 5.3, \[(\overline{W}\setminus\tilde{K})\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\subset\bigcup_{i\in\mathcal{I}}\bigcup_{n=n_{0}}^{\infty}G_{i}^{n}(K_{i})\subset\bigcup_{i\in\mathcal{I}}P_{i}. \tag{22}\] Another useful property of \(\tilde{K}\) is described in the following lemma. **Lemma 5.4**.: _There exist a finite number of points \(z_{i,j}\in\overline{W_{1}}\cap\tilde{K}\), \(j=1,\ldots,l_{i}\), \(l_{i}\in\mathbb{N}\), such that_ \[W\cap\tilde{K}\cap F^{-1}(W\setminus\tilde{K})\subset\bigcup_{j=1}^{l_{i}}\mathbb{D}(z_{i,j},r)\] _for some \(r>0\), where \(\mathbb{D}(z_{i,j},r)\) are pairwise disjoint for distinct \((i,j)\), and \(r\) can be assumed to be arbitrarily small if \(n_{0}\) is chosen large enough._ Proof.: First, we show \[W\cap\tilde{K}\cap F^{-1}(W\setminus\tilde{K})\cap\bigcup_{i\in\mathcal{I}}P_{i}=\emptyset. \tag{23}\] To prove (23), suppose there exists \(z\in W\cap\tilde{K}\cap F^{-1}(W\setminus\tilde{K})\cap P_{i}\) for some \(i\in\mathcal{I}\). By Lemma 5.3 and the assumption (e), we have \(z\in\bigcup_{n=n_{0}+1}^{\infty}G_{i}^{n}(K_{i})\subset\overline{G_{i}^{n_{0}+1}(P_{i})}\). By the definition of \(\tilde{K}\), there is a point \(z_{0}\in\tilde{K}_{0}\) with \(|z-z_{0}|<\varepsilon_{0}\). Note that since \(z\) is in \(\bigcup_{n=n_{0}+1}^{\infty}G_{i}^{n}(K_{i})\cap F(\tilde{K})\), which is a compact subset of \(G_{i}^{n_{0}}(P_{i})\), we have \(z_{0}\in G_{i}^{n_{0}}(P_{i})\), provided \(\varepsilon_{0}\) is chosen sufficiently small according to \(n_{0}\). On the other hand, by Lemma 5.3 and the definition of \(\tilde{K}_{0}\), \(z_{0}\in\bigcup_{n=0}^{n_{0}-1}G_{i}^{n}(K_{i})\subset\overline{P_{i}}\setminus G_{i}^{n_{0}}(P_{i})\), which is a contradiction. Now the lemma easily follows from (23), the compactness of \(\tilde{K}\), the assumption (d) and the fact that \((W\setminus\tilde{K})\cap P_{i}\) is arbitrarily close (in the spherical metric) to \(p_{i}\) if \(n_{0}\) is chosen large enough.

### Orbifold metric

For \(z\in V\) let \(\nu(z)\) be equal to the least common multiple of the elements of the set \(\{\deg_{w}F^{n}:w\in F^{-n}(z),\,n\in\mathbb{N}\}\). By the assumption (b), the function \(\nu\) is well-defined. Note that \[\nu(z)>1\iff z\in\mathcal{P}_{crit}(F) \tag{24}\] and \[\nu(F(z))\text{ is a multiple of }\nu(z)\deg_{z}F\qquad\text{for }z\in V^{\prime}. \tag{25}\] By the assumption (c), the pair \((V,\nu)\) is a hyperbolic orbifold as defined in Subsection 2.3. Let \(d\rho\) be the orbifold metric on \(V\). A standard estimate of the density \(\varrho_{V}\) of the hyperbolic metric \(d\varrho_{V}\) on \(V\) (see e.g. [1, Lemmas 2.1 and 2.3]) is that for \(z\in V\) we have \[\varrho_{V}(z)\geq\frac{c}{|z-p_{i}|\left|\ln|z-p_{i}|\right|}\] if \(|z-p_{i}|\) is sufficiently small and \[\varrho_{V}(z)\geq\frac{c}{|z|\ln|z|}\] if \(|z|\) is sufficiently large, for a constant \(c>0\). Hence, (3) and (24) imply that for every \(0<\delta<1\) and \(z\in V\setminus\mathcal{P}_{crit}(F)\), \[\lim_{z\to p_{i}}\frac{\rho(z)}{\sigma_{1-\delta}(z)}=\infty\quad\text{for }i\in\mathcal{I}_{par},\qquad\lim_{|z|\to\infty}\frac{\rho(z)}{\sigma_{1+\delta}(z)}=\infty\quad\text{for }i\in\mathcal{I}_{\infty}. \tag{26}\] We now describe the expanding properties of the orbifold metric \(d\rho\) with respect to \(F\). **Lemma 5.5**.: _The map \(F\) is locally expanding on \(V^{\prime}\setminus F^{-1}(\mathcal{P}_{crit}(F))\) with respect to \(d\rho\).
Moreover, for every compact subset \(L\) of \(V^{\prime}\) there exists \(Q>1\) such that \(|F^{\prime}|_{\rho}>Q\) on \(L\setminus F^{-1}(\mathcal{P}_{crit}(F))\)._ Proof.: Let \(\pi\colon\mathbb{D}\to V\) be a universal branch covering of the orbifold \((V,\nu)\) (see Subsection 2.3). By (1) and (25), \[\deg_{u}\pi\text{ is a multiple of }\deg_{v}(F\circ\pi)\text{ \ for }z\in V,\,w\in F^{-1}(z),\,u\in\pi^{-1}(z),\,v\in\pi^{-1}(w). \tag{27}\] Consider a branch \(H\) of \(F^{-1}\) defined on some simply connected domain \(U\subset V\setminus\mathcal{P}_{crit}(F)\). By (24), \(\nu(z)=\nu(H(z))=1\) for every \(z\in U\). Hence, (27) implies that \(H\) lifts to a holomorphic map \(\tilde{H}\colon\tilde{U}\to\mathbb{D}\) for some simply connected domain \(\tilde{U}\subset\mathbb{D}\). In fact, \(\tilde{H}\) extends to a holomorphic map \(\tilde{H}\colon\mathbb{D}\to\mathbb{D}\). To see it, we extend \(\tilde{H}\) holomorphically as a branch of \((F\circ\pi)^{-1}\circ\pi\) along any curve \(\gamma\) in \(\mathbb{D}\) starting from a given point of \(\tilde{U}\). Such extension exists by (27) and the fact that \(F\) has no asymptotic values. Then by the simple connectedness of \(\mathbb{D}\) we conclude that \(\tilde{H}\colon\mathbb{D}\to\mathbb{D}\) is well-defined as a holomorphic map. Since \(V^{\prime}\neq V\), we have \(\tilde{H}(\mathbb{D})\subset\pi^{-1}(V^{\prime})\neq\pi^{-1}(V)=\mathbb{D}\), so \(\tilde{H}\) cannot be an isometry with respect to \(d\varrho_{\mathbb{D}}\). Hence, by the Pick-Schwarz Lemma, \(\tilde{H}\) is locally contracting on \(\mathbb{D}\) with respect to \(d\varrho_{\mathbb{D}}\), so \[\left|\tilde{H}^{\prime}(u)\right|_{\varrho_{\mathbb{D}}}<1\qquad\text{for} \quad u\in\mathbb{D}. \tag{28}\] Let \(z\in V\). By the assumption (c), the set \(\mathcal{P}_{crit}(F)\) is discrete in \(V\), so we can take a small open disc \(U_{z}\subset V\) around \(z\), such that \((U_{z}\setminus\{z\})\cap\mathcal{P}_{crit}(F)=\emptyset\). Let \(\tilde{U}_{z}\) be a component of \(\pi^{-1}(U_{z})\) and let \(w\in F^{-1}(z)\). Suppose first \(z\notin\mathcal{P}_{crit}(F)\). Then \(U_{z}\cap\mathcal{P}_{crit}(F)=\emptyset\), so there exists a branch \(H_{w}\) of \(F^{-1}\) defined on \(U_{z}\), such that \(H_{w}(z)=w\). As explained above, there exists a holomorphic extension \(\tilde{H}_{w}\colon\mathbb{D}\to\mathbb{D}\) of the lift of \(H_{w}\) to \(\tilde{U}_{z}\), which is locally contracting on \(\mathbb{D}\) with respect to \(d\varrho_{\mathbb{D}}\). Thus, (28) gives \[\sup_{\tilde{U}_{z}}|\tilde{H}^{\prime}_{w}|_{\varrho_{\mathbb{D}}}<q\] for some \(q\in(0,1)\). Since \(U_{z}\cap\mathcal{P}_{crit}(F)=\emptyset\), the metric \(\rho\) has no singularities on \(U_{z}\cup H_{w}(U_{z})=\pi(\tilde{U}_{z})\cup\pi(\tilde{H}_{w}(\tilde{U}_{z}))\), so \[\inf_{H_{w}(U_{z})}|F^{\prime}|_{\rho}=\frac{1}{\sup_{U_{z}}|H^{\prime}_{w}|_{ \rho}}=\frac{1}{\sup_{\tilde{U}_{z}}|\tilde{H}^{\prime}_{w}|_{\varrho_{ \mathbb{D}}}}>\frac{1}{q}.\] Hence, \(|F^{\prime}|_{\rho}>\frac{1}{q}>1\) in a neighbourhood of \(w\). This shows that \(F\) is locally expanding on \(V^{\prime}\setminus F^{-1}(\mathcal{P}_{crit}(F))\) with respect to \(d\rho\). Suppose now \(z\in\mathcal{P}_{crit}(F)\). Then there are a finite number of branches \(H_{w,j}\) of \(F^{-1}\), defined on some simply connected domains \(U_{z,j}\subset U_{z}\setminus\{z\}\), such that \(\bigcup_{j}H_{w,j}(U_{z,j})\) contains a punctured neighbourhood of \(w\). 
Repeating the previous arguments and applying (28) to extensions \(\tilde{H}_{w,j}\) of lifts of \(H_{w,j}\) to some domains \(\tilde{U}_{z,j}\subset\tilde{U}_{z}\), we obtain \[\sup_{\tilde{U}_{z,j}}|\tilde{H}^{\prime}_{w,j}|_{\varrho_{\mathbb{D}}}<q_{j}\] for some \(q_{j}\in(0,1)\). As \((U_{z}\setminus\{z\})\cap\mathcal{P}_{crit}(F)=\emptyset\), the metric \(\rho\) has no singularities on \(U_{z,j}\cup H_{w,j}(U_{z,j})\), so \[\inf_{H_{w,j}(U_{z,j})}|F^{\prime}|_{\rho}=\frac{1}{\sup_{U_{z,j}}|H^{\prime}_ {w,j}|_{\rho}}>\frac{1}{q_{j}}.\] Hence, \(|F^{\prime}|_{\rho}>\min_{j}\frac{1}{q_{j}}>1\) in a punctured neighbourhood of \(w\). We conclude that for every \(w\in V^{\prime}\) there exists a neighbourhood \(U(w)\) of \(w\) and a number \(Q_{w}>1\) such that \(|F^{\prime}|_{\rho}>Q_{w}\) on \(U(w)\setminus F^{-1}(\mathcal{P}_{crit}(F))\). This provides both assertions of the lemma. ### Petal metric Fix numbers \(\alpha_{par}\in(0,1)\), \(\alpha_{\infty}\in(1,\infty)\) such that \[\begin{split}\alpha_{par}&>1-\frac{1}{\max_{i\in \mathcal{I}_{par}}\max_{j\in\{1,\ldots,l_{i}\}}\{\nu(z_{i,j})\deg_{z_{i,j}}F \}},\\ \alpha_{\infty}&<1+\frac{1}{\max_{i\in\mathcal{I}_{ \infty}}\max_{j\in\{1,\ldots,l_{i}\}}\{\nu(z_{i,j})\deg_{z_{i,j}}F\}}.\end{split} \tag{29}\] We will use the following estimate. **Lemma 5.6**.: \[|F^{\prime}(z)|\frac{\sigma_{p_{i},\alpha_{par}}(F(z))}{\rho(z)} \to\infty\quad\text{as }z\to z_{i,j},\;z\in W_{1}\qquad\text{for }\;i\in\mathcal{I}_{par},\;j\in\{1,\ldots,l_{i}\}\] \[|F^{\prime}(z)|\frac{\sigma_{\alpha_{\infty}}(F(z))}{\rho(z)} \to\infty\quad\text{as }z\to z_{i,j},\;z\in W_{1}\qquad\text{for }\;i\in\mathcal{I}_{\infty},\;j\in\{1,\ldots,l_{i}\}\] Proof.: Take \(i\in\mathcal{I}\), \(j\in\{1,\ldots,l_{i}\}\) and \(z\in W_{1}\) close to \(z_{i,j}\). Let \(d_{i,j}=\deg_{z_{i,j}}F\). We write \(a(z)\asymp b(z)\) if \(c_{1}<\frac{a(z)}{b(z)}<c_{2}\) for some constants \(c_{1},c_{2}>0\). By (2), \[\rho(z)\asymp\frac{1}{|z-z_{i,j}|^{1-1/\nu(z_{i,j})}}.\] Suppose first \(i\in\mathcal{I}_{par}\). Then \[|F(z)-p_{i}|\asymp|z-z_{i,j}|^{d_{i,j}},\qquad|F^{\prime}(z)|\asymp|z-z_{i,j} |^{d_{i,j}-1},\] so \[\sigma_{p_{i},\alpha_{par}}(F(z))\asymp\frac{1}{|z-z_{i,j}|^{d_{i,j}\alpha_{ par}}},\] and by (29), \[|F^{\prime}(z)|\frac{\sigma_{p_{i},\alpha_{par}}(F(z))}{\rho(z)}\geq\frac{c}{ |z-z_{i,j}|^{d_{i,j}(\alpha_{par}-1)+1/\nu(z_{i,j})}}\geq\frac{c}{|z-z_{i,j}|^{ \delta}}\] for some \(c,\delta>0\). This shows the first assertion of the lemma. Suppose now \(i\in\mathcal{I}_{\infty}\). Then \[|F(z)|\asymp\frac{1}{|z-z_{i,j}|^{d_{i,j}}},\qquad|F^{\prime}(z)|\asymp\frac{ 1}{|z-z_{i,j}|^{d_{i,j}+1}},\] so \[\sigma_{p_{i},\alpha_{\infty}}(F(z))\asymp|z-z_{i,j}|^{d_{i,j}\alpha_{\infty}},\] and (29) gives \[|F^{\prime}(z)|\frac{\sigma_{p_{i},\alpha_{\infty}}(F(z))}{\rho(z)}\geq\frac{ c}{|z-z_{i,j}|^{d_{i,j}(1-\alpha_{\infty})+1/\nu(z_{i,j})}}\geq\frac{c}{|z-z_{i,j}|^{ \delta}}\] for some \(c,\delta>0\). This gives the second assertion of the lemma. For \(i\in\mathcal{I}_{\infty}\) consider the function \(g_{i}=g\) from Definition 4.1 for the attracting petal \(F(P_{i})\) of the map \(G_{i}\) and the compact set \(K_{i}\). Let \(A_{\infty}\) be equal to the maximum over \(i\in\mathcal{I}_{\infty}\) of the constants \(A\) from Theorem 4.7 for \(\alpha=\alpha_{\infty}\). 
By Definition 4.1 and Proposition 4.3, there exists \(c_{0}>0\) such that \[|z|-|F(z)|\geq c_{0}g_{i}(|F(z)|)\quad\text{for }z\in\bigcup_{i\in\mathcal{I}_{ \infty}}\bigcup_{n=n_{0}}^{\infty}G_{i}^{n}(K_{i}) \tag{30}\] Fix a large constant \(C_{1}>0\). By (26), we can fix an arbitrarily large \(R^{-}>0\) such that \[\tilde{K}\subset\mathbb{D}(0,R^{-}) \tag{31}\] and \[\rho(z)>A_{\infty}^{\frac{C_{1}}{\delta_{\infty}}}\sigma_{\alpha_{\infty}}(z) \qquad\text{for }\quad z\in V\setminus(\mathbb{D}(0,R^{-})\cup\mathcal{P}_{crit}(F)). \tag{32}\] For \(i\in\mathcal{I}_{\infty}\) consider the function \(t\mapsto t-R^{-}-C_{1}g_{i}(t)\), \(t\in[R^{-},+\infty)\). It is negative for \(t=R^{-}\) and tends to \(+\infty\) as \(t\to+\infty\) since \(g_{i}\) are bounded by definition. Hence, it attains zero at some point \(t=R_{i}^{+}>R^{-}\), so that \[R_{i}^{+}=R^{-}+C_{1}g_{i}(R_{i}^{+}).\] Define \[h_{i}(t)=\begin{cases}A_{\infty}^{\frac{C_{1}}{A_{\infty}^{0}}}+C_{2}(R^{-}-t )&\text{for }t\in[0,R^{-})\\ A_{\infty}^{\frac{R_{i}^{+}-t}{c_{0}g_{i}(R_{i}^{+})}}&\text{for }t\in[R^{-},R_{i}^{+}) \\ 1&\text{for }t\in[R_{i}^{+},+\infty)\end{cases}\] for a large constant \(C_{2}>0\). Then \(h_{i}\colon[0,+\infty)\to\mathbb{R}\) is a positive continuous non-increasing function, such that \[\min_{[0,R^{-}]}h_{i}=\max_{[R^{-},+\infty)}h_{i}=h_{i}(R^{-})=A_{\infty}^{ \frac{C_{1}}{A_{\infty}^{0}}}. \tag{33}\] By (30), if \(z\in\bigcup_{n=n_{0}}^{\infty}G_{i}^{n}(K_{i})\) and \(|z|\leq R_{i}^{+}\), \(|F(z)|\geq R^{-}\), then \[|z|-|F(z)|\geq c_{0}g_{i}(|F(z)|)\geq c_{0}g_{i}(R_{i}^{+}),\] and by the definition of \(h_{i}\), \[\frac{h_{i}(|F(z)|)}{h_{i}(|z|)}\geq A_{\infty}.\] Therefore, choosing \(C_{2}\) sufficiently large, by compactness we can assume \[\frac{h_{i}(|F(z)|)}{h_{i}(|z|)}\geq A_{\infty}\qquad\text{for }z\in\overline{W \setminus\bar{K}}\cap\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\cap\overline{ \mathbb{D}(0,R_{i}^{+})}. \tag{34}\] Let \[R_{i}=R_{i}^{+}-\ln C_{1}\;g_{i}(R_{i}^{+}).\] Obviously, \[R^{-}<R_{i}<R_{i}^{+}.\] Moreover, the following holds. **Lemma 5.7**.: _If \(z\in\bigcup_{i\in\mathcal{I}_{\infty}}\bigcup_{n=n_{0}}^{\infty}G_{i}^{n}(K_{ i})\) and \(|F(z)|\leq R_{i}<|z|\), then_ \[R^{-}\leq|F(z)|<|z|\leq R_{i}^{+}.\] Proof.: Take \(z\in G_{i}^{n}(K_{i})\) for some \(i\in\mathcal{I}_{\infty}\), \(n\geq n_{0}\). By Lemma 4.6, considering the sequence \((t_{n})_{n=1}^{\infty}\) for the function \(g_{i}\), we find \(j\in\mathbb{N}\) such that \(t_{j}\leq|F(z)|<|z|\leq t_{j+M}\) for a constant \(M\in\mathbb{N}\). Suppose \(|z|>R_{i}^{+}\). Then by Definition 4.1, Proposition 4.3 and Lemma 2.2, \[\ln C_{1}\;g_{i}(R_{i}^{+})=R_{i}^{+}-R_{i}<|z|-|F(z)|\leq c_{2}g_{i}(|F(z)|) \leq c_{2}g_{i}(t_{j})\leq c_{3}g_{i}(t_{j+M})\leq c_{3}g_{i}(R_{i}^{+})\] for some constants \(c_{2},c_{3}>0\) (independent of \(C_{1}\)), which is impossible if \(C_{1}\) was chosen sufficiently large. Therefore, \(|z|\leq R_{i}^{+}\). Suppose now \(|F(z)|<R^{-}\). 
Then, analogously as previously, we obtain \[\begin{split}(C_{1}-\ln C_{1})g_{i}(R_{i}^{+})&=R_{ i}-R^{-}<|z|-|F(z)|\leq c_{2}g_{i}(|F(z)|)\\ &\leq c_{2}g_{i}(t_{j})\leq c_{3}g_{i}(t_{j+M})\leq c_{3}g_{i}(R _{i}).\end{split} \tag{35}\] Take the maximal \(j_{0}\in\mathbb{N}\) and the minimal \(N\geq 0\) such that \[t_{j_{0}}\leq R_{i}<R_{i}^{+}\leq t_{j_{0}+N}.\] If \(N>2\), then \(t_{j_{0}}\leq R_{i}<t_{j_{0}+1}<\cdots<t_{j_{0}+N-1}<R_{i}^{+}\leq t_{j_{0}+N}\), so by the definition of the sequence \((t_{n})_{n=1}^{\infty}\), \[\ln C_{1}\,g_{i}(R_{i}^{+}) =R_{i}^{+}-R_{i}\geq t_{j_{0}+N-1}-t_{j_{0}+1}\geq t_{j_{0}+N-1}-t_ {j_{0}+N-2}+\cdots+t_{j_{0}+2}-t_{j_{0}+1}\] \[=g_{i}(t_{j_{0}+N-2})+\cdots+g_{i}(t_{j_{0}+1})\geq(N-2)g_{i}(R_{i }^{+}).\] This shows \(N\leq 2+\ln C_{1}\). By Lemma 2.2, assuming that \(n_{0}\) is chosen sufficiently large, we have \(\frac{g_{i}(t_{n})}{g_{i}(t_{n+1})}<e^{\frac{1}{2}}\) for \(n\geq j_{0}\), so \[\frac{g_{i}(R_{i})}{g_{i}(R_{i}^{+})}\leq\frac{g_{i}(t_{j_{0}})}{g_{i}(t_{j_{0 }+N})}=\frac{g_{i}(t_{j_{0}})}{g_{i}(t_{j_{0}+1})}\cdots\frac{g_{i}(t_{j_{0}+N- 1})}{g_{i}(t_{j_{0}+N})}\leq e^{\frac{N}{2}}\leq e^{1+\frac{\ln C_{1}}{2}}=e\, C_{1}^{\frac{1}{2}}.\] This together with (35) gives \[(C_{1}-\ln C_{1})g_{i}(R_{i}^{+})\leq ec_{3}C_{1}^{\frac{1}{2}}g_{i}(R_{i}^{+ }),\] which is impossible if \(C_{1}\) was chosen sufficiently large. Therefore, \(|F(z)|\geq R^{-}\). Let \[\varsigma_{i}(z) =C\sigma_{p_{i},\alpha_{par}}(z)\] for \[z\in\bigcup_{i\in\mathcal{I}_{par}}\big{(}\overline{W\setminus\tilde{K}}\cap P _{i}\big{)},\] \[\varsigma_{\infty}(z) =h_{i}(|z|)\sigma_{\alpha_{\infty}}(z)\] for \[z\in\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}\overline{W\setminus\tilde{K}} \cap P_{i}\big{)},\] where \(C>0\) is a large constant. **Lemma 5.8**.: _The map \(F\) is locally expanding with respect to \(d\varsigma_{i}\) on \(\bigcup_{i\in\mathcal{I}_{par}}\big{(}\overline{W_{1}\setminus\tilde{K}}\cap P _{i}\big{)}\) and locally expanding with respect to \(d\varsigma_{\infty}\) on \(\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}\overline{W_{1}\setminus\tilde{K}} \cap P_{i}\cap\overline{\mathbb{D}(0,R_{i}^{+})}\big{)}\)._ Proof.: Consider \(i\in\mathcal{I}_{par}\). By (22) and Proposition 3.6 for the attracting petal \(G(P_{i})\) of the map \(F_{i}\), for the compact set \(K_{i}\) and \(\alpha=\alpha_{par}\), we have \(|F^{\prime}(z)|_{\sigma_{p_{i},\alpha_{par}}}>1\) for \(z\in\overline{W_{1}\setminus\tilde{K}}\cap P_{i}\). By the definition of \(\varsigma_{i}\) and (22), we conclude that \(F\) is locally expanding with respect to \(d\varsigma_{i}\) on \(\overline{W_{1}\setminus\tilde{K}}\cap P_{i}\). Assume now \(i\in\mathcal{I}_{\infty}\). By Theorem 4.7 for the attracting petal \(G(P_{i})\) of the map \(F_{i}\), for the compact set \(K_{i}\) and \(\alpha=\alpha_{\infty}\), we obtain \(|F^{\prime}(z)|_{\sigma_{\alpha_{\infty}}}>1/A_{\infty}\) for \(z\in\overline{W_{1}\setminus\tilde{K}}\cap P_{i}\). Using (34), the definition of \(\varsigma_{\infty}\) and (22), we see that \(F\) is locally expanding with respect to \(d\varsigma_{\infty}\) on \(\overline{W_{1}\setminus\tilde{K}}\cap P_{i}\cap\overline{\mathbb{D}(0,R_{i}^ {+})}\). ### Construction of the expanding metric Now we construct the required conformal metric \(d\varsigma\) on \(W_{N}\) for a large \(N\). By the assumption (c), the set \(\tilde{K}\) contains at most a finite number of points from \(\mathcal{P}_{crit}(F)\). 
Moreover, by the assumptions of Theorem 5.1, we have \(W_{n+1}\cap\tilde{K}\subset W_{n}\cap\tilde{K}\) for \(n\geq 0\) and \(\bigcap_{n=0}^{\infty}W_{n}\cap\tilde{K}=\emptyset\), so we can find a number \(N\in\mathbb{N}\) such that \[W_{N}\cap\tilde{K}\cap\mathcal{P}_{crit}(F)=\emptyset. \tag{36}\] Let \[\widehat{K}=\tilde{K}\cup\bigcup_{i\in\mathcal{I}_{par}}\bigcup_{n=n_{0}}^{m_{0}} (\overline{W}\cap G_{i}^{n}(K_{i}))\cup\bigcup_{i\in\mathcal{I}_{\infty}}\big{(} \overline{W}\cap P_{i}\cap\overline{\mathbb{D}(0,R_{i})}\big{)}\] for a large \(m_{0}>n_{0}\). Then \(\widehat{K}\) is a compact subset of \(V\setminus\{p_{i}\}_{i\in\mathcal{I}}\). Note that choosing \(m_{0}\) and \(R_{i}\) sufficiently large and using (22), we can assume \(\widehat{K}\supset K\) for an arbitrary given compact set \(K\subset\overline{W}\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\). We define a conformal metric \(d\varsigma=\varsigma|dz|\) on \(W_{N}\), setting \[\varsigma=\begin{cases}\rho&\text{on }W_{N}\cap\tilde{K}\\ \min(\rho,\varsigma_{i})&\text{on }\bigcup_{i\in\mathcal{I}_{par}}\big{(}(W_{N} \setminus\tilde{K})\cap P_{i}\big{)}\setminus\mathcal{P}_{crit}(F)\\ \varsigma_{i}&\text{on }\bigcup_{i\in\mathcal{I}_{par}}\big{(}(W_{N} \setminus\tilde{K})\cap P_{i}\big{)}\cap\mathcal{P}_{crit}(F)\\ \min(\rho,\varsigma_{\infty})&\text{on }\bigcup_{i\in\mathcal{I}_{\infty}} \big{(}(W_{N}\setminus\tilde{K})\cap P_{i}\big{)}\setminus\mathcal{P}_{crit}(F) \\ \varsigma_{\infty}&\text{on }\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}(W_{N} \setminus\tilde{K})\cap P_{i}\big{)}\cap\mathcal{P}_{crit}(F)\end{cases}.\] To show that \(d\varsigma\) is a conformal metric on \(W_{N}\), note first that by (36), (24) and (22), the function \(\rho\) has no singularities in \(W_{N}\cap\tilde{K}\) and \(\varsigma\) is well-defined and positive on \(W_{N}\). It is obvious that \(\varsigma\) is continuous on \(W_{N}\setminus\big{(}\bigcup_{i\in\mathcal{I}}(\partial\tilde{K}\cap P_{i}) \cup\mathcal{P}_{crit}(F)\big{)}\). Observe also that for \(i\in\mathcal{I}\), by (21) and (24), we have \(\nu(z)=1\) on the compact subset \(\overline{W_{N}}\cap\partial\tilde{K}\cap\overline{P_{i}}\) of \(V\) and hence \(\rho\) is well-defined and bounded on this set. Consequently, for \(i\in\mathcal{I}_{par}\) we have \(\rho<\varsigma_{i}\) on \(W_{N}\cap\partial\tilde{K}\cap P_{i}\) provided \(C\) is chosen sufficiently large, which implies that \(\varsigma=\rho\) on \(W_{N}\cap\partial\tilde{K}\cap P_{i}\) and hence \(\varsigma\) is continuous on \(W_{N}\cap\partial\tilde{K}\cap P_{i}\). Similarly, for \(i\in\mathcal{I}_{\infty}\), (31) and (33) imply that \(\rho<\varsigma_{\infty}\) on \(W_{N}\cap\partial\tilde{K}\cap P_{i}\) provided \(C_{1}\) is sufficiently large, so \(\varsigma=\rho\) on \(W_{N}\cap\partial\tilde{K}\cap P_{i}\) and \(\varsigma\) is continuous on \(W_{N}\cap\partial\tilde{K}\cap P_{i}\). Furthermore, if \(z_{0}\in W_{N}\cap\mathcal{P}_{crit}(F)\), then \(z_{0}\in(W_{N}\setminus\tilde{K})\cap P_{i}\) for some \(i\in\mathcal{I}\), so by (24) and (2), we have \(\rho(z)\to+\infty\) as \(z\to z_{0}\), which implies that in a punctured neighbourhood of \(z_{0}\) there holds \(\varsigma=\varsigma_{i}\) if \(i\in\mathcal{I}_{par}\) and \(\varsigma=\varsigma_{\infty}\) if \(i\in\mathcal{I}_{\infty}\). This shows that \(\varsigma\) is continuous at \(z_{0}\). We conclude that \(\varsigma\) is well-defined, positive and continuous on \(W_{N}\), so \(d\varsigma\) is a conformal metric on \(W_{N}\). 
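The continuity of \(\varsigma\) at the points of \(W_{N}\cap\mathcal{P}_{crit}(F)\) uses only the fact that the orbifold density blows up there while the petal density stays bounded, so the pointwise minimum is eventually realized by the petal density. The following toy computation in Python (an informal illustration; the point \(c\), the exponents and the constant are hypothetical model choices, not the actual \(\rho\) and \(\varsigma_{i}\)) makes this gluing mechanism explicit.

```python
# Toy illustration of gluing by a pointwise minimum; the densities below are
# hypothetical models, not the actual rho and varsigma_i of the construction.
# Near a point c of P_crit(F) with nu(c) = 2, formula (2) makes the orbifold
# density blow up like |z - c|^(-1/2), while the petal density stays bounded,
# so min(rho, petal) coincides with the petal density on a punctured
# neighbourhood of c and hence extends continuously through c.

c = 0.3 + 0.1j                      # hypothetical singular point of the model density

def rho(z):
    return 1.0 / abs(z - c) ** 0.5  # model orbifold density, blows up at c

def petal(z):
    return 5.0                      # model petal density, bounded near c

def glued(z):
    return petal(z) if z == c else min(rho(z), petal(z))

for eps in (1e-1, 1e-3, 1e-6, 1e-9):
    print(eps, glued(c + eps))      # stabilizes at petal(c) = 5.0 as z -> c
```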
Notice that by (26), \[\rho>\varsigma_{i}\quad\text{on }\bigcup_{i\in\mathcal{I}_{par}}\bigcup_{n=m_{0}}^ {\infty}G_{i}(K_{i})\setminus\mathcal{P}_{crit}(F)\supset\bigcup_{i\in \mathcal{I}_{par}}\big{(}\overline{W\setminus\widehat{K}}\cap P_{i}\big{)} \setminus\mathcal{P}_{crit}(F), \tag{37}\] if \(m_{0}\) was chosen sufficiently large. Similarly, by (32) and (33), we have \[\rho>\varsigma_{\infty}\quad\text{on }\bigcup_{i\in\mathcal{I}_{\infty}} \big{(}\overline{W\setminus\tilde{K}}\cap P_{i}\big{)}\setminus(\mathbb{D}(0,R ^{-})\cup\mathcal{P}_{crit}(F))\supset\bigcup_{i\in\mathcal{I}_{\infty}} \big{(}\overline{W\setminus\widehat{K}}\cap P_{i}\big{)}\setminus\mathcal{P}_{ crit}(F). \tag{38}\] In the further considerations, we will need the following lemma. **Lemma 5.9**.: _If \(z_{n}\in W_{N}\) and \(\varsigma(z_{n})\to 0\), then \(z_{n}\to\infty\)._ Proof.: Suppose \(z_{n}\in W_{N}\), \(\varsigma(z_{n})\to 0\) and \(z_{n}\not\to\infty\). Passing to a subsequence, we can assume \(z_{n}\to z\in\overline{W_{N}}\) and one of the three cases appears: 1. \(\varsigma(z_{n})=\rho(z_{n})\) for all \(n\), 2. \(\varsigma(z_{n})=\varsigma_{i}(z_{n})\) for all \(n\) and some fixed \(i\in\mathcal{I}_{par}\), 3. \(\varsigma(z_{n})=\varsigma_{\infty}(z_{n})\) for all \(n\). In the case (i), we have \(z_{n}\in W_{N}\cap\widehat{K}\) by (37) and (38), so \(z\in\overline{W_{N}}\cap\widehat{K}\subset V\). If \(z\notin\mathcal{P}_{crit}(F)\), then \(\varsigma(z_{n})=\rho(z_{n})\to\rho(z)>0\), and if \(z\in\mathcal{P}_{crit}(F)\), then \(\varsigma(z_{n})=\rho(z_{n})\to\infty\) by (24) and (2). Both possibilities lead to a contradiction. In the case (ii), there holds \(\varsigma(z_{n})=\varsigma_{i}(z_{n})>c\) for some constant \(c>0\) by the definition of \(\varsigma_{i}\) and the fact that \(z_{n}\in(W_{N}\setminus\widehat{K})\cap P_{i}\), so \(z_{n}\) lies in a small neighbourhood of \(p_{i}\). Again, this is impossible. Finally, in the case (iii), \(z_{n}\) is in a small neighbourhood of infinity and \(\varsigma(z_{n})=\varsigma_{\infty}(z_{n})\to\varsigma_{\infty}(z)>0\). This makes a contradiction. Now we show expanding properties of the metric \(d\varsigma\). **Lemma 5.10**.: _The map \(F\) is locally expanding on_ \[W_{N+1}\setminus\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}P_{i}\setminus \overline{\mathbb{D}(0,R_{i}^{+})}\big{)}\] _with respect to \(d\varsigma\). Moreover, there exists \(Q>1\) such that \(|F^{\prime}|_{\varsigma}>Q\) on \(W_{N+1}\cap\widehat{K}\)._ Proof.: Note first that by the definition of \(\varsigma\), (37) and (38), we have \(\varsigma\leq\rho\) on \(W_{N}\cap\widehat{K}\setminus\mathcal{P}_{crit}(F)\) and \(\varsigma<\rho\) on \(W_{N}\setminus(\widehat{K}\cup\mathcal{P}_{crit}(F))\). In view of this and (36), it is straightforward to check that for \(z\in W_{N+1}\) there holds one of the three following (not necessarily disjoint) cases. 
* \(z\notin F^{-1}(\mathcal{P}_{crit}(F)),\ \ \varsigma(z)\leq\rho(z),\ \ F(z)\in W_{N}\cap \widehat{K},\ \varsigma(F(z))=\rho(F(z))\), * \(z\in\tilde{K},\ \ F(z)\in(W_{N}\setminus\tilde{K})\cap P_{i},\ \ \varsigma(F(z))=\begin{cases} \varsigma_{i}(F(z))&\text{for }i\in\mathcal{I}_{par}\\ \varsigma_{\infty}(F(z))&\text{for }i\in\mathcal{I}_{\infty}\end{cases},\) * \(z\notin\tilde{K},\ \ F(z)\in(W_{N}\setminus\tilde{K})\cap P_{i},\ \ \varsigma(F(z))=\begin{cases} \varsigma_{i}(F(z))&\text{for }i\in\mathcal{I}_{par}\\ \varsigma_{\infty}(F(z))&\text{for }i\in\mathcal{I}_{\infty}\end{cases}.\) We show the first part of the lemma, considering successively the cases (i)-(iii). Take \(z\in W_{N+1}\setminus\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}P_{i}\setminus \overline{\mathbb{D}(0,R_{i}^{+})}\big{)}\). In the case (i) we have \(|F^{\prime}(z)|_{\varsigma}\geq|F^{\prime}(z)|_{\rho}>1\) by Lemma 5.5. In the case (ii), \[|F^{\prime}(z)|_{\varsigma}=\begin{cases}|F^{\prime}(z)|\frac{\varsigma_{i}(F(z ))}{\rho(z)}=|F^{\prime}(z)|\frac{C\sigma_{p_{i},\alpha_{par}}(F(z))}{\rho(z) }&\text{for }i\in\mathcal{I}_{par}\\ |F^{\prime}(z)|\frac{\varsigma_{\infty}(F(z))}{\rho(z)}=|F^{\prime}(z)|\frac{ h_{i}(|z|)\sigma_{\alpha_{\infty}}(F(z))}{\rho(z)}&\text{for }i\in\mathcal{I}_{\infty}\end{cases}>2\] by Lemma 5.4 (where we can make \(r\) arbitrarily small by enlarging \(n_{0}\)), Lemma 5.6 and the fact \(h_{i}\geq 1\). In the case (iii), by (22) and since \(F(P_{i})\cap P_{i^{\prime}}=\emptyset\) for \(i,i^{\prime}\in\mathcal{I}\), \(i\neq i^{\prime}\) by the assumption (e), there holds \(z,F(z)\in P_{i}\) for some \(i\in\mathcal{I}\). If \(i\in\mathcal{I}_{par}\), then \(\varsigma(z)\leq\varsigma_{i}(z)\) by the definition of \(\varsigma\), so \(|F^{\prime}(z)|_{\varsigma}\geq|F^{\prime}(z)|_{\varsigma_{i}}>1\) by Lemma 5.8. Similarly, if \(i\in\mathcal{I}_{\infty}\), then \(\varsigma(z)\leq\varsigma_{\infty}(z)\) by the definition of \(\varsigma\), so \(|F^{\prime}(z)|_{\varsigma}\geq|F^{\prime}(z)|_{\varsigma_{\infty}}>1\) by Lemma 5.8. Now we prove the second part of the lemma. Again, we examine the cases (i)-(iii), using the previous considerations. In the case (i), to use Lemma 5.5, we show that \(W_{N+1}\cap\widehat{K}\cap F^{-1}(W_{N}\cap\widehat{K})\) is contained in a compact subset \(L\) of \(V^{\prime}\). Suppose it is not the case. Then there exists a sequence of points \(z_{n}\in W_{N+1}\) such that \(z_{n}\to z\in\overline{W_{N+1}}\cap\widehat{K}\cap\partial V^{\prime}\) and \(F(z_{n})\to w\in\overline{W_{N}}\cap\widehat{K}\subset V\). By the assumption (d), the map \(F\) extends holomorphically to a small disc \(D\) centered at \(z\), such that \(F(D)\subset V\). Then taking \(z_{n}\in D\), we can find a curve \(\gamma\colon[0,+\infty)\to V^{\prime}\cap D\) with \(\gamma(0)=z_{n}\), \(\lim_{t\to+\infty}\gamma(t)\to z^{\prime}\) and \(\lim_{t\to+\infty}F(\gamma(t))\to w^{\prime}\) for some \(z^{\prime}\in\partial V^{\prime}\cap D\) and \(w^{\prime}\in V\), which shows that \(w^{\prime}\) is an asymptotic value of \(F\) and contradicts the assumption (a). Hence, \(W_{N+1}\cap\widehat{K}\cap F^{-1}(W_{N}\cap\widehat{K})\) is contained in a compact set \(L\subset V^{\prime}\), so by Lemma 5.5, there exists \(Q_{1}>1\) such that \(|F^{\prime}|_{\rho}>Q_{1}\) on \(W_{N+1}\cap\widehat{K}\cap F^{-1}(W_{N}\cap\widehat{K})\), in particular \(|F^{\prime}(z)|_{\rho}>Q_{1}\) for \(z\in W_{N+1}\cap\widehat{K}\) fulfilling the condition (i). 
In the case (ii), we have already showed that for \(z\in W_{N+1}\cap\widehat{K}\) there holds \(|F(z)^{\prime}|_{\rho}>Q_{2}\) for \(Q_{2}=2\). In the case (iii) we have \(|F^{\prime}(z)|_{\rho}>Q_{3}\) for \(z\in W_{N+1}\cap\widehat{K}\) with some \(Q_{3}>1\) by Lemma 5.8, since \(\bigcup_{i\in\mathcal{I}_{par}}\big{(}\overline{W_{N+1}\cap\widehat{K}\setminus \widehat{K}}\cap P_{i}\big{)}\) is a compact subset of \(\bigcup_{i\in\mathcal{I}_{par}}\big{(}\overline{W_{1}\setminus\widehat{K}} \cap P_{i}\big{)}\) and \(\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}\overline{W_{N+1}\cap\widehat{K} \setminus\widehat{K}}\cap P_{i}\big{)}\) is a compact subset of \(\bigcup_{i\in\mathcal{I}_{\infty}}\big{(}\overline{W_{1}\setminus\widehat{K} }\cap P_{i}\cap\overline{\mathbb{D}(0,R_{i}^{+})}\big{)}\). This shows that the second assertion of the lemma holds with \(Q=\min(Q_{1},Q_{2},Q_{3})\). Now we can prove the following fact, which completes the proof of Proposition 5.2. **Lemma 5.11**.: _There exists a decreasing sequence \((\beta_{n})_{n=1}^{\infty}\) of numbers \(\beta_{n}\in(0,1)\), such that_ 1. \(\sum_{n=1}^{\infty}\beta_{n}<\infty\)_,_ 2. \(\frac{\beta_{n+1}}{\beta_{n}}\geq\frac{\beta_{n}}{\beta_{n-1}}\) _for every_ \(n>1\)_,_ 3. \(|(F^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{\beta_{n}}\) _for every_ \(z\in(W_{N}\setminus\widehat{K})\cap F^{-1}(W_{N}\setminus\widehat{K})\cap \ldots\cap F^{-(n-1)}(W_{N}\setminus\widehat{K})\cap F^{-n}(W_{N}\cap\widehat{ K})\)_,_ \(n\in\mathbb{N}\)_._ Proof.: For \(i\in\mathcal{I}_{par}\) let \((\beta_{i,n})_{n=1}^{\infty}\) be the sequence \((a_{m,n})_{n=1}^{\infty}\) from Proposition 3.6 for the attracting petal \(F(P_{i})\) of the map \(G_{i}\) at \(p_{i}\), for the compact set \(K_{i}\), \(m=m_{0}\) from the definition of \(\widehat{K}\) and \(\alpha=\alpha_{par}\). Then \(\beta_{i,n}\in(0,1)\), the sequence \((\beta_{n})_{n=1}^{\infty}\) is decreasing, \(\sum_{n=1}^{\infty}\beta_{i,n}<\infty\), \(\frac{\beta_{i,n+1}}{\beta_{i,n}}\geq\frac{\beta_{i,n}}{\beta_{i,n-1}}\) for every \(n>1\) and \[|(F^{n})^{\prime}(z)|_{\sigma_{p_{i}},\alpha_{par}}>\frac{1}{\beta_{i,n}} \tag{39}\] for \(z\in G_{i}^{m_{0}+n}(K_{i})\). Furthermore, note that by (22) and the compactness of \(\widehat{K}\), we have \[(W_{N}\cap\widehat{K}\setminus\tilde{K})\cap\bigcup_{i\in\mathcal{I}_{\infty }}P_{i}\subset\bigcup_{i\in\mathcal{I}_{\infty}}\bigcup_{n=n_{0}}^{m_{1}}G_{i }^{n}(K_{i})\] for some \(m_{1}\in\mathbb{N}\). For \(i\in\mathcal{I}_{\infty}\) and \(n_{0}\leq m\leq m_{1}\), let \((\beta_{m,n}^{(i)})_{n=1}^{\infty}\) be the sequence \((a_{m,n})_{n=1}^{\infty}\) from Theorem 4.7 for the attracting petal \(F(P_{i})\) of the map \(G_{i}\) at \(p_{i}\), for the compact set \(K_{i}\) and \(\alpha_{\infty}\). Then \(\beta_{m,n}^{(i)}\in(0,A_{\infty})\), \(\sum_{n=1}^{\infty}\beta_{m,n}^{(i)}<\infty\), \(\frac{\beta_{m,n+1}^{(i)}}{\beta_{m,n}^{(i)}}\geq\frac{\beta_{m,n}^{(i)}}{\beta _{m,n-1}^{(i)}}\) for every \(n>1\) and \[|(F^{n})^{\prime}(z)|_{\sigma_{\alpha_{\infty}}}>\frac{1}{\beta_{m,n}^{(i)}} \tag{40}\] for \(z\in G_{i}^{m+n}(K_{i})\). Finally, let \[\beta_{n}=\max\Big{(}\max_{i\in\mathcal{I}_{par}}\beta_{i,n},\max_{i\in \mathcal{I}_{\infty}}\max_{m\in\{n_{0},\ldots,m_{1}\}}\frac{\beta_{m,n}^{(i)} }{A_{\infty}}\Big{)}\] Obviously, we have \(0<\beta_{n}<1\) and \(\sum_{n=1}^{\infty}\beta_{n}<\infty\). This gives the assertion (a). 
The assertion (b) follows from the analogous properties for the sequences \((\beta_{i,n})_{n=1}^{\infty}\) and \((\beta_{m,n}^{(i)}/A_{\infty})_{n=1}^{\infty}\). To show the assertion (c), take \(z\in(W_{N}\setminus\widehat{K})\cap F^{-1}(W_{N}\setminus\widehat{K})\cap \cdots\cap F^{-(n-1)}(W_{N}\setminus\widehat{K})\cap F^{-n}(W_{N}\cap\widehat{ K})\), \(n\in\mathbb{N}\). Then by (22), (17) and Lemma 5.7, we have \(z\in G_{i}^{m_{0}+n}(K_{i})\) for some \(i\in\mathcal{I}_{par}\) or \(z\in G_{i}^{m+n}(K_{i})\) for some \(i\in\mathcal{I}_{\infty}\), \(m\in\{n_{0},\ldots,m_{1}\}\) and \(R^{-}\leq|F^{n}(z)|<|F^{n-1}(z)|\leq R_{i}^{+}\). In the first case, (37) and (39) imply \[|(F^{n})^{\prime}(z)|_{\varsigma}=|(F^{n})^{\prime}(z)|_{\varsigma_{i}}=|(F^{n} )^{\prime}(z)|_{C\sigma_{p_{i},\alpha_{par}}}=|(F^{n})^{\prime}(z)|_{\sigma_{p _{i},\alpha_{par}}}>\frac{1}{\beta_{i,n}}\geq\frac{1}{\beta_{n}},\] which gives the assertion (c). In the second case, by (38), (34) and (40), \[|(F^{n})^{\prime}(z)|_{\varsigma} =|(F^{n})^{\prime}(z)|_{\varsigma_{\infty}}=\frac{h_{i}(|F^{n}(z) |)}{h_{i}(|z|)}|(F^{n})^{\prime}(z)|_{\sigma_{\alpha_{\infty}}}\] \[\geq\frac{h_{i}(|F^{n}(z)|)}{h_{i}(|F^{n-1}(z)|)}|(F^{n})^{\prime }(z)|_{\sigma_{\alpha_{\infty}}}\geq A_{\infty}\,|(F^{n})^{\prime}(z)|_{\sigma _{\alpha_{\infty}}}\] \[>\frac{A_{\infty}}{\beta_{m,n}^{(i)}}\geq\frac{1}{\beta_{n}},\] providing the assertion (c). ## 6. Proof of Theorem B Before proceeding with the proof of Theorem B, we prove Proposition 1.3, which describes some properties of the maps and basins we are dealing with. We start by proving a useful lemma. **Lemma 6.1**.: _Under the hypotheses of Theorem A and the notation of Definition 1.1, the repelling petals \(P_{i}\), \(i\in\mathcal{I}_{\infty}\), of \(f\) at infinity can be assumed to satisfy the following:_ 1. \(\overline{U}\setminus\big{(}\bigcup_{i\in\mathcal{I}_{par}}(P_{i}\cup\{p_{i} \})\cup\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\big{)}\) _is compact, where_ \(\mathcal{I}_{par}\) _is a finite set such that_ \(P_{i}\) _for_ \(i\in\mathcal{I}_{par}\) _is a repelling petal of_ \(f\) _at a parabolic fixed point_ \(p_{i}\in\partial U\) _and_ \(\{p_{i}\}_{i\in\mathcal{I}_{par}}\) _contain all parabolic fixed points of_ \(f\) _in_ \(\partial U\)_,_ 2. \(f(P_{i})\) _are pairwise disjoint for_ \(i\in\mathcal{I}_{par}\cup\mathcal{I}_{\infty}\)_,_ 3. \(\bigcup_{i\in\mathcal{I}_{\infty}}f(P_{i})\) _is contained in an arbitrarily small neighbourhood of infinity._ **Remark 6.2**.: We do not assume \(p_{i}\neq p_{i^{\prime}}\) for \(i,i^{\prime}\in\mathcal{I}_{par}\), \(i\neq i^{\prime}\) (i.e. a parabolic fixed point of \(f\) in \(\partial U\) can correspond to several repelling petals \(P_{i}\), \(i\in\mathcal{I}_{par}\)). Proof of Lemma 6.1.: First, we show that \[\overline{U}\setminus\bigcup_{i\in\mathcal{I}_{\infty}}G_{i}(P_{i})\quad\text {is compact}, \tag{41}\] where \(G_{i}=(f|_{P_{i}})^{-1}\). Suppose otherwise. Then there exists a sequence \(z_{n}\in\overline{U}\setminus\bigcup_{i\in\mathcal{I}_{\infty}}G_{i}(P_{i})\) such that \(z_{n}\to\infty\). Since \(\overline{U}\setminus\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\) is compact by the assumption (c) of Theorem A, we have \(z_{n}\in\overline{U}\cap\big{(}\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\big{)} \setminus\bigcup_{i\in\mathcal{I}_{\infty}}G_{i}(P_{i})\) for sufficiently large \(n\). Consequently, there exists \(i\in\mathcal{I}_{\infty}\) such that \(z_{n}\in\overline{U}\cap P_{i}\setminus G_{i}(P_{i})\) for infinitely many \(n\). 
By Definition 4.1 and the invariance of \(U\), for such \(n\) we have \(f(z_{n})\in\overline{U}\cap f(P_{i})\setminus P_{i}\) and hence, by the assumption (c) of Theorem A, there holds \(f(z_{n})\in\overline{U}\setminus\bigcup_{i^{\prime}\in\mathcal{I}_{\infty}}P_{i^{\prime}}\) for infinitely many \(n\). As remarked above, \(\overline{U}\setminus\bigcup_{i^{\prime}\in\mathcal{I}_{\infty}}P_{i^{\prime}}\) is compact, so \(f(z_{n_{k}})\to w\) for some subsequence \(n_{k}\) and \(w\in\overline{U}\cap\overline{f(P_{i})}\). Then \(z_{n_{k}}=G_{i}(f(z_{n_{k}}))\to G_{i}(w)\in\overline{P_{i}}\), which is a contradiction. This ends the proof of (41). By (41), in Theorem A we can replace \(P_{i}\) by \(G_{i}(P_{i})\) for \(i\in\mathcal{I}_{\infty}\) and, inductively, by \(G_{i}^{n}(P_{i})\) for any given \(n\in\mathbb{N}\). By Definition 4.1, this shows that in Theorem A we can assume that \(f(P_{i})\), \(i\in\mathcal{I}_{\infty}\), are pairwise disjoint and contained in an arbitrarily small neighbourhood of infinity (which proves the assertion (c)), and \(\bigcup_{i\in\mathcal{I}_{\infty}}\overline{f(P_{i})}\) is disjoint from the set of parabolic fixed points in \(\partial U\). Let \(p\in\partial U\) be a parabolic fixed point of \(f\). By Proposition 3.3, a small punctured neighbourhood of \(p\) is covered by a finite union of attracting and repelling petals of \(f\) at \(p\), where the attracting petals are contained in the immediate basin of attraction of \(p\). It follows that \(\overline{U}\setminus\{p\}\) in a small neighbourhood of \(p\) is contained in a finite union of repelling petals \(P_{i}\), \(i\in\mathcal{I}(p)\), of \(f\) at the point \(p_{i}=p\). This together with the assumption (c) of Theorem A shows the assertion (a), where \(\mathcal{I}_{par}\) is the union of the sets \(\mathcal{I}(p)\) over all parabolic fixed points \(p\) of \(f\) in \(\partial U\). To check the assertion (b), it is enough to notice that we have already shown that \(f(P_{i})\), \(i\in\mathcal{I}_{\infty}\), are pairwise disjoint and \(\bigcup_{i\in\mathcal{I}_{\infty}}\overline{f(P_{i})}\) is disjoint from \(\{p_{i}\}_{i\in\mathcal{I}_{par}}\), so by Proposition 3.3, we can choose \(P_{i}\), \(i\in\mathcal{I}_{par}\), such that \(f(P_{i})\) are pairwise disjoint for \(i\in\mathcal{I}_{par}\) and disjoint from \(\bigcup_{i\in\mathcal{I}_{\infty}}f(P_{i})\). Proof of Proposition 1.3.: Assume the hypotheses of Theorem A together with the additional properties of the repelling petals \(P_{i}\) which hold by Lemma 6.1. Under the notation of Lemma 6.1, similarly as in Section 5, we write \[\mathcal{I}=\mathcal{I}_{par}\cup\mathcal{I}_{\infty}\] and \[p_{i}=\infty\quad\text{for}\quad i\in\mathcal{I}_{\infty}.\] By the definition of repelling petals, for \(i\in\mathcal{I}\) we have \(\bigcap_{n=0}^{\infty}G_{i}^{n}(P_{i})=\emptyset\) for \(G_{i}=(f|_{P_{i}})^{-1}\), which implies \[P_{i}=\bigcup_{n=0}^{\infty}(G_{i}^{n}(P_{i})\setminus G_{i}^{n+1}(P_{i})). \tag{42}\] Moreover, \[\text{if}\quad z_{n}\in P_{i},\ z_{n}\xrightarrow[n\to\infty]{}p_{i},\quad\text{then}\quad f(z_{n})\xrightarrow[n\to\infty]{}p_{i}. \tag{43}\] To check (43), observe first that it is obvious for \(i\in\mathcal{I}_{par}\), so we can assume \(p_{i}=\infty\). Suppose \(z_{n}\in P_{i}\), \(z_{n}\to\infty\), \(f(z_{n})\not\to\infty\). Passing to a subsequence, we can assume \(f(z_{n})\to w\in\overline{f(P_{i})}\). Then \(z_{n}=G_{i}(f(z_{n}))\to G_{i}(w)\in\overline{P_{i}}\subset\mathbb{C}\), which is a contradiction.
Furthermore, by Lemma 6.1 there exists a compact set \(K\subset\overline{U}\setminus\{p_{i}\}_{i\in\mathcal{I}_{par}}\), such that \[\overline{U}\setminus K\subset\bigcup_{i\in\mathcal{I}_{par}}(P_{i}\cup\{p_{i}\})\cup\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}.\] Now we proceed with the proof of the proposition. First, note that since the map \(f\) is univalent in each repelling petal \(P_{i}\), \(i\in\mathcal{I}_{\infty}\) and \(\overline{U}\setminus\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\) is compact, either \(U\) is bounded or every point in \(U\cap\bigcup_{i\in\mathcal{I}_{\infty}}P_{i}\) has a finite number of preimages in \(U\). This proves the assertion (a). Consider the assertion (b). Since the degree of \(f\) on \(U\) is finite, there are only a finite number of critical points of \(f\) in \(U\). Moreover, as \(f\) is univalent inside the repelling petals \(P_{i}\), \(i\in\mathcal{I}\), all critical points of \(f\) in \(\partial U\) are contained in \(K\). Since the set of critical points cannot have an accumulation point in \(K\), there are only a finite number of critical points of \(f\) in \(\partial U\). To prove the assertion (c), suppose that \(z\in\partial U\) is a post-critical point of \(f\) with infinite orbit. Obviously, this implies \(z\notin\bigcup_{n=0}^{\infty}f^{-n}(\{p_{i}\}_{i\in\mathcal{I}})\). Then, by the assumption (a) of Theorem A, the set \(K\) cannot contain an infinite number of elements of this orbit. Consequently, the orbit contains a point \(w\in P_{i}\) for some \(i\in\mathcal{I}\) such that the orbit of \(w\) is contained in \(\bigcup_{i^{\prime}\in\mathcal{I}}P_{i^{\prime}}\). By (42), \(w\in G_{i}^{n}(P_{i})\setminus G_{i}^{n+1}(P_{i})\) for some \(n\geq 0\), so \(f^{n}(w)\in P_{i}\) and \(f^{n+1}(w)\notin P_{i}\), hence \(f^{n+1}(w)\in P_{i^{\prime}}\) for some \(i^{\prime}\in\mathcal{I}\), \(i^{\prime}\neq i\). However, this contradicts the property (b) in Lemma 6.1. To prove the assertion (d), consider an asymptotic curve \(\gamma\colon[0,+\infty)\to\mathbb{C}\) of an asymptotic value \(v\in\overline{U}\). If \(\gamma\) is not eventually contained in \(\mathbb{C}\setminus\overline{U}\), then there is a sequence of points \(z_{n}\in\gamma\cap\overline{U}\), such that \(z_{n}\to\infty\), \(f(z_{n})\to v\). Passing to a subsequence, we can assume \(z_{n}\in P_{i}\) for some \(i\in\mathcal{I}_{\infty}\). However, this contradicts (43). Finally, we show the assertion (e). Let us consider \(w\in\overline{U}\cup\{\infty\}\) and analyze several cases. If \(w\in U\), then \(w\) has \(d\) preimages in \(U\) counting multiplicities and there is nothing to prove. Hence, we can assume \(w\in\partial U\cup\{\infty\}\). Suppose \(w\in\partial U\) and consider a sequence \(w_{n}\to w\) with \(w_{n}\in U\). Let \(z_{n}\) be any sequence of preimages in \(U\) such that \(f(z_{n})=w_{n}\). Passing to a subsequence, we may assume \(z_{n}\to z\in\overline{U}\cup\{\infty\}\). If \(z=\infty\), then (again after passing to a subsequence), we have \(z_{n}\in P_{i}\) for some \(i\in\mathcal{I}_{\infty}\), which contradicts (43). Hence, \(z\in\overline{U}\). Observe that by continuity, \(f(z)=w\) and \(z\in\partial U\) by the maximum principle. It remains to consider the case \(w=\infty\). Take a sequence \(w_{n}\to w\) with \(w_{n}\in U\). Since \(U\) is a simply connected invariant attracting basin of finite degree, it contains a critical point and \(d=\deg f|_{U}\geq 2\). 
Hence, there exist two sequences \(z_{n}^{(1)},z_{n}^{(2)}\in U\), such that \(f(z_{n}^{(1)})=f(z_{n}^{(2)})=w_{n}\), \(z_{n}^{(1)}\neq z_{n}^{(2)}\). Passing to subsequences, we can assume \(z_{n}^{(1)}\to z^{(1)}\), \(z_{n}^{(2)}\to z^{(2)}\) for some \(z^{(1)},z^{(2)}\in\overline{U}\cup\{\infty\}\). If \(z^{(1)}=z^{(2)}=\infty\), then (again after passing to a subsequence), we have \(w_{n}\in P_{i}\), \(z_{n}^{(1)}\in P_{i_{1}}\), \(z_{n}^{(2)}\in P_{i_{2}}\) for some \(i,i_{1},i_{2}\in\mathcal{I}\). By the second property in Lemma 6.1, we have \(i=i_{1}=i_{2}\), which contradicts the injectivity of \(f\) in repelling petals. Hence, one of the points \(z^{(1)},z^{(2)}\) is in \(\partial U\). By continuity, this point is mapped by \(f\) to \(w=\infty\). Proof of Theorem B.: We will show that one may define appropriate sets in the dynamical plane of \(f\) satisfying the hypotheses of Theorem 5.1, which will provide a metric with suitable expanding properties defined near the boundary of \(U\). Let \(U\) be an invariant simply connected attracting basin of a meromorphic map \(f\colon\mathbb{C}\to\widehat{\mathbb{C}}\) satisfying the assumption of Theorem A. Let \(\varphi\colon\mathbb{D}\to U\) be a Riemann map, such that \(0\in\mathbb{D}\) is the fixed point of the map \(g=\varphi^{-1}\circ f\circ\varphi\colon\mathbb{D}\to\mathbb{D}\). Since the degree of \(f\) in \(U\) is finite by Proposition 1.3, the map \(g\) is a finite Blaschke product. Let \[E=\varphi(\mathbb{D}(0,r))\] for \(r\in(0,1)\) close to \(1\). Obviously, \(E\) is a Jordan domain and \(\overline{E}\subset U\). By the Schwarz lemma, we have \[\overline{f(E)}\subset E. \tag{44}\] Notice that \(E\) is an absorbing domain for \(f\) in \(U\) (i.e. every compact set in \(U\) is mapped by an iterate of \(f\) into \(E\)), since \(\mathbb{D}(0,r)\) is absorbing for \(g\). Moreover, by the assumption (a) of Theorem A and since \(f\) has finite degree on \(U\), we can choose \(r\) so that \[\big{(}\big{(}\overline{\mathcal{P}_{asym}(f)}\cup\operatorname{Acc}(\mathcal{P}_{crit}(f))\big{)}\cap U\big{)}\cup\overline{\operatorname{Crit}(f|_{U})\cup\mathcal{P}_{crit}(f|_{U})}\subset E \tag{45}\] and \[L\subset f(E) \tag{46}\] for the compact set \(L\subset U\) from the assumption (b) of Theorem A. As in the proof of Proposition 1.3, we assume the additional properties of the repelling petals \(P_{i}\) described in Lemma 6.1. In particular, the assertion (c) of this lemma shows that we can assume \[\overline{E}\cap\bigcup_{i\in\mathcal{I}_{\infty}}f(P_{i})=\emptyset. \tag{47}\] We now want to define hyperbolic domains \(V^{\prime}\subsetneq V\subset\mathbb{C}\), an open set \(W\subset V\) and a holomorphic map \(F\colon V^{\prime}\to V\) satisfying the hypotheses of Theorem 5.1. In particular, \(F\) must map \(V^{\prime}\) onto \(V\), and no point \(z\in W\) can have its entire orbit contained in \(W\). To do so, set \[W=U\setminus\overline{E}.\] See Figure 6. Since \(E\) is an absorbing domain, every orbit in \(W\subset U\) eventually enters \(E\) and remains there, leaving \(W\). Now define \(V\) to be the connected component of \[\mathbb{C}\setminus\left(\overline{f(E)}\cup\overline{\mathcal{P}_{asym}(f)}\cup\operatorname{Acc}(\mathcal{P}_{crit}(f))\cup\left(\mathcal{P}_{crit}(f)\setminus\bigcup_{n=0}^{\infty}f^{-n}(\overline{U})\right)\right)\] containing \(f(W)=U\setminus\overline{f(E)}\), and let \(V^{\prime}\) be the connected component of \(f^{-1}(V)\) containing \(W\). 
Note that neither \(V\) nor \(V^{\prime}\) contains parabolic fixed points of \(f\) in \(\partial U\), since they belong to \(\overline{\mathcal{P}_{asym}(f)}\cup\operatorname{Acc}(\mathcal{P}_{crit}(f))\). On the other hand, \[\overline{W}\subset V\cup\{p_{i}\}_{i\in\mathcal{I}_{par}} \tag{48}\] by (44), (45) and the assumption (a) of Theorem A. Let \[F=f|_{V^{\prime}}.\] Note that by definition, \(V^{\prime},V,W\) are domains in \(\mathbb{C}\). Note also that \(V\) is hyperbolic since it is disjoint from the domain \(f(E)\), which in turn implies that \(V^{\prime}\) is disjoint from \(E\), so \(V^{\prime}\) is hyperbolic. It is straightforward to check \[W\subset V\cap V^{\prime}. \tag{49}\] Figure 6. The sets \(E,W\). Hence, as \(V^{\prime}\subset f^{-1}(V)\) and \(W\) is connected, \(F\) maps \(V^{\prime}\) into \(V\). We have \(V^{\prime}\neq V\) since \(V\setminus V^{\prime}\supset\overline{E}\setminus f(E)\). Now we check \(V^{\prime}\subset V\). By (49) and the connectedness of \(W\), it is sufficient to show \[V^{\prime}\subset\mathbb{C}\setminus\Big{(}\overline{f(E)}\cup\overline{ \mathcal{P}_{asym}(f)}\cup\operatorname{Acc}(\mathcal{P}_{crit}(f))\cup\Big{(} \mathcal{P}_{crit}(f)\setminus\bigcup_{n=0}^{\infty}f^{-n}(\overline{U}) \Big{)}\Big{)}.\] This follows since the sets \(\overline{f(E)}\), \(\overline{\mathcal{P}_{asym}(f)}\), \(\operatorname{Acc}(\mathcal{P}_{crit}(f))\) and \(\mathcal{P}_{crit}(f)\setminus\bigcup_{n=0}^{\infty}f^{-n}(\overline{U})\) are forward-invariant. To see that \(F\colon V^{\prime}\to V\) is onto (i.e. \(f(V^{\prime})=V\)), observe that every point in \(V\setminus f(V^{\prime})\) is a locally omitted value and hence an asymptotic value of \(f\) (the argument for this fact, using Gross' theorem, is described e.g. in [11, proof of Theorem 2]). This shows \(V\setminus f(V^{\prime})=\emptyset\) since \(V\) contains no asymptotic values of \(f\) by definition. The condition \(\bigcap_{n=0}^{\infty}F^{-n}(W)=\emptyset\) follows from the fact that \(E\) is an absorbing domain for \(f|_{U}\). It remains to check the conditions (a)-(e) in Theorem 5.1, which follow in a fairly direct way from the definitions of \(V,V^{\prime}\) and \(W\). Indeed, \(V\) is defined not to contain asymptotic values of \(f\) and \(V^{\prime}\) is a component of \(f^{-1}(V)\), which gives (a). To check (b), take a point \(z\in\mathcal{P}_{crit}(F)\) and suppose there exist points \(w_{k}\in F^{-n_{k}}(z)\) for some \(n_{k}>0\), \(k\in\mathbb{N}\), such that \(\deg_{w_{k}}F^{n_{k}}\to+\infty\). Note that by the definition of \(V\), we have \(f^{n_{0}}(z)\in\overline{U}\) for some \(n_{0}\geq 0\). Moreover, as \(V\cap L=\emptyset\) by (46), we have \(f^{n_{0}}(z)\notin L\), where \(L\subset U\) is the compact set from the assumption (b) of Theorem A. Then \(w_{k}\in f^{-(n_{k}+n_{0})}(f^{n_{0}}(z))\) and \(\deg_{w_{k}}f^{n_{k}+n_{0}}\geq\deg_{w_{k}}F^{n_{k}}\to+\infty\), which contradicts the assumption (b) of Theorem A, as \(f^{n_{0}}(z)\in\mathcal{P}_{crit}\cap\overline{U}\setminus L\). This shows (b). The condition (c) follows from the definition of \(V\), while (d) is trivial since \(f\) is meromorphic on \(\mathbb{C}\). The condition (e) follows from Lemma 6.1, (48), (47) and the invariance of \(U\). 
Having checked that \(F\), \(V\), \(V^{\prime}\) and \(W\) satisfy the hypotheses of Theorem 5.1, we can use this theorem and (44) to find \(N\in\mathbb{N}\) such that for every compact set \(K_{0}\subset\overline{U}\setminus(E\cup\{p_{i}\}_{i\in\mathcal{I}_{par}})\) there exist a conformal metric \(d\varsigma=\varsigma|dz|\) on \(U\setminus\overline{f^{-N}(E)}\) and a decreasing sequence \((b_{n})_{n=1}^{\infty}\) of numbers \(b_{n}\in(0,1)\) with \(\sum_{n=1}^{\infty}b_{n}<\infty\), satisfying \[|(f^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{b_{n}} \tag{50}\] for every \(z\in(U\setminus f^{-n}(\overline{f^{-N}(E)}))\cap f^{-n}(K_{0})\), \(n\in\mathbb{N}\). Now we conclude the proof of Theorem B. Let \(K\) be a compact set in \(U\). Set \[K_{0}=K\setminus E\] and \[A=f^{-N}(E)\cap U.\] Note that by (45), the set \(A\) is a Jordan domain, such that \(\overline{A}\subset U\) and \(U\setminus\overline{A}=U\setminus\overline{f^{-N}(E)}\). Moreover, \(K_{0}\) is a compact subset of \(\overline{U}\setminus(E\cup\{p_{i}\}_{i\in\mathcal{I}_{par}})\) and \[(U\setminus f^{-n}(\overline{f^{-N}(E)}))\cap f^{-n}(K_{0})=(U\setminus f^{-n }(\overline{A}))\cap f^{-n}(K)=f^{-n}(K\setminus\overline{A}),\] so by (50), there exist a conformal metric \(d\varsigma\) on \(U\setminus\overline{A}\) and a decreasing sequence \((b_{n})_{n=1}^{\infty}\) of numbers \(b_{n}\in(0,1)\) with \(\sum_{n=1}^{\infty}b_{n}<\infty\), such that \[|(f^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{b_{n}}\] for \(z\in U\cap f^{-n}(K\setminus\overline{A})\), \(n\in\mathbb{N}\). This ends the proof of Theorem B. ## 7. Local connectivity of boundaries of Fatou components: proof of Theorem A To prove the local connectivity of the boundary of a simply connected invariant attracting basin \(U\) satisfying the conditions of Theorem A, we will construct a sequence of Jordan curves \(\{\gamma_{n}\}_{n=0}^{\infty}\) which approximate \(\partial U\cup\{\infty\}\) and satisfying the uniform Cauchy condition with respect to the spherical metric, thus showing that its limit is also a curve in \(\widehat{\mathbb{C}}\) and hence is locally connected. For simplicity, we use the symbol \(\gamma\) indistinctly for a curve \(\gamma\colon[a,b]\to\widehat{\mathbb{C}}\) and for its image in \(\widehat{\mathbb{C}}\). ### Equipotential curves and ray germs Consider a simply connected domain \(A\subset U\) satisfying the properties listed in Theorem B. The goal of this subsection is to construct an increasing family of Jordan domains \(A_{n}\subset U\setminus\overline{A}\), \(n=0,1,\ldots\), exhausting \(U\) and such that \(f\) maps \(\overline{A_{n+1}}\setminus A_{n}\) onto \(\overline{A_{n}}\setminus A_{n-1}\) as a degree \(d\) covering, where \(d=\deg f|_{U}\). The Jordan curves \(\gamma_{n}\) mentioned above will be defined as the boundaries of these domains. **Proposition 7.1**.: _Let \(f\colon\mathbb{C}\to\widehat{\mathbb{C}}\) be a transcendental meromorphic function with a simply connected immediate basin of attraction \(U\) of an attracting fixed point \(\zeta\), such that \(d=\deg f|_{U}<\infty\), and let \(A\subset U\) be a domain such that \(\overline{A}\subset U\). 
Then there exists a family of Jordan domains \(\{A_{n}\}_{n=0}^{\infty}\) with smooth boundaries \(\gamma_{n}\colon[0,1]\to U\), such that_ * \(A_{0}\) _contains_ \(\zeta\) _and all the critical points of_ \(f|_{U}\)_,_ * \(\overline{A}\subset A_{0}\)_,_ \(\overline{A_{n}}\subset A_{n+1}\) _for_ \(n\geq 0\) _and_ \(\bigcup_{n=0}^{\infty}A_{n}=U\)_,_ * \(f\) _maps_ \(\overline{A_{n+1}}\) _onto_ \(\overline{A_{n}}\) _as a proper map of degree_ \(d\) _such that_ \(\overline{A_{n+1}}\setminus A_{n}\) _is mapped onto_ \(\overline{A_{n}}\setminus A_{n-1}\) _for_ \(n\geq 1\) _as an_ \((\)_unbranched_\()\) _degree_ \(d\) _covering,_ * \(f(\gamma_{n+1}(\theta))=\gamma_{n}(d\theta\bmod 1)\) _for_ \(n\geq 0\)_,_ \(\theta\in[0,1]\)_._ _Moreover, for every \(\theta\in[0,1]\) there exists a smooth arc \(\alpha_{\theta}\colon[0,1]\to\overline{A_{1}}\setminus A_{0}\), joining \(\gamma_{0}(\theta)\) and \(\gamma_{1}(\theta)\), such that_ \[\alpha_{\theta}((0,1))\subset A_{1}\setminus\overline{A_{0}}\quad\text{ and }\quad\operatorname{length}\alpha_{\theta}\leq M,\] _where \(M\) is a constant independent of \(\theta\) and \(\operatorname{length}\) denotes the Euclidean length._ Proof.: Similarly as in the proof of Theorem B, let \(\varphi\colon\mathbb{D}\to U\) be a Riemann map, such that \(\varphi^{-1}(\zeta)=0\) is the fixed point of the degree \(d\) Blaschke product \(g=\varphi^{-1}\circ f\circ\varphi\colon\mathbb{D}\to\mathbb{D}\). Note that \(d\geq 2\) since \(U\) must contain a critical point of \(f\). We define the domain \(A_{0}\) as \[A_{0}=\varphi(\mathbb{D}(0,r_{0}))\] for \(r_{0}\in(0,1)\) so close to \(1\), that \[\overline{\operatorname{Crit}(f|_{U})\cup\mathcal{P}_{crit}(f|_{U})}\cup \overline{A}\subset A_{0}.\] Then \(A_{0}\) is a Jordan domain with a smooth boundary \(\gamma_{0}\), such that \(\gamma_{0}\subset U\). Let \[A_{n}=f^{-1}(A_{n-1})\cap U,\qquad n=1,2,\ldots.\] By the Schwarz lemma and the fact that \(A_{0}\) contains all the critical points and values of \(f\) in \(U\), the sets \(A_{n}\) are Jordan domains with smooth boundaries \(\gamma_{n}\subset U\), such that \[A\Subset A_{0}\Subset A_{1}\Subset\cdots\Subset A_{n}\Subset A_{n+1}\Subset\cdots\] and \(U=\bigcup_{n=0}^{\infty}A_{n}\). Moreover, \(f\) maps \(\overline{A_{n+1}}\) onto \(\overline{A_{n}}\) for \(n\geq 0\) as a proper map of degree \(d\), and \(\overline{A_{n+1}}\setminus A_{n}\) is mapped onto \(\overline{A_{n}}\setminus A_{n-1}\) for \(n\geq 1\) as a covering of degree \(d\). See Figure 7. Let us choose a smooth parametrization of \(\gamma_{0}=\partial A_{0}\) and denote it by \(\gamma_{0}\colon[0,1]\to\partial A_{0}\). For every \(n>0\), parametrize the boundary of \(A_{n}\) by \(\gamma_{n}\colon[0,1]\to\partial A_{n}\) in such a way that \[f(\gamma_{n}(\theta))=\gamma_{n-1}(d\theta\bmod 1).\] This parametrization is not unique but once we choose \(\gamma_{n}(0)\) to be one of the \(d\) preimages of \(\gamma_{n-1}(0)\), then there is only one continuous choice for \(\gamma_{n}(\theta)\), \(\theta\in[0,1]\) satisfying the relation above. We now show the existence of the transversal curves \(\alpha_{\theta}\). Set \(\widetilde{\gamma}_{0}=\varphi^{-1}(\gamma_{0})=\{z:|z|=r\}\). By choosing \(r_{0}\) sufficiently close to \(1\), the Jordan curve \(\widetilde{\gamma}_{1}:=g^{-1}(\widetilde{\gamma}_{0})\) can be written in polar coordinates as \((r(\theta),\theta)\) for a smooth function \(r(\theta)\), \(\theta\in[0,1]\) (see e.g. [20, p. 208]). 
In particular, this implies that any two points in the closed annulus \(R\subset\mathbb{D}\) bounded by \(\widetilde{\gamma}_{1}\) and \(\widetilde{\gamma}_{0}\) can be joined by a curve of length smaller than \(4\pi\). By applying the Riemann map \(\varphi\), whose derivative is bounded in modulus on the compact set \(R\), we deduce that any two points in the annulus \(\overline{A_{1}}\setminus A_{0}=\varphi(R)\subset U\) can be joined by a curve of length \(M\) for some constant \(M>0\). In particular, for every \(\theta\in[0,1]\) there exists an arc \(\alpha_{\theta}\) of length bounded by \(M\), joining \(\gamma_{0}(\theta)\) and \(\gamma_{1}(\theta)\).

Figure 7. Sketch of the construction of the sets \(A_{n}\), the family of Jordan curves \(\{\gamma_{n}\}_{n}\) and the transversal ray germs \(\alpha_{\theta}\).

**Remark**.: For given angle \(\theta\), the sequence \(\{\gamma_{n}(\theta)\}_{n}\) accumulates at the boundary of \(U\) in \(\widehat{\mathbb{C}}\). In general, the curves \(\gamma_{n}\) may have no limit and accumulate at a non-locally connected boundary. We will show that under the conditions of Theorem A this cannot occur.

### Local connectivity of the boundary of \(U\)

Let \(A\subset U\) be a simply connected domain satisfying the properties listed in Theorem B. Our goal is to show that under the assumptions of Theorem A, the curves \(\gamma_{n}\) constructed in Proposition 7.1 for the domain \(A\), converge uniformly on \([0,1]\) with respect to the spherical metric, which will imply that the boundary of \(U\) in \(\widehat{\mathbb{C}}\) is locally connected. By Theorem B for \(K=\overline{A_{1}}\setminus A_{0}\), there exists a conformal metric \(d\varsigma=\varsigma|dz|\) on \(U\setminus\overline{A}\) and numbers \(b_{n}\in(0,1)\), \(n\in\mathbb{N}\), such that \(\sum_{n=1}^{\infty}b_{n}<\infty\) and \[|(f^{n})^{\prime}(z)|_{\varsigma}>\frac{1}{b_{n}}\] for every \(z\in f^{-n}(\overline{A_{1}}\setminus A_{0})\cap U\), i.e. for \(z\in\overline{A_{n+1}}\setminus A_{n}\). Proposition 7.1 shows that for every \(\theta\in[0,1]\), the point \(\gamma_{0}(\theta)\) can be joined to \(\gamma_{1}(\theta)\) by a \(C^{1}\)-arc \(\alpha_{\theta}\subset\overline{A_{1}}\setminus A_{0}\) of Euclidean length bounded by a constant \(M\) independent of \(\theta\). Since \(\overline{A_{1}}\setminus A_{0}\) is a compact subset of \(U\setminus\overline{A}\), the density \(\varsigma\) of the metric \(d\varsigma\) is bounded on this set, so the lengths of \(\alpha_{\theta}\) with respect to the metric \(d\varsigma\) are also uniformly bounded, that is \[\operatorname{length}_{\varsigma}\alpha_{\theta}\leq M^{\prime}<\infty,\] where \(M^{\prime}>0\) is a constant independent of \(\theta\). We will show that the sequence \(\gamma_{n}\) satisfies the uniform Cauchy condition with respect to the metric \(d\varsigma\). By Proposition 7.1, we can define inductively families of arcs \(\alpha_{\theta}^{(n)}\), \(n\geq 0\), \(\theta\in[0,1]\) by \[\alpha_{\theta}^{(0)}=\alpha_{\theta},\] \[\alpha_{\theta}^{(n+1)}=f_{n,\theta}^{-1}\big{(}\alpha_{d\theta\bmod 1}^{(n)}\big{)},\] where \(f_{n,\theta}^{-1}\) is the branch of \(f^{-1}\) mapping \(\gamma_{n}(d\theta\bmod 1)\) to \(\gamma_{n+1}(\theta)\). Then \(\alpha_{\theta}^{(n)}\) is a \(C^{1}\)-arc joining \(\gamma_{n}(\theta)\) to \(\gamma_{n+1}(\theta)\) within \(\overline{A_{n+1}}\setminus A_{n}\) and \(f^{n}\) maps \(\alpha_{\theta}^{(n)}\) injectively onto \(\alpha_{d^{n}\theta\bmod 1}\). 
Consequently, \[\operatorname{dist}_{\varsigma}(\gamma_{n}(\theta),\gamma_{n+1}(\theta))\leq \int_{\alpha_{\theta}^{(n)}}\varsigma(z)|dz|\leq\frac{\operatorname{length}_{ \varsigma}\alpha_{d^{n}\theta\bmod 1}}{\inf_{\alpha_{\theta}^{(n)}}|(f^{n})^{ \prime}|_{\varsigma}}<M^{\prime}b_{n}.\] It follows that for every \(\theta\in[0,1]\) and \(m>n>0\) we have \[\operatorname{dist}_{\varsigma}(\gamma_{n}(\theta),\gamma_{m}(\theta))\leq \operatorname{dist}_{\varsigma}(\gamma_{n+1}(\theta),\gamma_{n}(\theta))+\cdots +\operatorname{dist}_{\varsigma}(\gamma_{m}(\theta),\gamma_{m-1}(\theta))\leq M ^{\prime}\sum_{k=n}^{m}b_{k}.\] Since \(\sum_{k=1}^{\infty}b_{k}\) is a convergent series, the sequence \(\gamma_{n}\) satisfies the uniform Cauchy condition with respect to the metric \(d\varsigma\). We show that this implies that \(\gamma_{n}\) satisfies also the uniform Cauchy condition with respect to the spherical metric. Suppose this does not hold. Then there exists \(\varepsilon>0\) and sequences \(n_{k},m_{k}\to\infty\), \(\theta_{k}\in[0,1]\), such that \[\operatorname{dist}_{sph}(\gamma_{n_{k}}(\theta_{k}),\gamma_{m_{k}}(\theta_{k }))\geq\varepsilon.\] Passing to a subsequence, we can assume \(\gamma_{n_{k}}(\theta_{k})\to z\), \(\gamma_{m_{k}}(\theta_{k})\to w\) for some distinct points \(z,w\in\widehat{\mathbb{C}}\), where at least one of them (say \(z\)) is not equal to \(\infty\). By Lemma 5.9, there exist \(\delta,c>0\), such that \(w\notin\mathbb{D}(z,\delta)\) and \(\varsigma>c\) on \(W_{N}\cap\mathbb{D}(z,\delta)\). Then for sufficiently large \(k\) we have \(\gamma_{n_{k}}(\theta_{k})\in\mathbb{D}(z,\frac{\delta}{3})\) and \(\gamma_{m_{k}}(\theta_{k})\notin\mathbb{D}(z,\frac{2\delta}{3})\), so \[\operatorname{dist}_{\varsigma}(\gamma_{n_{k}}(\theta_{k}),\gamma_{m_{k}}( \theta_{k}))\geq\frac{c\delta}{3},\] which contradicts the uniform Cauchy condition with respect to the metric \(d\varsigma\). Therefore, the sequence \(\gamma_{n}\) satisfies the uniform Cauchy condition with respect to the spherical metric, which implies that it has a limit \(\gamma\), which is equal to the boundary of \(U\) in \(\widehat{\mathbb{C}}\), providing its local connectivity. ## 8. Local connectivity of Julia sets: Proof of Theorem C Throughout this section, we consider the transcendental meromorphic map \[f(z)=z-\tan z,\] which is the Newton map of the entire transcendental function \(F(z)=\sin z\). We will prove that the Julia set \(J(f)\) is locally connected, establishing Theorem C. See Figure 2. Throughout the section, we will understand the word 'boundary' as the boundary in \(\widehat{\mathbb{C}}\). A useful tool to prove local connectivity of a compact connected set in \(\widehat{\mathbb{C}}\) (like the Julia set of a rational or transcendental map) is Whyburn's Theorem, which reads as follows. **Theorem 8.1** ([10, Theorem 4.4, p. 113]).: _A compact connected set \(J\subset\widehat{\mathbb{C}}\) is locally connected if and only if it satisfies the following properties._ 1. _The boundary of every connected component of_ \(\widehat{\mathbb{C}}\setminus J\) _is locally connected._ 2. _For every_ \(\varepsilon>0\)_, only a finite number of connected components of_ \(\widehat{\mathbb{C}}\setminus J\) _have spherical diameter greater than_ \(\varepsilon\)_._ To check the conditions of Whyburn's Theorem for \(J=J(f)\), we first recall some properties of the map \(f\). By [1], \(J\) is connected as the Julia set of Newton's method for a transcendental entire map. 
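Before listing these properties, we record for the reader's convenience the elementary computations showing that \(f\) is indeed the Newton map of \(\sin z\) and identifying its critical points and its behaviour near infinity. We have \[N_{\sin}(z)=z-\frac{\sin z}{\cos z}=z-\tan z=f(z),\qquad f^{\prime}(z)=1-\frac{1}{\cos^{2}z}=-\tan^{2}z,\] so every zero \(k\pi\) of \(\sin z\), \(k\in\mathbb{Z}\), is a superattracting fixed point of \(f\); moreover, \(f(z)-k\pi=-\frac{1}{3}(z-k\pi)^{3}+\mathcal{O}((z-k\pi)^{5})\), so \(f\) has local degree \(3\) at each such point. Writing \(\tan z=-i\,\frac{e^{2iz}-1}{e^{2iz}+1}\) shows that \(\tan z=\pm i+\mathcal{O}(e^{\mp 2\operatorname{Im}(z)})\) as \(\operatorname{Im}(z)\to\pm\infty\), which gives the asymptotics of \(f\) near infinity stated in Lemma 8.2(c) below.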
Note that \[f(z+\pi)=f(z)+\pi,\qquad z\in\mathbb{C}, \tag{51}\] which implies a 'translation invariance' of the dynamical plane of \(f\) (see Figure 2). The properties listed below are proved in [1, Example 7.2] and [1, Proposition 4.1]. **Lemma 8.2**.: _The following statements hold._ 1. \(f\) _has infinitely many simply connected immediate basins of attraction_ \(U_{k}\)_,_ \(k\in\mathbb{Z}\)_, of superattracting fixed points_ \[c_{k}=k\pi,\quad k\in\mathbb{Z},\] _such that_ \(U_{k}=U_{0}+k\pi\) _and_ \(\deg_{c_{k}}f=\deg f|_{U_{k}}=3\)_. The points_ \(c_{k}\)_,_ \(k\in\mathbb{Z}\)_, are the only critical points of_ \(f\)_. They are located in the vertical lines_ \[\ell_{k}(t)=k\pi+it,\qquad t\in\mathbb{R},\;k\in\mathbb{Z},\] _which are invariant and contained in_ \(U_{k}\)_, respectively. Moreover,_ \(f\) _has no finite asymptotic values._ 2. _The poles of_ \(f\)_,_ \[p_{k}=\frac{\pi}{2}+k\pi,\;k\in\mathbb{Z},\] _are simple. They are located in the vertical lines_ \[r_{k}(t)=\frac{\pi}{2}+k\pi+it,\qquad t\in\mathbb{R},\;k\in\mathbb{Z},\] _which are contained in_ \(J(f)\)_._ 3. _The asymptotics of_ \(f\) _for_ \(\operatorname{Im}(z)\to\pm\infty\) _is given by_ \[f(z)=\begin{cases}z-i+\mathcal{O}(e^{-2\operatorname{Im}(z)})&\text{as }\; \operatorname{Im}(z)\to+\infty\\ z+i+\mathcal{O}(e^{2\operatorname{Im}(z)})&\text{as }\;\operatorname{Im}(z)\to-\infty \end{cases}.\] 4. _We have_ \[J(f)\cap\mathbb{C}=\bigcup_{k\in\mathbb{Z}}J_{k},\] _where_ \(J_{k}\) _is the connected component of_ \(J(f)\cap\mathbb{C}\) _containing the line_ \(r_{k}\)_. For every_ \(k\in\mathbb{Z}\)_, the basin_ \(U_{k}\) _has exactly two accesses to infinity, and_ \(\partial U_{k}\) _contains exactly two poles of_ \(f\)_, i.e._ \(p_{k-1},p_{k}\)_, which are accessible from_ \(U_{k}\)_._ 5. _All Fatou components of_ \(f\) _are preperiodic and eventually mapped by iterates of_ \(f\) _into_ \(U_{k}\) _for some_ \(k\in\mathbb{Z}\)_. In particular,_ \(f\) _has no wandering domains._ See Figure 8. The following proposition establishes the first condition in Whyburn's criterium (Theorem 8.1). **Proposition 8.3**.: _Every Fatou component of \(f\) has locally connected boundary._ Proof.: First, we show that the boundaries of the attracting basins \(U_{k}\) are locally connected. To this end, we check that they fulfil the assumptions of Theorem A. Recall that the basins \(U_{k}\) are simply connected, since \(J(f)\) is connected. By Lemma 8.2, we have \(\deg f|_{U_{k}}=3\) and \[\operatorname{Crit}(f)=\mathcal{P}(f)=\{c_{k}\}_{k\in\mathbb{Z}}=\{k\pi\}_{k \in\mathbb{Z}}. \tag{52}\] This immediately implies the assumptions (a)-(b) of Theorem A. To check the assumption (c), note that by Lemma 8.2(c), for \(M>0\) large enough, the map \(f\) on the half planes \[P_{+}=\{z\in\mathbb{C}:\operatorname{Im}(z)>M\},\qquad P_{-}=\{z\in\mathbb{C} :\operatorname{Im}(z)<-M\} \tag{53}\] is arbitrarily close to \(z\mapsto z\mp i\). In particular, this implies \(\overline{P_{\pm}}\subset f(P_{\pm})\) and \(f(P_{+})\cap f(P_{-})=\emptyset\). We show that \(P_{\pm}\) are repelling petals of \(f\) at infinity. In view of Lemma 8.2(c), the univalency of \(f\) on \(P_{\pm}\) follows easily from Rouche's Theorem, while the remaining conditions of Definition 4.1 with \(g(t)\equiv 1\) are satisfied by Proposition 4.4, where we take \(r=M\), \(\delta=\frac{\pi}{2}\), \(d=1\), \(a=\mp i\) and \(j=1\). Hence, \(P_{\pm}\) are repelling petals of \(f\) at infinity. 
Note that Lemma 8.2 asserts that the basins \(U_{k}\) are located between the vertical lines \(r_{k-1}\) and \(r_{k}\), both contained in the Julia set of \(f\). Hence, outside a compact set, \(U_{k}\) is contained in the two repelling petals \(P_{\pm}\) of \(f\) at infinity, which shows that \(f\) and \(U_{k}\) satisfy the assumptions of Theorem A. Hence, the boundaries of \(U_{k}\) are locally connected. By Lemma 8.2(e), all Fatou components of \(f\) are successive preimages of the basins \(U_{k}\) which have locally connected boundaries, so their boundaries are also locally connected.

Figure 8. The dynamical plane of the map \(f(z)=z-\tan z\).

It remains to show the second condition in Whyburn's criterium. Note that in contrast to the rational or even entire (hyperbolic) cases, the meromorphic setting presents some additional difficulties, since for any given \(n\), components of preperiod \(n\) can accumulate at poles and prepoles. The idea of the proof is as follows. We show that the sum of the squares of the spherical diameters of all Fatou components is finite, which immediately implies that only a finite number of them can have diameter larger than any given \(\varepsilon\). To this end, we prove that the spherical distortion of branches of \(f^{-n}\) on Fatou components \(U\subset f^{-1}(U_{k})\setminus U_{k}\), \(k\in\mathbb{Z}\), is uniformly bounded. From this, it follows that the ratio between the square of the spherical diameter and the spherical area is approximately the same for the component \(U\) and all its inverse images by branches of \(f^{-n}\), \(n>0\). Since the sum of spherical areas over all components is finite (smaller than the spherical area of the whole sphere), the same holds for the sum of the squares of spherical diameters. Now we proceed to present the proof in detail. The following lemma is straightforward to check. **Lemma 8.4**.: _For every \(r_{0}>0\) there exist \(c_{1},c_{2}>0\) such that_ \[\mathcal{D}_{\text{sph}}\Big{(}z,\frac{c_{1}r}{|z|^{2}+1}\Big{)}\subset\mathbb{D}(z,r)\subset\mathcal{D}_{\text{sph}}\Big{(}z,\frac{c_{2}r}{|z|^{2}+1}\Big{)}\] _for every \(z\in\mathbb{C}\) and every \(0<r<r_{0}\)._ For \(n\geq 0\) define \[\mathcal{F}_{n}=\{U:\text{$U$ is a Fatou component of $f$, and $n$ is minimal such that $f^{n}(U)\subset U_{k}$ for some $k\in\mathbb{Z}$}\}.\] In particular, \(\mathcal{F}_{0}=\{U_{k}\}_{k\in\mathbb{Z}}\). In the following lemma we describe the Fatou components from \(\mathcal{F}_{1}\). Let \[R_{k}=R_{k}(\delta,R)=\{z\in\mathbb{C}:\text{Re}(z)\in[k\pi+\delta,(k+1)\pi-\delta],\ \text{Im}(z)\in[-R,R]\},\quad k\in\mathbb{Z}\] for \(\delta,R>0\). See Figure 8. **Lemma 8.5**.: _We have_ \[\mathcal{F}_{1}=\{U_{k,l}:k\in\mathbb{Z},\,l\in\mathbb{Z}\setminus\{k,k+1\}\},\] _where_ \[U_{k,l}\subset R_{k},\qquad U_{k,l}=U_{0,l-k}+k\pi,\qquad f(U_{k,l})=U_{l}\] _and \(R_{k}=R_{k}(\delta,R)\) for some \(\delta,R>0\). Moreover, \(f\) is univalent on \(U_{k,l}\)._ Proof.: Fix a large \(\tilde{R}>0\) such that \(\{z\in\mathbb{C}:|\text{Im}(z)|>\tilde{R}\}\subset P_{+}\cup P_{-}\) for the repelling petals \(P_{\pm}\) from (53). Take an arbitrary point \(z_{0}\in U_{0}\) with \(|\text{Im}(z_{0})|>\tilde{R}\) (note that such points exist since \(\ell_{0}\subset U_{0}\)) and let \(z_{l}=z_{0}+l\pi\) for \(l\in\mathbb{Z}\). 
By (51) and Lemma 8.2(a), we have \(z_{l}\in U_{l}\) and \(f^{-1}(z_{l})\subset P_{+}\cup P_{-}\cup\bigcup_{k\in\mathbb{Z}}\mathbb{D}(p_{ k},\varepsilon)\) for a small fixed \(\varepsilon>0\), provided \(\tilde{R}\) was chosen sufficiently large. Moreover, since \(f\) is univalent on \(P_{\pm}\) with \(f(z)\sim z\mp i\), the set \(f^{-1}(z_{l})\cap(P_{+}\cup P_{-})\) consists of exactly one point \(z_{l}^{\prime}\in P_{\pm}\) close to \(z_{l}\pm i\). Similarly, as the pole \(p_{0}\) is simple, the map \(f\) is univalent on \(\mathbb{D}(p_{0},\varepsilon)\) and \(f^{-1}(z_{l})\cap\mathbb{D}(p_{0},\varepsilon)\) consists of exactly one point \(z_{0,l}\). Consequently, by (51), \(f\) is univalent on \(\mathbb{D}(p_{k},\varepsilon)\) and \(f^{-1}(z_{l})\cap\mathbb{D}(p_{k},\varepsilon)\) consists of exactly one point \(z_{k,l}\), where \[z_{k,l}\in\mathbb{D}(p_{k},\varepsilon),\qquad z_{k,l}=z_{0,l-k}+k\pi,\qquad k,l\in\mathbb{Z}.\] Hence, \[f^{-1}(z_{l})=\{z_{l}^{\prime}\}\cup\{z_{k,l}:k\in\mathbb{Z}\}, \tag{54}\] where all the points \(z_{k,l}\), \(k\in\mathbb{Z}\setminus\{l-1,l\}\) are outside \(P_{+}\cup P_{-}\cup S_{l}\) for \[S_{l}=\Big{\{}z\in\mathbb{C}:\operatorname{Re}(z)\in\Big{[}\frac{\pi}{2}+(l-1) \pi,\frac{\pi}{2}+l\pi\Big{]}\Big{\}}.\] This implies \(f^{-1}(z_{l})\cap(P_{+}\cup P_{-}\cup S_{l})\subset\{z_{l}^{\prime},z_{l-1,l}, z_{l,l}\}\). On the other hand, since \(\deg f|_{U_{l}}=3\), the point \(z_{l}\) has exactly three preimages under \(f\) in \(U_{l}\). As \(U_{l}\subset S_{l}\), we conclude \[f^{-1}(z_{l})\cap(P_{+}\cup P_{-}\cup S_{l})=f^{-1}(z_{l})\cap U_{l}=\{z_{l}^{ \prime},z_{l-1,l},z_{l,l}\}. \tag{55}\] Take \(U\in\mathcal{F}_{1}\). Then \(U\cap\bigcup_{k\in\mathbb{Z}}U_{k}=\emptyset\) and \(f(U)\subset U_{l}\) for some \(l\in\mathbb{Z}\). Suppose there exists a point \(w\in U\) with \(|\operatorname{Im}(w)|>\check{R}+2\). Then \(w\in P_{+}\cup P_{-}\setminus U_{l}\) and \(f(w)\in U_{l}\) with \(|\operatorname{Im}(f(w))|>\check{R}\), contradicting (55) for \(z_{0}=f(w)-l\pi\). Hence, \(U\subset\{z\in\mathbb{C}:|\operatorname{Im}(z)|\leq R\}\) for \(R=\check{R}+2\). Moreover, since \(\ell_{k}\subset U_{k}\) and \(U_{k}=U_{0}+k\pi\) for \(k\in\mathbb{Z}\) by Lemma 8.2(a), the set \(\{z\in\mathbb{C}:\operatorname{Re}(z)\in(k\pi-\delta,k\pi+\delta),\;| \operatorname{Im}(z)|\leq R\}\) is contained in \(U_{k}\) for a sufficiently small \(\delta>0\) independent of \(k\in\mathbb{Z}\). Consequently, \(U\subset\bigcup_{k\in\mathbb{Z}}R_{k}\) for \(R_{k}=R_{k}(\delta,R)\). Since the sets \(R_{k}\) are disjoint and compact, and \(U\) is connected, in fact \(U\subset R_{k}\) for some \(k\in\mathbb{Z}\). As \(U_{l}\) is simply connected and \(R_{k}\cap\operatorname{Crit}(f)=\emptyset\) by (52), the component \(U\) is the image of \(U_{l}\) under a well-defined branch \(g\) of \(f^{-1}\). Note that by (54) and (55), we have \(g(z_{l})=z_{k^{\prime},l}\in U\) for some \(k^{\prime}\in\mathbb{Z}\setminus\{l-1,l\}\). As \(z_{k^{\prime},l}\in R_{k^{\prime}}\) and the sets \(R_{k^{\prime}}\) are disjoint, in fact \(k^{\prime}=k\), so \(z_{k,l}\in U\) and \(k\neq l-1,l\). We have proved that for every component \(U\in\mathcal{F}_{1}\) there exist \(k,l\in\mathbb{Z}\) with \(k\neq l-1,l\), such that \(z_{k,l}\in U\subset R_{k}\) and \(f\) maps \(U\) univalently onto \(U_{l}\). On the other hand, (54) and (55) imply that for every \(k,l\in\mathbb{Z}\) with \(k\neq l-1,l\), there exists a component from \(\mathcal{F}_{1}\) containing \(z_{k,l}\). 
This together with (51) and Lemma 8.2(a) ends the proof. Now we can show the second condition in Whyburn's criterium. **Proposition 8.6**.: _For every \(\varepsilon>0\) there are only a finite number of Fatou components \(U\) of \(f\) with \(\operatorname{diam}_{sph}U\geq\varepsilon\)._ Proof.: Recall that by Lemma 8.2(e), all Fatou components of \(f\) are elements of \(\bigcup_{n=0}^{\infty}\mathcal{F}_{n}\). Since \(U_{k}\subset S_{k}\subset\mathbb{C}\setminus\mathbb{D}(0,\frac{\pi}{2}+(|k|-1)\pi)\) for \(k\in\mathbb{Z}\setminus\{0\}\), the spherical diameter of \(U_{k}\) tends to zero as \(|k|\to\infty\), so we only need to consider components in \(\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\). By Lemma 8.5, for every \(U\in\mathcal{F}_{1}\) we have \(U=U_{k,l}\) for some \(k\in\mathbb{Z}\), \(l\in\mathbb{Z}\setminus\{k,k+1\}\), such that \[U_{k,l}=U_{0,l-k}+k\pi\subset R_{k} \tag{56}\] and \(f\) maps \(U_{k,l}\) univalently onto \(U_{l}\). Since \(U_{l}=U_{0}+l\pi\subset S_{l}\), \(\mathbb{D}(l\pi,r_{0})\subset U_{l}\) for some \(r_{0}>0\) independent of \(l\), and \(p_{0}\) is a simple pole of \(f\), there exists \(c_{1}>0\) such that \[\operatorname{diam}U_{0,l}\leq\frac{c_{1}}{|l|+1},\qquad\operatorname{area}U_{ 0,l}\geq\frac{c_{1}}{l^{4}+1}\] for \(l\in\mathbb{Z}\setminus\{0,1\}\). Consequently, by (56) and since \(\sigma_{sph}|_{R_{k}}\asymp\frac{1}{k^{2}+1}\), we have \[\operatorname{diam}U_{k,l}\leq\frac{c_{1}}{|l-k|+1},\quad\operatorname{area}_ {sph}U_{k,l}\geq\frac{c_{2}}{((l-k)^{4}+1)(k^{4}+1)} \tag{57}\] for \(k\in\mathbb{Z}\), \(l\in\mathbb{Z}\setminus\{k,k+1\}\) and some constant \(c_{2}>0\). By (52), all branches of \(f^{-n}\), \(n\geq 1\), are well-defined on \(\{z\in\mathbb{C}:\operatorname{Re}(z)\in(k\pi,(k+1)\pi)\}\supset R_{k}\) for \(k\in\mathbb{Z}\). Hence, for \(n\geq 1\), \[\mathcal{F}_{n}=\{g_{k,n}(U_{k,l}):k\in\mathbb{Z},\,l\in\mathbb{Z}\setminus\{k, k+1\},\,g_{k,n}\in\mathcal{G}_{k,n}\},\] where \(\mathcal{G}_{k,n}\) is the family of all branches of \(f^{-(n-1)}\) on \(R_{k}\) (note that we include the case \(n=1\), for which \(g_{k,1}=\operatorname{Id}\)). We claim that all the branches \(g_{k,n}\in\mathcal{G}_{k,n}\) have spherical distortion bounded uniformly with respect to \(k,n\). Indeed, for given \(r>0\), any two points \(z,w\in R_{k}\) can be joined by \(N=\lceil 2(2R+\pi)/r\rceil\) Euclidean disks \(\mathbb{D}(z_{1},r),\ldots,\mathbb{D}(z_{N},r)\), such that \(z_{1},\ldots,z_{N}\in R_{k}\), \(z_{1}=z\), \(z_{N}=w\) and \[\mathbb{D}(z_{j},r)\cap\mathbb{D}(z_{j+1},r)\neq\emptyset\quad\text{for}\ \ j=1,\ldots,N-1. \tag{58}\] Define \[r=\frac{c_{1}\delta}{2c_{2}},\qquad r_{j}=\frac{c_{1}\delta}{|z_{j}|^{2}+1}\] for the constants \(c_{1},c_{2}\) from Lemma 8.4, and the constant \(\delta\) from Lemma 8.5. Then, \[\mathbb{D}(z_{j},r)\subset\mathcal{D}_{sph}\Big{(}z_{j},\frac{r_{j}}{2}\Big{)},\qquad\mathcal{D}_{sph}(z_{j},r_{j})\subset\mathbb{D}(z_{j},\delta)\] by Lemma 8.4, so by (58) and (52), \[\mathcal{D}_{sph}\Big{(}z_{j},\frac{r_{j}}{2}\Big{)}\cap\mathcal{D}_{sph}\Big{(} z_{j+1},\frac{r_{j}}{2}\Big{)}\neq\emptyset,\qquad\mathcal{D}_{sph}(z_{j},r_{j}) \cap\mathcal{P}(f)=\emptyset,\] and hence using repeatedly Theorem 2.1 for the chain of spherical disks with \(\lambda=1/2\), we conclude that \[\frac{|g^{\prime}_{k,n}(z)|_{sph}}{|g^{\prime}_{k,n}(w)|_{sph}}\leq c_{3} \tag{59}\] for some \(c_{3}>0\) independent of \(k,n,g_{k,n},z,w\). 
Let \[M_{g_{k,n}}=\max_{R_{k}}|g^{\prime}_{k,n}|_{sph}.\] By (57) and since \(\sigma_{sph}|_{R_{k}}\asymp\frac{1}{k^{2}+1}\), \[\operatorname{diam}_{sph}g_{k,n}(U_{k,l})\leq\frac{c_{4}M_{g_{k,n}}\, \operatorname{diam}U_{k,l}}{k^{2}+1}\leq\frac{c_{1}c_{4}M_{g_{k,n}}}{(|l-k|+1)( k^{2}+1)}\] and \[\operatorname{area}_{sph}g_{k,n}(U_{k,l}) \geq\big{(}\min_{R_{k}}|g^{\prime}_{k,n}|^{2}_{sph}\big{)} \operatorname{area}_{sph}U_{k,l}\] \[\geq\frac{M_{g_{k,n}}^{2}}{c_{3}^{2}}\operatorname{area}_{sph}U_{ k,l}\geq\frac{c_{2}}{c_{3}^{2}}\frac{M_{g_{k,n}}^{2}}{((l-k)^{4}+1)(k^{4}+1)},\] for a constant \(c_{4}>0\). Hence, for some constants \(c_{5},c_{6},c_{7}>0\), \[\sum_{U\in\bigcup_{n=1}^{\infty}\mathcal{F}_{n}}(\operatorname{diam }_{sph}U)^{2} =\sum_{k\in\mathbb{Z}}\sum_{l\in\mathbb{Z}\setminus\{k,k+1\}}\sum_{ \begin{subarray}{c}n\geq 1,\\ g_{k,n}\in\mathcal{G}_{k,n}\end{subarray}}(\operatorname{diam}_{sph}g_{k,n}(U_{ k,l}))^{2}\] \[\leq c_{5}\sum_{k\in\mathbb{Z}}\sum_{l\in\mathbb{Z}\setminus\{k, k+1\}}\sum_{n,g_{k,n}}\frac{M_{g_{k,n}}^{2}}{((l-k)^{2}+1)(k^{4}+1)}\] \[=c_{5}\sum_{l\in\mathbb{Z}\setminus\{0,1\}}\frac{1}{l^{2}+1}\sum _{k\in\mathbb{Z}}\sum_{n,g_{k,n}}\frac{M_{g_{k,n}}^{2}}{k^{4}+1}\leq c_{6}\sum _{l\in\mathbb{Z}\setminus\{0,1\}}\frac{1}{l^{4}+1}\sum_{k\in\mathbb{Z}}\sum_ {n,g_{k,n}}\frac{M_{g_{k,n}}^{2}}{k^{4}+1}\] \[=c_{6}\sum_{k\in\mathbb{Z}}\sum_{l\in\mathbb{Z}\setminus\{k,k+1\} }\sum_{n,g_{k,n}}\frac{M_{g_{k,n}}^{2}}{((l-k)^{4}+1)(k^{4}+1)}\] \[\leq c_{7}\sum_{k\in\mathbb{Z}}\sum_{l\in\mathbb{Z}\setminus\{k, k+1\}}\sum_{n,g_{k,n}}\operatorname{area}_{sph}g_{k,n}(U_{k,l})\leq c_{6} \sum_{U\in\bigcup_{n=1}^{\infty}\mathcal{F}_{n}}\operatorname{area}_{sph}U\] \[\leq c_{7}\operatorname{area}_{sph}\widehat{\mathbb{C}}<\infty,\] as \(U\in\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\) are pairwise disjoint. Concluding, we have showed that the series \[\sum_{U\in\bigcup_{n=1}^{\infty}\mathcal{F}_{n}}(\operatorname{diam}_{sph}U) ^{2}\] is convergent, which immediately implies that for every \(\varepsilon>0\) there can be only a finite number of \(U\in\bigcup_{n=1}^{\infty}\mathcal{F}_{n}\) with \(\operatorname{diam}_{sph}U\geq\varepsilon\), proving the proposition. Propositions 8.3 and 8.6 complete the proof of Theorem C. ### Acknowledgements The authors thank the Centro di Ricerca Matematica Ennio De Giorgi (Pisa) for its hospitality.
2306.08095
Reconstruction: Rational Approximation of the Complex Error Function and the Electric Field of a Two-Dimensional Gaussian Charge Distribution
This paper resurrects and archives an unpublished original Cornell Laboratory of Nuclear Studies report by Yuko Okamoto and Richard Talman, "Rational Approximation of the Complex Error Function and the Electric Field of a Two-Dimensional Gaussian Charge Distribution" CBN 80-13, dating from September 1980, during the start-up period of the CESR-CLEO $e^+e^-$ collider. This code has played a significant role in the calculation of the beam-beam interaction (in particular the beam-beam tune shift) for subsequent storage ring colliders. Electronic access to the (refactored) original codes is provided by active links.
John Talman, Yuko Okamoto, Richard Talman
2023-06-13T19:28:12Z
http://arxiv.org/abs/2306.08095v1
Reconstruction: Rational Approximation of the Complex Error Function and the Electric Field of a Two-Dimensional Gaussian Charge Distribution ###### Abstract This paper resurrects and archives an unpublished original Cornell Laboratory of Nuclear Studies report by Yuko Okamoto and Richard Talman, "Rational Approximation of the Complex Error Function and the Electric Field of a Two-Dimensional Gaussian Charge Distribution" CBN 80-13, dating from September 1980, during the start-up period of the CESR-CLEO e\({}^{+}\)e\({}^{-}\) collider. This code has played a significant role in the calculation of the beam-beam interaction (in particular the beam-beam tune shift) for subsequent storage ring colliders. Electronic access to the (refactored) original codes is provided by active links. ###### Contents * 1 Introduction * 2 Active Links * 3 Original report * 3.1 Introduction * 3.2 Pade Approximation * 3.3 Asymptotic Expression * 3.4 Regions of Validity of the Three Approximations * 3.5 Boundaries of the Valid Regions of the Three Approximations * 3.6 Electric Field * 3.7 Concluding Remarks * 3.8 Figure Captions * 3.9 Programs * 3.10 Tables * 4 Recreated Table 1, WEXCT ## 1 Introduction Various individuals have suggested we re-create and archive an unpublished 1980 Cornell Laboratory of Nuclear Physics "CBN 80-13" report, with the title given above, written at that time by two of the present authors (Y.O. and R.T.), in order to make it more readily and securely accessible and useable than at present. The main body of the present article is a faithful, page-by-page (late-produced) reproduction of the original report. This includes all tables and includes hand-written annotations (by Y.O.) comparing the precision obtained, as matched with earlier sources (referenced in the report), especially in "difficult" regions of parameter space. The original report is copied verbatim in Section 3. Though ideal for retention of chronology, this text is not at all convenient for modern day application of the original (Fortran) codes. For this reason the original codes have been refactored and made available online, using the active links given in Section 2. Because of the inherent backward compatibility of Fortran it was possible to do this without much risk of introducing errors in this process. To confirm faithful reproduction, and to provide a benchmark for subsequent reproductions, one of the original tables, Table 1, WEXCT, has been reproduced in Section 4. ## 2 Active Links For more information about this report and for accessing the original complex error function Fortran code, the online "Okamoto" repository can be accessed from a web browser at the github URL "[https://github.com/jtalman/ual1/tree/master/Okamoto/fortran](https://github.com/jtalman/ual1/tree/master/Okamoto/fortran)" which contains refactored Fortran code and other related material. _The following "Fortran code" active link to this "git" repository may have been deactivated by the arXiv or by your mail system. If so, it is necessary to access "Fortran code" at the github URL given above. The same is true for the following "Accelerator Simulation Course" link._ Fortran code The following active link points to course notes, in the same github repository, for a UAL (Unified Accelerator Libraries) Simulation course at the 2005 U.S. Particle Accelerator Course given at Cornell by Nikolay Malitsky and Richard Talman. 
Chapter 9, "Colliding Beams" starting on page 145, explains why the complex error function is needed for simulating the beam-beam interaction in colliding beam storage rings. Section 9.6 and 9.7 provide some of the evolution of the Pade code that had been developed since its original application during the commissioning of the CESR-CLEO e\({}^{+}\)e\({}^{-}\) Collider beginning in 1980, when the code being replicated in the present paper was generated. Accelerator Simulation Course Original report To simulate the beam-beam interaction one needs efficient formulae for the evaluation of the electric field of a two-dimensional Gaussian charge distribution which can be expressed in terms of the complex error function \(w(z)\). This paper shows how to approximate \(w(z)\) by a set of rational functions. The percent error of the approximation is extremely small (\(\sim 10^{-4}\%\) except near the real axis). Computer programs to evaluate \(w(z)\) and the electric field are also provided. ### Introduction For the simulation of the beam-beam interaction one needs to evaluate the electric field of a two-dimensional Gaussian charge distribution. The electric field at the position \((x,y)\) has been found by M. Bassetti and G.A. Erskine [1] to have the following form: [2] \[E_{x}=\frac{Q}{2\epsilon_{0}\sqrt{2\pi(s_{x}^{2}-s_{y}^{2})}}\,\Im\left(w \bigg{(}\frac{x+iy}{\sqrt{2(s_{x}^{2}-s_{y}^{2})}}\bigg{)}-e^{-\big{(}\frac{x^ {2}}{2s_{x}^{2}}+\frac{y^{2}}{2s_{y}^{2}}\big{)}}\,w\bigg{(}\frac{x\frac{s_{y}} {s_{x}}+iy\frac{s_{x}}{s_{y}}}{\sqrt{2(s_{x}^{2}-s_{y}^{2})}}\bigg{)}\right), \tag{1}\] \[E_{y}=\frac{Q}{2\epsilon_{0}\sqrt{2\pi(s_{x}^{2}-s_{y}^{2})}}\,\Re\left(w \bigg{(}\frac{x+iy}{\sqrt{2(s_{x}^{2}-s_{y}^{2})}}\bigg{)}-e^{-\big{(}\frac{x ^{2}}{2s_{x}^{2}}+\frac{y^{2}}{2s_{y}^{2}}\big{)}}\,w\bigg{(}\frac{x\frac{s_{y} }{s_{x}}+iy\frac{s_{x}}{s_{y}}}{\sqrt{2(s_{x}^{2}-s_{y}^{2})}}\bigg{)}\right), \tag{2}\] where \(\Im\) and \(\Re\) stand for imaginary part and real part, respectively, \(Q\) is a constant with a dimension of electric charge, \(\epsilon_{0}\) is the electric permittivity of free space, \(s_{x}\) and \(s_{y}\) (\(s_{x}>s_{y}\) assumed) are the standard deviations of the charge distribution in the \(x\) and \(y\) directions, respectively, and \(w(z)\) is the complex error function [3] defined by \[w(z)=e^{-z^{2}}\,\Big{(}1+\frac{2i}{\sqrt{\pi}}\,\int_{0}^{z}\,e^{u^{2}}\,du \Big{)}. \tag{3}\] We shall approximate \(w(z)\) by rational functions so that a computer can _quickly_ handle the evaluation of \(w(z)\) and thus the electric field of a two-dimensional Gaussian charge distribution. Though we were originally interested in an approximation good to within \(1\%\) error, the result turned out to be a much better approximation. We note that after the approximation of \(w(z)\) the only transcendental function in (1) and (2) which spend a longer computing time than rational functions are the exponential factors. We also note that by the symmetry properties of \(w(z)\)[3] it suffices to approximate \(w(z)\) only in the first quadrant of the complex plane. ### Pade Approximation We shall briefly describe how the Pade approximation is done first, then apply the approximation to the function \(w(z)\). 
Suppose that we have a complex-valued function \(f(z)\) which is analytic at a point \(z_{0}\) and suppose that we want to approximate it around \(z_{0}\) by a rational function of the form \[f_{\rm Pade}(z)=\frac{\sum_{k=0}^{M}c_{k}(z-z_{0})^{k}}{ 1+\sum_{k=1}^{N}d_{k}(z-z_{0})^{k}}\,, \tag{4}\] where \(c_{k},d_{k}\in\mathfrak{C}\) are unknown (possibly complex) coefficients to be determined. Note: We must have \(d_{0}\neq 0\) because \(f(z)\) is well-behaved at \(z_{0}\). We may set \(d_{0}=1\). For, otherwise, we can always divide both the numerator and denominator by \(d_{0}\). Here we choose \(M\) and \(N\) according to how much accuracy we need. In order to determine the coefficients \(c_{k}\) and \(d_{k}\) we impose a condition on \(f_{\rm Pade}\): \[f-f_{\rm Pade}=A_{1}(z-z_{0})^{M+N+1}+A_{2}(z-z_{0})^{M+N+2}+\cdots\,, \tag{5}\] where \(A_{1},A_{2},\cdots\in\mathfrak{C}\) are some constants. That is, the error introduced by the approximation at \(z\) with \(|z-z_{0}|<1\) is of the order of \(|z-z_{0}|^{M+N+1}\) and very small if \(M\) and \(N\) are large. Since \(f\) is analytic at \(z_{0}\), we have a Taylor series at \(z_{0}\): \[f(z)=\sum_{j=0}^{\infty}\,a_{j}(z-z_{0})^{j};\quad a_{j}\in\mathfrak{C}. \tag{6}\] Then using (6) for \(f\) in (5), multiplying both sides of (5) by the denominator of \(f_{\rm Pade}\), and equating the coefficients of the powers of \((z-z_{0})\) in both sides of the equation, we have the following relationships among \(a_{k}\), \(c_{k}\), and \(d_{k}\): \[\begin{array}{ccc}\mbox{Powers}&\mbox{Relation Among Coefficients}\\ (z-z_{0})^{0}&c_{0}&=a_{0}\\ (z-z_{0})^{1}&c_{1}-a_{0}d_{1}&=a_{1}\\ (z-z_{0})^{2}&c_{2}-a_{1}d_{1}-a_{0}d_{2}&=a_{2}\\ (z-z_{0})^{3}&c_{3}-a_{2}d_{1}-a_{1}d_{2}-a_{0}d_{3}&=a_{3}\\ \cdots\cdots&\cdots\cdots\cdots\\ (z-z_{0})^{k}&c_{k}-a_{k-1}d_{1}-a_{k-2}d_{2}-\cdots-a_{0}d_{k}&=a_{k}\end{array} \tag{7}\] where \(c_{k}=0\) for \(k>M\) and \(d_{k}=0\) for \(k>N\). In a matrix language we have \[\left(\begin{array}{cccccccc}1&0&0&\ldots&0&-a_{0}&0&0&\ldots&0\\ 0&1&0&\ldots&0&-a_{1}&-a_{0}&0&\ldots&0\\ 0&0&1&\ldots&0&-a_{2}&-a_{1}&-a_{0}&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&1&-a_{M-1}&-a_{M-2}&-a_{M-3}&\ldots&-a_{M-N}\\ 0&0&0&\ldots&0&-a_{M}&-a_{M-1}&-a_{M-2}&\ldots&-a_{M-N+1}\\ 0&0&0&\ldots&0&-a_{M+1}&-a_{M}&-a_{M-1}&\ldots&-a_{M-N+2}\\ 0&0&0&\ldots&0&-a_{M+2}&-a_{M+1}&-a_{M}&\ldots&-a_{M-N+3}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&0&-a_{M+N-1}&-a_{M+N-2}&-a_{M+N-3}&\ldots&-a_{M}\end{array}\right) \left(\begin{array}{c}c_{1}\\ c_{2}\\ c_{3}\\ \vdots\\ d_{1}\\ d_{2}\\ d_{3}\\ \vdots\\ d_{N}\end{array}\right)=\left(\begin{array}{c}a_{1}\\ a_{2}\\ a_{3}\\ \vdots\\ a_{M}\\ a_{M+1}\\ a_{M+2}\\ a_{M+3}\\ \vdots\\ a_{M+N}\end{array}\right) \tag{8}\] where \(a_{k}=0\) for \(k<0\). By inverting this matrix, we can determine the coefficients \(c_{j}\) and \(d_{k}\) (\(j=1,\cdots,M\) and \(k=1,\cdots,N\)). Note: The inversion of this kind of matrices is easily done by computers. (Cf. IBM 360 Scientific Subroutine Package (SSP).) (PADE 1) The Taylor series of \(w(z)\) around the origin is [3] \[w(z)=\sum_{j=0}^{\infty}a_{j}z^{j}=\sum_{j=0}^{\infty}\frac{(iz)^{j}}{\Gamma(j/ 2+1)}. \tag{9}\] Let \[u=iz=-ZI+iZ\!R,\quad\mbox{where }z=Z\!R+iZI. \tag{10}\] Then \[w(z)\stackrel{{\rm Def.}}{{=}}G(u)=\sum_{j=0}^{\infty}\,\frac{u^{j}}{ \Gamma(j/2+1)}. \tag{11}\] We shall apply the Pade approximation to \(G(u)\). 
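As a concrete illustration of this procedure, the system (8) can be assembled and solved numerically in a few lines. The Python sketch below is an illustrative reimplementation rather than one of the original Fortran programs (the helper name `pade_coeffs` is ours); given the Taylor coefficients \(a_{j}\), it builds the matrix of (8) and solves it with NumPy, and applied to the coefficients (11) of \(G(u)\) with the choice \(M=6\), \(N=7\) made below, it should reproduce the values quoted in (12) up to rounding.

```python
import numpy as np
from math import gamma

def pade_coeffs(a, M, N):
    """Solve the linear system (8) for the Pade coefficients c_1..c_M, d_1..d_N.

    a : Taylor coefficients a_0, a_1, ..., a_{M+N} of f about z_0, cf. (6).
    Returns (c, d) with c = (c_0, ..., c_M) and d = (1, d_1, ..., d_N), cf. (4) and (7).
    """
    A = np.zeros((M + N, M + N), dtype=complex)
    rhs = np.asarray(a[1:M + N + 1], dtype=complex)
    for k in range(1, M + N + 1):       # row for the power (z - z_0)^k, cf. (7)
        if k <= M:
            A[k - 1, k - 1] = 1.0       # coefficient of c_k
        for j in range(1, N + 1):       # coefficient of d_j is -a_{k-j} (a_m = 0 for m < 0)
            if k >= j:
                A[k - 1, M + j - 1] = -a[k - j]
    x = np.linalg.solve(A, rhs)
    return np.concatenate(([a[0]], x[:M])), np.concatenate(([1.0], x[M:]))

# PADE 1: Taylor coefficients of G(u) = sum_j u^j / Gamma(j/2 + 1), cf. (11), with M = 6, N = 7.
a = [1.0 / gamma(j / 2 + 1) for j in range(6 + 7 + 1)]
c, d = pade_coeffs(a, M=6, N=7)         # should agree with (12) up to rounding
```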
Considering the behavior of \(w(z)\) \[w(z)\to 0\ {\rm as}\ |z|\to\infty\] for those \(z\) such that \(|Z\!R|>|ZI|\), we take \[M=6\ {\rm and}\ N=7.\] By inverting the matrix (8), we obtain, up to nine significant figures, \[\begin{array}{ll}c_{0}=1\ ({\rm Cf.}\,(7))&d_{1}=-2.38485635\\ c_{1}=-1.25647718&d_{2}=2.51608137\\ c_{2}=8.25059158\times 10^{-1}&d_{3}=-1.52579040\\ c_{3}=-3.19300157\times 10^{-1}&d_{4}=5.75922693\times 10^{-1}\\ c_{4}=7.63191605\times 10^{-2}&d_{5}=-1.35740709\times 10^{-1}\\ c_{5}=-1.04697938\times 10^{-2}&d_{6}=1.85678083\times 10^{-2}\\ c_{6}=6.44878652\times 10^{-4}&d_{7}=-1.14243694\times 10^{-3}\end{array} \tag{12}\] Hence, the approximation of \(w(z)\) near the origin is, by (4), \[w(z)=G(u)\simeq\frac{1+c_{1}u+c_{2}u^{2}+c_{3}u^{3}+c_{4}u^{4}+c_{5}u^{5}+c_{6 }u^{6}}{1+d_{1}u+d_{2}u^{2}+d_{3}u^{3}+d_{4}u^{4}+d_{5}u^{5}+d_{6}u^{6}+d_{7}u ^{7}}, \tag{13}\] where \(u=-ZI+iZR\), and the coefficients \(c_{k}\) and \(d_{k}\) are given by (12). (PADE 2) Since the approximation PADE 1 behaves rather poorly along the real axis right around \(z=3\) (Cf. Table 2), we need a Pade approximation around \(z=3\). The Taylor series of \(w(z)\) at \(z=3\) is \[w(z)=\sum_{j=0}^{\infty}\,a_{j}(z-3)^{j}, \tag{14}\] where \[a_{j}=\frac{w^{(j)}(3)}{j!}. \tag{15}\] The derivatives \(w^{(j)}(3)\) can be expressed in terms of \(w(3)\) by use of the relations [3] \[w^{(j+2)}(z) + 2zw^{(j+1)}(z)+2(j+1)w^{(j)}(z)=0,\quad(j=0,1,2,\cdots)\] \[w^{(0)}(z) = w(z),\quad w^{\prime}(z)=-2zw(z)+\frac{2i}{\sqrt{\pi}}. \tag{16}\] On the other hand, the value of \(w(3)\) is, by (3), \[w(3)=e^{-9}+\frac{2i}{\sqrt{\pi}}\,e^{-9}\,\int_{0}^{3}\,e^{u^{2}}\,du. \tag{17}\] By using Table 2 in Rosser [4] for the value of the second term, we have \(w(3)\) up to nine significant figures: \[w(3)=1.23409804\times 10^{-4}+i2.01157318\times 10^{-1}. \tag{18}\] This time we choose \[M=3\ {\rm and}\ N=4.\] By inverting the matrix (8), we obtain, up to nine significant figures, \[c_{0} = 1.23409804\times 10^{-4}+i2.01157318\times 10^{-1}\ ({\rm Cf.}\,( 7))\] \[c_{1} = 2.33746715\times 10^{-1}+i1.61133338\times 10^{-1}\] \[c_{2} = 1.25689814\times 10^{-1}-i4.04227250\times 10^{-2}\] \[c_{3} = 8.92089179\times 10^{-3}-i1.81293213\times 10^{-2} \tag{19}\] \[d_{1} = 1.19230984-i1.16495901\] \[d_{2} = 8.94015450\times 10^{-2}-i1.07372867\] \[d_{3} = -1.68547429\times 10^{-1}-i2.70096451\times 10^{-1}\] \[d_{4} = -3.20997564\times 10^{-2}-i1.58578639\times 10^{-2}\] Hence, the approximation of \(w(z)\) near \(z=3\) is, by (4), \[w(z)\simeq\frac{c_{0}+c_{1}z+c_{2}z^{2}+c_{3}z^{3}}{1+d_{1}z+d_{2}z^{2}+d_{3}z ^{3}+d_{4}z^{4}}, \tag{20}\] where the coefficients \(c_{k}\) and \(d_{k}\) are given by (19). ### Asymptotic Expression Away from the origin and \(z=3\) we can use the asymptotic expression for \(w(z)\) given by Faddeyeva and Terent'ev (Eqn. (10)) [5]. The formula is \[w(z)\simeq\sum_{k=1}^{n}\,\frac{i\lambda_{k}^{(n)}}{\pi(z-x_{k}^{(n)})}=\sum_{k =1}^{n}\,\frac{ia_{k}^{(n)}}{z-x_{k}^{(n)}},\quad a_{k}^{(n)}=\frac{\lambda_{k }^{(n)}}{\pi}, \tag{21}\] where \(x_{k}^{(n)}\) are the roots of Hermite polynomials and \(\lambda_{k}^{(n)}\) are the corresponding coefficients (and \(n\) is an integer related to the accuracy of the approximation). The values of \(x_{k}^{(n)}\) and \(\lambda_{k}^{(n)}\) are found in Greenwood and Miller [6]. 
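For a modern reader it is worth noting that the abscissas \(x_{k}^{(n)}\) and coefficients \(\lambda_{k}^{(n)}\) tabulated by Greenwood and Miller are the Gauss-Hermite quadrature nodes and weights, so the pole sum (21) can be generated directly. The Python sketch below (again an illustration, not one of the original programs) builds (21) for \(n=10\) with NumPy and compares it against SciPy's `wofz`, a readily available reference implementation of the complex error function \(w(z)\).

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import wofz          # reference values of w(z) (Faddeeva function)

def w_asymp(z, n=10):
    """Pole-sum approximation (21): w(z) ~ (i/pi) * sum_k lambda_k / (z - x_k)."""
    x, lam = hermgauss(n)               # Hermite roots x_k and weights lambda_k, cf. (23) for n = 10
    z = np.atleast_1d(np.asarray(z, dtype=complex))
    return np.squeeze((1j / np.pi) * np.sum(lam / (z[..., None] - x), axis=-1))

z = 2.0 + 2.0j
print(w_asymp(z))   # approximately equal to wofz(z) away from the real axis
print(wofz(z))
```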
By choosing \(n=10\), we have an asymptotic expression of \(w(z)\) as \[w(z) \simeq \frac{ia_{1}}{z-x_{1}}+\frac{ia_{1}}{z+x_{1}}+\frac{ia_{2}}{z-x_ {2}}+\frac{ia_{2}}{z+x_{2}}+\frac{ia_{3}}{z-x_{3}}+\frac{ia_{3}}{z+x_{3}} \tag{22}\] \[+ \frac{ia_{4}}{z-x_{4}}+\frac{ia_{4}}{z+x_{4}}+\frac{ia_{5}}{z-x_ {5}}+\frac{ia_{5}}{z+x_{5}},\] where the constants are, up to nine or ten significant figures, \[a_{1}=1.94443615\times 10^{-1} x_{1}=3.42901327\times 10^{-1}\] \[a_{2}=7.64384940\times 10^{-2} x_{2}=1.036610830\] \[a_{3}=1.07825546\times 10^{-2} x_{3}=1.756683649 \tag{23}\] \[a_{4}=4.27695730\times 10^{-4} x_{4}=2.532731674\] \[a_{5}=2.43202531\times 10^{-6} x_{5}=3.436159119\] ### Regions of Validity of the Three Approximations The regions of validity of the three approximations are illustrated in Figures 1 and 2, which will be explained below in detail. In order to check our approximations we used the tables of \(w(z)\) by Faddeyeva and Terent'ev [5]. The tables give six-place values of \(w(z)\) for the square \(0\leq Z\!R\leq 3\), \(0\leq ZI\leq 3\) with tabular step of \(0.02\) for each of the variables and six-place values of \(w(z)\) for the range \(3\leq Z\!R\leq 5\), \(0\leq ZI\leq 3\) and \(0\leq Z\!R\leq 5\), \(3\leq ZI\leq 5\) with tabular step of \(0.1\) for each of the variables. We also used, as a reference in computing, the formulae (Cf. Abramowitz and Stegun, Eqn. 7.1.26 and 7.1.29) \[\mbox{erf}(Z\!R)\simeq 1-(a_{1}t+a_{2}t^{2}+a_{3}t^{3}+a_{4}t^{4}+a_{5}t^{5})e^{ -Z\!R^{2}}, \tag{24}\] where \(t=\frac{1}{1+pZ\!R}\) and \(p,a_{1},a_{2},a_{3},a_{4},a_{5}\) are real constants and \[\eqalign{{\rm erf}(Z\!R+i\,ZI)&\simeq\ {\rm erf}(Z\!R)+\frac{e^{-Z\!R^{2}}}{2 \pi Z\!R}\{(1-\cos(2Z\!RZI))+i\sin(2Z\!RZI)\}+\cr&\frac{2e^{-Z\!R^{2}}}{\pi} \sum_{n=1}^{\infty}\frac{e^{-n^{2}/4}}{n^{2}+4Z\!R^{2}}\{f_{n}(Z\!R,ZI)+i\,g_{ n}(Z\!R,ZI)\},}\] where \[\eqalign{&f_{n}(Z\!R,ZI)=2Z\!R-2Z\!R{\rm cosh}(nZI)\!\cos(2Z\!RZI)+n{\rm sinh }(nZI)\!\sin(2Z\!RZI),\cr&\hbox{and}\cr&g_{n}(Z\!R,ZI)=2Z\!R{\rm cosh}(nZI) \!\sin(2Z\!RZI)+n{\rm sinh}(nZI)\!\cos(2Z\!RZI).}\] These formulae allow us to calculate the percent error of the approximations, i.e., \(100\times\) (Approximation - Exact Value) / (Exact Value), by computer (Cf. Program 6). Unfortunately, as we can tell from Table 1, Program 6 which evaluates \(w(z)\) through (24) and (25) does not give quite accurate values, especially for those regions where \(Z\!R\) is small and \(ZI\) is large simultaneously. Thence, the percent errors given in Table 2 through Table 6 are not very reliable in those "bad" regions. In other words our rational approximations are normally more accurate than the reference formula and hence the listed errors are over-estimated. (PADE 1) (The region of validity of PADE 1 is illustrated in Figure 1.) We computed PADE 1, i.e., Eqn. (13) (Cf. Program 3), up to nine significant places in the range \(0\leq Z\!R\leq 5\), \(0\leq ZI\leq 5\) with step of 0.1 and checked the results against the tables by Faddeyeva and Terent'ev. The agreement was excellent except along the real axis with \(Z\!R\) large; even at \(Z\!R=ZI=5\), the real part of PADE 1 agreed with the table up to six places, maximum accuracy of the table, and the imaginary part of PADE 1 agreed with that of the table up to five places. On the real axis, we found percent errors of \(\sim\)1.3% at \(Z\!R=2.9\) and \(\sim\)2.9% at \(Z\!R=3.0\) for the \(real\) part of \(w(z)\), and even larger error for larger \(Z\!R\) (Cf. Table 2). 
But we note that PADE 1 is very accurate for \(ZI=0.1\) (even with \(Z\!R=5\)). The breakdown does not occur unless \(ZI\) is very small (\(\sim\)0.01 or smaller). We also note that the imaginary part of PADE 1 is very accurate even in this area. (PADE 2) (The region of validity of PADE 2 is illustrated in Figures 1 and 2.) We computed PADE 2, i.e., Eqn. (20) (Cf. Program 4), up to nine significant places in exactly the same region as in PADE 1 and checked the results against the tables by Faddeyeva and Terent'ev. The agreement was good (not as good as in PADE 1) even away from the point \(z=3\). The percent errors were much less than 1% in most of the region except for the points along the real axis with \(Z\!R\) large and the points near the imaginary axis (e.g., at the origin, \(\sim\)2% error and at \(Z\!R=4.0\), \(ZI=0\), \(\sim\)15% error) (Cf. Table 3; also see Figure 1 for the errors near the real axis). We note that the breakdown near the real axis is abrupt just as for PADE 1, i.e., the approximation is good until \(ZI\) gets very small (\(\sim\)0.01 or smaller). Again the imaginary part of PADE 2 is very accurate even on the real axis. (ASYMP) (The region of validity of ASYMP, i.e., the.asymptotic formula (22) (Cf. Program 5), is illustrated in Figures 1 and 2.) Exactly the same procedures as for PADE 1 and PADE 2 were followed. The approximation is excellent for \(ZI\) large enough ( \(\gtrsim\) 1.0 ) or \(Z\!R\) large. But again the real part is a poor approximation on the real axis (Cf. Table 4). In fact, Eqn. (22) implies that the real part of \(w(z)\) is zero on the real axis, which is a 100% error. Hence, even though ASYMP becomes a better approximation as \(Z\!R\) gets larger, the valid region of the real part of ASYMP never reaches the real axis (e.g., Figure 2 implies that ASYMP is good for \(ZI\sim 0.002\) at \(Z\!R\sim 4.2\)). Again the imaginary part of ASYMP is very accurate even in this region. To overcome the difficulty we expanded \(w(z)\) in powers of \(ZI\) and kept only the first power in \(ZI\) as follows. For \(ZI\ll 1\) and \(Z\!RZI\ll 1\) we have, keeping only the first power of \(ZI\) in (3), \[w(z)\simeq e^{-Z\!R^{2}}(1-i\,2Z\!RZI)\left(1+{2i\over\sqrt{\pi}}\int\limits_{ 0}^{Z\!R}e^{u^{2}}du-{2\over\sqrt{\pi}}e^{Z\!R^{2}}ZI\right).\] Thus, the real part is, for \(Z\!RZI\ll 1\) and \(ZI\ll 1\), \[\Re w(z)=e^{-Z\!R^{2}}+2\left\{Z\!R\Im w(Z\!R+i0)-{1\over\sqrt{\pi}}\right\}ZI.\] Note: The formula (26) is plausible because the imaginary part of ASYMP is very accurate for \(Z\!R\) large enough. The condition for (26) to be valid within 1% error is \[Z\!RZI\lesssim 0.01. \tag{27}\] We shall discuss this region of validity more in detail in the next subsection. ### Boundaries of the Valid Regions of the Three Approximations (The reader is again referred to Figures 1 and 2 for illustrations.) Having examined the regions of validity of the three approximations, our next task is to determine where we should set the boundaries of the three approximations so that we have minimum possible errors. Given any two of the three approximations, the idea is to find \(ZI\) (or \(Z\!R\)) for fixed \(Z\!R\) (or \(ZI\)) where we have the _least_ (or _minimum_) _discontinuity_ between the two approximations. The points of least discontinuity are plotted in Figures 1 and 2. The boundaries were set so that they go through as many points of least discontinuity as possible. 
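The asymptotic formula (22) with the constants (23), together with the small-\(ZI\) power expansion (26) used to repair its real part near the real axis, can be written out in the same style. Again this is only an illustrative Python sketch of ours, not Program 5; following the Note after (26), \(\Im w(Z\!R+i0)\) is taken from the asymptotic formula itself, which is accurate on the real axis for \(Z\!R\) large enough.

```python
import math

# Constants (23): weights a_k and Hermite roots x_k for the asymptotic formula (22).
A = [1.94443615e-1, 7.64384940e-2, 1.07825546e-2, 4.27695730e-4, 2.43202531e-6]
X = [3.42901327e-1, 1.036610830, 1.756683649, 2.532731674, 3.436159119]

def asymp(z):
    """ASYMP, Eqn. (22): ten-pole asymptotic approximation of w(z)."""
    return sum(1j * a / (z - x) + 1j * a / (z + x) for a, x in zip(A, X))

def re_w_power_expansion(zr, zi):
    """Real part of w(z) from the power expansion (26), for ZR*ZI << 1 and ZI << 1.

    Im w(ZR + i0) is evaluated with ASYMP, which on the real axis is accurate
    for ZR large enough (cf. the Note following (26))."""
    im_on_axis = asymp(complex(zr, 0.0)).imag
    return math.exp(-zr**2) + 2.0 * (zr * im_on_axis - 1.0 / math.sqrt(math.pi)) * zi

if __name__ == "__main__":
    z = complex(4.0, 0.002)  # near the real axis: ZR*ZI = 0.008, within condition (27)
    print("ASYMP         :", asymp(z))
    print("Re w from (26):", re_w_power_expansion(z.real, z.imag))
```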
From the discussions in the previous section we recall that there are bad points for the real part of \(w(z)\) on the real axis inside the PADE 2 region and the ASYMP region. Since the power expansion formula (26) is a good approximation near the real axis (exact on the real axis), we use it there. In Figure 2 we plot the points of least discontinuity both between PADE 2 and the power expansion and between ASYMP and the power expansion. The boundary between PADE 2 and the power expansion is fitted by a straight line \[Z\!RZI=0.0625(Z\!R-3.5). \tag{28}\] The boundary between ASYMP and the power expansion is fitted by \[Z\!RZI=\frac{a}{Z\!R-b}+c,\ \ \ (a,\,b,\,c\ \ \mbox{constants}). \tag{29}\] Using the three points of least discontinuity, \((Z\!R,Z\!RZI)=(3.8,0.044)\), (3.9, 0.0312), and (4.0, 0.022), we find \[a=0.04,\,b=3.29,\ \ \mbox{and}\ c=-0.034. \tag{30}\] For \(Z\!R>4.2\) we use the boundary \[Z\!RZI=0.01. \tag{31}\] To sum up: ASYMP is modified so that it calculates the power expansion formula (26) if \(Z\!R<4.2\) and \(Z\!RZI<\frac{0.04}{Z\!R-3.29}-0.034\), or if \(Z\!R\geq 4.2\) and \(Z\!RZI<0.01\). After this modification, for \(3.5\leq Z\!R<4.1\) use ASYMP if \(Z\!RZI<0.0625(Z\!R-3.5)\) and PADE 2 if \(Z\!RZI\geq 0.0625(Z\!R-3.5)\); for \(Z\!R\geq 4.1\) use ASYMP. ### Electric Field Once we have the function \(w(z)\), we can find the electric field by simply using the formulae (1) and (2). We set, for simplicity, \[\frac{Q}{2\epsilon_{0}\sqrt{\pi}}=1, \tag{32}\] in those formulae. Unfortunately, there is one problem: by symmetry \(E_{y}=0\) for \(y=0\), but we know \(\Re w(z)\) is not approximated well near the real axis, so the two terms in (2) might not cancel each other exactly at \(y=0\). This might cause the percent error for \(E_{y}\) to be rather large for \(y=0\) and \(y\) small. To overcome this difficulty we first set \(E_{y}=0\) if \(y=0\) and _linearly interpolate_ the values of \(E_{y}\) for \(y\) very small. That is, for \[\frac{y}{\sqrt{2(s_{x}^{2}-s_{y}^{2})}}<0.002,\] we set \[E_{y}(x,y)=\frac{y}{0.002\sqrt{2(s_{x}^{2}-s_{y}^{2})}}\,E_{y}\!\left(x,\,0.002\sqrt{2(s_{x}^{2}-s_{y}^{2})}\right). \tag{33}\] (Cf. Program 1 and Table 5.) This also serves to guarantee that \(E_{y}(x,y)\) will be continuous between the first and fourth quadrants. ### Concluding Remarks The program FNCTNW calculates \(w(z)\) quite accurately. The percent error in most of the region is \(\sim 10^{-4}\%\), except for the real part of \(w(z)\) near the real axis for certain values of \(Z\!R\) (near \(Z\!R=2.2\), 3.5, and 4.2), where the percent error could be at most 0.1%. The program GAFELD likewise calculates the electric field with a percent error of \(\sim 10^{-4}\%\), except for \(E_{y}\) near the real axis, where the percent error is at most of the order of 0.1%. Even though we have rather large percent errors (\(\sim\)0.1%) for \(\Re w(z)\) and \(E_{y}\) near the real axis, the _absolute errors_ are small because \(\Re w(z)\) and \(E_{y}\) take on small absolute values there. We have discussed the accurate evaluation over the entire first quadrant. If used in a computer simulation of beam-beam effects, PADE 1 would be called by far the most, as its region of validity more or less corresponds to where the particles reside. One may be justified, for the sake of simplicity, in regarding PADE 1 as an adequate replacement for the true field, but further investigation would be necessary to confirm this. **Acknowledgements** We would like to thank Professor W.
Fuchs of the Mathematics Department of Cornell University for various useful discussions.
2303.12613
Noisy recovery from random linear observations: Sharp minimax rates under elliptical constraints
Estimation problems with constrained parameter spaces arise in various settings. In many of these problems, the observations available to the statistician can be modelled as arising from the noisy realization of the image of a random linear operator; an important special case is random design regression. We derive sharp rates of estimation for arbitrary compact elliptical parameter sets and demonstrate how they depend on the distribution of the random linear operator. Our main result is a functional that characterizes the minimax rate of estimation in terms of the noise level, the law of the random operator, and elliptical norms that define the error metric and the parameter space. This nonasymptotic result is sharp up to an explicit universal constant, and it becomes asymptotically exact as the radius of the parameter space is allowed to grow. We demonstrate the generality of the result by applying it to both parametric and nonparametric regression problems, including those involving distribution shift or dependent covariates.
Reese Pathak, Martin J. Wainwright, Lin Xiao
2023-03-22T14:51:11Z
http://arxiv.org/abs/2303.12613v1
# Noisy recovery from random linear observations: ###### Abstract Estimation problems with constrained parameter spaces arise in various settings. In many of these problems, the observations available to the statistician can be modelled as arising from the noisy realization of the image of a random linear operator; an important special case is random design regression. We derive sharp rates of estimation for arbitrary compact elliptical parameter sets and demonstrate how they depend on the distribution of the random linear operator. Our main result is a functional that characterizes the minimax rate of estimation in terms of the noise level, the law of the random operator, and elliptical norms that define the error metric and the parameter space. This nonasymptotic result is sharp up to an explicit universal constant, and it becomes asymptotically exact as the radius of the parameter space is allowed to grow. We demonstrate the generality of the result by applying it to both parametric and nonparametric regression problems, including those involving distribution shift or dependent covariates. ## 1 Introduction In this paper, we study the problem of estimating an unknown vector \(\theta^{\star}\) on the basis of random linear observations corrupted by noise. More concretely, suppose that we observe a random operator \(T_{\xi}\) and a random vector \(y\), which are linked via the equation \[y=T_{\xi}(\theta^{\star})+w. \tag{1}\] This observation model involves two forms of randomness: the unobserved vector \(w\), which is a form of additive observation noise, and the observed operator \(T_{\xi}\), which is random, as indicated by its dependence on an underlying random variable \(\xi\). While relatively simple in appearance, the observation model (1) captures a broad range of statistical estimation problems. **Example 1** (Linear regression).: We begin with a simple but widely used model: linear regression. The goal is to estimate the coefficients \(\theta^{\star}\in\mathbf{R}^{d}\) that define the best linear predictor \(x\mapsto\langle x,\,\theta^{\star}\rangle\) of some real-valued response variable \(Y\in\mathbf{R}\). In order to do so, we observe a collection of \((x_{i},y_{i})\) pairs linked via the noisy observation model \[y_{i}=\langle x_{i},\,\theta^{\star}\rangle+w_{i}\qquad\text{for $i=1,\ldots,n$.}\] If we define the concatenated vector \(y=(y_{1},\ldots,y_{n})\), with an analogous definition for \(w\), this is a special case of our general setup with the random linear operator \(T_{\xi}:\mathbf{R}^{d}\to\mathbf{R}^{n}\) given by \[[T_{\xi}(\theta)]_{i}=\langle x_{i},\,\theta\rangle\quad\text{for $i=1,\ldots,n$.} \tag{2}\] Here, the random index corresponds to the covariate vectors so that \(\xi=(x_{1},\ldots,x_{n})\); note that we have imposed no assumptions on the dependence structure of these covariate vectors. In the classical setting, these covariates are assumed to be drawn in an i.i.d. manner; however, our general set-up is by no means limited to this classical setting. In the sequel, we consider various examples with interesting dependence structure, and our theory gives some very precise insights into the effects of such dependence. **Example 2** (Nonparametric regression).: In the preceding example, we discussed the problem of predicting a response variable \(Y\in\mathbf{R}\) in a linear manner. 
Let us consider the nonparametric generalization: here our goal is to estimate the regression function \(f^{\star}(x)\coloneqq\mathbf{E}[Y\mid X=x]\), which need not be linear as a function of \(x\). Given observations \(\{(x_{i},y_{i})\}_{i=1}^{n}\), we can write them in the form \[y_{i}=f^{\star}(x_{i})+w_{i},\qquad\text{for $i=1,\ldots,n$,}\] where \(w_{i}=y_{i}-\mathbf{E}[Y\mid X=x_{i}]\) are zero-mean noise variables. Now let us suppose that \(f^{\star}\) belongs to some function class \(\mathcal{F}\) contained within \(L^{2}(\mathcal{X})\), and show how this observation model can be understood as a special case of our setup with \(\theta^{\star}\in\ell^{2}(\mathbf{N})\). Take some orthonormal basis \(\{\phi_{j}\}_{j\geqslant 1}\) of \(L^{2}(\mathcal{X})\). Any function in \(\mathcal{F}\) can then be expanded as \(f=\sum_{j\geqslant 1}\theta_{j}\phi_{j}\) for some sequence \(\theta\in\ell^{2}(\mathbf{N})\). Letting \(\xi=(x_{1},\ldots,x_{n})\), we can define the operator \(T_{\xi}:\ell^{2}(\mathbf{N})\to\mathbf{R}^{n}\) via \[\theta\mapsto[T_{\xi}(\theta)]_{i}\coloneqq\sum_{j=1}^{\infty}\theta_{j}\phi_{j}(x_{i})\quad\text{for $i=1,\ldots,n$,}\] so that this problem can be written in the form of our general model (1). Observe that the randomness in the observation operator \(T_{\xi}\) arises via the randomness in sampling the covariates \(\{x_{i}\}_{i=1}^{n}\). **Example 3** (Tomographic reconstruction).: The problem of tomographic reconstruction refers to the problem of recovering an image, modeled as a real-valued function \(f^{\star}\) on some compact domain \(\mathcal{X}\subset\mathbf{R}^{2}\), based on noisy integral measurements. Formally, we observe responses of the form \[y_{i}=\int_{\mathcal{X}}h(x_{i},u)f^{\star}(u)\,\mathrm{d}u+w_{i}\qquad\text{ for $i=1,\ldots,n$,}\] where \(h:\mathbf{R}^{2}\times\mathbf{R}^{2}\to\mathbf{R}\) is a known window function. If we again view \(f^{\star}\) as belonging to some function class \(\mathcal{F}\) within \(L^{2}(\mathcal{X})\), then we can write this model in our general form with \[[T_{\xi}(v)]_{i}=\sum_{j\geqslant 1}v_{j}\Big{[}\int_{\mathcal{X}}h(x_{i},u)\phi_{j}(u)\,\mathrm{d}u\Big{]},\quad\text{and $\xi=(x_{1},\ldots,x_{n})$.}\] Here we have followed the same conversion as in Example 2, in particular re-expressing \(f^{\star}\) in terms of its generalized Fourier coefficients with respect to an orthonormal family \(\{\phi_{j}\}_{j\geqslant 1}\). **Example 4** (Error-in-variables).: Consider the Berkson variant [6, 14] of the error-in-variables problem in nonparametric regression. In this problem, an observed covariate \(x\)--instead of being associated with a noisy observation of \(f^{\star}(x)\)--is associated with a noisy observation of the "jittered" evaluation \(f^{\star}(x+u)\), where \(u\in\mathbf{R}\) is the random jitter. Formally, we observe \(n\) pairs \((x_{i},y_{i})\) of the form \[y_{i}=f^{\star}(x_{i}+u_{i})+\varepsilon_{i}\qquad\text{for }i=1,\ldots,n,\] where the unobserved random jitter \(u_{i}\) is drawn independently of the pair \((x_{i},\varepsilon_{i})\).
We can re-write these observations as a special case of our general model with \(\xi=(x_{1},\ldots,x_{n})\), and \[[T_{\xi}(f)]_{i}\coloneqq\mathbf{E}_{u}\left[f(x_{i}+u)\right],\quad\text{and }\quad w_{i}\coloneqq\varepsilon_{i}+\left\{f(x_{i}+u_{i})-\mathbf{E}_{u} \left[f(x_{i}+u)\right]\right\}\quad\text{for }i=1,\ldots,n.\] Note that the new noise variables \(w_{i}\) are again zero-mean, and our assumption that \(T_{\xi}\) is observed means that the distribution of the jitter \(u\) is known. These examples (and others, as discussed below in Section 1.2) motivate our study of the operator model (1). As we discuss in further detail later, a key advantage of writing the observation model in this form is that it will allow us to separate three key components of the difficulty of the problem: (i) the distribution of the random operator \(T_{\xi}\), as expressed via the distribution of \(\xi\), (ii) the distribution of the noise variable \(w\coloneqq y-T_{\xi}\theta^{\star}\), and (iii) the constraints on the unknown parameter \(\theta^{\star}\). ### Problem formulation, notation, and assumptions With these motivating examples in mind, we now turn to a more precise mathematical formulation of the estimation problem introduced above. #### 1.1.1 Assumptions on the random variables \((\xi,w)\) Let us start by discussing properties of the random operator \(T_{\xi}\). In the examples previously introduced, the domain of the observation operator \(T_{\xi}\) was either a subset of \(\mathbf{R}^{d}\), or more generally, a subset of the sequence space \(\ell^{2}(\mathbf{N})\). The bulk of our analysis focuses on the finite-dimensional setting --i.e., with domain \(\mathbf{R}^{d}\)--so that \(T_{\xi}\) can be identified with a random matrix \(\mathbf{R}^{n\times d}\), for some pair \((n,d)\) of positive but finite integers. However, as we highlight in Section 3.2, simple approximation arguments can be used to leverage our finite-dimensional results to determine minimax rates of convergence for estimating an element \(\theta^{\star}\) of the infinite-dimensional sequence space \(\ell^{2}(\mathbf{N})\). In terms of the probabilistic structure of \(T_{\xi}\), we assume the random element \(\xi\) lies in the measurable space \((\Xi,\mathcal{E})\), and is drawn from a probability measure \(\mathbb{P}\) on the same space. Throughout we take \(\mathcal{E}\) to be large enough such that linear functionals of \(T_{\xi}\) are measurable. As for the noise vector \(w\in\mathbf{R}^{n}\), we assume it is drawn--conditionally on \(\xi\)--from a noise distribution with conditional mean zero, and bounded conditional covariance. Formally, we assume that \(w\sim\nu(\cdot\mid\xi)\) where \(\nu\) is a Borel regular conditional probability on \(\mathbf{R}^{n}\) that satisfies the following two conditions: 1. For \(\mathbb{P}\)-almost every \(\xi\in\Xi\), we have \(\int\!w\,\nu(\mathrm{d}w\mid\xi)=0\); and 2. For \(\mathbb{P}\)-almost every \(\xi\in\Xi\), we have \[\int(u^{\mathsf{T}}w)^{2}\,\nu(\mathrm{d}w\mid\xi)\leq u^{\mathsf{T}}\Sigma_{w} u,\qquad\text{for any fixed }u\in\mathbf{R}^{n}.\] We write that the measure \(\nu\) lies in the set \(\mathcal{P}(\Sigma_{w})\) when these two conditions are satisfied. In words, Assumption (N1) requires that \(w\) is conditionally centered, and Assumption (N2) assumes that the conditional covariance of \(w\) is almost surely upper bounded in the semidefinite ordering by \(\Sigma_{w}\). 
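To fix ideas, the following short Python simulation generates one draw of the pair \((T_{\xi},y)\) from the observation model (1), instantiated as the random-design regression operator of Example 1 (Eqn. (2)). The particular covariate law and heteroscedastic noise mechanism below are illustrative choices of ours, made only so that Assumptions (N1) and (N2) are visibly satisfied with \(\Sigma_{w}=\sigma^{2}I_{n}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 50, 5, 0.5

theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)          # normalize theta* to unit Euclidean norm

# xi = (x_1, ..., x_n): an illustrative anisotropic Gaussian covariate law.
cov = np.diag(1.0 / np.arange(1, d + 1))
xi = rng.multivariate_normal(np.zeros(d), cov, size=n)

# T_xi is the n x d matrix with rows x_i^T, as in Example 1.
T_xi = xi

# Noise: conditionally zero-mean given xi, with conditional variance <= sigma^2,
# so that (N1) and (N2) hold with Sigma_w = sigma^2 * I_n.
cond_sd = sigma * rng.uniform(0.5, 1.0, size=n)   # may depend on xi in general
w = cond_sd * rng.standard_normal(n)

y = T_xi @ theta_star + w                         # the observation model (1)
print(y[:5])
```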
Let \(\mathbb{P}\times\nu\) denote the distribution of the tuple \((\xi,w)\); in explicit terms, writing \((\xi,w)\sim\mathbb{P}\times\nu\) means that \(\xi\sim\mathbb{P}\) and \(w\mid\xi\sim\nu(\cdot\mid\xi)\). Having specified the joint law of \((\xi,w)\), the random variable \(y\) then satisfies the stated observation model (1). #### 1.1.2 Decision-theoretic formulation In this paper, our goal is to estimate \(\theta^{\star}\) to the best possible accuracy as measured by a fixed quadratic form. To make this rigorous, we introduce two symmetric positive definite matrices \(K_{e}\) and \(K_{c}\), which induce (respectively) the squared norms \[\|\theta\|_{K_{e}}^{2}\coloneqq\langle\theta,K_{e}\theta\rangle\quad\text{and}\quad\|\theta\|_{K_{c}^{-1}}^{2}\coloneqq\langle\theta,K_{c}^{-1}\theta\rangle,\] defined for any \(\theta\in\mathbf{R}^{d}\). We seek estimates \(\widehat{\theta}\) of \(\theta^{\star}\) that have low squared _estimation error_ \(\|\widehat{\theta}-\theta^{\star}\|_{K_{e}}^{2}\), as defined by the matrix \(K_{e}\). In parallel, we assume that the underlying parameter is bounded in the _constraint norm_, so that it lies in the ellipse \[\Theta(\varrho,K_{c})\coloneqq\Big{\{}\,\theta\in\mathbf{R}^{d}:\|\theta\|_{K_{c}^{-1}}\leq\varrho\,\Big{\}}\] with radius \(\varrho\), as defined by the matrix \(K_{c}\). With this notation in hand, the central object of study in this paper is the _minimax risk_ \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\coloneqq\inf_{\widehat{\theta}}\sup_{\begin{subarray}{c}\theta^{\star}\in\Theta(\varrho,K_{c})\\ \nu\in\mathcal{P}(\Sigma_{w})\end{subarray}}\mathbf{E}_{(\xi,w)\sim\mathbb{P}\times\nu}\Big{[}\|\widehat{\theta}-\theta^{\star}\|_{K_{e}}^{2}\Big{]}, \tag{3}\] where the infimum ranges over all measurable functions \(\widehat{\theta}\equiv\widehat{\theta}(T_{\xi},y)\) that map the observed pair \((T_{\xi},y)\) to \(\mathbf{R}^{d}\). ### Examples of choices of sampling laws, constraints and error norms As discussed previously, our general theory accommodates various forms of the random linear operators \(T_{\xi}\). As one might expect, the sampling law \(\mathbb{P}\) for \(\xi\) changes the statistical structure of the observations, and so influences the quality of the best possible estimates. Moreover, the interaction between \(\mathbb{P}\) and the geometry of the error norm, as defined by the matrix \(K_{e}\), plays an important role. Finally, both of these factors interact with the geometry of the constraint set, as determined by the matrix \(K_{c}\). Below we discuss some examples of these types of interactions. To be clear, each of these statistical settings has been considered separately in the literature previously; one benefit of our approach is that it provides a unifying framework that includes each of these problems as special cases. **Example 5** (Covariate shift in linear regression).: Recall the set-up for linear regression, as introduced in Example 1. In practice, the _source distribution_ from which the covariates \(x\) are sampled when constructing an estimate of \(\theta^{\star}\) need not be the same as the _target distribution_ of covariates on which the predictor is to be deployed. This phenomenon--a discrepancy between the source and target distributions--is known as _covariate shift_. It is now known to arise in a wide variety of applications (e.g., see the papers [45, 40] and references therein for more details).
As one concrete example, in healthcare applications, the covariate vector \(x\in\mathbf{R}^{d}\) might correspond to various diagnostic measures run on a given patient, and the response \(y\in\mathbf{R}\) could correspond to some outcome variable (e.g., blood pressure). Clinicians might use one population of patients to develop a predictive model relating the diagnostic measures \(x\) to the outcome \(y\), but then be interested in making predictions for a related but distinct population of patients. In our setting, suppose that we use the linear model \(\theta\mapsto\widehat{y}\coloneqq\left\langle\theta,\,x\right\rangle\) to make predictions over a collection of covariates with distribution \(Q\). A simple computation shows that the mean-squared prediction error, averaging over both the noise \(w\) and random covariates \(x\), takes the form \[\mathbf{E}\left[(\widehat{y}-y)^{2}\right]=\underbrace{(\theta- \theta^{\star})^{\mathsf{T}}\Sigma_{Q}(\theta-\theta^{\star})}_{=\,L_{Q}( \widehat{\theta},\theta^{\star})}+c,\qquad\text{where}\quad\Sigma_{Q} \coloneqq\mathbf{E}_{Q}[x\otimes x],\] and \(c\) is a constant independent of the pair \((\theta,\theta^{\star})\). Thus, the excess prediction error over the new population \(Q\) corresponds to taking \(K_{e}=\Sigma_{Q}\) in our general set-up. Similarly, if one wanted to assess parameter error, then studying the minimax risk with the choice \(K_{e}=I_{d}\) would be reasonable. Finally, the error in the original population (denoted \(P\)) can be assessed with the choice \(K_{e}=\Sigma_{P}\coloneqq\mathbf{E}_{P}[x\otimes x]\). Among the claims in the paper of Mourtada [53] is the following elegant result: when no constraints are imposed on \(\theta^{\star}\), the minimax risk in the squared metric \(L_{Q}(\widehat{\theta},\theta^{\star})=\|\widehat{\theta}-\theta^{\star}\|_{ \Sigma_{Q}}^{2}\) is equal to \[\inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\mathbf{R}^{d}} \mathbf{E}\left[L_{Q}(\widehat{\theta},\theta^{\star})\right]=\frac{\sigma^{2 }}{n}\,\mathbf{E}[\mathbf{Tr}(\Sigma_{n}^{-1}\Sigma_{Q})], \tag{4}\] where \(\Sigma_{n}\) denotes the sample covariance matrix \((1/n)\sum_{i=1}^{n}x_{i}\otimes x_{i}\), and the expectation is over \(x_{1},\ldots,x_{n}\stackrel{{\text{IID}}}{{\sim}}P\). Thus, the fundamental rate of estimation depends on the distribution of the sample covariance matrix, the noise level, and the target distribution \(Q\). In this paper, we derive related but more general results that allow for many other choices of the error metric and, perhaps more importantly, permit the statistician to incorporate constraints on the parameter \(\theta^{\star}\). We demonstrate in Section 3.1.3 that these more general results allow us to recover the known relation (23) via a simple limiting argument where the constraint radius tends to infinity. **Example 6** (Nonparametric regression with non-uniform sampling).: Consider observing covariate-target pairs \(\{(x_{i},y_{i})\}_{i=1}^{n}\) where \(y_{i}\) is modeled as being a noisy realization of a conditional mean function; _i.e._, we have \(y_{i}=f^{\star}(x_{i})+w_{i}\) where \(f^{\star}(x)=\mathbf{E}[Y\mid X=x]\), analogously to Example 2. When \(f^{\star}\) is appropriately smooth and the covariates are drawn from a uniform distribution over some compact domain, this problem has been intensively studied, and the minimax risks are well-understood. 
However, when the sampling of the covariates \(x_{i}\) is non-uniform, the possible rates of estimation can deteriorate drastically--see for instance the papers [23, 25, 24, 26, 32, 2]. Using tools from the theory of reproducing kernel Hilbert spaces (RKHSs), one can formulate this problem as an infinite-dimensional counterpart to our model (1), where the constraint parameters \((\varrho,K_{c})\) are determined by the Hilbert radius and the eigenvalues of the integral operator associated with the kernel. Although formally our minimax risk is defined for finite dimensional problems, via limiting arguments, it is straightforward to obtain consequences for the infinite-dimensional problem of the type discussed here, which discuss in Section 3.2. **Example 7** (Covariate shift in nonparametric regression).: Combining the scenarios in Examples 5 and 6, now consider the problem of covariate shift in a nonparametric setting. We observe samples \((x_{i},y_{i})\) where the covariates have been drawn according to some law \(P\), and our goal is to construct a predictor with low risk in the squared norm defined by some other covariate law \(Q\). In our study of this setting, the constraint set is determined by the underlying function class in a manner analogous to Example 6, and the error metric is determined by the new distribution of covariates on which the estimates must be deployed, analogously to Example 5. Some recent work has studied general conditions on the pair \((P,Q)\) and the corresponding optimal rates of estimation [41, 27, 56, 47, 59, 67, 60, 28]. Among the consequences of our work are more refined results that are instance-dependent, in the sense that we characterize optimality for fixed pairs \((P,Q)\), as opposed to optimality over broad classes of \((P,Q)\) pairs. See Section 3.2.3 for a detailed discussion of these refined results. The examples above share the common feature of being problems where estimating a conditional mean function is able to be formulated within the observation model (1). Additionally, in these examples, the fundamental hardness of the problem depends on both the structure of this function (modelled via assumptions on \(\theta^{\star}\)) as well as the distribution of the covariates. The goal of this paper is to build a general theory for these types of observation models, which elucidates how both the structure of \(\theta^{\star}\) as well as the covariate law \(\mathbb{P}\) determine the minimax rate of estimation in finite samples. In Section 3, we give concrete consequences of our general results for these types of problems. ### Connections and relations to prior work Let us discuss in more detail some connections and relations between our problem formulation and results, and various branches of the statistics literature. Connections to random design regressionAs shown by the examples discussed so far, our general set-up includes, among other problems, many variants of _random design regression_. This is a classical problem in statistics, with a large literature; see the sources [33, 65, 35] and references therein for an overview. The recent paper [53] also studies the analogous problem studied here when the vector \(\theta^{\star}\) is allowed to be arbitrary; the only assumption made is that \(\theta^{\star}\in\mathbf{R}^{d}\). In this case, it is possible to use tools from Bayesian decision theory to exhibit the minimax optimality of the ordinary least squares (OLS) estimator [53, Theorem 1]. 
In Section 3.1.3, we demonstrate how to obtain this result as a corollary of our more general results. Note that in applications, such as those given by the preceding examples, it is important that there is a constraint on \(\theta^{\star}\). For instance, in a nonparametric regression problem, the parameter \(\theta^{\star}\) denotes the coefficients of a series expansion corresponding to a conditional mean function \(f^{\star}(x)=\mathbf{E}[Y\mid X=x]\) in an appropriate orthonormal family of functions. In this case, one can obtain consistent estimators of \(f^{\star}\) only if \(\theta^{\star}\) lies in a compact set. **Random design and Bayesian priors.** When the norm of the vector \(\theta^{\star}\) is constrained, there are relatively few minimax results in the random design setting. On the other hand, a related Bayesian setting has been studied. In this line of work, the definition of the minimax risk is altered so that the "worst-case" supremum over \(\theta^{\star}\) in the constraint set is replaced with a suitable "average"--namely the expectation over \(\theta^{\star}\) drawn according to a prior distribution over the constraint set. In addition to the clear differences in the formulation, this line of work exhibits two main qualitative differences from our paper. First, these Bayesian results have primarily been established in the proportional asymptotics framework, in which the ratio \(d/n\) is assumed to converge towards some aspect ratio \(\gamma>0\) as both \((d,n)\) diverge to infinity. Secondly, by selecting "nice priors", it is possible to leverage certain properties--for instance, equivariance to some group action--that can hold for _both_ the prior and covariate law. On the other hand, our setting is somewhat more challenging in that we make no _a priori_ assumptions about the covariate law and its relationship to the constraint set. In more detail, when the covariates are drawn from a multivariate Gaussian, for certain constraint sets, it is possible to find a prior such that the minimax and Bayesian risks coincide. As one example, Dicker [17] studies the asymptotic minimax risk when the ratio \(d/n\) is allowed to grow, and by using equivariance arguments, he obtains an asymptotically minimax procedure. Proposition 3(b) in his paper gives a prior for which the minimax and Bayesian risks coincide. The thesis [52, Corollary 8.2] provides a matching asymptotic lower bound. The relation between Bayes and minimax risks in this line of work cannot be expected in general, as the arguments repose critically on the rotation invariance of the standard multivariate Gaussian. Moreover, this and other classical work on random design regression using Gaussian covariates typically hinges on special, closed-form formulae for quantities related to the distribution of the sample covariance matrix (see, e.g., the papers [62, 12, 1]). **Fixed design results.** Although we focus on minimax estimation of the unknown parameter \(\theta^{\star}\) in the random design setting, we note that the related fixed design setting is well studied. In fact, in classical work, Donoho studied an operator-based observation model very similar to the one considered here; a key difference is that in that work, the focus is on estimating a (scalar-valued) functional of \(\theta^{\star}\) [18]. By sufficiency arguments, our problem, when instantiated in the setting of fixed design with Gaussian noise, is equivalent to mean estimation on an elliptical parameter set.
It is therefore related to classical work on sharp asymptotic minimax estimation in the Gaussian sequence model [57, 31, 20, 19, 5, 29, 30]; see also the monograph [38] for a pedagogical overview of this topic. These works extend the classical line of work on estimating a constrained (possibly multivariate) Gaussian mean [15, 9, 50, 7, 48]. We refer the reader to references [49, 22], which contain a more thorough overview of prior work on minimax estimation of a parameter when a notion of'signal to noise ratio' is fixed. Of course, applying an optimal fixed design estimator cannot be expected to yield an optimal random design estimator in general. This is because in the fixed design formulation, the worst-case \(\theta^{\star}\) could adapt to a single design matrix, whereas in the random design formulation, the worst-case \(\theta^{\star}\) must adapt to the _random ensemble_ of design matrices induced by sampling \(n\) samples in an IID fashion from a fixed covariate law. ## 2 Main results We now turn to the presentation of our main results, which are upper and lower bounds on the minimax rate of estimation as defined in display (3), matching up to a constant pre-factor. These bounds are presented in Section 2.1. ### General upper and lower bounds Our general upper bounds are stated as the following functional of the distribution of the operator \(T_{\xi}\); the noise covariance \(\Sigma_{w}\); the constraint norm, as determined by the pair \((\varrho,K_{c})\); and the estimation norm, as defined by the operator \(K_{e}\), \[\Phi(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\\ \coloneqq\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{e}^ {1/2}(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2} \right):\Omega\succ 0,\;\;\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq\varrho^{2} \,\Big{\}}. \tag{5}\] Our first main result is a general upper bound. **Theorem 1** (General minimax upper bound).: _The minimax risk is upper bounded as_ \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\leq\Phi(T,\mathbb{P}, \Sigma_{w},\varrho,K_{e},K_{c}). \tag{6}\] See Section 4.1 for the proof. Our second result is a complementary lower bound. **Theorem 2** (Lower bound).: _The minimax risk is lower bounded as_ \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\geq\,\Phi(T,\mathbb{ P},\Sigma_{w},\tfrac{\varrho}{2},K_{e},K_{c})\geq\frac{1}{4}\,\Phi(T,\mathbb{P}, \Sigma_{w},\varrho,K_{e},K_{c}). \tag{7}\] See Section 4.2 for the proof. Note that the functional on the righthand side of the display (7) above matches the quantity appearing in our minimax upper bound (6). Thus, in a nonasymptotic fashion, we have determined the minimax risk for this problem up the prefactor \(1/4\). Sharper lower bound constantsThe constant appearing in the lower bound (7) can typically be substantially sharpened. To describe how this can be done via our results, fix a scalar \(\tau\in(0,1]\) and a symmetric positive definite matrix \(\Omega\), and let \(Z\in\mathbf{R}^{d}\) be vector of IID standard Gaussians. Define the scalar \[c\coloneqq\tau^{2}(1-\mathbf{P}\{\tau^{2}\sum_{i=1}^{d}\lambda_{i}Z_{i}^{2}>1 \}),\] where \(\{\lambda_{i}\}_{i=1}^{d}\) are the the eigenvalues of the matrix \((1/\varrho^{2}){K_{e}}^{1/2}\Omega{K_{e}}^{1/2}\). 
Then, we are able to establish the following minimax lower bound, \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\geq\mathbf{E}\, \mathbf{Tr}\left(K_{e}^{1/2}(\frac{1}{c}\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma _{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right), \tag{8}\] provided that the parameter \(\tau\in(0,1]\) and the symmetric positive definite matrix \(\Omega\) is such that \(\mathbf{Tr}(K_{c}^{-1/2}\Omega{K_{c}}^{-1/2})=\varrho^{2}\). With appropriate choices of the pair \((\tau,\Omega)\), the lower bound (8) can lead to pre-factors that are much closer to \(1\), and in some cases, converge to one under various scalings. In Section 3.1.1, we give one illustration of how the family of bounds (8) can be exploited to obtain an improvement of this type. Form of an optimal procedureInspecting the proof of Theorem 1--specifically, as a consequence of Proposition 3--if the supremum on the righthand side of (5) is attained at the matrix \(\Omega_{\star}\), then the following estimator, in view of the lower bound (7), is near minimax-optimal, \[\widehat{\theta}(T_{\xi},y)\coloneqq\big{(}\Omega_{\star}^{-1}+T_{\xi}^{ \mathsf{T}}\Sigma_{w}^{-1}T_{\xi}\big{)}^{-1}T_{\xi}^{\mathsf{T}}\Sigma_{w}^{- 1}y. \tag{9}\] It is perhaps instructive to write this estimator in its "ridge" formulation \[\widehat{\theta}(T_{\xi},y)=\operatorname*{arg\,min}_{\vartheta\in\mathbf{R} ^{d}}\Big{\{}\left\|y-T_{\xi}\vartheta\right\|_{\Sigma_{w}^{-1}}^{2}+\left\| \vartheta\right\|_{\Omega_{\star}^{-1}}^{2}\Big{\}}.\] In the language of Bayesian statistics, our order-optimal procedure is a maximum _a posteriori_ (MAP) estimate for \(\theta^{\star}\) when \(y\sim\mathsf{N}\left(T_{\xi}\theta^{\star},\Sigma_{w}\right)\) and the parameter follows the prior distribution \(\theta^{\star}\sim\mathsf{N}\left(0,\Omega_{\star}\right)\). The optimal prior is identified via the choice of \(\Omega_{\star}\) which is determined by the functional appearing in Theorems 1 and 2. If the supremum in (5) is not attained, then by selecting a sequence of matrices \(\Omega_{k}\) that approach the maximal value of the functional, one can similarly argue there exists a sequence of estimators that approach the order-optimal minimax risk. ### Independent and identically distributed regression models An important application of our general result is for independent and identically distributed (IID) regression models of the form \[y_{i}=\langle\theta^{\star},\psi(x_{i})\rangle+\sigma z_{i},\quad\text{for }i=1, \ldots,n. \tag{10}\] Above, we assume that \(x_{i}\) are independent and identical draws from a fixed covariate distribution \(P\), on some measurable space \(\mathcal{X}\), and that \(\psi\colon\mathcal{X}\to\mathbf{R}^{d}\). The covariates \(\{x_{i}\}_{i=1}^{n}\) are independent and the conditional distribution of \(z\mid x\) is an element of \(\mathcal{P}(I_{n})\). The parameter \(\sigma>0\) indicates the noise level; it is an upper bound on the conditional standard deviation of \(y_{i}-\langle\theta^{\star},\psi(x_{i})\rangle\). 
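As a concrete illustration of the estimator (9) within the IID model (10), the following Python sketch simulates data with \(\psi(x)=x\) and applies the estimator for one fixed feasible choice of \(\Omega\). The isotropic covariate law and the particular \(\Omega\) are illustrative assumptions of ours, and are not claimed to attain the supremum in (5).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma, rho = 200, 10, 1.0, 2.0

theta_star = rng.normal(size=d)
theta_star *= rho / np.linalg.norm(theta_star)          # place theta* on the sphere of radius rho

X = rng.standard_normal((n, d))                         # psi(x) = x, covariates ~ N(0, I_d)
y = X @ theta_star + sigma * rng.standard_normal(n)     # the IID model (10)

# Estimator (9) with Sigma_w = sigma^2 I_n and the fixed choice Omega = (rho^2 / d) I_d,
# which satisfies the trace constraint Tr(Omega) <= rho^2 of (5) when K_c = I_d.
Omega = (rho**2 / d) * np.eye(d)
A = np.linalg.inv(Omega) + X.T @ X / sigma**2
theta_hat = np.linalg.solve(A, X.T @ y / sigma**2)

print("squared estimation error:", np.sum((theta_hat - theta_star) ** 2))
```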
For the model described above, the following minimax risk of estimation provides the best achievable performance of any estimator, when \(\theta^{\star}\) lies in a compact ellipse and the error is measured in the quadratic norm \[\mathfrak{M}_{n}^{\text{IID}}\Big{(}\psi,P,\varrho,\sigma^{2},K_{c},K_{e} \Big{)}\coloneqq\inf_{\begin{subarray}{c}\widehat{\theta}\\ \nu\in\mathcal{P}\end{subarray}}\sup_{\begin{subarray}{c}\theta^{\star}\in \Theta(\varrho,K_{c})\\ \nu\in\mathcal{P}\end{subarray}}\mathbf{E}\Big{[}\big{\|}\widehat{\theta}(y_{1 }^{n},x_{1}^{n})-\theta^{\star}\big{\|}_{K_{e}}^{2}\Big{]}. \tag{11}\] Note that this problem can be formulated as an instance of our general operator formulation (1) where we take \(y=(y_{1},\ldots,y_{n})\), \(w=\sigma(z_{1},\ldots,z_{n})\), and \(\xi=(x_{1},\ldots,x_{n})\), so that \(\mathbb{P}=P^{n}\). The operator \(T_{\xi}\) is given by the \(n\times d\)-matrix with rows \(\psi(x_{i})^{\mathsf{T}}\). In this context the following random matrix, which is a rescaling of the operator \(T_{\xi}^{\mathsf{T}}T_{\xi}\), plays an important role: \[\Sigma_{n}\coloneqq\frac{1}{n}\sum_{i=1}^{n}\psi(x_{i})\otimes\psi(x_{i}). \tag{12}\] In order to state the consequence of our more general results for this problem, let us introduce a functional. We denote it by \(d_{n}\) to indicate that it is essentially an "effective statistical dimension" for this problem, \[d_{n}(\psi,P,\varrho,\sigma^{2},K_{e},K_{c})\coloneqq\sup_{\Omega}\Big{\{} \operatorname{Tr}\mathbf{E}_{P^{n}}\big{[}K_{e}^{1/2}(\Sigma_{n}+\Omega^{-1}) ^{-1}K_{e}^{1/2}\big{]}:\Omega>0,\operatorname{Tr}(K_{c}^{-1/2}\Omega K_{c}^ {-1/2})\leq\tfrac{n\varrho^{2}}{\sigma^{2}}\Big{\}}. \tag{13}\] Then an immediate corollary to Theorems 1 and 2 is the following pair of inequalities for the IID minimax risk.1 Footnote 1: Strictly speaking, this result follows immediately if we had defined the minimax risk over estimators which are measurable functions of the variables \(\{(y_{i},\psi(x_{i}))\}\). Nonetheless, since our lower bounds use Gaussian noise, the stated inequalities hold even when defining the minimax risk for estimators which operate on \(\{(y_{i},x_{i})\}\), by a standard sufficiency argument. **Corollary 1**.: _Under the IID regression model (10), the minimax rate of estimation as defined in equation (11) satisfies the following inequalities,_ \[\frac{1}{4}\,\frac{\sigma^{2}}{n}d_{n}(\psi,P,\varrho,\sigma^{2}, K_{e},K_{c})\leqslant\frac{\sigma^{2}}{n}d_{n}(\psi,P,\tfrac{\varrho}{2}, \sigma^{2},K_{e},K_{c})\\ \leqslant\mathfrak{M}_{n}^{\mathrm{IID}}\Big{(}\psi,P,\varrho, \sigma^{2},K_{e},K_{c}\Big{)}\leqslant\frac{\sigma^{2}}{n}d_{n}(\psi,P, \varrho,\sigma^{2},K_{e},K_{c}). \tag{14}\] So as to lighten notation, in the sequel, when the feature map \(\psi\) is the identity mapping \(\psi(x)=x\), we drop the parameter \(\psi\) from the functional \(d_{n}\) and the minimax rate \(\mathfrak{M}_{n}^{\mathrm{IID}}\). ### Some properties of the functional appearing in Theorems 1 and 2 As indicated by Theorem 1 and the subsequent discussion, the extremal quantity \[\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{e}^{1/2}(\Omega^{-1}+T_ {\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right):\Omega \succ 0,\;\;\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leqslant\varrho^{2} \,\Big{\}} \tag{15}\] is fundamental in that it determines our minimax risk; moreover when the supremum is attained, the maximizer defines an order-optimal estimation procedure (see equation (9)). 
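In practice, a maximizing \(\Omega\) can also be searched for numerically. The sketch below is a deliberately crude illustration of ours: it restricts to diagonal \(\Omega\), estimates the objective in (15) by Monte Carlo over draws of \(T_{\xi}\) (here IID standard Gaussian rows, with \(K_{e}=K_{c}=I_{d}\) and \(\Sigma_{w}=\sigma^{2}I_{n}\)), and uses random search over the feasible trace ball; all of these simplifications are made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma, rho = 30, 5, 1.0, 1.0
n_mc, n_search = 200, 300

# Pre-draw Monte Carlo copies of T_xi (here: n IID N(0, I_d) covariate rows each).
Ts = rng.standard_normal((n_mc, n, d))

def phi_hat(omega_diag):
    """Monte Carlo estimate of the objective in (15) for K_e = I_d, Sigma_w = sigma^2 I_n,
    and Omega = diag(omega_diag)."""
    inv_omega = np.diag(1.0 / omega_diag)
    vals = [np.trace(np.linalg.inv(inv_omega + T.T @ T / sigma**2)) for T in Ts]
    return float(np.mean(vals))

best_val, best_omega = -np.inf, None
for _ in range(n_search):
    # Random feasible point: nonnegative diagonal entries with trace rho^2 (K_c = I_d).
    omega = rho**2 * rng.dirichlet(np.ones(d))
    val = phi_hat(omega)
    if val > best_val:
        best_val, best_omega = val, omega

print("best value found:", best_val)
print("best diagonal Omega:", best_omega)
```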
Conveniently, it turns out that the maximization problem implied by the display (15) is concave. **Proposition 1** (Concavity of functional).: _The optimization problem_ \[\begin{split}\text{maximize}& f(\Omega) \coloneqq\mathbf{Tr}\,\mathbf{E}\left[K_{e}^{1/2}\big{(}\Omega^{-1}+T_{\xi}^{ \mathsf{T}}\Sigma_{w}^{-1}T_{\xi}\big{)}^{-1}K_{e}^{1/2}\right]\\ \text{subject to}&\Omega\succ 0,\quad\mathbf{Tr}(K_{c} ^{-1/2}\Omega K_{c}^{-1/2})\leqslant\varrho^{2},\end{split} \tag{16}\] _is equivalent to a convex program, with variable \(\Omega\). Formally, the constraint set above is convex, and function \(f\) is concave over this set._ See Appendix A.1 for the proof. Note that this claim implies that, provided oracle access to the objective function \(f\) appearing above, one can in principle obtain a maximizer in a computationally tractable manner, by leveraging algorithms for convex optimization [11]. The functional (15) depends on the distribution of \(T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}\). In general, Jensen's inequality along with the convexity of the trace of the inverse of positive matrices [8, Exercise 1.5.1] implies that it is always lower bounded by \[\sup_{\Omega}\Big{\{}\;\mathbf{Tr}\left(K_{e}^{1/2}(\Omega^{-1}+\mathbf{E}\,T_ {\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right):\Omega \succ 0,\;\;\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leqslant\varrho^{2} \,\Big{\}} \tag{17}\] Comparing displays (15) and (17), we have simply moved the expectation over \(\xi\) into the inverse. For certain IID regression models, as described in Section 2.2, we can give a complementary upper bound. To state our result, we define \[\overline{d}_{n}(P,\varrho,\sigma^{2},K_{e},K_{c})\coloneqq\sup_{\Omega} \Big{\{}\,\mathbf{Tr}\left(K_{e}^{1/2}(\mathbf{E}_{P^{n}}\,\Sigma_{n}+\Omega ^{-1})^{-1}K_{e}^{1/2}\right):\Omega\succ 0,\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2}) \leqslant\tfrac{n\varrho^{2}}{\sigma^{2}}\Big{\}}\;. \tag{18}\] Note that this quantity only depends on the distribution \(P^{n}\) through the matrix \(\mathbf{E}_{P^{n}}\,\Sigma_{n}\). **Proposition 2** (Comparison of \(d_{n}\) to \(\overline{d}_{n}\)).: _Suppose that \(\Sigma_{P}\coloneqq\mathbf{E}_{P}[\psi(x)\otimes\psi(x)]\) is nonsingular. Define \(\kappa\) to be the \(P\)-essential supremum of \(x\mapsto\left\|{K_{c}}^{1/2}\psi(x)\right\|_{2}\). If \(\kappa<\infty\), then for any \(\varrho>0,\sigma>0\), we have_ \[\overline{d}_{n}(\psi,P,\varrho,\sigma^{2},\Sigma_{P},K_{c})\leq d _{n}(\psi,P,\varrho,\sigma^{2},\Sigma_{P},K_{c})\leq\Big{(}1+\frac{\varrho^{2} \kappa^{2}}{\sigma^{2}}\Big{)}\overline{d}_{n}(\psi,P,\varrho,\sigma^{2}, \Sigma_{P},K_{c}).\] Unpacking this result, when \(K_{c}^{1/2}\psi(x)\) is essentially bounded, for problems where the error is measured in the norm induced by the covariance \(\Sigma_{P}\), we see that the functionals \(\overline{d}_{n}\) and \(d_{n}\) are of the same order when the signal-to-noise ratio satisfies the relation \(\frac{\varrho^{2}}{\sigma^{2}}\lesssim\frac{1}{\kappa^{2}}\). As mentioned in the discussion above, the first inequality above is a consequence of a generic lower bound. See Appendix A.2 for the proof of the upper bound in the claim. ### Asymptotics for a diverging radius In this section, we develop an asymptotic limit relation for the minimax risk (3) as the radius \(\varrho\) of the constraint set \(\Theta(\varrho,K_{c})\) tends to infinity. 
The relation reveals that the lower bound constant \(1/4\) appearing in the lower bound Theorem 2 can actually be made quite close to \(1\) for large radii. **Corollary 2**.: _Suppose that \(T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}\) is \(\mathbb{P}\)-almost surely nonsingular. Then the minimax risk (3) satisfies_ \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})=\big{(}1 -o(1)\big{)}\,\Phi(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c}),\quad\text{as $ \varrho\to\infty$.}\] See Appendix A.3 for a proof of this claim. An immediate consequence is that for IID regression settings as in Section 2.2, we have the following limit relation. **Corollary 3**.: _Suppose that that the empirical covariance matrix \(\Sigma_{n}\) from equation (12) is \(P^{n}\)-almost surely invertible. Then, the minimax risk for an IID observation model (10) satisfies the relation_ \[\mathfrak{M}_{n}^{\mathrm{IID}}\Big{(}\psi,P,\varrho,\sigma^{2}, K_{e},K_{c}\Big{)}=\big{(}1-o(1)\big{)}\,\frac{\sigma^{2}}{n}d_{n}\big{(} \psi,P,\varrho,\sigma^{2},K_{e},K_{c}\big{)},\quad\text{as $\varrho\to\infty$.}\] ## 3 Consequences of main results In this section, we demonstrate consequences of our main results for a variety of estimation problems. In Section 3.1, we develop consequences of our main results for problems where the underlying parameter to be estimated is finite-dimensional. In Section 3.2, we develop consequences of our main results for problems where the underlying parameter is infinite-dimensional. In both cases, we are able to derive minimax rates of estimation, which to the best of our knowledge, are not yet in the literature. Additionally, we are also able to re-derive classical as well as recent results in a unified fashion via our main theorems. ### Applications to parametric models We begin by developing the consequences of our main results for regression problems where the statistician is aiming to estimate a finite-dimensional parameter. Sections 3.1.1, 3.1.2, and 3.1.3 concern IID regression settings of the form described in Section 2.2. In Section 3.1.4, we consider a non-IID regression setting. #### 3.1.1 Linear regression with Gaussian covariates As in the prior work [17], consider a random design IID regression setting of the form presented in the display (10), but with Gaussian data. Formally, we assume Gaussian noise, so that \(z_{i}\stackrel{{\text{IID}}}{{\sim}}\mathsf{N}\left(0,1\right)\), and Gaussian covariates, so that \(x_{i}\stackrel{{\text{IID}}}{{\sim}}\mathsf{N}\left(0,I_{d}\right)\) and \(\psi(x)=x\). Here \(x\) and \(z\) are assumed independent. Then we define \[r(n,d,\varrho,\sigma)\coloneqq\inf_{\hat{\theta}}\sup_{\|\theta\|_{2}\leq \varrho}\mathbf{E}\left[\|\widehat{\theta}-\theta\|_{2}^{2}\right]\text{, \quad and\quad}d_{\text{Dicker}}(n,d,\varrho,\sigma)\coloneqq\mathbf{Tr} \,\mathbf{E}\left[(\Sigma_{n}+\tfrac{\sigma^{2}}{n}\tfrac{d}{\varrho^{2}}I_{d}) ^{-1}\right]\!,\] where the expectations are over the Gaussian covariates and noise pairs \(\{(x_{i},z_{i})\}_{i=1}^{n}\). These quantities correspond, respectively, to the minimax risk and the worst-case risk (rescaled by \(n/\sigma^{2}\)), of a certain ridge estimator [17, Corollary 1] on the sphere \(\{\|\theta\|_{2}=\varrho\}\). Dicker [17, Corollary 3] proves the following limiting result. 
Under the proportional asymptotics \(d/n\to\gamma\), where the limiting ratio \(\gamma\) lies in \((0,\infty)\), the minimax risk satisfies \[\lim_{d/n\to\gamma}\left|r(n,d,\varrho,\sigma)-\frac{\sigma^{2}}{n}d_{\text{ Dicker}}(n,d,\varrho,\sigma)\right|=0, \tag{19}\] for any radius \(\varrho>0\) and noise level \(\sigma>0\). Let us now demonstrate that our general theory yields a nonasymptotic counterpart of this claim, and taking limits recovers the asymptotic relation (19). **Corollary 4**.: _For linear regression over the \(\varrho\)-radius Euclidean sphere with Gaussian covariates, the minimax risk satisfies the sandwich relation_ \[c_{d}\,\frac{\sigma^{2}}{n}d_{\text{Dicker}}(n,d,\varrho,\sigma)\leq\frac{ \sigma^{2}}{n}d_{\text{Dicker}}(n,d,\sqrt{c_{d}}\varrho,\sigma)\leq r(n,d, \varrho,\sigma)\leq\frac{\sigma^{2}}{n}d_{\text{Dicker}}(n,d,\varrho,\sigma),\] (20a) _where_ \[c_{d}\coloneqq\begin{cases}(1-\tfrac{1}{2d-1})(1-\exp(-\tfrac{d^{3/2}}{4}))&d \geq 2\\ 1/4&d=1\end{cases}. \tag{20b}\] Note that since \(c_{d}=(1-o(1))\) as \(d\to\infty\), the inequalities (20a) allow us to immediately recover Dicker's result. It should be emphasized, however, that Corollary 4, holds for _any_ quadruple \((n,d,\varrho,\sigma)\). In particular, it is valid in a completely nonasymptotic fashion and with explicit constants. We now sketch how this result follows from our main results. As calculated in Appendix B.1.1, our functional for this problem satisfies \[d_{n}(\mathsf{N}\left(0,I_{d}\right),\varrho,\sigma^{2},I_{d},I_{d})=d_{\text {Dicker}}(n,d,\varrho,\sigma).\] (21a) Hence, our Corollary 1 implies the following characterization of the minimax risk, 2 \[\frac{1}{4}\,\frac{\sigma^{2}}{n}d_{\text{Dicker}}(n,d,\varrho,\sigma)\leq r( n,d,\varrho,\sigma)\leq\frac{\sigma^{2}}{n}d_{\text{Dicker}}(n,d,\varrho, \sigma^{2}). \tag{21b}\] To establish our sharper result (20a), we leverage the stronger lower bound (8). The details of this calculation are presented in Appendix B.1.2. Note that in Section 5.1.1, we simulate this problem and find that as suggested by Corollary 4, that, indeed, the gap between our upper and lower bounds is tiny, even for problems with small dimension (see Figure 1). #### 3.1.2 Underdetermined linear regression Consider observing samples from a standard linear regression model; that is, we observe pairs \(\{(x_{i},y_{i})\}\) according to the model (10), with \(\psi(x)=x\). A practical scenario in which some assumption regarding the norm of the underlying parameter is necessary is when the sample covariance matrix \(\Sigma_{n}\), defined in display (12) is singular with positive \(P^{n}\)-probability. This occurs if \(n<d\), or if there is a hyperplane \(H\subset\mathbf{R}^{d}\) such that \(x\sim P\) lies in \(H\) with positive probability. In this setting, the correct dependence of the minimax risk on the geometry of the constraint set and the distribution of sample covariance matrix is relatively poorly understood. For simplicity--although our results are more general than this--let us assume that error is measured in the Euclidean norm and that it is assumed that the underlying parameter \(\theta^{\star}\) has Euclidean norm bounded by \(\varrho>0\), and that the noise is independent Gaussian with variance \(\sigma^{2}\). 
Then Corollary 1 demonstrates that \[\inf_{\hat{\theta}}\sup_{\|\theta\|_{2}\leq\varrho}\mathbf{E}[\|\hat{\theta}- \theta\|_{2}^{2}]\asymp\frac{\sigma^{2}}{n}d_{n}(P,\varrho,\sigma^{2},I_{d},I_ {d})=\frac{\sigma^{2}}{n}\sup_{\Omega>0}\Bigl{\{}\,\mathbf{Tr}\,\mathbf{E}_{P ^{n}}\bigl{[}(\Sigma_{n}+\Omega^{-1})^{-1}\bigr{]}:\mathbf{Tr}(\Omega)\leq \frac{n\varrho^{2}}{\sigma^{2}}\Bigr{\}}.\] Taking \(\Omega=\frac{n}{d}\frac{\varrho^{2}}{\sigma^{2}}I_{d}\), we obtain the following lower bound on the minimax risk for any covariate law \(P\), \[\tfrac{\sigma^{2}}{n}\,\mathbf{Tr}\,\mathbf{E}_{P^{n}}\bigl{[}(\Sigma_{n}+ \tfrac{\sigma^{2}}{\varrho^{2}}\tfrac{d}{n}I_{d})^{-1}\bigr{]}\asymp\underbrace{ \mathbf{E}\Bigl{[}\sum_{i=1}^{d}\tfrac{\sigma^{2}}{n}\tfrac{1}{\lambda_{i}( \Sigma_{n})}\mathbf{1}\{\lambda_{i}(\Sigma_{n})\geq\tfrac{\sigma^{2}}{n}\tfrac {d}{\varrho^{2}}\}\Bigr{]}}_{\begin{subarray}{c}\text{Estimation error from}\\ \text{large eigenvalues of }\Sigma_{n}\end{subarray}}+\underbrace{ \mathbf{E}\Bigl{[}\sum_{i=1}^{d}\tfrac{\varrho^{2}}{d}\mathbf{1}\{\lambda_{i} (\Sigma_{n})<\tfrac{\sigma^{2}}{n}\tfrac{d}{\varrho^{2}}\}\Bigr{]}}_{ \begin{subarray}{c}\text{Approximation error due}\\ \text{to small eigenvalues of }\Sigma_{n}\end{subarray}}. \tag{22}\] The lower bound (22) is sharp in certain cases. For instance, when \(x_{i}\stackrel{{\text{IID}}}{{\sim}}\mathsf{N}\left(0,I_{d}\right)\) but there are fewer samples than the dimension, so that \(n<d\), it is equal to the minimax risk up to universal constants, following the same argument as in Section 3.1.1. Note that above, \(\lambda_{i}\) denotes the \(i\)th largest (nonnegative) eigenvalue of a symmetric positive semidefinite matrix. One possible interpretation of this lower bound is as follows: the first term indicates the estimation error incurred in directions where the effective signal-to-noise ratio is high; on the other hand, the second term indicates the bias or approximation error that must be incurred in directions where the effective signal-to-noise ratio is low. In fact, the message of this lower bound is that in these directions, no procedure can do much better than estimating \(0\) there. One concrete and interesting takeaway is that if \(\Sigma_{n}\) has an eigenvalue equal to zero, it increases the minimax risk by essentially the same amount as if the eigenvalue were positive and in the interval \((0,\tfrac{\sigma^{2}}{n}\tfrac{d}{\varrho^{2}})\). #### 3.1.3 Linear regression with an unrestricted parameter space In recent work, Mourtada [53] characterizes the minimax risk for random design linear regression problem for an _unrestricted_ parameter space. Consider observing samples \(\{(x_{i},y_{i})\}_{i=1}^{n}\) following the IID model (10) with \(\psi(x)=x\), where the covariates are drawn from some distribution \(P\) on \(\mathbf{R}^{d}\). As argued by Mourtada (see his Proposition 1), or as can be seen by taking \(\varrho\to\infty\) in our singular lower bound (22) from Section 3.1.2, if we impose no constraint on the underlying parameter \(\theta^{\star}\), then it is necessary to assume that the sample covariance matrix \(\Sigma_{n}\) is invertible with probability \(1\) in order to obtain finite minimax risks. 
Theorem 1 in Mourtada's paper then asserts that under this condition, we have \[\inf_{\hat{\theta}}\sup_{\begin{subarray}{c}\theta^{\star}\in\mathbf{R}^{d}\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\,\mathbf{E}\Bigl{[}\bigl{\|} \hat{\theta}-\theta^{\star}\bigr{\|}_{\Sigma_{P}}^{2}\Bigr{]}=\frac{\sigma^{2} }{n}\,\mathbf{E}\,\bigl{[}\,\mathbf{Tr}(\Sigma_{n}^{-1}\Sigma_{P})\bigr{]}, \tag{23}\] where the expectation is over the data \(\{(x_{i},y_{i})\}_{i=1}^{n}\), and \(\Sigma_{P}\coloneqq\mathbf{E}_{P}[x\!\otimes\!x]\) is the population covariance matrix under \(P\). We now show that this result, with the exact constants, is a consequence of our more general results. We focus on establishing the lower bound, because it is well-known (and easy to show) that the upper bound is achieved by the ordinary least squares estimator.3 Thus for the lower bound, our results imply that Footnote 3: Alternatively, note that if we define \(\widehat{\theta}_{\varrho}\) to be the order-optimal estimator we derive for the constraint set \(\{\|\theta^{\star}\|_{2}^{2}\leq\varrho^{2}\}\) (see equation (9), with \(K_{c}=I_{d}\), \(\Sigma_{w}=\sigma^{2}I_{d}\), and \(T_{\xi}=X\), where \(X\) is the design matrix.), then it converges compactly to the ordinary least squares estimate as \(\varrho\to\infty\). \[\inf_{\widehat{\theta}}\sup_{\begin{subarray}{c}\theta^{\star} \in\mathbf{R}^{d}\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\left[\left\| \widehat{\theta}-\theta^{\star}\right\|_{\Sigma_{P}}^{2}\right] \geq\sup_{\varrho>0}\,\left\{\inf_{\widehat{\theta}}\sup_{ \begin{subarray}{c}|\theta^{\star}|_{2}\leq\varrho\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\left[\left\| \widehat{\theta}-\theta^{\star}\right\|_{\Sigma_{P}}^{2}\right]\right\} \tag{24a}\] \[=\frac{\sigma^{2}}{n}\lim_{\varrho\to\infty}d_{n}(P,\varrho, \sigma^{2},\Sigma_{P},I_{d}). \tag{24b}\] In order to obtain the relation (24b), we have used the fact that the constrained minimax risk over the set \(\{\|\theta^{\star}\|_{2}\leq\varrho\}\) is nondecreasing in \(\varrho>0\), and have applied our limit relation in Corollary 3. A short calculation, which we defer to Appendix B.1.3, demonstrates that \[\lim_{\varrho\to\infty}d_{n}(P,\varrho,\sigma^{2},\Sigma_{P},I_{d})=\mathbf{E }\left[\,\mathbf{Tr}(\Sigma_{n}^{-1}\Sigma_{P})\right]\!. \tag{25}\] Thus, after combining displays (24b) and (25), we have obtained the lower bound in Mourtada's result (23). One consequence of this argument is that the inequality (24a) is, as may be expected, an equality. That is, we have \[\inf_{\widehat{\theta}}\sup_{\begin{subarray}{c}\theta^{\star} \in\mathbf{R}^{d}\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\left[\left\| \widehat{\theta}-\theta^{\star}\right\|_{\Sigma_{P}}^{2}\right]=\sup_{\varrho> 0}\,\left\{\inf_{\widehat{\theta}}\sup_{\begin{subarray}{c}|\theta^{\star}|_ {2}\leq\varrho\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\left[\left\| \widehat{\theta}-\theta^{\star}\right\|_{\Sigma_{P}}^{2}\right]\right\}\!.\] Note that establishing this equality directly is somewhat cumbersome, as it requires essentially applying a form of a min-max theorem, which in turn requires compactness and continuity arguments. #### 3.1.4 Regression with Markovian covariates We consider a dataset \(\{(x_{t},y_{t})\}_{t=1}^{T}\) comprising of covariate-response pairs. 
The covariates are initialized with \(x_{0}=0\), and then proceed via the recursion \[x_{t}=\sqrt{r_{t}}\;x_{t-1}+\sqrt{1-r_{t}}\;z_{t}\quad\text{for $t=1,\ldots,T$}, \tag{26}\] for some collection of parameters \(\{r_{t}\}_{t=1}^{T}\subset[0,1]\), and a family of independent standard Gaussian variates \(\{z_{t}\}_{t=1}^{T}\). By construction, the samples \(\{x_{t}\}_{t=1}^{T}\) form a Markov chain--a time-varying AR(1) process with stationary distribution being the standard Gaussian law. At the extreme \(r_{t}\equiv 0\), the sequence \(\{x_{t}\}_{t=1}^{T}\) is IID, whereas for \(r_{t}\in(0,1)\) it is a dependent sequence, and its mixing becomes slower as the parameters \(\{r_{t}\}\) get closer to \(1\). In addition to these random covariates, suppose that we also observe responses \(\{y_{t}\}_{t=1}^{T}\) from the model \[y_{t}=x_{t}\theta^{\star}+\sigma w_{t},\qquad\text{for $t=1,\ldots,T$}, \tag{27}\] where \(\sigma>0\) is a noise standard deviation, and the noise sequence \(\{w_{t}\}_{t=1}^{T}\) consists of IID standard Gaussian variates. We assume that the noise variables \(\{w_{t}\}_{t=1}^{T}\) are independent of the covariates \(\{x_{t}\}_{t=1}^{T}\). We now describe how our main results apply to this setting. Let us define a matrix \(M\in\mathbf{R}^{T\times T}\) which is associated to the dynamical system (26). It has entries \[M_{ss^{\prime}}=\sum_{t=s\lor s^{\prime}}^{T}\sqrt{c_{st}c_{s^{\prime}t}},\quad\text{where}\quad c_{st}\coloneqq(1-r_{s})\prod_{\tau=s+1}^{t}r_{\tau}. \tag{28}\] To give one example, in the special case that \(r_{t}\equiv\alpha\in(0,1)\) for all \(t\), the matrix \(M\) is similar under permutation to the matrix with entries \[M_{st}=\sqrt{\alpha}^{|s-t|}-\sqrt{\alpha}^{s+t}.\] Evidently, this matrix is a rank-one update to the covariance matrix for the underlying AR(1) process (_i.e._, the Kac-Murdock-Szego matrix [39]); it is easily checked to be symmetric positive definite. We now state the consequences of our main results for this problem. **Corollary 5**.: _The minimax risk for the Markovian observation model described above satisfies_ \[\inf_{\hat{\theta}}\sup_{|\theta^{\star}|\leq\varrho}\mathbf{E}\left[(\hat{\theta}-\theta^{\star})^{2}\right]\asymp\Phi_{T}(\varrho,\sigma)\coloneqq\mathbf{E}\left[\left(\frac{1}{\varrho^{2}}+\frac{z^{\mathsf{T}}Mz}{\sigma^{2}}\right)^{-1}\right]. \tag{29}\] See Appendix B.1.4 for details of this calculation. Note that in the result above, the expectation on the lefthand side is over the dataset \(\{(x_{t},y_{t})\}_{t=1}^{T}\), under the Markovian model (26) for the covariates, and the expectation on the righthand side is over the Gaussian vector \(z=(z_{1},\ldots,z_{T})\sim\mathsf{N}\left(0,I_{T}\right)\). Corollary 5 gives one example of how our general results can even establish sharp rates for regression problems of the form described in Section 2.2, but with additional dependence among the covariates. ### Applications to infinite-dimensional and nonparametric models In this section, we derive some of the consequences of our main results for infinite-dimensional models, such as those arising in nonparametric regression. The basic idea will be to identify an infinite-dimensional parameter space \(\Theta\), typically lying in the Hilbert space \(\ell^{2}(\mathbf{N})\). We then find a nested sequence of subsets \[\Theta_{1}\subset\Theta_{2}\subset\cdots\subset\Theta_{k}\subset\cdots\subset\Theta,\] where \(\Theta_{k}\) are finite-dimensional truncations of \(\Theta\).
Under regularity conditions, we can show that the minimax risk for the \(k\)-dimensional problems converge to the minimax risk for the infinite dimensional problem as \(k\to\infty\). Thus, since we have determined the minimax risk for each subset \(\Theta_{k}\) up to universal constants (importantly, constants independent of the underlying dimension), we take the limit of our functional in the limit \(k\to\infty\) to obtain a tight characterization of the minimax risk for the infinite-dimensional set \(\Theta\). In the next few sections, we carry this program out in a few examples. We begin with a study of the canonical Gaussian sequence model in Section 3.2.1. We then turn, in Sections 3.2.2 and 3.2.3, to nonparametric regression models arising from reproducing kernel Hilbert spaces. In this setting, we are able to derive some classical results for Sobolev spaces, derive new and sharper forms of bounds on nonparametric regression with covariate shift, and obtain new results for random design nonparametric models with non-uniform covariate laws. #### 3.2.1 Gaussian sequence model In the canonical Gaussian sequence model, we make a countably infinite sequence of observations of the form \[y_{i}=\theta_{i}^{\star}+\varepsilon_{i}z_{i},\qquad\text{for $i=1,2,3,\ldots$} \tag{30}\] Here the variables \(\{z_{i}\}\) are a sequence of IID standard Gaussian variates, and \(\varepsilon\coloneqq\{\varepsilon_{i}\}\) indicate the noise level (_i.e._, the standard deviation) of the entries of the observation \(y\). It is typically assumed that there is a nondecreasing sequence of divergent, nonnegative numbers \(a\coloneqq\{a_{i}\}\) and radius \(C>0\) such that \[\theta^{\star}\in\Theta(a,C)\coloneqq\Big{\{}\,\theta\in\mathbf{R}^{\mathbf{N }}:\sum_{j\geqslant 1}a_{j}^{2}\theta_{j}^{2}\leqslant C^{2}\,\Big{\}}.\] The minimax risk for this problem is then defined by \[\mathfrak{M}\Big{(}\varepsilon,a,C\Big{)}\coloneqq\inf_{\hat{\theta}}\sup_{ \theta^{\star}\in\Theta(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{\infty}(\widehat{ \theta}_{j}(y)-\theta_{j}^{\star})^{2}\Big{]},\] where the expectation is over \(y\) according to the observation model (30). Let us define a \(k\)-dimensional truncation, \[\Theta_{k}(a,C)\coloneqq\Big{\{}\,\theta\in\Theta(a,C):\theta_{j}=0,\text{ for all $j>k$}\,\Big{\}}.\] Evidently \(\Theta_{k}(a,C)\) may be regarded as a subset of \(\mathbf{R}^{k}\). Note that the class \(\{\Theta_{k}(a,C)\}_{k\geqslant 1}\) forms a nested sequence of subsets within \(\Theta\). Moreover, we can define the minimax risk for the \(k\)-dimensional problem \[\mathfrak{M}_{k}\Big{(}\varepsilon,a,C\Big{)}\coloneqq\inf_{\hat{\theta}}\sup_ {\theta^{\star}\in\Theta_{k}(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{k}(\widehat{ \theta}_{j}(y)-\theta_{j}^{\star})^{2}\Big{]}.\] Slightly abusing notation, above we regard \(y,\theta^{\star}\in\mathbf{R}^{k}\), where \(y\) is distributed as the first \(k\) components of the observation model (30). Then, this sequence of minimax risks satisfies the limit relation \[\lim_{k\to\infty}\mathfrak{M}_{k}\Big{(}\varepsilon,a,C\Big{)}=\mathfrak{M} \Big{(}\varepsilon,a,C\Big{)}. \tag{31}\] See Appendix B.2.1 for a proof of this relation. 
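The truncated risks \(\mathfrak{M}_{k}(\varepsilon,a,C)\) can be computed, up to universal constants, from the variational functional derived in the next few displays (see equation (33)); that constrained maximization admits a Pinsker-type water-filling solution. The following sketch is an illustration only: it assumes every \(a_{j}\) is strictly positive, and the Sobolev-type ellipsoid in the example is an arbitrary choice.

```python
import numpy as np

def truncated_sequence_functional(eps, a, C, tol=1e-10):
    """Water-filling evaluation of the k-dimensional functional
    sup{ sum_j t_j eps_j^2 / (t_j + eps_j^2) : sum_j a_j^2 t_j <= C^2, t_j >= 0 },
    writing t_j = tau_j^2.  Assumes every a_j > 0 (a simplification for this sketch)."""
    eps = np.asarray(eps, dtype=float)
    a = np.asarray(a, dtype=float)

    # KKT stationarity gives t_j = eps_j^2 (s / a_j - 1)_+ for a water level s > 0;
    # the budget  sum_j a_j eps_j^2 (s - a_j)_+ = C^2  pins down s.
    def budget(s):
        return float(np.sum(a * eps**2 * np.clip(s - a, 0.0, None)))

    lo, hi = 0.0, float(a.max()) + 1.0
    while budget(hi) < C**2:                  # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if budget(mid) < C**2 else (lo, mid)
    s = 0.5 * (lo + hi)
    t = eps**2 * np.clip(s / a - 1.0, 0.0, None)
    return float(np.sum(t * eps**2 / (t + eps**2)))

if __name__ == "__main__":
    # Example: Sobolev-type ellipsoid a_j = j^beta with a constant noise level.
    k, beta, noise, C = 2000, 1.0, 0.05, 1.0
    j = np.arange(1, k + 1)
    print(truncated_sequence_functional(np.full(k, noise), j**beta, C))
```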
The \(k\)-dimensional problem can be seen as a special case of our operator model (1), with parameters \(T^{(k)},\Sigma_{w}^{(k)},K_{e}{}^{(k)},\varrho^{(k)},K_{c}{}^{(k)}\) defined as \[\begin{split} T^{(k)}(\xi)\equiv I_{k},\qquad\Sigma_{w}^{(k)}=\mathbf{diag}(\varepsilon_{1}^{2},\ldots,\varepsilon_{k}^{2}),\qquad K_{e}^{(k)}=I_{k},\\ K_{c}^{(k)}=\mathbf{diag}\,\Big{(}\frac{1}{a_{1}^{2}},\ldots,\frac{1}{a_{k}^{2}}\Big{)},\quad\text{and},\quad\varrho^{(k)}=C.\end{split} \tag{32}\] Computing the functional (15) for the \(k\)-dimensional problem, we find it is equal to \[R_{k}^{\star}\Big{(}\varepsilon,a,C\Big{)}\coloneqq\sup_{\tau_{1},\ldots,\tau_{k}}\Big{\{}\,\sum_{j=1}^{k}\frac{\tau_{j}^{2}\varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}:\sum_{j=1}^{k}\tau_{j}^{2}a_{j}^{2}\leqslant C^{2}\,\Big{\}}. \tag{33}\] Hence, define the following functional of \(\varepsilon\coloneqq\{\varepsilon_{j}\}_{j\geqslant 1}\), \(a\coloneqq\{a_{j}\}_{j\geqslant 1}\), and \(C>0\), \[R^{\star}(\varepsilon,a,C)\coloneqq\sup_{\tau=\{\tau_{j}\}_{j=1}^{\infty}}\Big{\{}\,\sum_{j=1}^{\infty}\frac{\tau_{j}^{2}\varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}:\sum_{j=1}^{\infty}\tau_{j}^{2}a_{j}^{2}\leqslant C^{2}\,\Big{\}}. \tag{34}\] Then our main results, Theorems 1 and 2, imply the sandwich relation \[\frac{1}{4}\,R^{\star}(\varepsilon,a,C)\leq\mathfrak{M}\Big{(}\varepsilon,a,C\Big{)}\leq R^{\star}(\varepsilon,a,C). \tag{35}\] See Appendix B.2.2 for verification of this relation as a consequence of our results. Note that this recovers a well-known result for the Gaussian sequence model [65, 38]. Some previous work [20] has shown that the lower bound constant can be slightly improved to \(\frac{1}{1.25}\) by arguments specific to the Gaussian sequence model. Importantly, the Gaussian sequence model is a "deterministic" operator model in the sense that the operator \(T_{\xi}\) has no dependence on \(\xi\) for this problem. The next few examples show some consequences of our theory for infinite-dimensional problems where the corresponding operator \(T_{\xi}\) is truly random. #### 3.2.2 Nonparametric regression over reproducing kernel Hilbert spaces (RKHSs) In this section, we consider a nonparametric regression model of the form \[y_{i}=f^{\star}(x_{i})+w_{i},\quad\text{for }i=1,\ldots,n. \tag{36}\] We assume that the covariates \(\{x_{i}\}_{i=1}^{n}\) are IID samples from a covariate law \(P\), and that each noise variable \(w_{i}\) is conditionally centered with conditional variance bounded above by \(\sigma^{2}\). Equivalently, the noise variables are drawn from a conditional distribution satisfying the noise conditions (N1) and (N2) with \(\Sigma_{w}=\sigma^{2}I_{n}\).4 Footnote 4: The discussion below is unaffected by imposing additional structure on the noise, so long as the family of possible noise distributions includes \(w\sim\mathsf{N}\left(0,\sigma^{2}I_{n}\right)\). We will assume that \(f^{\star}\) lies in a reproducing kernel Hilbert space \(\mathcal{H}\), and has bounded Hilbert norm \(\|f^{\star}\|_{\mathcal{H}}\leq\varrho\). The goal is to estimate \(f^{\star}\). **Relating the RKHS observation model (36) with the model (10).** We now show that the observation model (36), in which \(f^{\star}\in\mathcal{H}\), is an infinite-dimensional version of the observation model (10), as can be made precise with RKHS theory. Indeed, fix a measure space \((\mathcal{X},\mathcal{A},\nu)\), and a measurable positive definite kernel \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbf{R}\), and let \(\mathcal{H}\) denote its reproducing kernel Hilbert space [3].
Under mild regularity assumptions5 Footnote 5: The elliptical representation (38) is available in great generality. Indeed, a sufficient condition is for the map \(x\mapsto\sqrt{k(x,x)}\) to lie in \(L^{2}(\nu)\). It can be shown [63, see Lemma 2.3] that in this case, \(\mathcal{H}\) compactly embeds into \(L^{2}(\nu)\) and that there is a series expansion \[k(x,x^{\prime})=\sum_{j=1}^{\infty}\mu_{j}\phi_{j}(x)\phi_{j}(x^{\prime}), \quad\text{for any }x,x^{\prime}\in\mathcal{X}. \tag{37}\] Here \(\{\mu_{j}\}_{j=1}^{\infty}\) denotes a summable sequence of non-negative eigenvalues, whereas the sequence \(\{\phi_{j}\}_{j=1}^{\infty}\) is an orthonormal family of functions \(\mathcal{X}\to\mathbf{R}\) that lie in \(L^{2}(\nu)\). Finally, the series converges absolutely, for each \(x,x^{\prime}\in\mathcal{X}\). Note that the infinite-dimensional series representation (38) of \(\mathcal{H}\) follows from the series expansion of the underlying kernel (37); see Cucker and Smale [16] for details., the RKHS \(\mathcal{H}\) can be put into one-to-one correspondence with a mapping of \(\ell^{2}(\mathbf{N})\). Formally, we have \[\mathcal{H}=\Big{\{}\,f\coloneqq\sum_{j=1}^{\infty}\theta_{j}\sqrt{\mu_{j}} \phi_{j}\mid\sum_{j=1}^{\infty}\theta_{j}^{2}<\infty\,\Big{\}}. \tag{38}\] for a nonincreasing sequence \(\mu_{j}\to 0\) as \(j\to\infty\), and for an orthonormal sequence \(\{\phi_{j}\}\) in \(L^{2}(\nu)\). This allows us to equivalently write the observations (36) in the form \[y_{i}=\langle\theta^{\star},\Phi(x_{i})\rangle+w_{i},\quad\text{for }\ i=1,\ldots,n. \tag{39}\] Above, we have defined the sequence \(\theta^{\star}\coloneqq(\theta^{\star}_{j})_{j=1}^{\infty}\) and "feature map" \(\Phi(x)\in\ell^{2}(\mathbf{N})\), by the formulas \[\theta^{\star}_{j}\coloneqq\frac{\int_{\mathcal{X}}f^{\star}(x)\phi_{j}(x)\, \mathrm{d}\nu(x)}{\sqrt{\mu_{j}}},\quad\text{and}\quad\big{(}\Phi(x)\big{)}_{j }\coloneqq\sqrt{\mu_{j}}\phi_{j}(x),\qquad\text{for all }j\geq 1.\] With these definitions, note that the inner product in equation (39) is taken in the sequence space \(\ell^{2}(\mathbf{N})\). From the display (39), we see that the RKHS observation model (36) is in fact an infinite-dimensional version of the observation model (10). The remainder of this section is devoted to deriving consequences of our results for this model by various truncation and limiting arguments. Truncation argument for RKHS minimax risksGiven the RKHS ball \(\mathsf{B}_{\mathcal{H}}(\varrho)\coloneqq\big{\{}\,g\in\mathcal{H}:\|g\|_{ \mathcal{H}}\leqslant\varrho\,\big{\}}\), our goal is to characterize the minimax risk \[\mathfrak{M}_{n}(\varrho,\sigma^{2},P)\coloneqq\inf_{\hat{f}}\sup_{ \begin{subarray}{c}f^{\star}\in\mathsf{B}_{\mathcal{H}}(\varrho)\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\,\Big{[}\big{\|} \hat{f}-f^{\star}\big{\|}_{L^{2}(\nu)}^{2}\Big{]}. \tag{40}\] It should be noted here that the covariates are drawn from \(P\) and the error is measured in \(L^{2}(\nu)\). In classical work on estimation over RKHSs, it is typical to assume that \(P=\nu\). However, we develop in this section and in Section 3.2.3 some interesting consequences of our theory when \(P\neq\nu\), and so this generality is important for our discussion. To apply our results to this setting, we need to define certain finite-dimensional truncations. 
We start by defining \[\mathcal{H}_{k}\coloneqq\Big{\{}\,f\coloneqq\sum_{j=1}^{\infty}\theta_{j}\sqrt{\mu_{j}}\phi_{j}\mid\theta_{j}=0,\ \ \text{for all }j>k\,\Big{\}}.\] We then define the minimax risk over the ball \(\mathsf{B}_{\mathcal{H}}(\varrho)\) restricted to \(\mathcal{H}_{k}\), \[\mathfrak{M}_{n}^{(k)}(\varrho,\sigma^{2},P)\coloneqq\inf_{\hat{f}}\sup_{\begin{subarray}{c}f^{\star}\in\mathsf{B}_{\mathcal{H}}(\varrho)\cap\mathcal{H}_{k}\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\,\Big{[}\big{\|}\hat{f}-f^{\star}\big{\|}_{L^{2}(\nu)}^{2}\Big{]}. \tag{41}\] In analogy to the limit relation (31) for the Gaussian sequence model, we can show that \[\lim_{k\to\infty}\mathfrak{M}_{n}^{(k)}(\varrho,\sigma^{2},P)=\mathfrak{M}_{n}(\varrho,\sigma^{2},P). \tag{42}\] See Appendix B.2.3 for a proof of this relation. The \(k\)-dimensional problem associated with the risk (41) can be seen, using the representation (39), as a special case of our IID observation model (10), with parameters \(P,\varrho,\sigma\) and \[\psi(x)=\Phi_{k}(x)\coloneqq\big{(}\sqrt{\mu_{j}}\phi_{j}(x)\big{)}_{j=1}^{k},\quad K_{e}=M_{k}\coloneqq\mathbf{diag}(\mu_{1},\dots,\mu_{k}),\quad\text{and}\quad K_{c}=I_{k}. \tag{43}\] Let us define the \(k\times k\) empirical covariance matrix \[\Sigma_{n}^{(k)}\coloneqq\frac{1}{n}\sum_{i=1}^{n}\Phi_{k}(x_{i})\otimes\Phi_{k}(x_{i}).\] Then, using (43), we see that the functional (13) for the \(k\)-dimensional problem is equal to \[d_{n}^{(k)}\coloneqq\sup_{\Omega>0}\,\Big{\{}\,\mathbf{Tr}\,\mathbf{E}_{P^{n}}\big{[}M_{k}^{1/2}(\Sigma_{n}^{(k)}+\Omega^{-1})^{-1}M_{k}^{1/2}\big{]}:\mathbf{Tr}(\Omega)\leqslant\frac{n\varrho^{2}}{\sigma^{2}}\Big{\}}. \tag{44}\] **Characterizations of RKHS minimax risks of estimation.** We now state the consequence of our results for the rate of estimation (40). **Corollary 6**.: _Define \(d_{n}^{\star}=\limsup_{k\to\infty}d_{n}^{(k)}\), where the sequence \(\{d_{n}^{(k)}\}_{k\geqslant 1}\) is defined in display (44). Then the RKHS minimax risk satisfies the inequalities,_ \[\frac{1}{4}\,\frac{\sigma^{2}}{n}d_{n}^{\star}\leqslant\mathfrak{M}_{n}(\varrho,\sigma^{2},P)\leqslant\frac{\sigma^{2}}{n}d_{n}^{\star}. \tag{45}\] Note that this result is an immediate consequence of Theorems 1 and 2, together with the limit relation (42). Let us now further simplify the characterization (45) in the classical situation where \(P=\nu\). We can give an explicit calculation of the minimax risk as a function of the kernel eigenvalues \(\{\mu_{j}\}\), using Proposition 2, under the additional assumption that the map \(x\mapsto k(x,x)\) is essentially bounded by a finite number \(\kappa\) under \(P\). Let us define two parameters \(\lambda_{n}^{\star},\overline{d}_{n}^{\star}\) by \[\sum_{k=1}^{\infty}\frac{1}{\sqrt{\mu_{k}}}\Big{(}\lambda_{n}^{\star}-\frac{1}{\sqrt{\mu_{k}}}\Big{)}_{+}=\frac{n\varrho^{2}}{\sigma^{2}},\quad\text{and}, \tag{46a}\] \[\overline{d}_{n}^{\star}\coloneqq\sum_{k=1}^{\infty}\frac{1}{\lambda_{n}^{\star}}\Big{(}\lambda_{n}^{\star}-\frac{1}{\sqrt{\mu_{k}}}\Big{)}_{+}. \tag{46b}\] When \(\kappa<\infty\), the characterization (45) can be further simplified as \[\frac{1}{4}\,\frac{\sigma^{2}}{n}\overline{d}_{n}^{\star}\leqslant\mathfrak{M}_{n}(\varrho,\sigma^{2},P)\leqslant\Big{(}1+\frac{\kappa^{2}\varrho^{2}}{\sigma^{2}}\Big{)}\frac{\sigma^{2}}{n}\overline{d}_{n}^{\star}.
\tag{47}\] It should be noted that relations (45) and (47) establish the nonasymptotic minimax risk of estimation for the RKHS ball of radius \(\rho\), apart from universal constants, in fairly general fashion. The loosened inequalities (47) permit easier calculation, but require \(P=\nu\), \(P\)-essential boundedness of the diagonal of the kernel, and the signal-to-noise ratio \(\frac{\varrho}{\sigma}\lesssim\frac{1}{\kappa}\). Indeed, compared with (45), the key quantity \(\overline{d}_{n}^{\star}\) in (47) can be easier to compute. The cost is that we require additional assumptions and gain the additional prefactor \((1+\frac{\kappa^{2}\varrho^{2}}{\sigma^{2}})\), which can be large when the signal-to-noise ratio is large. Although we have suppressed the dependence of \(\lambda_{n}^{\star},\overline{d}_{n}^{\star}\) on the parameters \(\sigma,\varrho\) in the notation, it should be noted that they do vary with \(\sigma,\varrho\) in general; see display (46). Leveraging our main results, we present the proofs of the characterizations (45) and (47), respectively, in Appendices B.2.4 and B.2.5. Interestingly, we note that our characterizations--even the loosened characterization (47)--does not need the kernel to satisfy an additional eigenvalue decay condition. Indeed, our results hold even if the kernel eigenvalues do not satisfy the requirement of a _regular kernel_ as proposed in prior work [68]. Finally, we mention that--as a sanity check, classical results can be easily derived from (47). To provide one concrete example, when \(P=\nu\) is the uniform distribution on \([0,1]^{d}\), and \(\mathcal{H}\) is the Sobolev space of order \(\beta>d/2\), it can be shown that \(\frac{\sigma^{2}\overline{d}_{n}^{\star}}{n}\asymp\varrho^{2}(\frac{\sigma^{ 2}}{n\varrho^{2}})^{\frac{2\beta}{2\beta+d}}\). This recovers the classical minimax risk of estimation over this function class [37, 64]. We defer this calculation to Appendix B.2.6, making use of (47). #### 3.2.3 Kernel regression under covariate shift We now discuss one important case in which we have \(P\neq\nu\) in the RKHS model (36). In the setting of covariate shift, the model (36) comprises of covariates \(x_{i}\) drawn from a _source_ distribution that is different from the _target_ distribution \(Q\) of covariates on which estimates of the regression function are to be deployed. In this setting, then we take \(\nu=Q\) and \(P\neq Q\). For any such pair, following the argument given previously in Section 3.2, we find that \[\inf_{f}\sup_{f^{\star}\in\mathsf{B}_{\mathcal{H}}(\varrho)}\mathbf{E}\left[ \left\|\hat{f}-f^{\star}\right\|_{L^{2}(Q)}^{2}\right]\asymp\frac{\sigma^{2}}{ n}\limsup_{k\to\infty}d_{n}^{(k)}, \tag{48}\] where the quantity \(d_{n}^{(k)}\) is defined as in display (44). Above, the expectation on the lefthand side is over the noise and the covariates drawn from \(P\) as described by the model (36). Note that the eigenvalues \(\{\mu_{j}\}_{j\geqslant 1}\) here correspond to the diagonalization of the integral kernel operator under the target distribution \(Q\). Let us now compare to past work due to Ma et al. [47], who studied the covariate shift problem in RKHSs. In contrast to this work, our result is _source-target distribution-dependent_: it characterizes, apart from universal constants, the minimax risk for any kernel, any radius, any noise level, and any covariate shift pair \((P,Q)\). 
By contrast, the results in the paper [47] consider a more restrictive setup in which pair \((P,Q)\) satisfy an absolute continuity condition (\(Q\ll P\)), and moreover, the likelihood ratio is \(P\)-essentially bounded, meaning that there exists some \(B\in[1,\infty)\) such that \[\frac{\mathrm{d}Q}{\mathrm{d}P}(x)\leqslant B,\quad\text{for $P$-almost every $x$}. \tag{49}\] Let \(d_{\infty}(P,Q)\) denote the \(P\)-essential supremum of the likelihood ratio \(\mathrm{d}Q/\mathrm{d}P\) when \(Q\ll P\) and \(d_{\infty}(P,Q)=+\infty\) otherwise. "Uniform" results, where minimax risks of estimation are studied over families of covariate shifts \(P\) relative to \(Q\) where \(d_{\infty}(P,Q)\leqslant B\) for some parameter \(B\) can be derived as a corollary to the sharper rate description (48). To give one simple and concrete illustration of this, we will show how one can derive Theorem 2 in the paper [47]. By Jensen's inequality, we have \[d_{n}^{(k)}\geqslant\sup_{\Omega>0}\,\Big{\{}\,\mathbf{Tr}(\mathbf{E}_{P^{n}} \,M_{k}^{-1/2}\Sigma_{n}^{(k)}M_{k}^{-1/2}+\Omega^{-1})^{-1}:\mathbf{Tr}(M_{k} ^{-1}\Omega)\leqslant\frac{n\varrho^{2}}{\sigma^{2}}\Big{\}}. \tag{50}\] If \(P\) satisfies \(d_{\infty}(P,Q)\leqslant B\), then it follows that we have the ordering \[\mathbf{E}_{P^{n}}\,M_{k}^{-1/2}\Sigma_{n}^{(k)}M_{k}^{-1/2}\geqslant\frac{1} {B}I_{k}. \tag{51}\] Moreover, this lower bound can be achieved by a shift \(P\) whenever the zero sets of the eigenfunctions \(\phi_{j}\) in \(L^{2}(Q)\) of the integral operator associated with the kernel \(k\) have nontrivial intersection. Equivalently, when there exists \[x_{0}\in\bigcap_{j\geqslant 1}\phi_{j}^{-1}(\{0\}), \tag{52}\] then the bound (51) is achieved by the distribution \(P_{x_{0}}\coloneqq\frac{1}{B}Q+\Big{(}1-\frac{1}{B}\Big{)}\delta_{x_{0}}\). This choice is evidently a \(B\)-bounded shift relative to \(Q\). To give an example where the zero set condition (52) holds, note that in the case of where the kernel \(k\) is associated with the periodic \(\beta\)-order Sobolev class on \([0,1]\) and \(Q\) is the uniform law on \([0,1]\), one can take \(x_{0}=0\) as the eigenfunctions are sinusoids. Now, combining relations (48) and (50) with the choice of \(P=P_{x_{0}}\) given above, we have \[\sup_{P:d_{\infty}(P,Q)\leqslant B}\inf_{\hat{f}}\sup_{f^{\star} \in\mathsf{B}_{\mathcal{H}}(\varrho)}\mathbf{E}\left[\left\|\hat{f}-f^{\star} \right\|_{L^{2}(Q)}^{2}\right]\asymp\frac{\sigma^{2}}{n}\sup_{\omega>0}\Big{\{} \sum_{j=1}^{\infty}\frac{B\omega_{j}}{\omega_{j}+B}:\sum_{j=1}^{\infty}\frac {\omega_{j}}{\lambda_{j}}=\frac{n\varrho^{2}}{\sigma^{2}}\Big{\}}\\ \asymp\varrho^{2}\,\sup_{\lambda}\Big{\{}\sum_{j=1}^{\infty} \frac{\sigma^{2}B}{n\varrho^{2}}\wedge\lambda_{j}\mu_{j}:\lambda_{j}\geqslant 0,\ \sum_{j=1}^{\infty}\lambda_{j}=1\,\Big{\}}. \tag{53}\] Suppose, following the paper [47], we additionally impose a regularity condition on the decay of the eigenvalues \(\mu_{j}\) of kernel integral operator in \(L^{2}(Q)\). Namely, that there exists a constant \(c\in(0,\infty)\) such that \[\sup_{\delta>0}\frac{\sum_{j>d(\delta)}\mu_{j}}{\delta^{2}d(\delta)}\leq c,\quad \text{where}\quad d(\delta)\coloneqq\inf\{j\geq 1:\mu_{j}\leq\delta^{2}\}. \tag{54}\] Under this condition, we can further lower bound (53), up to universal constants, by \[\varrho^{2}\,\inf_{\delta>0}\Big{\{}\delta^{2}+\frac{\sigma^{2}B}{\varrho^{2}n }d(\delta)\Big{\}}. \tag{55}\] The details of this calculation can be found in Appendix B.2.7. 
Note that by establishing the lower bound (55), we have recovered Theorem 2 from the paper [47]. We remark that--as seen from the steps taken to arrive at this lower bound--our more general determination of the minimax rate (48) is sharper in that it holds for a fixed pair \((P,Q)\) rather than uniformly over the larger class \(\{P:d_{\infty}(P,Q)\leq B\}\). Moreover, our result, as compared to the work [47], requires fewer regularity assumptions on the underlying kernel and its diagonalization in the target Hilbert space \(L^{2}(Q)\). In fact, as demonstrated in Appendix B.2.7, the regularity condition (54) is _not_ necessary for us to establish the lower bound (55). ## 4 Proofs of Theorems 1 and 2 In this section, we present the proofs of our main results. In Section 4.1, we provide the proof of our minimax upper bound (cf. Theorem 1). In Section 4.2, we provide the proof of our minimax lower bound. Some calculations and routine verifications are deferred to Appendix C. ### Proof of Theorem 1 In this section, we develop an upper bound on the minimax risk. In order to do so, we define the risk function \[r(\widehat{\theta},\theta^{\star})\coloneqq\sup_{\nu\in\mathcal{P}(\Sigma_{w})}\mathbf{E}_{(\xi,w)\sim\mathbb{P}\times\nu}\,\Big{[}\big{\|}\widehat{\theta}(T_{\xi},T_{\xi}\theta^{\star}+w)-\theta^{\star}\big{\|}_{K_{e}}^{2}\Big{]},\] which is defined for any measurable estimator \(\widehat{\theta}\) of \((T_{\xi},y)\), and any \(\theta^{\star}\in\Theta(\varrho,K_{c})\). Evidently, the minimax risk we are bounding is then expressible as \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})=\inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\Theta(\varrho,K_{c})}r(\widehat{\theta},\theta^{\star}). \tag{56}\] In order to derive an upper bound, we restrict our focus to estimators that are _conditionally linear_. Formally, we consider the class of procedures \[\widehat{\theta}_{C}(T_{\xi},y)\coloneqq C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}y, \tag{57}\] where \(C\) is a \(\mathbf{R}^{d\times d}\)-valued measurable function of \(T_{\xi}\). Our strategy involves the following three steps: 1. First, we compute the supremum risk over the parameter set \(\Theta(\varrho,K_{c})\) and all \(\nu\in\mathcal{P}(\Sigma_{w})\). 2. Second, we compute the minimizer of the supremum risk over the choice of \(C\) in (57). 3. Finally, by using the curvature of the supremum risk and appealing to a min-max theorem, we put the pieces together to determine the final minimax risk. The following subsections are devoted to the details associated with each of these three steps. In all cases, we defer routine calculations and verification to Appendix C.1.
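To make the class (57) concrete before carrying out these steps, the following sketch implements a conditionally linear estimator with the particular weight \(C(T_{\xi})=(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}\) that emerges from the analysis below (Proposition 3), specialized to a random-design regression instance with \(\Sigma_{w}=\sigma^{2}I_{n}\). The Gaussian design, the problem sizes, and the choice \(\Omega=(\varrho^{2}/d)I_{d}\) are illustrative assumptions for this example only.

```python
import numpy as np

def conditionally_linear_estimate(X, y, sigma2, Omega):
    """Estimator of the form (57) with weight C(X) = (Omega^{-1} + X^T Sigma_w^{-1} X)^{-1},
    specialized to Sigma_w = sigma^2 I_n (an illustrative specialization)."""
    A = np.linalg.inv(Omega) + X.T @ X / sigma2
    return np.linalg.solve(A, X.T @ y / sigma2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, sigma, rho = 200, 50, 1.0, 2.0
    theta_star = rng.standard_normal(d)
    theta_star *= rho / np.linalg.norm(theta_star)      # place theta* on the rho-sphere
    X = rng.standard_normal((n, d))                     # illustrative Gaussian design
    y = X @ theta_star + sigma * rng.standard_normal(n)
    Omega = (rho**2 / d) * np.eye(d)                    # satisfies Tr(Omega) <= rho^2
    theta_hat = conditionally_linear_estimate(X, y, sigma**2, Omega)
    print("squared error:", float(np.sum((theta_hat - theta_star) ** 2)))
```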
#### 4.1.1 Supremum risk of estimator \(\widehat{\theta}_{C}\) Starting with the definition (57), for any matrix \(C\), we have \[\widehat{\theta}_{C}-\theta^{\star}=(C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{ -1}T_{\xi}-I_{d})\theta^{\star}+C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}w.\] Therefore, the risk \(r(\widehat{\theta}_{C},\theta^{\star})\) associated with \(\widehat{\theta}_{C}\) can be bounded as \[r(\widehat{\theta}_{C},\theta^{\star}) \coloneqq\sup_{\nu\in\mathcal{P}(\Sigma_{w})}\mathbf{E}\left[ \|\widehat{\theta}_{C}(X,y)-\theta^{\star}\|_{K_{e}}^{2}\right]\] \[=\mathbf{Tr}\left\{K_{e}^{1/2}\,\mathbf{E}_{\xi}\left[(C(T_{\xi} )T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}-I_{d})\theta^{\star}\otimes\theta^ {\star}(C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}-I_{d})^{\mathsf{T}}\right.\right.\] \[\qquad\qquad\qquad+C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1} T_{\xi}C(T_{\xi})^{\mathsf{T}}\Big{]}K_{e}^{1/2}\right\}. \tag{58}\] The equality above uses the property (N2) of distributions \(\nu\in\mathcal{P}(\Sigma_{w})\); note that it is achieved by the Gaussian distribution \(\nu=\mathsf{N}\left(0,\Sigma_{w}\right)\). #### 4.1.2 Curvature and minimizers of the functional \(r(\widehat{\theta}_{C},\theta^{\star})\) We begin by observing that the function \(r(\widehat{\theta}_{C},\cdot)\colon\Theta(\varrho,K_{c})\to\mathbf{R}_{+}\) can be replaced by an equivalent mapping--which, with a slight abuse of notation we denote by the same symbol \(r\)-- on the space of symmetric positive definite matrices of the form \[\mathcal{K}(\varrho,K_{c})\coloneqq\Big{\{}\,\Omega\succcurlyeq 0\mid\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq\varrho^{2}\,\Big{\}}.\] We define (in a sense, this is can be regarded as an extension to the set \(\mathcal{K}(\varrho,K_{c})\)) \[r(\widehat{\theta}_{C},\Omega)\coloneqq\mathbf{Tr}\left\{K_{e} ^{1/2}\,\mathbf{E}_{\xi}\left[(C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_ {\xi}-I_{d})\Omega(C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}-I_{d}) ^{\mathsf{T}}\right.\right.\\ \left.\left.+C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi} C(T_{\xi})^{\mathsf{T}}\right]\!K_{e}^{1/2}\right\}. \tag{59}\] Note that \(r(\widehat{\theta}_{C},\theta^{\star})=r(\widehat{\theta}_{C},\theta^{\star} \otimes\theta^{\star})\) for \(\theta^{\star}\in\Theta(\varrho,K_{c})\). We claim that the suprema over \(\Theta(\varrho,K_{c})\) and \(\mathcal{K}(\varrho,K_{c})\) are the same. **Lemma 1**.: _The suprema of the risk functional \(r\) taken over either the set \(\Theta(\varrho,K_{c})\) or the set \(\mathcal{K}(\varrho,K_{c})\) are equal--that is, we have_ \[\sup_{\theta^{\star}\in\Theta(\varrho,K_{c})}r(\widehat{\theta}_{C},\theta^{ \star})=\sup_{\Omega\in\mathcal{K}(\varrho,K_{c})}r(\widehat{\theta}_{C}, \Omega),\] _for every conditionally linear estimator \(\widehat{\theta}_{C}\) of the form (57)._ See Appendix C.1.1 for the proof of this claim. Our next result characterizes some properties of the mapping \((C,K)\mapsto r(\widehat{\theta}_{C},K)\). **Lemma 2**.: _Over the set of measurable functions \(C\) and matrices \(\Omega\in\mathcal{K}(\varrho,K_{c})\), the mapping \((C,\Omega)\mapsto r(\widehat{\theta}_{C},\Omega)\) is affine in \(\Omega\) and convex in \(C\)._ See Appendix C.1.2 for the proof of this claim. Our next claim determines the minimizer of \(r(\cdot,\Omega)\) over estimators \(\widehat{\theta}_{C}\) of the form (57), provided that \(\Omega\) is strictly positive definite. 
**Proposition 3**.: _Let \(\Omega\) be a symmetric positive definite matrix. Then_ \[\inf_{C}r(\widehat{\theta}_{C},\Omega)=\mathbf{Tr}\left\{K_{e}^{1/2}\,\mathbf{ E}_{\xi}(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right\} \tag{60}\] _Moreover, the infimum is attained with the choice \(C(T_{\xi})=(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}\)._ See Appendix C.1.3 for the proof. #### 4.1.3 Proof of Theorem 1 We now piece together the previous lemmas to establish our main upper bound, as claimed in Theorem 1. In view of the relation (56) and the bound (58), we find that \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c}) \leq\inf_{C}\sup_{\theta^{\star}\in\Theta(\varrho,K_{c})}r( \widehat{\theta}_{C},\theta^{\star}) \tag{61a}\] \[=\inf_{C}\sup_{\Omega\in\mathcal{K}(\varrho,K_{c})}r(\widehat{ \theta}_{C},\Omega)\] (61b) \[=\sup_{\Omega\in\mathcal{K}(\varrho,K_{c})}\inf_{C}r(\widehat{ \theta}_{C},\Omega)\] (61c) \[=\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{e}^{1/2} (\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2} \right):\] \[\Omega>0,\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq\varrho^ {2}\,\Big{\}}. \tag{61d}\] To clarify, in the first display (61a) and below, the infimum over \(C\) denotes an infimum over all \(\mathbf{R}^{d\times d}\)-valued measurable functions of \(T_{\xi}\). In display (61b), we have applied Lemma 1. Relation (61c) follows from the Ky Fan min-max theorem [21, 10] together with Lemma 2. Note that the set \(\mathcal{K}(\varrho,K_{c})\) is evidently a compact convex subset of \(\mathbf{R}^{d\times d}\). The final equality (61d) is essentially an application of Proposition 3; see Appendix C.1.4 for the details of this verification. ### Proof of lower bound, Theorem 2 In this section, we prove our lower bound on the minimax risk. In order to do so, we focus on lower bounding the Gaussian minimax risk \[\mathfrak{M}^{\mathrm{G}}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\coloneqq \inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\Theta(\varrho,K_{c})}\mathbf{ E}_{(\xi,w)\sim\mathbb{P}\times\mathsf{N}(0,\Sigma_{w})}\Big{[}\|\widehat{ \theta}(T_{\xi},T_{\xi}\theta^{\star}+w)-\theta^{\star}\|_{K_{e}}^{2}\Big{]}.\] Evidently, the Gaussian minimax risk lower bounds the general minimax risk, so that we have \(\mathfrak{M}^{\mathrm{G}}\leq\mathfrak{M}\). In Section 4.2.1, we reduce this Gaussian minimax risk to yet another Gaussian observation model. A minimax lower bound for this auxiliary problem is then presented as Proposition 4 in Section 4.2.2. This result is the bulk of the proof of the lower bound, and it quickly allows us to establish our main result, Theorem 2. In Section 4.2.3, we then complete the proof of Proposition 4. #### 4.2.1 Reduction to an alternate observation model To establish the lower bound, we first show that the minimax risk associated with our estimation problem is equivalent to another, perhaps simpler, minimax risk. An auxiliary observation modelThis observation model is defined by a random quadruple \((r,V,\Lambda,\Upsilon)\). The triple \((r,V,\Lambda)\) comprises a random integer \(r\), a random orthogonal matrix \(V\in\mathbf{R}^{d\times r}\) satisfying \(V^{\mathsf{T}}V=I_{r}\), and a random, \(r\times r\) diagonal positive definite matrix \(\Lambda\). 
Conditional on \((r,V,\Lambda)\), the observation \(\Upsilon\) is a Gaussian random variable, satisfying the equation \[\Upsilon=VV^{\mathsf{T}}\eta^{\star}+V\Lambda^{-1/2}z,\quad\text{where}\quad z \thicksim\mathsf{N}\left(0,I_{r}\right). \tag{62}\] Above, the random vector \(z\) is drawn from the multivariate Gaussian with identity covariance in \(\mathbf{R}^{r}\); it is independent of \((r,V,\Lambda)\). If \(\omega\coloneqq(r,V,\Lambda)\) is distributed according to \(\mathbb{Q}\), we denote the minimax risk for this observation model as \[\mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}(\mathbb{Q},K)\coloneqq\inf_{\widehat {\eta}}\,\sup_{\eta\in\Theta(K)}\mathbf{E}_{(\omega,\Upsilon)}\,\Big{[}\| \widehat{\eta}(\omega,\Upsilon)-\eta\|_{2}^{2}\Big{]}.\] Above, the expectation indexed by \((\omega,\Upsilon)\) is over \(\omega\sim\mathbb{Q}\) and \(\Upsilon\) as in (62). The infimum is over measurable functions of \((\omega,\Upsilon)\). The set \(\Theta(K)\) is a shorthand for the set \(\Theta(1,K)=\{\|\theta\|_{K}\leqslant 1\}\). Reduction to the new observation modelWe formally reduce the minimax risk \(\mathfrak{M}^{\mathrm{G}}\) to the reduction \(\mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}\), as follows. **Lemma 3**.: _Let \(\widehat{\mathbb{P}}\) denote the distribution of the triple \((r(\xi),V_{\xi},\Lambda_{\xi})\) under \(\mathbb{P}\), where \(r(\xi)\) is the (finite) rank of \(Q_{\xi}={K_{e}}^{-1/2}T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}{K_{e}}^{-1/2}\), and \(Q_{\xi}=V_{\xi}\Lambda_{\xi}V_{\xi}^{\mathsf{T}}\) denotes the diagonalization of this positive definite matrix. Then, for any \((T,\mathbb{P},\Sigma_{w},\varrho,K_{c},K_{e})\), we have_ \[\mathfrak{M}^{\mathrm{G}}(T,\mathbb{P},\Sigma_{w},\varrho,K_{c},K_{e})= \mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}(\widehat{\mathbb{P}},\varrho^{2}{K_{ e}}^{1/2}K_{c}K_{e}^{1/2}).\] See Appendix C.2.1 for a proof of this claim. #### 4.2.2 Lower bounding the minimax risk We now focus on lower bounding \(\mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}\). The following result is a formal statement of the lower bound for the "reduced" minimax risk. **Proposition 4**.: _For any \(\tau\in(0,1]\) and any \(\Pi\succ 0\) such that \(\mathbf{Tr}(K^{-1/2}\Pi K^{-1/2})\leqslant 1\), we have_ \[\mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}(\mathbb{Q},K)\geq\mathbf{E}\,\mathbf{ Tr}\,\Big{(}(\tfrac{1}{c(\tau,\Pi)}\Pi^{-1}+V\Lambda V^{\mathsf{T}})^{-1} \Big{)}, \tag{63}\] _where the constant \(c(\tau,\Pi)\) is defined in Lemma 6. Moreover, we have the lower bounds_ \[\mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}(\mathbb{Q},K) \geq\sup_{\Pi}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\,\Big{(}(\Pi^{-1} +V\Lambda V^{\mathsf{T}})^{-1}\Big{)}:\Pi\succ 0,\;\;\mathbf{Tr}(K^{-1/2}\Pi K^{-1/2}) \leqslant 1/4\,\Big{\}} \tag{64a}\] \[\geq\frac{1}{4}\,\sup_{\Pi}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\, \Big{(}(\Pi^{-1}+V\Lambda V^{\mathsf{T}})^{-1}\Big{)}:\Pi\succ 0,\;\;\mathbf{Tr}(K^{-1/2} \Pi K^{-1/2})\leqslant 1\,\Big{\}}. \tag{64b}\] Proof of Theorem 2We take the claim of Proposition 4 as given for the moment, and use it to derive our minimax lower bound. As mentioned, we may restrict to Gaussian noise to establish the lower bound; formally, we have \(\mathfrak{M}\geq\mathfrak{M}^{\mathrm{G}}\). 
Additionally, the reduction given in Lemma 3 combined with the stronger lower bound (64a) in Proposition 4 gives us \[\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\] \[\geq\sup_{\Pi}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\,\Big{(}(\Pi^{-1 }+{K_{e}}^{-1/2}T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}K_{e}^{-1/2})^{-1} \Big{)}:\Pi\succ 0,\mathbf{Tr}({K_{e}}^{-1/2}{\Pi K_{e}}^{-1/2}{K_{e}}^{-1}) \leqslant\tfrac{\varrho^{2}}{4}\,\Big{\}}.\] Now define the matrix \(\Omega=K_{e}^{-1/2}\Pi K_{e}^{-1/2}\). Then, the quantity on the righthand side is equal to \[\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{e}^{1/2}(\Omega^{-1}+T_{ \xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right):\Omega\succ 0,\;\; \mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leqslant\tfrac{\varrho^{2}}{4}\, \Big{\}},\] which furnishes the first inequality in Theorem 2. With similar manipulations to the weaker lower bound (64b) in Proposition (4), or by arguing directly from the display above, the second inequality in Theorem 2 follows. In order to establish the more detailed lower bound (8), we repeat the argument above but use (63). #### 4.2.3 Proof of Proposition 4 The lower bound proceeds in five steps: * We first lower bound the minimax risk in terms of the expected conditional Bayesian risk over any prior on the parameter set \(\Theta(K)\). * We then demonstrate that, conditionally, there is a family of auxiliary Bayesian estimation problems, indexed by a parameter \(\lambda>0\), which are all no harder than the Bayesian estimation problem implied by the conditional Bayesian risk. * We compute, in closed form, the Bayesian risk for any prior and any parameter \(\lambda>0\). We are able to show that the Bayesian risk is a functional of the Fisher information of the marginal distribution of the observed data under the prior and sampling model. * For each \(\lambda>0\), we then calculate a lower bound on the Fisher information for a prior obtained by conditioning a Gaussian distribution with mean zero and covariance \(\Pi\) to the parameter space. * We put the pieces together: optimizing over all covariance operators \(\Pi\), and the family of "easier" problems (_i.e._, optimizing over \(\lambda>0\)), we obtain our claimed lower bound. Next, we present the details of the steps outlined above. Extended calculations and routine verification are deferred to Appendix C.2. Step 1: Reduction to conditional Bayesian riskWe begin by lower bounding the minimax risk via the Bayes risk. Owing to the standard relation between minimax and Bayesian risks, we have for any prior \(\pi\) on \(\Theta(K)\) that \[\mathfrak{M}_{\text{red}}^{\text{G}}(\mathbb{Q},K)\;=\;\inf_{\widehat{\eta}} \sup_{\eta\in\Theta(K)}\mathbf{E}_{(\omega,\Upsilon)}\left[\|\widehat{\eta}( \omega,\Upsilon)-\eta\|_{2}^{2}\right]\;\geqslant\;\inf_{\widehat{\eta}} \mathbf{E}_{\eta\sim\pi}\,\mathbf{E}_{(\omega,\Upsilon)}\left[\|\widehat{\eta}- \eta\|_{2}^{2}\right]\;\Rightarrow\;B(\pi). \tag{65}\] The quantity \(B(\pi)\) appearing above is the Bayesian risk when the parameter \(\eta\) is drawn from the prior \(\pi\). The following observation is key for the lower bound. After moving to Bayesian risks, we can condition on the "design", denoted by the random tuple \(\omega=(r,V,\Lambda)\), and consider the conditional Bayesian risk. 
Formally, we have \[B(\pi)=\inf_{\widehat{\eta}}\mathbf{E}_{\eta\sim\pi}\,\mathbf{E}_{(\omega, \Upsilon)\sim\mathcal{D}_{\eta}}\left[\left\|\widehat{\eta}-\eta\right\|_{2}^{2 }\right]\geqslant\mathbf{E}_{\omega\sim\mathbb{Q}}\left[\;\inf_{\widehat{\eta }_{\omega}}\mathbf{E}_{\eta\sim\pi}\,\mathbf{E}_{\Upsilon}\big{\|}\widehat{ \eta}_{\omega}(\Upsilon)-\eta\big{\|}_{2}^{2}\right]. \tag{66}\] Above, the inequality follows by observing that if the function \(\widehat{\eta}\colon(\omega,\Upsilon)\mapsto\widehat{\eta}\in\mathbf{R}^{d}\) is measurable, then \(\widehat{\eta}_{\omega}(\Upsilon)\coloneqq\widehat{\eta}(\omega,\Upsilon)\) is a measurable of \(\Upsilon\). Note that the infimum on the righthand side is restricted to those maps which are measurable function of \(\omega\); note that they may depend on \(\omega\), and therefore we have included a subscript depending on \(\omega\) to indicate this.6 To lighten notation in the subsequent discussion, we define the _conditional Bayesian risk_ under \(\pi\) and for a realization of the random variable \(\omega=\omega_{0}\), Footnote 6: In some cases, this inequality may hold with equality. However, to be clear, in general the inequality arises since if \(\{\widehat{\eta}_{\omega}\}_{\omega}\) is a family of measurable functions (of \(\Upsilon\)) for each \(\omega\) in the support of \(\mathbb{Q}\), it is not necessarily the case that \(\widehat{\eta}(\omega,\Upsilon)\coloneqq\widehat{\eta}_{\omega}(\Upsilon)\) is measurable. \[B(\pi\mid\omega_{0})\coloneqq\inf_{\widehat{\eta}}\mathbf{E}_{\eta\sim\pi} \,\mathbf{E}_{z\sim\mathsf{N}\left(0,I_{r_{0}}\right)}\Big{[}\big{\|}\widehat {\eta}(V_{0}V_{0}^{\mathsf{T}}\eta+V_{0}\Lambda_{0}^{-1/2}z)-\eta\big{\|}_{2} ^{2}\Big{]},\quad\text{where}\ \ \omega_{0}=(r_{0},V_{0},\Lambda_{0}).\] Using this definition, along with the two inequalities (65) and (66), we have demonstrated \[\mathfrak{M}^{\mathrm{G}}_{\mathrm{red}}(\mathbb{Q},K)\geq\mathbf{E}_{\omega \sim\mathbb{Q}}\big{[}B(\pi\mid\omega)\big{]},\qquad\text{for any prior $\pi$ on $\Theta(K)$.} \tag{67}\] Therefore, it suffices for us to lower bound \(B(\pi\mid\omega)\). Step 2: Reduction to a family of easier problemsIn this step, we fix a parameter \(\lambda>0\), which will index yet another auxiliary Bayesian estimation problem. The intuition will be that as \(\lambda\to 0^{+}\), we are "approaching" the difficulty of the original Bayesian estimation problem. Formally, fix \(\omega=(r,V,\Lambda)\). Throughout we will let \(V_{\perp}\colon\mathbf{R}^{d}\to\mathbf{ran}(V)^{\perp}\) denote the projection of an element \(\eta\in\mathbf{R}^{d}\) to the orthogonal complement of the closed subspace \(\mathbf{ran}(V)\). We now consider the observation, where for an independent random Gaussian variable \(z\sim\mathsf{N}\left(0,I_{d}\right)\) \[\Upsilon_{\lambda}=\underbrace{\left(VV^{\mathsf{T}}+\lambda V_{\perp}\right) }_{=X_{\lambda}}\eta+V\Lambda^{-1/2}w+\sqrt{\lambda}V_{\perp}z=X_{\lambda} \eta+(V\Lambda^{-1}V^{\mathsf{T}}+\lambda V_{\perp})^{1/2}w^{\prime}, \tag{68}\] where the last equality holds in distribution. Define \(\Sigma_{\lambda}\coloneqq V\Lambda^{-1}V^{\mathsf{T}}+\lambda V_{\perp}\); evidently \(\Sigma_{\lambda}\) is a symmetric positive definite matrix for any \(\lambda>0\). Then, \(\Upsilon_{\lambda}\) has distribution \(\mathsf{N}\left(X_{\lambda}\eta,\Sigma_{\lambda}\right)\). 
We remark that the observation \(\Upsilon_{\lambda}\) is more convenient than \(\Upsilon\) as its covariance is nonsingular and moreover its mean is a nonsingular linear transformation of \(\eta\)--note that neither of these properties hold for \(\Upsilon\). Our goal is to show that the observation \(\Upsilon_{\lambda}\) is more "informative" than \(\Upsilon\). To do this, we now define the (conditional) Bayesian risk for \(\Upsilon_{\lambda}\), \[B_{\lambda}(\pi\mid\omega)\coloneqq\inf_{\widehat{\eta}}\Big{\{}B_{\lambda} (\widehat{\eta},\pi\mid\omega)\coloneqq\mathbf{E}\left[\|\widehat{\eta}( \Upsilon_{\lambda})-\eta\|_{2}^{2}\right]\Big{\}}.\] The main claim is that this provides a lower bound on our original conditional Bayesian risk. **Lemma 4**.: _For any \(\omega\) and \(\lambda>0\), we have_ \[B(\pi\mid\omega)\geq B_{\lambda}(\pi\mid\omega).\] See Appendix C.2.2 for a proof of this claim. Step 3: Calculation of Bayesian risk \(B_{\lambda}(\pi\mid\omega)\), for a fixed prior \(\pi\) and parameter \(\lambda>0\)To compute the Bayesian risk for a fixed prior \(\pi\) and parameter \(\lambda>0\), we develop a variant of Tweedie's formula (also sometimes referred to as Brown's identity, when applied to Bayesian risks) [66, 58, 13]. To state the result, we need to introduce some notation. We define the marginal and conditional densities of \(\Upsilon_{\lambda}\)--disregarding normalization constants--as, \[p(y)\coloneqq\int p(y\mid\eta)\,\pi(\mathrm{d}\eta)\qquad\text{where}\quad p(y \mid\eta)\coloneqq\exp\Big{(}-\frac{1}{2}\|y-X_{\lambda}\eta\|_{\Sigma_{\lambda }^{-1}}^{2}\Big{)}.\] Finally we define the Fisher information of the marginal distribution of \(\Upsilon_{\lambda}\), which is given by \[\mathcal{I}\left(\Upsilon_{\lambda}\right)\coloneqq\mathbf{E}[\nabla\log p( \Upsilon_{\lambda})\otimes\nabla\log p(\Upsilon_{\lambda})].\] With this notation in hand, we can now state our formula for the Bayesian risk under the prior \(\pi\) and for parameter \(\lambda>0\). **Lemma 5**.: _Fix \(\omega=(r,V,\Lambda)\). Define \(X_{\lambda}\coloneqq VV^{\mathsf{T}}+\lambda V_{\perp}\) and \(\Sigma_{\lambda}\coloneqq V\Lambda^{-1}V^{\mathsf{T}}+\lambda V_{\perp}\). Fix prior \(\pi\), and parameter \(\lambda>0\). Then the conditional Bayesian risk is given by_ \[B_{\lambda}(\pi\mid\omega)=\mathbf{Tr}\,\Big{(}X_{\lambda}^{-1}\Sigma_{ \lambda}\big{[}\Sigma_{\lambda}^{-1}-\mathcal{I}\left(\Upsilon_{\lambda} \right)\big{]}\Sigma_{\lambda}X_{\lambda}^{-1}\Big{)}.\] See Appendix C.2.3 for a proof of this claim. Step 4: Lower bound on Fisher information for conditioned Gaussian priorConsider a prior \(\pi\) which is absolutely continuous with respect to Lebesgue measure on \(\mathbf{R}^{d}\). Furthermore, suppose that its Lebesgue density \(f_{\pi}\coloneqq\frac{\mathrm{d}\pi}{\mathrm{d}\eta}\) has logarithmic gradient almost everywhere. Define \[\mathcal{I}\left(\pi\right)\coloneqq\int\nabla\log f_{\pi}(\eta)\otimes \nabla\log f_{\pi}(\eta)\,\mathrm{d}\pi(\eta).\] Recall also that the Fisher information associated with a Gaussian distribution \(\mathsf{N}\left(\mu,\Pi\right)\) for nonsingular \(\Pi\) is given by \(\Pi^{-1}\)[44, Example 6.3]. Therefore, applying well-known results for the Fisher information [69, eqn. (8) and Corollary 1] \[\mathcal{I}\left(\Upsilon_{\lambda}\right)\preccurlyeq(X_{\lambda}\mathcal{I} \left(\pi\right)^{-1}X_{\lambda}+\Sigma_{\lambda})^{-1}. 
\tag{69}\] Next, we select a prior distribution and calculate the Fisher information \(\mathcal{I}\left(\Upsilon_{\lambda}\right)\) for the marginal density under this prior. For a parameter \(\tau\in(0,1]\) and symmetric positive definite covariance matrix \(\Pi\), we define the probability measures \[\pi_{\tau,\Pi}^{\mathrm{G}}=\mathsf{N}\left(0,\tau^{2}\Pi\right)\quad\text{ and}\quad\pi_{\tau,\Pi}=\pi_{\tau,\Pi}^{\mathrm{G}}\big{(}\cdot\mid\Theta(K) \big{)}. \tag{70}\] In other words, \(\pi_{\tau,\Pi}\) denotes the probability measure \(\mathsf{N}\left(0,\tau^{2}\Pi\right)\) conditioned on the constraint set. Formally, it is defined by the relation, \[\pi_{\tau,\Pi}(A)\coloneqq\frac{\pi_{\tau,\Pi}^{\mathrm{G}}\big{(}A\cap\Theta (K)\big{)}}{\pi_{\tau,\Pi}^{\mathrm{G}}\big{(}\Theta(K)\big{)}},\] for any event \(A\). For these priors, we have the following claim. **Lemma 6**.: _Let \(\tau\in(0,1]\) and \(\Pi\) be a symmetric positive definite matrix satisfying the relation \(\mathbf{Tr}(\Pi^{1/2}K^{-1}\Pi^{1/2})\leqslant 1\). Then the Fisher information of the conditioned prior \(\pi_{\tau,\Pi}\) satisfies the inequality_ \[\mathcal{I}\left(\pi_{\tau,\Pi}\right)^{-1}\succcurlyeq c(\tau,\Pi)\Pi,\] _where \(c(\tau,\Pi)=\tau^{2}(1-\pi_{\tau,\Pi}^{\mathrm{G}}(\Theta(K)^{c}))>0\)._ See Appendix C.2.4 for the proof of this claim. Step 5: Putting the pieces togetherCombining Lemmas 4 and 5 along with the inequality (69) and Lemma 6, we find that for any \(\tau\in(0,1]\) and symmetric positive definite matrix \(\Pi\) satisfying \(\mathbf{Tr}(\Pi^{1/2}K^{-1}\Pi^{1/2})\leqslant 1\), that \[B(\pi\mid\omega) \geqslant\sup_{\lambda>0}\mathbf{Tr}\,\Big{(}X_{\lambda}^{-1} \Sigma_{\lambda}\big{[}\Sigma_{\lambda}^{-1}-(c(\tau,\Pi)X_{\lambda}\Pi X_{ \lambda}+\Sigma_{\lambda})^{-1}\big{]}\Sigma_{\lambda}X_{\lambda}^{-1}\Big{)}\] \[=\sup_{\lambda>0}\mathbf{Tr}\,\Big{(}(\tfrac{1}{c(\tau,\Pi)}\Pi^{ -1}+X_{\lambda}\Sigma_{\lambda}^{-1}X_{\lambda})^{-1}\Big{)}.\] Above, we used the relation \(A(A^{-1}-(B+A)^{-1})A=(A^{-1}+B^{-1})^{-1}\), valid for any pair \((A,B)\) of symmetric positive definite matrices. Our particular choice of matrices was \(A=\Sigma_{\lambda}\) and \(B=X_{\lambda}\). Note that \[X_{\lambda}\Sigma_{\lambda}^{-1}X_{\lambda}=V\Lambda V^{\mathsf{ T}}+\lambda V_{\perp}.\] Therefore, by continuity, we have \[B(\pi\mid\omega)\geqslant\lim_{\lambda\to 0^{+}}\mathbf{Tr}\,\Big{(}( \tfrac{1}{c(\tau,\Pi)}\Pi^{-1}+V\Lambda V^{\mathsf{T}}+\lambda V_{\perp})^{-1} \Big{)}=\mathbf{Tr}\,\Big{(}(\tfrac{1}{c(\tau,\Pi)}\Pi^{-1}+V\Lambda V^{ \mathsf{T}})^{-1}\Big{)}. \tag{71}\] Taking the expectation over \(\omega\), and applying our minimax lower bound (67), we have established lower bound (63). 
Note that since \(c(\tau,\Pi)\in(0,1]\), we evidently have from the above display that \[B(\pi\mid\omega)\geqslant c(\tau,\Pi)\,\mathbf{Tr}\,\Big{(}(\Pi^ {-1}+V\Lambda V^{\mathsf{T}})^{-1}\Big{)}.\] Let us define the constant \[c_{\ell}(K)\coloneqq\inf_{\begin{subarray}{c}\Pi>0\\ \mathbf{Tr}(\Pi K^{-1})\leqslant 1\end{subarray}}\sup_{\tau\in(0,1]}c(\tau,\Pi).\] Then combining the conditional lower bound (71) with our minimax lower bound (67), we obtain \[\mathfrak{M}_{\mathrm{red}}^{\mathrm{G}}(\mathbb{Q},K) \geqslant\sup_{\Pi}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\,\Big{(}( \Pi^{-1}+V\Lambda V^{\mathsf{T}})^{-1}\Big{)}:\Pi>0,\;\;\mathbf{Tr}(\Pi^{1/2}K^ {-1}\Pi^{1/2})\leqslant c_{\ell}(K)\,\Big{\}}\] \[=\sup_{\Pi}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\,\Big{(}(\tfrac{1}{ c_{\ell}(K)}\Pi^{-1}+V\Lambda V^{\mathsf{T}})^{-1}\Big{)}:\Pi>0,\;\;\mathbf{Tr}(\Pi^{1/2}K^ {-1}\Pi^{1/2})\leqslant 1\,\Big{\}}\] \[\geqslant c_{\ell}(K)\,\sup_{\Pi}\Big{\{}\;\mathbf{E}\,\mathbf{ Tr}\,\Big{(}(\Pi^{-1}+V\Lambda V^{\mathsf{T}})^{-1}\Big{)}:\Pi>0,\;\;\mathbf{Tr}(\Pi^{1/2}K^ {-1}\Pi^{1/2})\leqslant 1\,\Big{\}}.\] To complete the proof, we simply need to lower bound the constant \(c_{\ell}(K)\) universally. **Lemma 7**.: _The constant \(c_{\ell}(K)\) is lower bounded, for any symmetric positive definite \(K\), as_ \[c_{\ell}(K)\geqslant\frac{1}{4}.\] See Appendix C.2.5 for a proof of this claim. ## 5 Discussion In this work, we determined the minimax risk of estimation for observation models of the form (1), where one observes the image of a unknown parameter under a random linear operator with additive noise. Our results reveal the dependence of the rate of convergence on the covariate law, the parameter space, the error metric, and the noise level. We conclude our paper by presenting some simulation results; see Section 5.1 Finally, we note that in this work we studied minimax risks of convergence in expectation. This is convenient, as it requires relatively minor assumptions of the distribution of \(T_{\xi}\). On the other hand, for the setting of random design regression, high-probability results, such as those obtained in the papers [4, 51, 36, 43, 55], typically require stronger assumptions such as the sub-Gaussianity of the covariate distribution. Nonetheless, high-probability guarantees provide a complementary perspective on the problem we consider. Indeed, when the covariate law can be considered "heavy-tailed," it may be more relevant to develop robust estimators that have low risk with high probability. We refer to the survey article [46] for a overview of work in this direction. ### Some illustrative simulations We conclude our paper by presenting the results of some simulations reveal how changes in the distribution of the random operator \(T_{\xi}\) can lead to dramatic changes in the overall minimax risk. In this section, we present simulation results to illustrate the behavior of the functionals appearing in our main results for two versions of random design linear regression. In Section 5.1.1, we present simulation results for a multivariate, random design linear regression setting with IID covariates. Concretely, we provide two different covariate laws, where the minimax error for the same parameter space differs by at least two orders of magnitude. We emphasize this difference in _entirely_ due to the covariate law; the noise, observation model, error metric, and parameter space are fixed in this comparison. 
Additionally, in Section 5.1.2, we present simulation results for a univariate regression setting where the covariates are sampled from a Markov chain. In both cases, the functional is able to capture the dependence of the minimax rate of estimation on the underlying covariate distribution. #### 5.1.1 Higher-order effects in IID random design linear regression For random design linear regression, higher-order properties of the covariate distribution can have striking effects on the minimax risk. In order to illustrate this phenomenon, we consider the regression model (10) with feature map \(\psi(x)=x\), and parameter vector \(\theta^{\star}\) constrained to a ball in the Euclidean norm. We then construct a family of distributions over the covariates that are all zero-mean with identity covariance, but differ in interesting ways in terms of their higher-order moment properties. More precisely, we let \(\delta_{0}\) denote the Dirac measure with unit mass at \(0\), and for a mixture weight \(\lambda\in[0,1]\), we consider covariates generated from the probability distribution \[P_{\lambda}\coloneqq\lambda\delta_{0}+(1-\lambda)\mathsf{N}\left(0,\frac{1}{1-\lambda}I_{d}\right). \tag{72}\] By construction, all members of the ensemble have the same behavior with respect to their first and second moments, \[\mathbf{E}_{P_{\lambda}}[x]=0\quad\text{and}\quad\text{Cov}_{P_{\lambda}}(x)=\mathbf{E}_{P_{\lambda}}[x\otimes x]=I_{d},\quad\text{for all }\lambda\in[0,1]. \tag{73}\] In the special case \(\lambda=0\), the distribution \(P_{\lambda}\) corresponds to the standard Gaussian law on \(\mathbf{R}^{d}\), whereas it becomes an increasingly ill-behaved Gaussian mixture distribution as \(\lambda\to 1^{-}\). Following the argument in Section 3.1.1, in this case, the minimax risk is upper and lower bounded as \[\frac{\sigma^{2}}{n}\,\mathbf{E}_{P_{\lambda}^{n}}[\mathbf{Tr}((\Sigma_{n}+\tfrac{c_{d}\sigma^{2}d}{n\varrho^{2}}I_{d})^{-1})]\leq\mathfrak{M}_{n}^{\text{IID}}\Big{(}P_{\lambda},\varrho,\sigma^{2},I_{d},I_{d}\Big{)}\leq\frac{\sigma^{2}}{n}\,\mathbf{E}_{P_{\lambda}^{n}}[\mathbf{Tr}((\Sigma_{n}+\tfrac{\sigma^{2}d}{n\varrho^{2}}I_{d})^{-1})]. \tag{74}\] Above, the lower bound constant \(c_{d}\) is defined in display (20b). To understand the effect of the covariate law, we fix the signal-to-noise ratio such that \(\frac{\varrho}{\sigma}=\tau\), for \(\tau\in\{1,10\}\). Note that after renormalizing the minimax risk by \(\varrho^{2}\), it only depends on \(\tau\) (and not on the particular choices of \((\varrho,\sigma)\)). Similarly, this invariance relation holds for the functionals appearing on the left- and righthand sides of the display (74)--after normalization by \(1/\varrho^{2}\), they no longer depend on \((\varrho,\sigma)\) except via the ratio \(\tau=\frac{\varrho}{\sigma}\). Additionally, we fix the aspect ratio \(\gamma=\frac{d}{n}\).7 By varying \(\gamma\in[0.05,4]\) we are able to illustrate the behavior of the minimax risk, as characterized by our functional, for problems which are both under- and overdetermined. Footnote 7: Specifically, we take \(d=\lceil\gamma n\rceil\).
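The two functionals sandwiching the minimax risk in the display (74) can be estimated by simulating the empirical covariance matrix under \(P_{\lambda}\). The sketch below is a minimal illustration of this computation: the Monte Carlo budget is arbitrary, and the constant \(c_{d}\) from display (20b) is left as a user-supplied parameter (set to \(1\) by default, which reproduces the upper functional) rather than hard-coded.

```python
import numpy as np

def bound_functional(lmbda, n, d, sigma, rho, cd=1.0, n_trials=50, seed=0):
    """Monte Carlo estimate of (sigma^2/n) E[Tr((Sigma_n + cd sigma^2 d/(n rho^2) I_d)^{-1})]
    for covariates drawn from the mixture P_lambda of display (72).  With cd = 1 this is the
    upper functional in (74); the lower functional replaces cd by the constant c_d of (20b)."""
    rng = np.random.default_rng(seed)
    shift = cd * sigma**2 * d / (n * rho**2)
    total = 0.0
    for _ in range(n_trials):
        keep = rng.random(n) > lmbda                    # a lambda-fraction of rows is zeroed out
        X = rng.standard_normal((n, d)) / np.sqrt(1.0 - lmbda)
        X *= keep[:, None]
        Sigma_n = X.T @ X / n
        total += np.trace(np.linalg.inv(Sigma_n + shift * np.eye(d)))
    return (sigma**2 / n) * total / n_trials

if __name__ == "__main__":
    n, d, sigma, rho = 128, 64, 1.0, 10.0               # aspect ratio gamma = 0.5, SNR tau = 10
    for lmbda in (0.0, 0.9, 0.99):
        print(f"lambda = {lmbda}: upper functional ~ {bound_functional(lmbda, n, d, sigma, rho):.4f}")
```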
Having fixed the SNR at \(\tau\) and aspect ratio at \(\gamma\), we can somewhat simplify the display (74), by introducing the following quantities which only depend on the parameters \(\tau,\gamma\) and the sample size \(n\) and the mixture parameter \(\lambda\), \[\mathfrak{m}_{n}(\lambda,\tau,\gamma) \coloneqq\frac{\mathfrak{M}_{n}^{\text{IID}}\Big{(}P_{\lambda}, \tau\sigma,\sigma^{2},I_{\lceil\gamma n\rceil},I_{\lceil\gamma n\rceil}\Big{)} }{\tau^{2}\sigma^{2}}, \tag{75a}\] \[u_{n}(\lambda,\tau,\gamma) \coloneqq\frac{1}{\tau^{2}n}\,\mathbf{E}_{P_{\lambda}^{n}}\big{[} \mathbf{Tr}((\Sigma_{n}+\tfrac{\lceil\gamma n\rceil}{n\tau^{2}}I_{\lceil \gamma n\rceil})^{-1})\big{]},\] (75b) \[\ell_{n}(\lambda,\tau,\gamma) \coloneqq\frac{1}{\tau^{2}n}\,\mathbf{E}_{P_{\lambda}^{n}}\big{[} \mathbf{Tr}((\Sigma_{n}+\tfrac{c_{d}\lceil\gamma n\rceil}{n\tau^{2}}I_{\lceil \gamma n\rceil})^{-1})\big{]}. \tag{75c}\] Then, the relations (74), can be equivalently expressed as \[\ell_{n}(\lambda,\tau,\gamma)\leqslant\mathfrak{m}_{n}(\lambda,\tau,\gamma) \leqslant u_{n}(\lambda,\tau,\gamma),\] and moreover this holds for all \(\lambda\in[0,1],\tau>0,\gamma>0\). In our simulation, we use Monte Carlo simulation with 50 trials to estimate the upper and lower bound functionals \(\ell_{n}\) and \(u_{n}\). In our simulations, we take \(\lambda\in\{0,0.9,0.99\}\) and vary \(\gamma\in[0.05,4]\). The results of these simulations are presented in Figure 1; see the caption for a detailed description and commentary. The general pattern should be clear: the covariate law can have a dramatic impact on the overall rate of estimation, even when restricting some moments such as we have with the relations (73). #### 5.1.2 Mixing time effects in Markovian linear regression Covariates need not be drawn in an IID manner, and any dependencies can be expected to affect the minimax risk. Here we illustrate this general phenomena via some simulations for the Markov regression example as outlined in Section 3.1.4. We seek to study a wide range of possible mixing conditions for the Markovian covariate model. In order to do so, we consider covariates generated from the Markovian model (26) with \[r_{t}=\frac{\psi(t-1)}{\psi(t)},\] where \(\psi\colon\mathbf{N}\cup\{0\}\to\mathbf{R}_{+}\) is a nondecreasing function satisfying \(\psi(0)=1\) and \(\lim_{t\to\infty}\psi(t)=\infty\). With this choice, it is easily checked that, marginally \[x_{t}\sim\mathsf{N}\left(0,1-\frac{1}{\psi(t)}\right).\] Therefore, \(x_{t}\to\mathsf{N}\left(0,1\right)\) in distribution as \(t\to\infty\), and the rate of convergence is of order \(1/\psi(t)\). We now illustrate how the minimax rate, as determined in Corollary 5, for this problem behaves for different choices of the function \(\psi\) and the signal-to-noise ratio (SNR). As in Section 5.1.1, we Figure 1: Simulations of random design regression for three covariate laws, \(P_{\lambda}\) as defined in equation (72) with \(\lambda\in\{0,0.9,0.99\}\). For a given choice of the mixture weight \(\lambda\) and signal-to-noise ratio (SNR) \(\tau\), we plot the lower bound \(\ell_{n}(\lambda,\tau,\gamma)\) and upper bound \(u_{n}(\lambda,\tau,\gamma)\) as \(\gamma\) varies between \(0.05\) and \(4\). The normalized minimax risk \(\mathfrak{m}_{n}\) is then guaranteed to lie in the region whose upper and lower envelopes are given by \(u_{n}\) and \(\ell_{n}\), respectively. 
#### 5.1.2 Mixing time effects in Markovian linear regression

Covariates need not be drawn in an IID manner, and any dependencies can be expected to affect the minimax risk. Here we illustrate this general phenomenon via some simulations for the Markov regression example as outlined in Section 3.1.4. We seek to study a wide range of possible mixing conditions for the Markovian covariate model. In order to do so, we consider covariates generated from the Markovian model (26) with \[r_{t}=\frac{\psi(t-1)}{\psi(t)},\] where \(\psi\colon\mathbf{N}\cup\{0\}\to\mathbf{R}_{+}\) is a nondecreasing function satisfying \(\psi(0)=1\) and \(\lim_{t\to\infty}\psi(t)=\infty\). With this choice, it is easily checked that, marginally, \[x_{t}\sim\mathsf{N}\left(0,1-\frac{1}{\psi(t)}\right).\] Therefore, \(x_{t}\to\mathsf{N}\left(0,1\right)\) in distribution as \(t\to\infty\), and the rate of convergence is of order \(1/\psi(t)\). We now illustrate how the minimax rate for this problem, as determined in Corollary 5, behaves for different choices of the function \(\psi\) and the signal-to-noise ratio (SNR). As in Section 5.1.1, we normalize the minimax risk by the squared radius so that it only depends on \(\tau=\frac{\varrho}{\sigma}\). The quantity we then plot is \[\Phi_{T}(\tau)\coloneqq\frac{\Phi_{T}(\tau,1)}{\tau^{2}},\] where \(\Phi_{T}(\varrho,\sigma)\) is the functional appearing in Corollary 5. In the simulation, we consider the following choices of scaling function \(\psi\): \[5^{t},\quad t+1,\quad 1+\log(t+1),\quad\text{and}\quad 1+\log\big{(}1+\log(t+1)\big{)}.\] With the choice \(\psi(t)=5^{t}\), the underlying Markov chain converges geometrically to the standard Normal law. On the other hand, the choice \(\psi(t)=\log(1+\log(1+t))+1\) exhibits much slower convergence--the variational distance between the law of \(x_{t}\) and \(\mathsf{N}\left(0,1\right)\) is of order \(O(1/(\log\log t))\). We simulate each of these chains, computing the normalized functional \(\Phi_{T}(\tau)\) over the course of 5000 Monte Carlo trials. The sample size \(T\) is varied between 10 and 3162. In the simulation we also include the choice \(r_{t}\equiv 0\), which corresponds to IID covariates. The results of the simulation are presented in Figure 2; see the caption for more details and commentary.

### Acknowledgements

We thank Jaouad Mourtada for a helpful conversation and useful email exchanges; we also thank Peter Bickel for a helpful discussion regarding his prior work on the Gaussian sequence model. RP was partially supported by a UC Berkeley Chancellor's Fellowship via the ARCS Foundation. MJW and RP were partially funded by ONR grant N00014-21-1-2842 and National Science Foundation grant NSF-DMS-2015454. MJW and RP gratefully acknowledge funding support from Meta via the UC Berkeley AI Research (BAIR) Commons initiative.

Figure 2: Simulations for five distributions of Markovian covariates. In panel (2a), we set the SNR parameter as \(\tau=1\), and in panel (2b), we set the SNR parameter as \(\tau=10\). As the scaling function \(\psi\) grows more slowly, the chain converges to its stationary distribution more slowly, and the minimax rate decays more slowly, as indicated by the displayed behavior of our functional \(T\mapsto\Phi_{T}(\tau)\).
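For readers who wish to reproduce a plot of this type, the following sketch (ours, not the authors' code) estimates \(\Phi_{T}(\tau)=\mathbf{E}\big{[}(1/\tau^{2}+\sum_{t=1}^{T}x_{t}^{2})^{-1}\big{]}/\tau^{2}\) by Monte Carlo, using the representation of the functional from Lemma 9 in Section B.1.4 with \(\sigma=1\). Since display (26) is not reproduced here, the recursion used below, \(x_{t}=\sqrt{r_{t}}\,x_{t-1}+\sqrt{1-r_{t}}\,z_{t}\) with \(x_{0}=0\) and \(z_{t}\sim\mathsf{N}(0,1)\), is our assumption: it matches the stated marginals \(x_{t}\sim\mathsf{N}(0,1-1/\psi(t))\) and reduces to IID standard Gaussian covariates when \(r_{t}\equiv 0\), but it need not coincide with the authors' exact parameterization.

```python
import numpy as np

def phi_T(psi, T, tau, trials=5000, seed=0):
    """Monte Carlo estimate of Phi_T(tau) for the assumed AR(1)-type chain."""
    rng = np.random.default_rng(seed)
    r = np.array([psi(t - 1) / psi(t) for t in range(1, T + 1)])  # r_t = psi(t-1)/psi(t)
    x = np.zeros(trials)                                          # x_0 = 0 in every trial
    s = np.zeros(trials)                                          # running value of sum_t x_t^2
    for t in range(T):
        z = rng.standard_normal(trials)
        x = np.sqrt(r[t]) * x + np.sqrt(1.0 - r[t]) * z           # assumed form of display (26)
        s += x * x
    return np.mean(1.0 / (1.0 / tau**2 + s)) / tau**2

# scaling functions considered in the text (for psi(t) = 5^t and very large T,
# set r_t = 1/5 directly to avoid floating-point overflow in 5**t)
psis = {
    "5^t": lambda t: 5.0**t,
    "t+1": lambda t: t + 1.0,
    "1+log(t+1)": lambda t: 1.0 + np.log(t + 1.0),
    "1+log(1+log(t+1))": lambda t: 1.0 + np.log(1.0 + np.log(t + 1.0)),
}
# e.g. phi_T(psis["t+1"], T=100, tau=10.0)
```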
## Appendix A Proofs from Section 2

### Proof of Proposition 1

The constraint set is evidently convex, as it is formed by the intersection of two convex sets: the \(d\times d\) real, symmetric positive definite matrices with the half-space \(\{\Omega:\mathbf{Tr}({K_{c}}^{-1}\Omega)\leq\varrho^{2}\}\). We claim that the objective function \(f\) is concave over the set of symmetric positive definite matrices. It can be expressed as \[f(\Omega)=\mathbf{E}_{\xi}[g(T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi},\Omega)],\quad\text{where}\quad g(X,\Omega)\coloneqq\mathbf{Tr}({K_{c}}^{1/2}(X+\Omega^{-1})^{-1}{K_{e}}^{1/2}).\] Evidently, to establish that \(f\) is concave, it is enough to show that \(g(X,\cdot)\) is concave for every symmetric positive semidefinite \(X\). In order to establish this claim, let us fix some \(\varepsilon>0\), and define \(X(\varepsilon)\coloneqq X+\varepsilon I_{d}.\) By the joint concavity of the harmonic mean of positive operators [61, Corollary 37.2], it follows that for any pair of positive definite matrices \(\Omega,\Omega^{\prime}\), we have \[\Big{(}X(\varepsilon)+\Big{(}\frac{\Omega+\Omega^{\prime}}{2}\Big{)}^{-1}\Big{)}^{-1}\geq\frac{1}{2}\Big{(}X(\varepsilon)+\Omega^{-1}\Big{)}^{-1}+\frac{1}{2}\Big{(}X(\varepsilon)+(\Omega^{\prime})^{-1}\Big{)}^{-1}.\] Passing to the limit as \(\varepsilon\to 0\) yields \[\Big{(}X+\Big{(}\frac{\Omega+\Omega^{\prime}}{2}\Big{)}^{-1}\Big{)}^{-1}\geq\frac{1}{2}\Big{(}X+\Omega^{-1}\Big{)}^{-1}+\frac{1}{2}\Big{(}X+(\Omega^{\prime})^{-1}\Big{)}^{-1}.\] Since the trace is a monotone mapping on positive definite matrices, and \(g\) is continuous in its second argument, we obtain the claimed concavity of \(g\).

### Proof of Proposition 2

To establish the upper bound, it suffices to show that for each positive definite \(\Omega\succ 0\) with \(\mathbf{Tr}({K_{c}}^{-1/2}\Omega{K_{c}}^{-1/2})\leq\frac{n\varrho^{2}}{\sigma^{2}}\), the following inequality holds: \[\mathbf{Tr}\,\Big{(}\,\mathbf{E}\,\big{[}\big{(}\Sigma_{n}+\Omega^{-1}\big{)}^{-1}\Sigma_{P}\big{]}\Big{)}\leq\Big{(}1+\frac{\varrho^{2}\kappa^{2}}{\sigma^{2}}\Big{)}\,\mathbf{Tr}\,\Big{(}\big{(}\Sigma_{P}+\Omega^{-1}\big{)}^{-1}\Sigma_{P}\Big{)}. \tag{76}\] Our proof of the auxiliary claim (76) is based on exchangeability and operator convexity, and is similar to previous work on the analysis of ridge regression estimators [54]. Let \(x_{n+1}\) be a fresh sample drawn independently from \(\{x_{i}\}_{i=1}^{n}\) with the same distribution \(P\). Letting \(\mathbf{E}\) denote the expectation over the full sequence \(\{x_{i}\}_{i=1}^{n+1}\), we have \[\mathbf{E}\,\big{[}\big{(}\Sigma_{n}+\Omega^{-1}\big{)}^{-1}\Sigma_{P}\big{]}=n\,\mathbf{E}\,\big{[}\big{(}n\Sigma_{n}+n\Omega^{-1}\big{)}^{-1}(\psi(x_{n+1})\otimes\psi(x_{n+1}))\big{]}. \tag{77}\] Define \(\hat{\Sigma}_{n+1}\coloneqq(n+1)^{-1}\sum_{i=1}^{n+1}\psi(x_{i})\otimes\psi(x_{i})\). Then, by the Sherman-Morrison lemma [34, Section 0.7.4], it follows that \[(n\Sigma_{n}+n\Omega^{-1})^{-1}\psi(x_{n+1})=\big{(}1+\big{<}(n\Sigma_{n}+n\Omega^{-1})^{-1}\psi(x_{n+1}),\psi(x_{n+1})\big{>}\big{)}\big{(}(n+1)\hat{\Sigma}_{n+1}+n\Omega^{-1}\big{)}^{-1}\psi(x_{n+1}).\] Additionally, by the Cauchy-Schwarz inequality, we have \[\big{<}(n\Sigma_{n}+n\Omega^{-1})^{-1}\psi(x_{n+1}),\psi(x_{n+1})\big{>}\leq\frac{1}{n}\|K_{c}^{-1/2}\Omega K_{c}^{-1/2}\|_{\mathrm{op}}\|{K_{c}}^{1/2}\psi(x_{n+1})\|_{2}^{2}\leq\frac{\varrho^{2}\kappa^{2}}{\sigma^{2}},\] where the last inequality holds \(P\)-almost surely.
Applying the previous two displays in relation (77), it follows that \[\mathbf{Tr}\,\mathbf{E}\left[\left(\Sigma_{n}+\Omega^{-1}\right)^{-1}\Sigma_{P}\right]\leqslant\left(1+\frac{\varrho^{2}\kappa^{2}}{\sigma^{2}}\right)\mathbf{Tr}\,\mathbf{E}\left[\left(\hat{\Sigma}_{n+1}+\Omega^{-1}\right)^{-1}\psi(x_{n+1})\otimes\psi(x_{n+1})\right]\] \[=\Big{(}1+\frac{\varrho^{2}\kappa^{2}}{\sigma^{2}}\Big{)}\frac{1}{n+1}\sum_{i=1}^{n+1}\mathbf{Tr}\,\mathbf{E}\left[\left(\hat{\Sigma}_{n+1}+\Omega^{-1}\right)^{-1}\psi(x_{i})\otimes\psi(x_{i})\right] \tag{78}\] \[=\Big{(}1+\frac{\varrho^{2}\kappa^{2}}{\sigma^{2}}\Big{)}\,\mathbf{Tr}\,\mathbf{E}\left[\left(\hat{\Sigma}_{n+1}+\Omega^{-1}\right)^{-1}\hat{\Sigma}_{n+1}\right] \tag{79}\] \[\leqslant\Big{(}1+\frac{\varrho^{2}\kappa^{2}}{\sigma^{2}}\Big{)}\,\mathbf{Tr}\,\Big{(}\big{(}\Sigma_{P}+\Omega^{-1}\big{)}^{-1}\Sigma_{P}\Big{)}. \tag{80}\] Above, step (79) follows by the exchangeability of \(\{\psi(x_{i})\}_{i=1}^{n+1}\), and step (80) follows by the cyclicity and linearity of the trace, as well as the fact that for any fixed symmetric positive definite matrix \(B\), the mapping \(A\mapsto(A+B)^{-1}A=I_{d}-(A+B)^{-1}B\) is concave over the set of symmetric positive semidefinite matrices (see Bhatia [8, page 19]).

### Proof of Corollary 2

Combining Theorems 1 and 2, we find that \[\Phi(T,\mathbb{P},\Sigma_{w},\tfrac{\varrho}{2},K_{e},K_{c})\leqslant\mathfrak{M}(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\leqslant\Phi(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c}). \tag{81}\] Evidently, by definition of the functional \(\Phi\) (see definition (5)), the map \(\varrho\to\Phi(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})\) is nondecreasing. Moreover, since \(T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}\) is invertible with probability \(1\), this map is also bounded. Therefore, \[\lim_{\varrho\to\infty}\frac{\Phi(T,\mathbb{P},\Sigma_{w},\varrho,K_{e},K_{c})}{\Phi(T,\mathbb{P},\Sigma_{w},\varrho/2,K_{e},K_{c})}=1,\] which, in view of the sandwich relation (81), furnishes the claim.

## Appendix B Proofs and calculations from Section 3

### Proof and calculations from Section 3.1

#### b.1.1 Proof of equation (21a)

From the definition of the functional (13), we have \[d_{n}(\mathsf{N}\left(0,I_{d}\right),\varrho,\sigma^{2},I_{d},I_{d})=\sup\Big{\{}\,\,\mathbf{E}[\mathbf{Tr}((\Sigma_{n}+\tfrac{\sigma^{2}d}{n\varrho^{2}}M^{-1})^{-1})]:M\succ 0,\,\,\,\,\,\mathbf{Tr}(M)=d\,\Big{\}}.\] In this section, all expectations are over \(x_{i}\stackrel{{\text{IID}}}{{\sim}}\mathsf{N}\left(0,I_{d}\right)\). We claim that the supremum above is achieved at \(M=I_{d}\).

**Lemma 8**.: _For any positive definite matrix \(M\succ 0\) such that \(\mathbf{Tr}(M)=d\), we have_ \[\mathbf{E}[\mathbf{Tr}((\Sigma_{n}+\tfrac{\sigma^{2}d}{n\varrho^{2}}M^{-1})^{-1})]\leqslant\mathbf{E}[\mathbf{Tr}((\Sigma_{n}+\tfrac{\sigma^{2}d}{n\varrho^{2}}I_{d})^{-1})].\]

Assuming Lemma 8, we then have \[d_{n}(\mathsf{N}\left(0,I_{d}\right),\varrho,\sigma^{2},I_{d},I_{d})=\mathbf{E}[\mathbf{Tr}((\Sigma_{n}+\tfrac{\sigma^{2}d}{n\varrho^{2}}I_{d})^{-1})]=d_{\text{Dicker}}(n,d,\varrho,\sigma),\] which establishes (21a), as needed.

Proof of Lemma 8. Define the function \(\phi\colon(\Sigma,M)\mapsto(\Sigma+\frac{d\sigma^{2}}{n\varrho^{2}}M^{-1})^{-1}\), where \(\Sigma,M\) are assumed symmetric positive semidefinite and \(M\) is nonsingular.
For each \(\Sigma\geq 0\), it is well known that \(\phi(\Sigma,\cdot)\) is operator concave [61, Corollary 37.2]--for any collection \(\{M_{i}\}_{i=1}^{d}\) of symmetric positive definite matrices, one has \[\frac{1}{d}\sum_{i=1}^{d}\phi(\Sigma,M_{i})\leq\phi(\Sigma,\tfrac{1}{d}\sum_{i=1}^{d}M_{i}),\qquad\text{for any $\Sigma\in\mathbb{S}_{+}^{d}$}. \tag{82}\] Now let \(M>0\) satisfying \(\mathbf{Tr}(M)=d\) be given. Diagonalize \(M\) so that \(M=U\Lambda U^{\mathsf{T}}\), where \(\Lambda=\mathbf{diag}(\lambda)>0\), and \(U\) is orthogonal. Consider the cyclic permutations of \(\Lambda\), given by \[\Lambda^{(j)}=\mathbf{diag}(\lambda^{(j)}),\quad\text{where}\quad\lambda_{i}^{(j)}=\lambda_{i+j}.\] Above, the arithmetic \(i+j\) occurs modulo \(d\). By rotational invariance of the Gaussian and the fact that \(x_{i}\) has IID coordinates, we have \[\mathbf{E}\,\mathbf{Tr}((\Sigma_{n}+\tfrac{d\sigma^{2}}{n\varrho^{2}}M^{-1})^{-1})=\mathbf{E}\,\mathbf{Tr}((\Sigma_{n}+\tfrac{d\sigma^{2}}{n\varrho^{2}}\Lambda^{-1})^{-1})\] \[=\mathbf{E}\left[\frac{1}{d}\sum_{j=1}^{d}\mathbf{Tr}((\Sigma_{n}+\tfrac{d\sigma^{2}}{n\varrho^{2}}(\Lambda^{(j)})^{-1})^{-1})\right]\] \[=\mathbf{Tr}\left\{\,\mathbf{E}\left[\frac{1}{d}\sum_{j=1}^{d}\phi(\Sigma_{n},\Lambda^{(j)})\right]\right\}\] \[\leq\mathbf{Tr}\left\{\,\mathbf{E}\left[\phi(\Sigma_{n},\overline{\Lambda})\right]\right\},\qquad\text{where}\quad\overline{\Lambda}\coloneqq\tfrac{1}{d}\sum_{j=1}^{d}\Lambda^{(j)}.\] The final inequality above uses the concavity inequality (82), where we have taken \(M_{i}=\Lambda^{(i)}\). Now note that \[\overline{\Lambda}=\frac{\mathbf{Tr}(\Lambda)}{d}I_{d}=\frac{\mathbf{Tr}(M)}{d}I_{d}=I_{d}.\] Combining the preceding displays furnishes the claim.

#### b.1.2 Proof of the lower bound in equation (20a)

We apply our sharp lower bound in Theorem 2 with \(\Omega=\frac{\varrho^{2}}{d}I_{d}\) and \(\tau^{2}=1-\frac{1}{2d-1}\). Let us define \(u=(1-\frac{1}{2d-1})(1-\mathbf{P}\{Z>2d^{2}-d\})\), where \(Z\) is a \(\chi^{2}\)-random variable with \(d\) degrees of freedom. Note that \(d(d-1)\geq\sqrt{d}t+t\) for \(t=\frac{d^{3/2}}{4}\) for all \(d\geq 2\). Therefore, by standard tail bounds for \(\chi^{2}\)-variates [42, pp. 1325], we have \(\mathbf{P}\{Z>2d^{2}-d\}\leq\exp(-d^{3/2}/4)\). Applying the sharp lower bound (8) in Theorem 2 then yields the claim.

#### b.1.3 Proof of equation (25)

Using the semidefinite inequality \[\left(\Sigma_{n}+\Omega^{-1}\right)^{-1}\leq\Sigma_{n}^{-1},\] and the choice \(\Omega=\frac{n}{\sigma^{2}}\frac{\varrho^{2}}{d}I_{d}\), we have the sandwich relation \[\mathbf{Tr}\,\mathbf{E}_{P^{n}}\left[\Sigma_{P}^{1/2}(\Sigma_{n}+\tfrac{\sigma^{2}}{n}\tfrac{d}{\varrho^{2}}I_{d})^{-1}\Sigma_{P}^{1/2}\right]\leq d_{n}(P,\varrho,\sigma^{2},I_{d},\Sigma_{P})\leq\mathbf{Tr}\,\mathbf{E}_{P^{n}}\left[\Sigma_{P}^{1/2}\Sigma_{n}^{-1}\Sigma_{P}^{1/2}\right]\!, \tag{83}\] for all \(\varrho>0\). Since \(\varrho\mapsto d_{n}(P,\varrho,\sigma^{2},I_{d},\Sigma_{P})\) is nondecreasing, the display above also demonstrates that this map has a limit.
Now, note that by continuity, \(P^{n}\)-almost surely we have \[\lim_{\varrho\to\infty}\mathbf{Tr}(\Sigma_{P}^{1/2}(\Sigma_{n}+\tfrac{\sigma^{2}}{n}\tfrac{d}{\varrho^{2}}I_{d})^{-1}\Sigma_{P}^{1/2})=\mathbf{Tr}(\Sigma_{P}^{1/2}\Sigma_{n}^{-1}\Sigma_{P}^{1/2}).\] Thus, using the sandwich relation (83) and Fatou's lemma, we have \[\mathbf{Tr}\,\mathbf{E}_{P^{n}}\left[\Sigma_{P}^{1/2}\Sigma_{n}^{-1}\Sigma_{P}^{1/2}\right]\leq\liminf_{\varrho\to\infty}\mathbf{Tr}\,\mathbf{E}_{P^{n}}\left[\Sigma_{P}^{1/2}(\Sigma_{n}+\tfrac{\sigma^{2}}{n}\tfrac{d}{\varrho^{2}}I_{d})^{-1}\Sigma_{P}^{1/2}\right]\\ \leq\lim_{\varrho\to\infty}d_{n}(P,\varrho,\sigma^{2},I_{d},\Sigma_{P})\leq\mathbf{Tr}\,\mathbf{E}_{P^{n}}\left[\Sigma_{P}^{1/2}\Sigma_{n}^{-1}\Sigma_{P}^{1/2}\right]\!,\] which establishes relation (25), as required.

#### b.1.4 Proof of minimax relation (29)

Let us state the claim corresponding to relation (29) somewhat more precisely. We define the functional \[\Phi_{T}(\varrho,\sigma)\coloneqq\mathbf{E}\left[\left(\frac{1}{\varrho^{2}}+\frac{z^{\mathsf{T}}Mz}{\sigma^{2}}\right)^{-1}\right].\] Then the following lemma corresponds to the claim underlying relation (29).

**Lemma 9**.: _The minimax risk under the Markovian observation model defined by the displays (26) and (27) satisfies_ \[\frac{1}{4}\,\Phi_{T}(\varrho,\sigma)\leq\inf_{\hat{\theta}}\sup_{|\theta^{\star}|\leq\varrho}\mathbf{E}\left[(\widehat{\theta}-\theta^{\star})^{2}\right]\leq\Phi_{T}(\varrho,\sigma).\]

The remainder of this section is devoted to the proof of this claim. Note that if we define \(\xi=(x_{1},\ldots,x_{T})\), and \(T_{\xi}=x\), then the observation model (27) can be written \[y=T_{\xi}\theta^{\star}+\Sigma_{w}^{1/2}w,\] where \(w\sim\mathsf{N}\left(0,I_{T}\right)\) and \(\Sigma_{w}=\sigma^{2}I_{T}\). We have \(K_{c}=1=K_{e}\), since we are considering a univariate estimation problem. Therefore, since the functional (5) is attained at \(\Omega=\varrho^{2}\), in order to establish Lemma 9, it is sufficient to show that \[T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}=\frac{x^{\mathsf{T}}x}{\sigma^{2}}=\frac{z^{\mathsf{T}}Mz}{\sigma^{2}}. \tag{84}\] However, from display (26), by induction we can establish that \[x_{t}=\sum_{s=1}^{t}\sqrt{c_{st}}\,z_{s},\] where the coefficients \(\{c_{st}\}\) are defined as in display (28). Then, it follows that \[x^{\mathsf{T}}x=\sum_{t=1}^{T}\sum_{s,s^{\prime}=1}^{t}\sqrt{c_{st}c_{s^{\prime}t}}z_{s}z_{s^{\prime}}=\sum_{s,s^{\prime}=1}^{T}\underbrace{\sum_{t=s\lor s^{\prime}}^{T}\sqrt{c_{st}c_{s^{\prime}t}}}_{=M_{ss^{\prime}}}z_{s}z_{s^{\prime}}.\] Using the display above, we establish the relation (84), which in turn establishes Lemma 9, as needed.

### Proof and calculations from Section 3.2

#### b.2.1 Proof of limit relation (31)

To lighten notation in this section, let us define the shorthands \[\mathfrak{M}_{k}\coloneqq\mathfrak{M}_{k}\Big{(}\{\varepsilon_{j}\}_{j=1}^{k},\Theta_{k}(a,C)\Big{)},\quad\text{and}, \tag{85a}\] \[\mathfrak{M}\coloneqq\mathfrak{M}\Big{(}\{\varepsilon_{j}\}_{j=1}^{\infty},\Theta(a,C)\Big{)}\coloneqq\inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\Theta(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{\infty}(\widehat{\theta}_{j}(y)-\theta_{j}^{\star})^{2}\Big{]}. \tag{85b}\] We begin by stating the following sandwich relation for the minimax risks.
**Lemma 10**.: _The sequence of minimax risks \(\{\mathfrak{M}_{k}\}\) and infinite-dimensional risk \(\mathfrak{M}\) satisfies the sandwich relation_ \[\mathfrak{M}_{k}\leqslant\mathfrak{M}\leqslant\mathfrak{M}_{k}+\frac{C^{2}}{a _{k+1}^{2}}, \tag{86}\] _for all \(k\geqslant 1\)._ Assuming Lemma 10 for the moment, note that it implies for any divergent sequence \(a_{k}\to\infty\) that \[\lim_{k\to\infty}\mathfrak{M}_{k}=\mathfrak{M}.\] In view of the shorthands (85), the display above establishes our desired limit relation (31). Proof of Lemma 10We begin by establishing the lower bound. Note that \(\Theta_{k}(a,C)\subset\Theta(a,C)\), hence we have \[\mathfrak{M} \geqslant\inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\Theta_{k }(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{\infty}(\widehat{\theta}_{j}((y_{i})_{i=1 }^{\infty})-\theta_{j}^{\star})^{2}\Big{]}\] \[\geqslant\inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\Theta_{k }(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{k}(\widehat{\theta}_{j}((y_{i})_{i=1}^{ \infty})-\theta_{j}^{\star})^{2}\Big{]},\] where the last equation arises since \(\theta_{j}^{\star}=0\) for \(j>k\) and thus any minimax optimal estimator over \(\Theta_{k}(a,C)\) satisfies \(\widehat{\theta}_{j}\equiv 0\) for all \(j>k\). The righthand side differs from \(\mathfrak{M}_{k}\) in that \(\widehat{\theta}\) is a function of the full sequence \(y=(y_{i})_{i=1}^{\infty}\). However, note that due to the independence of the noise variables \(z_{i}\), for the observation model (30) restricted to \(\Theta_{k}(a,C)\), the vector \(y^{(k)}=(y_{i})_{i=1}^{k}\) is a sufficient statistic. Hence we have for each \(k\geqslant 1\), \[\mathfrak{M}\geqslant\inf_{\widehat{\theta}}\sup_{\theta^{\star}\in\Theta_{k }(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{k}(\widehat{\theta}_{j}(y^{(k)})-\theta_{ j}^{\star})^{2}\Big{]}=\mathfrak{M}_{k},\] which establishes the lower bound in relation (86). To establish the upper bound, note that we certainly may restrict the infimum in the definition of \(\mathfrak{M}\) to those estimators taking values in \(\mathbf{R}^{k}\) which only are a function of \(y^{(k)}\). Indeed, we then find \[\mathfrak{M} \leqslant\inf_{\widehat{\theta}\in\mathbf{R}^{k}}\sup_{\theta^{ \star}\in\Theta(a,C)}\mathbf{E}\Big{[}\sum_{j=1}^{k}(\widehat{\theta}_{j}(y^{( k)})-\theta_{j}^{\star})^{2}+\sum_{j>k}(\theta_{j}^{\star})^{2}\Big{]} \tag{87}\] \[\leqslant\mathfrak{M}_{k}+\sup_{\theta^{\star}\in\Theta(a,C)} \sum_{j>k}(\theta_{j}^{\star})^{2}. \tag{88}\] The inequality (88) arises by taking the supremum over the two terms of the risk in display (87), and noting the first term only depends on the first \(k\) coordinate of \(\theta^{\star}\in\Theta(a,C)\), and hence the supremum may be taken over \(\Theta_{k}(a,C)\) in the first term so as to obtain \(\mathfrak{M}_{k}\). Now observe by Holder's inequality, and the membership \(\theta^{\star}\in\Theta(a,C)\), \[\sum_{j>k}(\theta^{\star}_{j})^{2}=\sum_{j>k}\frac{1}{a_{j}^{2}}(a_{j}^{2}( \theta^{\star}_{j})^{2})\leqslant\Big{(}\max_{j>k}\frac{1}{a_{j}^{2}}\Big{)}C ^{2}=\frac{C^{2}}{a_{k+1}^{2}},\] with the last equality arising because \(j\mapsto a_{j}^{2}\) is assumed nondecreasing. Combining the display above with inequality (88) establishes the upper bound in (86), and thus establishes Lemma 10 as needed. #### b.2.2 Proof of relation (35) Let us continue to adopt the shorthands \(\mathfrak{M}_{k}\) and \(\mathfrak{M}\) defined, respectively, in the displays (85a) and (85b). 
Moreover, we also use the shorthands \[R^{\star}_{k}\coloneqq R^{\star}_{k}\Big{(}\varepsilon,a,C\Big{)},\quad\text{ and}\quad R^{\star}\coloneqq R^{\star}(\varepsilon,a,C),\] corresponding to the functionals (33) and (34), respectively. We prove the following lemma. **Lemma 11**.: _The functionals \(R^{\star}_{k}\), \(R^{\star}\) and minimax risks \(\mathfrak{M}_{k}\) satisfy_ \[\frac{1}{4}R^{\star}_{k}\leqslant\mathfrak{M}_{k}\leqslant R^{ \star}_{k}\quad\text{for all $k\geqslant 1$, and,} \tag{89a}\] \[\lim_{k\to\infty}R^{\star}_{k}=R^{\star}. \tag{89b}\] Assuming Lemma 11 for the moment, note that the two inequalities immediately imply the sandwich relation (35), simply by applying the sandwich (89a) to the terms \(\mathfrak{M}_{k}\) and then applying the limit relations (31) and (89b). Consequently, it suffices to establish Lemma 11. Proof of Lemma 11Recall the settings of the parameters \(T^{(k)},\Sigma^{(k)}_{w},{K_{e}}^{(k)},\varrho^{(k)},{K_{c}}^{(k)}\), corresponding to the \(k\) dimensional minimax risk \(\mathfrak{M}_{k}\), as given in (32). We claim that \[\Phi(T^{(k)},\mathbb{P},\Sigma^{(k)}_{w},\varrho^{(k)},{K_{e}}^{(k)},{K_{c}}^ {(k)})=R^{\star}_{k}. \tag{90}\] (Note by our construction of \(T^{(k)}\) the choice of \(\mathbb{P}\) is irrelevant.) Then the sandwich relation (89a) follows by applying Theorems 1 and 2 to the minimax risk \(\mathfrak{M}_{k}\). To see that relation (90) holds, note that by definition 5, we have \[\Phi(T^{(k)},\mathbb{P},\Sigma^{(k)}_{w},\varrho^{(k)},{K_{e}}^{(k)},{K_{c}}^ {(k)})=\sup_{\Omega>0}\Big{\{}\ \mathbf{Tr}\left((\Omega^{-1}+(\Sigma^{(k)}_{w})^{-1})^{-1}\right):\sum_{j=1}^{k }a_{j}^{2}\Omega_{jj}\leqslant C^{2}\,\Big{\}}.\] We claim that the supremum above can be reduced to diagonal \(\Omega\). To see why, first note that for every nonzero \(\lambda\in\mathbf{R}\) \[\big{(}\Omega^{-1}+(\Sigma^{(k)}_{w})^{-1}\big{)}^{-1}\leqslant\lambda^{2} \Omega+(1-\lambda)^{2}\Sigma^{(k)}_{w}.\] This follows from Lemma 15, with the choices \[A=\Sigma_{w}^{(k)},\quad B=\Omega^{-1},\quad\text{and}\quad D=\lambda I.\] Consequently, we have for every nonzero \(u\in\mathbf{R}^{k}\), that \[u^{\mathsf{T}}\big{(}\Omega^{-1}+(\Sigma_{w}^{(k)})^{-1}\big{)}^{-1}u\leq\inf_{ \lambda\in\mathbf{R}}\lambda^{2}u^{\mathsf{T}}\Omega u+(1-\lambda)^{2}u^{ \mathsf{T}}\Sigma_{w}^{(k)}u=\Big{(}\frac{1}{u^{\mathsf{T}}\Omega u}+\frac{1}{u ^{\mathsf{T}}\Sigma_{w}^{(k)}u}\Big{)}^{-1}.\] Hence taking \(u\) to be elements of the standard basis \(e_{i}\), and summing over \(i=1,\ldots,k\), we obtain, \[\mathbf{Tr}\left(\big{(}\Omega^{-1}+(\Sigma_{w}^{(k)})^{-1}\big{)}^{-1}\right) \leq\sum_{i=1}^{k}\Big{(}\frac{1}{\Omega_{ii}}+\frac{1}{\varepsilon_{i}^{2}} \Big{)}^{-1}=\sum_{i=1}^{k}\frac{\Omega_{ii}\varepsilon_{i}^{2}}{\Omega_{ii}+ \varepsilon_{i}^{2}}.\] Moreover, by taking \(\Omega\) to be diagonal, the inequality above holds with equality. Thus, \[\Phi(T^{(k)},\mathbb{P},\Sigma_{w}^{(k)},\varrho^{(k)},K_{e}{}^{( k)},K_{c}{}^{(k)}) =\sup_{\Omega_{jj}>0}\Big{\{}\sum_{j=1}^{k}\frac{\Omega_{jj} \varepsilon_{j}^{2}}{\Omega_{jj}+\varepsilon_{j}^{2}}:\sum_{j=1}^{k}a_{j}^{2} \Omega_{jj}\leq C^{2}\,\Big{\}}\] \[=\sup_{\tau_{j}^{2}>0}\Big{\{}\sum_{j=1}^{k}\frac{\tau_{j}^{2} \varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}:\sum_{j=1}^{k}a_{j}^{2 }\tau_{j}^{2}\leq C^{2}\,\Big{\}}\] \[=R_{k}^{\star},\] which establishes the relation (90). 
Note that in the last equality, we have dropped the inequality constraints \(\tau_{j}^{2}>0\), due to the continuity of the map \(\tau\mapsto\sum_{j=1}^{k}\frac{\tau_{j}^{2}\varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}\) over \(\tau\in\mathbf{R}^{k}\). We now turn to establishing the relation (89b). Note that for any \(\tau\in\mathbf{R}^{\mathbf{N}}\) with \(\sum_{j=1}^{\infty}a_{j}^{2}\tau_{j}^{2}\leq C^{2}\), we have \[\sum_{j=1}^{k}\frac{\tau_{j}^{2}\varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}\leq\sum_{j=1}^{\infty}\frac{\tau_{j}^{2}\varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}\leq\sum_{j=1}^{k}\frac{\tau_{j}^{2}\varepsilon_{j}^{2}}{\tau_{j}^{2}+\varepsilon_{j}^{2}}+\sup_{\tau\in\mathbf{R}^{\mathbf{N}}:\sum_{j=1}^{\infty}a_{j}^{2}\tau_{j}^{2}\leq C^{2}}\sum_{j>k}^{\infty}\tau_{j}^{2}.\] By Holder's inequality, the second term is bounded above by \(C^{2}/a_{k+1}^{2}\), hence in view of definitions (33) and (34), we have the sandwich relation \[R_{k}^{\star}\leq R^{\star}\leq R_{k}^{\star}+\frac{C^{2}}{a_{k+1}^{2}},\] which holds for all \(k\geq 1\). Since \(a_{k}\to\infty\), the limit relation (89b) follows.

#### b.2.3 Proof of limit relation (42)

We claim that the following sandwich relation holds for the minimax risks in this case.

**Lemma 12**.: _For all \(k\geq 1\), we have_ \[\mathfrak{M}_{n}^{(k)}(\varrho,\sigma^{2},P)\leq\mathfrak{M}_{n}(\varrho,\sigma^{2},P)\leq\mathfrak{M}_{n}^{(k)}(\varrho,\sigma^{2},P)+\varrho^{2}\mu_{k+1}. \tag{91}\]

Assuming Lemma 12, note that since \(\mu_{k}\to 0\) as \(k\to\infty\), it immediately implies limit relation (42).

Proof of Lemma 12. The proof is quite similar to that of Lemma 10. We now prove inequality (91). We begin by defining the sets \[\mathcal{B}(\varrho)=\{\theta\in\ell^{2}(\mathbf{N}):\|\theta\|_{2}\leq\varrho\},\quad\text{and}\quad\mathcal{B}_{k}(\varrho)=\{\theta\in\mathcal{B}(\varrho):\theta_{j}=0,\ \text{ for all }j>k\}.\] By Parseval's identity, we may rewrite the minimax risks in the following form \[\mathfrak{M}_{k}\equiv\mathfrak{M}_{n}^{(k)}(\varrho,\sigma^{2},P)=\inf_{\hat{\theta}}\sup_{\begin{subarray}{c}\theta^{\star}\in\mathcal{B}_{k}(\varrho)\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\,\Big{[}\sum_{j=1}^{k}\mu_{j}(\widehat{\theta}_{j}(y_{1},\ldots,y_{n},\Phi_{k}(x_{1}),\ldots,\Phi_{k}(x_{n}))-\theta_{j}^{\star})^{2}\Big{]}, \tag{92a}\] \[\mathfrak{M}\equiv\mathfrak{M}_{n}(\varrho,\sigma^{2},P)=\inf_{\hat{\theta}}\sup_{\begin{subarray}{c}\theta^{\star}\in\mathcal{B}(\varrho)\\ \nu\in\mathcal{P}(\sigma^{2}I_{n})\end{subarray}}\mathbf{E}\,\Big{[}\sum_{j=1}^{\infty}\mu_{j}(\widehat{\theta}_{j}(y_{1},\ldots,y_{n},\Phi(x_{1}),\ldots,\Phi(x_{n}))-\theta_{j}^{\star})^{2}\Big{]}. \tag{92b}\] Evidently, we have \(\mathfrak{M}\geq\mathfrak{M}_{k}\), since \(\mathcal{B}_{k}(\varrho)\subset\mathcal{B}(\varrho)\) and \((y,\Phi_{k}(x))\) are sufficient in this submodel. Similarly, the upper bound follows since by restricting to those estimators \(\widehat{\theta}\) with \(\widehat{\theta}_{j}=0\) for all \(j>k\) that are functions of \((y,\Phi_{k}(x))\), we have \[\mathfrak{M}\leq\mathfrak{M}_{k}+\sup_{\theta\in\mathcal{B}(\varrho)}\sum_{j>k}\mu_{j}\theta_{j}^{2}=\mathfrak{M}_{k}+\varrho^{2}\mu_{k+1},\] which establishes the upper bound.
#### b.2.4 Proof of relation (45) Applying Corollary 1 to the minimax risk \(\mathfrak{M}_{k}(\varrho,\sigma^{2},P)\), we find that \[\frac{1}{4}\,\frac{\sigma^{2}}{n}d_{n}^{(k)}\leq\mathfrak{M}_{k}(\varrho, \sigma^{2},P)\leq\frac{\sigma^{2}}{n}d_{n}^{(k)},\] since the quantity \(d_{n}^{(k)}\) equals the functional for this minimax risk (see equation (44)). Therefore passing to the superior limit and applying the limit relation (42), we obtain the result. #### b.2.5 Proof of relation (47) First, we define the population counterparts of the functional \(d_{n}^{(k)}\), as defined in equation (44). Note that under \(P\), we have \(\mathbf{E}\,\Sigma_{n}^{(k)}=\mathbf{diag}(\mu_{1},\ldots,\mu_{k})\). We denote this matrix by \(M_{k}\). Hence, the population counterpart to \(d_{n}^{(k)}\) is \[\overline{d}_{n}^{(k)} \coloneqq\sup_{\Omega>0}\,\Big{\{}\,\mathbf{Tr}\left(M_{k}(M_{k} +\Omega^{-1})^{-1}\right):\mathbf{Tr}(\Omega)\leq\frac{n\varrho^{2}}{\sigma^ {2}}\Big{\}} \tag{93}\] \[=\sup_{\Omega>0}\,\Big{\{}\,\mathbf{Tr}\left((I_{k}+\Omega^{-1}) ^{-1}\right):\mathbf{Tr}(M_{k}^{-1}\Omega)\leq\frac{n\varrho^{2}}{\sigma^{2}} \Big{\}}. \tag{94}\] Using Proposition 2 and the sandwich relation (91), we find \[\frac{1}{4}\,\frac{\sigma^{2}}{n}\overline{d}_{n}^{(k)}\leq\mathfrak{M}_{n}^{ (k)}(\varrho,\sigma^{2},P)\leq\Big{(}1+\frac{\kappa^{2}\varrho^{2}}{\sigma^{2} }\Big{)}\frac{\sigma^{2}}{n}\overline{d}_{n}^{(k)}+\mu_{k+1}\varrho^{2}.\] Since \(\mu_{k}\to 0\) as \(j\to\infty\), it suffices to show that \[\lim_{j\to\infty}\overline{d}_{n}^{(k)}=\overline{d}_{n}^{\star}. \tag{95}\] Proof of relation (95)This limit relation can be established via an argument based on Lagrange multipliers. First, by an eigendecomposition of the variable \(\Omega>0\), we have \[\overline{d}_{n}^{(k)} =\sup\Big{\{}\sum_{j=1}^{k}\frac{\tau_{j}}{1+\tau_{j}}:\tau_{j}>0, \sum_{j=1}^{k}\frac{\tau_{j}}{\mu_{j}}\leqslant\frac{n\varrho^{2}}{\sigma^{2}} \Big{\}}\] \[=\sup\Big{\{}\sum_{j=1}^{k}\frac{\mu_{j}\gamma_{j}}{\frac{\sigma^{ 2}}{n\varrho^{2}}+\mu_{j}\gamma_{j}}:\gamma_{j}\geqslant 0,\sum_{j=1}^{k} \gamma_{j}\leqslant 1\Big{\}}. \tag{96}\] The final equality arises by a rescaling and continuity argument. Note that we may drop the non-negativity constraint, since the sequence \(\{\mu_{j}\}\) is nonnegative. We can compute (96) by introducing dual variables. In particular, we have \[\overline{d}_{n}^{(k)}=\sup_{\gamma}\inf_{\lambda}\sum_{j=1}^{k}\frac{\mu_{j} \gamma_{j}}{\frac{\sigma^{2}}{n\varrho^{2}}+\mu_{j}\gamma_{j}}-\frac{\frac{n \varrho^{2}}{\sigma^{2}}}{\lambda^{2}}\Big{(}\sum_{j=1}^{k}\gamma_{j}-1\Big{)}. 
\tag{97}\] By simple calculus, we see that the saddle point \((\gamma^{\star},\lambda^{\star})\) satisfies \[\sum_{j=1}^{k}\gamma_{j}^{\star}=1\quad\text{and}\quad\Big{(}\frac{\sigma^{2}}{n\varrho^{2}}\Big{)}^{2}\frac{\mu_{j}}{(\frac{\sigma^{2}}{n\varrho^{2}}+\mu_{j}\gamma_{j}^{\star})^{2}}=\frac{1}{(\lambda^{\star})^{2}},\ \text{ for }j=1,\ldots,k.\] Using the fact that \(\gamma_{j}^{\star}\geqslant 0\), we obtain \(\gamma_{j}^{\star}=\frac{\sigma^{2}}{n\varrho^{2}}\frac{1}{\sqrt{\mu_{j}}}(\lambda^{\star}-\frac{1}{\sqrt{\mu_{j}}})_{+}\), where \(\lambda^{\star}\) is chosen such that \[\frac{\sigma^{2}}{n\varrho^{2}}\sum_{j=1}^{k}\frac{1}{\sqrt{\mu_{j}}}\Big{(}\lambda^{\star}-\frac{1}{\sqrt{\mu_{j}}}\Big{)}_{+}=\sum_{j=1}^{k}\gamma_{j}^{\star}=1.\] Using equation (96), it follows that \[\overline{d}_{n}^{(k)}=\sum_{j=1}^{k}\frac{1}{\lambda^{\star}}\Big{(}\lambda^{\star}-\frac{1}{\sqrt{\mu_{j}}}\Big{)}_{+}.\] The result then follows by appealing to the following numerical result, with \(a_{j}=1/\sqrt{\mu_{j}}\).

**Lemma 13**.: _Let \(\{a_{j}\}_{j\geqslant 1}\) denote a nonnegative, divergent sequence,8 and define \(a_{\star}\coloneqq\inf_{j\geqslant 1}a_{j}\). Consider the functions \(f_{n},f\colon[a_{\star},+\infty)\to\mathbf{R}_{+}\) given by_

Footnote 8: Formally, \(\{a_{j}\}\subset\mathbf{R}_{+}\) and \(\lim_{j\to\infty}a_{j}=+\infty\).

\[f_{n}(t)\coloneqq\sum_{k=1}^{n}a_{k}(t-a_{k})_{+}\quad\text{and}\quad f(t)\coloneqq\sum_{k=1}^{\infty}a_{k}(t-a_{k})_{+},\] _and define \(\tau_{n}\) and \(\tau\) via the relations \(f_{n}(\tau_{n})=f(\tau)=1\). Then:_

* _The function_ \(f\) _and values_ \(\tau_{n},\tau\) _are well-defined; and_
* _We have_ \(\tau_{n}=\tau\) _for_ \(n\) _sufficiently large._

Proof of Lemma 13. Since the sequence \(a_{k}\) diverges to infinity, we may assume without loss of generality that \(a_{k}>0\) for all \(k\). For the first claim, note that \(f\) is well-defined. Indeed, fix \(t\geq a_{\star}\). Then, there exists \(n\) sufficiently large such that \(t<a_{k}\) for all \(k\geq n\). Consequently, \(f(t)=f_{n}(t)\). Similarly, note that \(f_{n},f\) are strictly increasing, continuous functions with \(f(a_{\star})=f_{n}(a_{\star})=0\), and \(f(x),f_{n}(x)\to\infty\) in the limit as \(x\to\infty\). Therefore, \(\tau_{n},\tau\) exist and are uniquely defined by the equations \(f_{n}(\tau_{n})=1\) and \(f(\tau)=1\), respectively. By the argument given previously, \(f_{n}(\tau)=f(\tau)\) for all \(n\) large enough, and therefore, by uniqueness, \(\tau=\tau_{n}\) for \(n\) large enough.

#### b.2.6 Proof of the Sobolev rate calculation

Let \(\{\mu_{j}\}_{j\geq 1}\) denote the eigenvalue sequence associated to the integral operator for the order-\(\beta\) Sobolev space on \([0,1]^{d}\).
We then define the auxiliary functions \[f(\lambda)\coloneqq\sum_{k=1}^{\infty}\frac{1}{\sqrt{\mu_{k}}} \Big{(}\lambda-\frac{1}{\sqrt{\mu_{k}}}\Big{)}_{+}\quad\text{and}\quad d_{n}( \lambda)\coloneqq\sum_{k=1}^{\infty}\frac{1}{\lambda}\Big{(}\lambda-\frac{1}{ \sqrt{\mu_{k}}}\Big{)}_{+}.\] In view of relation (47), it follows that the minimax risk over the ball of radius \(\varrho>0\) within the order-\(\beta\) Sobolev space in \([0,1]^{d}\) is equal (up to constant pre-factors) to \[\frac{\sigma^{2}}{n}d_{n}(\lambda_{n}^{\star})\quad\text{where} \quad f(\lambda_{n}^{\star})=\frac{n\varrho^{2}}{\sigma^{2}}, \tag{98}\] whenever \(\varrho\lesssim\sigma\).9 In order to simplify the description of the rate above, we claim that Footnote 9: In this subsection, we allow the relations \(\asymp,\lesssim,\gtrsim\) to hide constants which depend on \(\beta,d\) but not on \(n,\varrho,\sigma\). \[d_{n}(\lambda_{n}^{\star})\asymp\Big{(}\frac{\sigma^{2}}{n \varrho^{2}}\Big{)}^{-\frac{d}{2\beta+d}}. \tag{99}\] Assuming equation (99) for the moment, combination with display (98) yields the minimax risk, which is \(\varrho^{2}(\frac{\sigma^{2}}{n\varrho^{2}})^{\frac{2\beta}{2\beta+d}}\), up to constant factors. This is the claimed result. Proof of relation (99)We begin by determining \(\lambda_{n}^{\star}\), apart from constants. For \(\beta>d/2\), the eigenvalues \(\mu_{j}\) satisfy \(\mu_{j}\asymp j^{-2\alpha}\), where \(\alpha\coloneqq\beta/d\). Therefore, it follows that \[f(\lambda)\asymp g(\lambda)\coloneqq\sum_{k=1}^{\infty}k^{ \alpha}(\lambda-k^{\alpha})_{+}.\] Note that both \(f\) and \(g\) are increasing functions. If \(g(\lambda)\asymp g(\lambda^{\prime})\), it follows that \(\lambda\asymp\lambda^{\prime}\), since \(g\) is piecewise affine, and thus locally Lipschitz. It follows that \(\lambda_{n}^{\star}\asymp\widetilde{\lambda}_{n}^{\star}\), where \(g(\widetilde{\lambda}_{n}^{\star})\asymp\frac{n\varrho^{2}}{\sigma^{2}}\). A similar argument shows that \[d_{n}(\lambda)\asymp\widetilde{d}_{n}(\lambda)\coloneqq\sum_{ k=1}^{\infty}\frac{(\lambda-k^{\alpha})_{+}}{\lambda}\] Our argument is based on establishing the following relations, \[g(\lambda)\stackrel{{\text{(i)}}}{{\simeq}} \lambda^{2+1/\alpha}\quad\text{and}\quad\widetilde{d}_{n}(\lambda)\stackrel{{ \text{(ii)}}}{{\asymp}}\lambda^{1/\alpha}. \tag{100}\] Assuming these bounds for a moment, we explain how the claimed result on the minimax risk follows. First, note that since \(f(\lambda_{n}^{\star})=\frac{n\varrho^{2}}{\sigma^{2}}\), the argument above implies that \(\lambda_{n}^{\star}\asymp\widetilde{\lambda}_{n}^{\star}\) where \(\widetilde{\lambda}_{n}^{\star}\) satisfies \(g(\lambda)\asymp\frac{n\rho^{2}}{\sigma^{2}}\). Therefore, from equation (100)(i), it follows that \(\widetilde{\lambda}_{n}^{\star}\asymp(\frac{\sigma^{2}}{n\rho^{2}})^{-\frac{ \alpha}{2\alpha+1}}\). Then, using equation (100)(ii), it follows that \(\widetilde{d}_{n}(\lambda_{n}^{\star})\asymp\widetilde{d}_{n}(\widetilde{ \lambda}_{n}^{\star})\asymp(\frac{\sigma^{2}}{n\rho^{2}})^{-\frac{1}{2\alpha+1}}\), which establishes the claimed inequality (99), after recalling \(\alpha=\beta/d\), and clearing the denominator of the exponent. We now demonstrate scaling relation (100)(i), so that we show that \(g(\lambda)\asymp\lambda^{2+1/\alpha}\), for all \(\lambda\geq 1\). In order to establish this claim, choose the integer \(k\) such that \(\lambda\in(k^{\alpha},(k+1)^{\alpha}]\). 
Then \[g(\lambda)\leq\lambda\sum_{j=1}^{k}j^{\alpha}\leq\lambda\frac{(k+1)^{\alpha+1} }{\alpha+1}\lesssim\lambda k^{\alpha+1}\lesssim\lambda^{2+1/\alpha}.\] Above, we used an integral approximation for the summation. On the other hand, when \(\lambda\in(k^{\alpha},(k+1)^{\alpha}]\), we have \[g(\lambda)\geq g(k^{\alpha})\geq(k^{\alpha}-\lceil k/2\rceil^{\alpha})\sum_{ j=1}^{\lceil k/2\rceil}j^{\alpha}\gtrsim k^{2\alpha+1}.\] To simplify, the last equality (up to constants) is obtained by an integration argument. Therefore, we have \[\inf_{k\geq 1}\inf_{\lambda\in(k^{\alpha},(k+1)^{\alpha}]}\frac{g(\lambda)}{ \lambda^{2+1/\alpha}}\geq\inf_{k\geq 1}\frac{g(k^{\alpha})}{(k+1)^{2\alpha+1}} \gtrsim 1.\] Thus, we have the bound \(g(\lambda)\gtrsim\lambda^{2+1/\alpha}\) for all \(\lambda\), as needed. We now demonstrate the scaling relation (100)(ii), so that we show \(\widetilde{d}_{n}(\lambda)\asymp\lambda^{1/\alpha}\). To see this, note first that for \(\lambda\in(k^{\alpha},(k+1)^{\alpha}]\), we have the trivial bound \[\widetilde{d}_{n}(\lambda)=\sum_{j=1}^{k}(1-\lambda^{-1}j^{\alpha})_{+}\leq k \lesssim\lambda^{1/\alpha}.\] On the other hand, we have the lower bound \[\widetilde{d}_{n}(\lambda)\geq\sum_{j=1}^{\lceil k/2\rceil}\frac{(k^{\alpha} -j^{\alpha})}{(k+1)^{\alpha}}\geq\frac{k+1}{2}\cdot\Big{(}\frac{k}{k+1}\frac{ \left(k^{\alpha}-\lceil k/2\rceil^{\alpha}\right)}{(k+1)^{\alpha}}\Big{)} \gtrsim k+1\gtrsim\lambda^{1/\alpha}.\] #### b.2.7 Proof of relation (55) Note that the kernel regularity condition is not necessary for our lower bound. Indeed, note that we first have \[\inf_{\delta>0}\Big{\{}\delta^{2}+\frac{\sigma^{2}B}{n\varrho^{2}}d(\delta) \Big{\}}=\inf_{d\geq 1}\Big{\{}\mu_{d}+\frac{\sigma^{2}Bd}{n\varrho^{2}}\, \Big{\}}\] Let \(d_{n}^{\star}\) be the largest integer \(d\) such that \(\mu_{d}\geq\frac{\sigma^{2}Bd}{n\varrho^{2}}\); this must exist since \(\mu_{d}\to 0\). As the two sequences are nonincreasing and strictly increasing, respectively, the display above is bounded above by \[4\Big{(}\mu_{d_{n}^{\star}}\wedge\frac{\sigma^{2}Bd_{n}^{\star}}{n\varrho^{2}} \Big{)}\leq 4\frac{\sigma^{2}Bd_{n}^{\star}}{n\varrho^{2}}.\] Hence, it suffices to establish that the lower bound \(\frac{\sigma^{2}Bd_{n}^{\star}}{n\varrho^{2}}\) can be obtained from our result (53). Note that if \(\mu_{d}\geq\frac{\sigma^{2}Bd}{n\varrho^{2}}\) then the choice of \(\lambda\) in the lower bound (53), given by \[\lambda_{j}=\frac{\sigma^{2}B}{n\varrho^{2}}\frac{1}{\mu_{j}}\mathbf{1}\{j\leq d \},\quad\text{for }j=1,2,3,\ldots,\] satisfies \(\sum_{j}\lambda_{j}\leq 1\). Evaluating the corresponding lower bound, with the maximal choice \(d=d_{n}^{\star}\) yields the lower bound \(\frac{\sigma^{2}Bd}{n\varrho^{2}}\), as needed. ## Appendix C Proofs and calculations from Section 4 ### Deferred proofs from Section 4.1 In this section, we collect proofs of the results underlying the argument establishing our upper bound in Section 4.1 of the paper. #### c.1.1 Proof of Lemma 1 Clearly the lefthand side is less than the right hand side as for \(\theta\in\Theta(\varrho,K_{c})\) we have \(\theta\otimes\theta\succcurlyeq 0\), and \(\mathbf{Tr}(K_{c}^{-1/2}\theta\otimes\theta K_{c}^{-1/2})=\|\theta\|_{K_{c}^{ -1}}^{2}\leq\varrho^{2}\). For the reverse inequality, fix \(\Omega\in\mathcal{K}(\varrho,K_{c})\). We diagonalize the positive semidefinite matrix \(K_{c}^{-1/2}\Omega K_{c}^{-1/2}=UDU^{\mathsf{T}}\), and define \(\theta(\varepsilon)=K_{c}^{1/2}UD^{1/2}\varepsilon\), where \(\varepsilon\in\{\pm 1\}^{d}\). 
Evidently, \[\|\theta(\varepsilon)\|_{K_{c}^{-1}}^{2}=\|UD^{1/2}\varepsilon\|_{2}^{2}=\mathbf{Tr}(D)=\mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq\varrho^{2}.\] Thus, for all \(\varepsilon\in\{\pm 1\}^{d}\), the vector \(\theta(\varepsilon)\) lies in the set \(\Theta(\varrho,K_{c})\). Consequently, we have \[\sup_{\theta\in\Theta(\varrho,K_{c})}r(\hat{\theta}_{C},\theta)\geq\max_{\varepsilon\in\{\pm 1\}^{d}}r(\hat{\theta}_{C},\theta(\varepsilon))\] \[\geq\mathbf{E}_{\varepsilon}\,r(\hat{\theta}_{C},\theta(\varepsilon)) \tag{101}\] \[=r(\hat{\theta}_{C},\Omega). \tag{102}\] Note that \(\Omega\in\mathcal{K}(\varrho,K_{c})\) was arbitrary in this argument, and hence passing to the supremum over \(\Omega\) gives us the desired reverse inequality. Above, display (101) follows by lower bounding the maximum over \(\varepsilon\in\{\pm 1\}^{d}\) by the expectation over \(\varepsilon\), where the \(\varepsilon_{i}\) are IID Rademacher variables. The relation (102) follows by noting that \(r(\hat{\theta}_{C},\theta(\varepsilon))=r(\hat{\theta}_{C},\theta(\varepsilon)\otimes\theta(\varepsilon))\), and moreover this latter quantity is linear in the rank-one matrix \(\theta(\varepsilon)\otimes\theta(\varepsilon)\), as justified by Lemma 2. By linearity of expectation we can bring the expectation inside, and use the fact that \[\mathbf{E}_{\varepsilon}[\theta(\varepsilon)\otimes\theta(\varepsilon)]=K_{c}^{1/2}UDU^{\mathsf{T}}K_{c}^{1/2}=\Omega.\]

#### c.1.2 Proof of Lemma 2

Inspecting the definition of \(r\) (see equation (59)), we see that it is affine in \(\Omega\). To verify that it is convex in \(C\), note that \(r\) can be equivalently expressed as \[r(\hat{\theta}_{C},\Omega)=\mathbf{E}_{\xi}\left[\|K_{e}^{1/2}(C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}-I_{d})\Omega^{1/2}\|_{\mathrm{F}}^{2}+\|K_{e}^{1/2}C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1/2}\|_{\mathrm{F}}^{2}\right].\] Evidently, the display above is convex in \(C\).

#### c.1.3 Proof of Proposition 3

In order to prove Proposition 3, we need two results regarding the harmonic mean of positive (semi)definite matrices. For our results, it is important to allow one of these matrices to be (possibly) singular, and so we study (twice) the harmonic mean of \(A\) and the Moore-Penrose pseudoinverse \(B^{\dagger}\)--that is, the quantity \((A^{-1}+B)^{-1}\), where \(B\succcurlyeq 0\) and \(A\succ 0\). Note that since \((B^{\dagger})^{\dagger}=B\), these results also imply bounds for the mean \((A^{-1}+B^{\dagger})^{-1}\). See the reference [8, chap. 4] for additional details about the harmonic mean of positive matrices.

**Lemma 14**.: _Suppose that \(A,B\) are two symmetric positive semidefinite matrices, and that \(A\) is nonsingular. For any \(x\in\mathbf{R}^{d}\) and any \(y\) in the range of \(B\), we have_ \[(x-y)^{\mathsf{T}}A(x-y)+y^{\mathsf{T}}B^{\dagger}y\geq x^{\mathsf{T}}(A^{-1}+B)^{-1}x,\] _where \(B^{\dagger}\) denotes the Moore-Penrose pseudoinverse associated with \(B\)._

Proof.: Using \(BB^{\dagger}B=B\), the claim is equivalent to showing that \(\inf_{x,u}g(x,u)\geq 0\) where \[g(x,u)\coloneqq(x-Bu)^{\mathsf{T}}A(x-Bu)+u^{\mathsf{T}}Bu-x^{\mathsf{T}}(A^{-1}+B)^{-1}x.\] Define \(f(u)=\inf_{x}g(x,u)\). A calculation demonstrates that \[f(u)=u^{\mathsf{T}}\Big{[}B+BAB-BA(A-(A^{-1}+B)^{-1})^{\dagger}AB\Big{]}u\] \[=u^{\mathsf{T}}BA^{1/2}\Big{[}K^{\dagger}+I-(I-(I+K)^{-1})^{\dagger}\Big{]}A^{1/2}Bu. \tag{103}\] Above, \(K\coloneqq A^{1/2}BA^{1/2}\).
Diagonalizing \(K\), we may write \(K=UDU^{\mathsf{T}}\) and therefore \(K^{\dagger}=UD^{\dagger}U^{T}\). Applying the similarity transformation under \(U\), we have \[U^{\mathsf{T}}(K^{\dagger}+I-(I-(I+K)^{-1})^{\dagger})U=D^{\dagger}+I-(I-(I+D )^{-1})^{\dagger}=I-D^{\dagger}D\succcurlyeq 0. \tag{104}\] Therefore, combining displays (103) with (104), we obtain \[\inf_{x,u}g(x,u)=\inf_{u}f(u)\geq 0,\] which establishes the desired claim. **Lemma 15**.: _Suppose that \(A,B\) are two symmetric positive semidefinite matrices, and that \(A\) is nonsingular. If \(D^{\mathsf{T}}\in\mathbf{R}^{d\times d}\) has range included in the range of \(B\), then_ \[(I-D)A(I-D)^{\mathsf{T}}+DB^{\dagger}D^{\mathsf{T}}\succcurlyeq(A^{-1}+B)^{-1}.\] _Moreover equality holds with the choice \(D=(A^{-1}+B)^{-1}B\)._ Proof.: Let \(x\in\mathbf{R}^{d}\) and note that if \(y\coloneqq D^{\mathsf{T}}x\), then \[x^{\mathsf{T}}\Big{[}(I-D)A(I-D)^{\mathsf{T}}+DB^{\dagger}D^{ \mathsf{T}}\Big{]}x =(x-y)^{\mathsf{T}}A(x-y)+y^{\mathsf{T}}B^{\dagger}y\] \[\geq x^{\mathsf{T}}(A^{-1}+B)^{-1}x,\] where the final inequality follows from Lemma 14, since \(y\) lies in the range of \(B\). As the inequality holds for arbitrary \(x\in\mathbf{R}^{d}\), we have established the desired matrix inequality. To see the attainment at \(D=(A^{-1}+B)^{-1}B\), first note that \(D^{\mathsf{T}}=B(A^{-1}+B)^{-1}\). Therefore the range of \(D^{\mathsf{T}}\) is exactly the range of \(B\). Additionally, since \(I-D=(A^{-1}+B)^{-1}A^{-1}\), we have \[(I-D)A(I-D)^{\mathsf{T}}+DB^{\dagger}D^{\mathsf{T}}=(A^{-1}+B)^{-1}(A^{-1}+ BB^{\dagger}B)(A^{-1}+B)^{-1}=(A^{-1}+B)^{-1},\] as required. We are now in a situation to prove Proposition 3. Proof of Proposition 3From display (59), to establish the claim, it suffices to lower bound the following matrix in the semidefinite ordering, \[(C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}-I_{d})\Omega (C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}-I_{d})^{\mathsf{T}}\\ +C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}C(T_{\xi})^{ \mathsf{T}}. \tag{105}\] This matrix can be written as \((I-D)\Omega(I-D)^{\mathsf{T}}+DB^{\dagger}D^{\mathsf{T}}\) where we defined \[B\coloneqq T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi},\quad\text{and},\quad D \coloneqq C(T_{\xi})T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}.\] Evidently, the range of \(D^{\mathsf{T}}\) is included in the range of \(B\), and so it follows from Lemma 15 that the matrix in equation (105) is lower bounded in the semidefinite ordering by \[(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}. \tag{106}\] Moreover, Lemma 15 also demonstrates this is established by taking \[D=(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}T_{\xi}^{ \mathsf{T}}\Sigma_{w}^{-1}T_{\xi},\] which arises from taking \(C(T_{\xi})=(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}\), as claimed. Evaluating this lower bound matrix (106) in (59) establishes equality (60). #### c.1.4 Proof of equation (61d) Let us formally state our claim, equivalent to equation (61d), as a lemma. **Lemma 16**.: _Let \(\mathcal{K}_{+}(\varrho,K_{c})\) denote the subset of nonsingular matrices in \(\mathcal{K}(\varrho,K_{c})\)--that is, the set \(\{\Omega>0:\Omega\in\mathcal{K}(\varrho,K_{c})\}\). Then, we have_ \[\sup_{\Omega\in\mathcal{K}(\varrho,K_{c})}\inf_{C}r(\widehat{\theta}_{C}, \Omega)=\sup_{\Omega\in\mathcal{K}_{+}(\varrho,K_{c})}\inf_{C}r(\widehat{ \theta}_{C},\Omega).\] We prove this claim now. 
Evidently, since \(\mathcal{K}_{+}(\varrho,K_{c})\subset\mathcal{K}(\varrho,K_{c})\), it suffices to show that the left-hand side is less than or equal to the right-hand side. To begin, we note that for each \(\lambda>0\), we have \[\sup_{\Omega\in\mathcal{K}(\varrho,K_{c})}\inf_{C}r(\widehat{\theta}_{C},\Omega)\stackrel{{\text{(a)}}}{{\leqslant}}\sup_{\Omega\in\mathcal{K}(\varrho,K_{c})}\inf_{C}r(\widehat{\theta}_{C},\Omega+\tfrac{(\varrho+\lambda)^{2}-\varrho^{2}}{d}K_{c})\stackrel{{\text{(b)}}}{{\leqslant}}\sup_{\Omega\in\mathcal{K}_{+}(\varrho+\lambda,K_{c})}\inf_{C}r(\widehat{\theta}_{C},\Omega)\eqqcolon f(\lambda).\] Inequality (a) above follows since \(r(\widehat{\theta}_{C},\Omega)\leqslant r(\widehat{\theta}_{C},\Omega^{\prime})\) for any \(\Omega\preccurlyeq\Omega^{\prime}\)--this follows immediately from display (59). Here we have taken \(\Omega^{\prime}\coloneqq\Omega+\tfrac{(\varrho+\lambda)^{2}-\varrho^{2}}{d}K_{c}\succcurlyeq\Omega\). Inequality (b) then follows by noting that \(\Omega^{\prime}\) is symmetric positive (strictly) definite, and \(\mathbf{Tr}(K_{c}^{-1/2}\Omega^{\prime}K_{c}^{-1/2})\leq(\varrho+\lambda)^{2}\), since \(\Omega\in\mathcal{K}(\varrho,K_{c})\). Since the displayed relation above holds for any \(\lambda>0\), it suffices to show that \[\inf_{\lambda>0}f(\lambda)=f(0). \tag{107}\] By Proposition 3, we have \[f(\lambda)=\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{c}^{1/2}(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right):\Omega\succ 0,\ \mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq(\varrho+\lambda)^{2}\,\Big{\}}\] \[=\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{c}^{1/2}((\tfrac{\varrho+\lambda}{\varrho})^{-2}\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right):\Omega\succ 0,\ \mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq\varrho^{2}\,\Big{\}}\] \[\leq\Big{(}\frac{\varrho+\lambda}{\varrho}\Big{)}^{2}\;\sup_{\Omega}\Big{\{}\;\mathbf{E}\,\mathbf{Tr}\left(K_{c}^{1/2}(\Omega^{-1}+T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi})^{-1}K_{e}^{1/2}\right):\Omega\succ 0,\ \mathbf{Tr}(K_{c}^{-1/2}\Omega K_{c}^{-1/2})\leq\varrho^{2}\,\Big{\}}\] \[=\Big{(}\frac{\varrho+\lambda}{\varrho}\Big{)}^{2}\,f(0).\] Hence we have established the sandwich relation \[f(0)\leq f(\lambda)\leq\Big{(}\frac{\varrho+\lambda}{\varrho}\Big{)}^{2}\,f(0),\qquad\text{for all }\lambda>0.\] Note that \(f(0)\leq f(\lambda^{\prime})\leq f(\lambda)\) whenever \(0<\lambda^{\prime}\leq\lambda\). Thus, \(\inf_{\lambda>0}f(\lambda)=\lim_{\lambda\to 0^{+}}f(\lambda)=f(0)\), which establishes (107), completing the proof of the claim.

### Deferred proofs from Section 4.2

In this section, we collect proofs of the results underlying the argument establishing our lower bound in Section 4.2 of the paper.
#### c.2.1 Proof of Lemma 3 By parameterizing \(\theta^{\star}=K_{e}^{-1/2}\eta^{\star}\), we have \[\mathfrak{M}^{\text{G}}(T,\mathbb{P},\Sigma_{w},\varrho,K_{c},K_ {e})\] \[\quad=\inf_{\widehat{\eta}}\sup_{\eta^{\star}\in\Theta(\varrho^{2 }K_{e}^{1/2}K_{c}K_{e}^{1/2})}\mathbf{E}_{\xi,w\sim\mathsf{N}(0,I_{n})}\left[ \left\|\widehat{\eta}(T_{\xi}{K_{e}}^{-1/2},T_{\xi}{K_{e}}^{-1/2}\eta^{\star}+ \Sigma_{w}^{1/2}w)-\eta^{\star}\right\|_{2}^{2}\right]\] \[\quad=\inf_{\widehat{\eta}}\sup_{\eta^{\star}\in\Theta(\varrho^{2 }K_{e}^{1/2}K_{c}K_{e}^{1/2})}\mathbf{E}_{\xi,z\sim\mathsf{N}\big{(}0,I_{r(\xi) }\big{)}}\left[\left\|\widehat{\eta}(Q_{\xi},Q_{\xi}\eta^{\star}+V_{\xi}\Lambda _{\xi}^{1/2}z)-\eta^{\star}\right\|_{2}^{2}\right] \tag{108}\] \[\quad=\inf_{\widehat{\eta}}\sup_{\eta^{\star}\in\Theta(\varrho^{2 }K_{e}^{1/2}K_{c}K_{e}^{1/2})}\mathbf{E}_{\omega\sim\widetilde{\mathbb{P}},z \sim\mathsf{N}\big{(}0,I_{r(\xi)}\big{)}}\left[\left\|\widehat{\eta}(\omega,V_{ \xi}V_{\xi}^{\mathsf{T}}\eta^{\star}+V_{\xi}\Lambda_{\xi}^{-1/2}z)-\eta^{ \star}\right\|_{2}^{2}\right]\] (109) \[\quad=\mathfrak{M}^{\text{G}}_{\text{red}}(\widetilde{\mathbb{P}},\varrho^{2}K_{e}^{1/2}K_{c}K_{e}^{1/2}).\] We justify some of the relations in the display above. Since the density of \(v=T_{\xi}{K_{e}}^{-1/2}\eta^{\star}+\Sigma_{w}^{1/2}w\) is, up to constants independent of \(\eta^{\star}\), proportional to \[\exp\Big{(}-\frac{1}{2}\big{\{}\langle\eta^{\star},{K_{e}}^{-1/2}T_{\xi}^{ \mathsf{T}}\Sigma_{w}^{-1}T_{\xi}{K_{e}}^{-1/2}\eta^{\star}\rangle-2\langle v, \Sigma_{w}^{-1}T_{\xi}{K_{e}}^{-1/2}\eta^{\star}\rangle\big{\}}\Big{)},\] factorization arguments imply \(Q_{\xi}\coloneqq{K_{e}}^{-1/2}T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}T_{\xi}{K_{e} }^{-1/2}\) and \(v^{\prime}\coloneqq{K_{e}}^{-1/2}T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1}v\) are sufficient statistics for \(\eta^{\star}\). Note that \(v^{\prime}\) is distributed \(\mathsf{N}\left(Q_{\xi}\eta^{\star},Q_{\xi}\right)\). Thus, as consequence of the Rao-Blackwell theorem, any minimax optimal estimator is a function of \((Q_{\xi},v^{\prime})\), and hence display (108) follows. Similarly, any optimal estimator function is a function of any bijective function of \((Q_{\xi},v^{\prime})\). Evidently one can construct \(Q_{\xi}\) from \(\omega\coloneqq(r(\xi),V_{\xi},\Lambda_{\xi})\), and vice versa. On the other hand, \(v^{\prime}\) lies in the range of \(G(\xi)\coloneqq K_{e}^{-1/2}T_{\xi}^{\mathsf{T}}\Sigma_{w}^{-1/2}\), which is the same as the range of \(G(\xi)G(\xi)^{\mathsf{T}}=Q_{\xi}\); consequently one may replace \(v^{\prime}\) with \(Q_{\xi}^{\dagger}v^{\prime}\equiv V_{\xi}(\Lambda_{\xi})^{-1}V_{\xi}^{ \mathsf{T}}v^{\prime}\), which is distributed \(\mathsf{N}\left(V_{\xi}V_{\xi}^{\mathsf{T}}\eta^{\star},V_{\xi}(\Lambda_{\xi} )^{-1}V_{\xi}^{\mathsf{T}}\right)\), and so that display (109) follows. #### c.2.2 Proof of Lemma 4 In this argument, we use the notation \(B(\widehat{\eta},\pi\mid\omega)\) to denote the Bayes risk of estimator \(\widehat{\eta}\), conditional on \(\omega\), for the original observation \(\Upsilon\). Formally, it is the expectation \(\mathbf{E}[\|\widehat{\eta}(\Upsilon)-\eta\|_{2}^{2}]\), where the expectation is over \(\Upsilon\sim\mathsf{N}\left(VV^{\mathsf{T}},V\Lambda^{-1}V^{\mathsf{T}}\right)\). The main observation is that if we consider the projection of \(\Upsilon_{\lambda}\) onto the range of \(V\), we will recover a random variable with the same distribution as \(\Upsilon\), and therefore the risks are the same. 
Formally, let \(\widehat{\eta}\) be any estimator which is constant over the fibers of the operator \(VV^{\mathsf{T}}\). Equivalently, it can be written \[\widehat{\eta}(y)=\widehat{\eta}_{0}(VV^{\mathsf{T}}y),\qquad\text{for some measurable }\hat{\eta}_{0}.\] Let this class of estimators be denoted by \(\mathcal{E}_{V}\). Then we evidently have \[B_{\lambda}(\pi\mid\omega)\leqslant\inf_{\widehat{\eta}\in \mathcal{E}_{V}}B_{\lambda}(\widehat{\eta},\pi\mid\omega). \tag{110}\] To complete the proof of the claim, we claim that \[B_{\lambda}(\widehat{\eta},\pi\mid\omega)=B(\widehat{\eta}, \pi\mid\omega),\qquad\text{for any }\widehat{\eta}\in\mathcal{E}_{V}. \tag{111}\] This follows immediately from the fact that \(VV^{\mathsf{T}}\Upsilon_{\lambda}=\Upsilon\) with probability \(1\). We note that combination with (110) furnishes the claim, since it implies that \[B_{\lambda}(\pi\mid\omega)\leqslant\inf_{\widehat{\eta}\in \mathcal{E}_{V}}B(\widehat{\eta},\pi\mid\omega)=B(\pi\mid\omega).\] The final equality occurs since for any measurable estimator \(\widehat{\eta}\notin\mathcal{E}_{V}\), we can define \(\widehat{\eta}_{V}(y)=\widehat{\eta}(VV^{\mathsf{T}}y)\), and since \(\Upsilon=VV^{\mathsf{T}}\Upsilon\) with probability \(1\), and therefore \(B(\widehat{\eta}_{V},\pi\mid\omega)=B(\widehat{\eta},\pi\mid\omega)\), which establishes this claim. #### c.2.3 Proof of Lemma 5 Let \(\widehat{\eta}_{\pi}\) denote the posterior mean \(y\mapsto\mathbf{E}[\eta\mid\Upsilon_{\lambda}=y]\). Then, as the posterior mean \(\widehat{\eta}_{\pi}\) minimizes the Bayes risk \(\widehat{\eta}\mapsto B_{\lambda}(\widehat{\eta},\pi\mid\omega)\) over all measurable estimators \(\widehat{\eta}\), it suffices to compute the risk of \(\widehat{\eta}_{\pi}\). Note that, by definition of conditional expectation, we have \[\widehat{\eta}_{\pi}(y)=\frac{1}{p(y)}\int\eta\,p(y\mid\eta)\, \pi(\mathrm{d}\eta).\] We now compute the derivative of \(p(y)\). Exchanging integration and differentiation,10 Footnote 10: This is valid since \(y\mapsto p(y\mid\eta)\) is differentiable for each \(\eta\), and for each \(y\), we have \(\eta\mapsto p(y\mid\eta)\) and \(\eta\mapsto\nabla_{y}p(y\mid\eta)=\Sigma_{\lambda}^{-1}(X_{\lambda}\eta-y)\) are \(\pi\)-integrable (since \(0\leqslant p(y\mid\eta)\leqslant 1\), and the gradient is an affine function of \(\eta\)). \[\Sigma_{\lambda}\nabla p(y)=\int(X_{\lambda}\eta-y)\,p(y\mid \eta)\,\pi(\mathrm{d}\eta).\] Therefore, we conclude that \[\widehat{\eta}_{\pi}(y)=X_{\lambda}^{-1}\Big{(}y+\Sigma_{\lambda}\nabla\log p(y) \Big{)}.\] Finally, to compute risk of the posterior mean \(\widehat{\eta}_{\pi}(\Upsilon_{\lambda})\coloneqq\mathbf{E}[\eta\mid\Upsilon_{ \lambda}]\), we add and subtract the observation \(X_{\lambda}^{-1}\Upsilon_{\lambda}\), and find that \[\mathbf{E}_{(\eta,\Upsilon_{\lambda})}\left[(\eta-\widehat{\eta}_{\pi}( \Upsilon_{\lambda}))\otimes(\eta-\widehat{\eta}_{\pi}(\Upsilon_{\lambda})) \right]=X_{\lambda}^{-1}\Sigma_{\lambda}X_{\lambda}^{-1}-X_{\lambda}^{-1} \Sigma_{\lambda}\,\mathbf{E}[\nabla\log p(\Upsilon_{\lambda})\otimes\nabla\log p (\Upsilon_{\lambda})]\Sigma_{\lambda}X_{\lambda}^{-1}.\] Identifying the Fisher information in the display above, factoring the expression, and taking the trace yields the desired result. #### c.2.4 Proof of Lemma 6 Note that \(\pi_{\tau,\Pi}\) is evidently absolutely continuous with respect to Lebesgue measure. 
In particular, on the interior of \(\Theta(K)\), \(\pi_{\tau,\Pi}\) and \(\pi_{\tau,\Pi}^{\mathrm{G}}\) have the same Lebesgue density up to rescaling by \(\pi_{\tau,\Pi}^{\mathrm{G}}(\Theta(K))\). Denote this density by \(f_{\tau,\Pi}\). Therefore, we have \[\mathcal{I}\left(\pi_{\tau,\Pi}^{\mathrm{G}}\right) =\mathbf{E}_{\eta\sim\pi_{\tau,\Pi}^{\mathrm{G}}}\,\mathbf{1}_{ \Theta(K)}(\eta)\nabla\log f_{\tau,\Pi}(\eta)\otimes\nabla\log f_{\tau,\Pi}( \eta)+\mathbf{E}_{\eta\sim\pi_{\tau,\Pi}^{\mathrm{G}}}\,\mathbf{1}_{\Theta(K)^ {c}}(\eta)\nabla\log f_{\tau,\Pi}(\eta)\otimes\nabla\log f_{\tau,\Pi}(\eta)\] \[\geqslant\mathbf{E}_{\eta\sim\pi_{\tau,\Pi}^{\mathrm{G}}}\, \mathbf{1}_{\Theta(K)}(\eta)\nabla\log f_{\tau,\Pi}(\eta)\otimes\nabla\log f _{\tau,\Pi}(\eta)\] \[=\pi_{\tau,\Pi}^{\mathrm{G}}(\Theta(K))\mathcal{I}\left(\pi_{ \tau,\Pi}\right).\] The final equality arises since the boundary of \(\Theta(K)\) has Lebesgue measure zero. Using the well known relation \(\mathcal{I}\left(\pi_{\tau,\Pi}^{\mathrm{G}}\right)=(\tau^{2}\Pi)^{-1}\)[44, Example 6.3], the above display implies that \[\mathcal{I}\left(\pi_{\tau,\Pi}\right)^{-1}\geqslant\pi_{\tau,\Pi}^{\mathrm{G }}(\Theta(K))\tau^{2}\Pi=\tau^{2}(1-\pi_{\tau,\Pi}^{\mathrm{G}}(\Theta(K)^{c} ))\Pi.\] To ensure that \(\eta\sim\pi_{\tau,\Pi}^{\mathrm{G}}\) lies in \(\Theta(K)\) with decent probability, we take \(\Pi\) to satisfy the relation \(\mathbf{Tr}(K^{-1}\Pi)\leqslant 1\). Then defining \[c(\tau,\Pi)\coloneqq\tau^{2}(1-\pi_{\tau,\Pi}^{\mathrm{G}}(\Theta(K)^{c})),\] completes the proof of the claim. #### c.2.5 Proof of Lemma 7 Fix \(\Pi>0\) such that \(\mathbf{Tr}(\Pi^{1/2}K^{-1}\Pi^{1/2})\leqslant 1\). Let \(\lambda=(\lambda_{1},\ldots,\lambda_{d})\) denote the eigenvalues of \(\Pi^{1/2}K^{-1}\Pi^{1/2}\). The vector satisfies the inequalities \(\lambda>0,\lambda^{\mathsf{T}}\mathbf{1}\leqslant 1\). Moreover, by the rotational invariance of the Gaussian, we have for \(g\sim\mathsf{N}\left(0,I_{d}\right)\), that \[\pi_{\tau,\Pi}^{\mathrm{G}}(\Theta(K)^{c})=\mathbf{P}\left\{\tau^{2}g^{\mathsf{ T}}\Pi^{1/2}K^{-1}\Pi^{1/2}g>1\right\}=\mathbf{P}\left\{\tau^{2}\sum_{i=1}^{d} \lambda_{i}g_{i}^{2}>1\right\}.\] Let us make the choice \(\tau^{2}=1/2\). Then, note for any \(\lambda\succ 0,\lambda^{\mathsf{T}}\mathbf{1}\leqslant 1\), by Markov's inequality, \[\mathbf{P}\left\{\,\sum_{i=1}^{d}\lambda_{i}g_{i}^{2}>2\right\}\leqslant \frac{\sum_{i=1}^{d}\lambda_{i}\,\mathbf{E}[g_{i}^{2}]}{2}=\frac{1}{2}.\] Hence, using this bound in the definition of \(c(\tau,\Pi)\), we find \[c_{\ell}(K)\geqslant\inf_{\lambda\succ 0,\lambda^{\mathsf{T}}\mathbf{1}\leqslant 1 }c(1/2,\mathbf{diag}(\lambda))\geqslant\frac{1}{4},\] which completes the proof of the claim.
2308.08539
Constant-depth circuits for Uniformly Controlled Gates and Boolean functions with application to quantum memory circuits
We explore the power of the unbounded Fan-Out gate and the Global Tunable gates generated by Ising-type Hamiltonians in constructing constant-depth quantum circuits, with particular attention to quantum memory devices. We propose two types of constant-depth constructions for implementing Uniformly Controlled Gates. These gates include the Fan-In gates defined by $|x\rangle|b\rangle\mapsto |x\rangle|b\oplus f(x)\rangle$ for $x\in\{0,1\}^n$ and $b\in\{0,1\}$, where $f$ is a Boolean function. The first of our constructions is based on computing the one-hot encoding of the control register $|x\rangle$, while the second is based on Boolean analysis and exploits different representations of $f$ such as its Fourier expansion. Via these constructions, we obtain constant-depth circuits for the quantum counterparts of read-only and read-write memory devices -- Quantum Random Access Memory (QRAM) and Quantum Random Access Gate (QRAG) -- of memory size $n$. The implementation based on one-hot encoding requires either $O(n\log{n}\log\log{n})$ ancillae and $O(n\log{n})$ Fan-Out gates or $O(n\log{n})$ ancillae and $6$ Global Tunable gates. On the other hand, the implementation based on Boolean analysis requires only $2$ Global Tunable gates at the expense of $O(n^2)$ ancillae.
Jonathan Allcock, Jinge Bao, João F. Doriguello, Alessandro Luongo, Miklos Santha
2023-08-16T17:54:56Z
http://arxiv.org/abs/2308.08539v2
Constant-depth circuits for Uniformly Controlled Gates and Boolean functions with application to quantum memory circuits ###### Abstract We explore the power of the unbounded Fan-Out gate and the Global Tunable gates generated by Ising-type Hamiltonians in constructing constant-depth quantum circuits, with particular attention to quantum memory devices. We propose two types of constant-depth constructions for implementing Uniformly Controlled Gates. These gates include the Fan-In gates defined by \(|x\rangle|b\rangle\mapsto|x\rangle|b\oplus f(x)\rangle\) for \(x\in\{0,1\}^{n}\) and \(b\in\{0,1\}\), where \(f\) is a Boolean function. The first of our constructions is based on computing the one-hot encoding of the control register \(|x\rangle\), while the second is based on Boolean analysis and exploits different representations of \(f\) such as its Fourier expansion. Via these constructions, we obtain constant-depth circuits for the quantum counterparts of read-only and read-write memory devices -- Quantum Random Access Memory (QRAM) and Quantum Random Access Gate (QRAG) -- of memory size \(n\). The implementation based on one-hot encoding requires either \(O(n\log n\log\log n)\) ancillae and \(O(n\log n)\) Fan-Out gates or \(O(n\log n)\) ancillae and 6 Global Tunable gates. On the other hand, the implementation based on Boolean analysis requires only 2 Global Tunable gates at the expense of \(O(n^{2})\) ancillae. ## 1 Introduction In this work, we study the power of constant-depth quantum circuits with a focus on circuits designed for quantum memory access and the execution of Boolean functions. Our investigation has two aims: firstly, to fill the theoretical gap in our understanding of quantum memory circuits from a computational complexity perspective and, secondly, to assess the practicality of physically implementing these circuits. We believe that the properties and limitations of these circuits can highlight their feasibility and potential for practical implementations. To obtain constant-depth circuits, we leverage multi-qubit "magic"gates like the Fan-Out gate (a generalization of the CNOT that can target multiple output qubits) and the multi-qubit entangling Global Tunable gate (that arises from the time evolution of Ising-type Hamiltonians). This analysis explores the potential for quantum memory to be accessed using specialized hardware (designed, for instance, to implement such multi-qubit gates), which may differ from the hardware in general-purpose quantum computers. ### Quantum memory Quantum memory, besides being an important component of quantum computers from a theoretical perspective, is also fundamental to many quantum algorithms such as Grover's search [14], solving the dihedral hidden subgroup problem [15], collision finding [16], phase estimation for quantum chemistry [17], pattern recognition and machine learning algorithms [18, 19, 20, 21], cryptanalysis [14], state preparation [13], among others. Traditionally, there are two ways -- via a Quantum Random Access Memory (QRAM) or a Quantum Random Access Gate (QRAG) -- in which memory (classical or quantum) may be accessed quantumly. A QRAM can be seen as a "read-only" gate, while a QRAG can be interpreted as a "read-write" gate since qubits are swapped from memory into the main part of the quantum computer, acted on, and then swapped back. 
Quantum random access memory.A QRAM is the quantum analogue of a classical Random Access Memory (RAM) device that stores classical or quantum data and allows queries to be performed in superposition. More specifically, a QRAM is a device comprising a memory register M that stores either classical or quantum information, an address register A that points to the memory cell to be addressed, and a target register T, into which the content of the addressed memory cell is copied. If necessary, it also includes an auxiliary register supporting the overall operation, which is reset to its initial state at the end of the computation. A call to a QRAM (of size \(n\)) implements \[|i\rangle_{\textsf{A}}|b\rangle_{\textsf{T}}|x_{0},\ldots,x_{n-1}\rangle_{ \textsf{M}}\mapsto|i\rangle_{\textsf{A}}|b\oplus x_{i}\rangle_{\textsf{T}}|x_ {0},\ldots,x_{n-1}\rangle_{\textsf{M}},\] for all \(x_{0},\ldots,x_{n-1},b\in\{0,1\}\) and \(i=0,\ldots,n-1\). The bits \(x_{0},\ldots,x_{n-1}\) represent the data to be accessed in superposition, which are separate from the qubits in the _work register_ of a fully programmable quantum computer. Quantum random access gate.Another device for random access to a quantum memory is the so-called QRAG, which performs a swap gate between the target register and some portion of the memory register specified by the address register: \[|i\rangle_{\textsf{A}}|b\rangle_{\textsf{T}}|x_{0},\ldots,x_{n-1}\rangle_{ \textsf{M}}\mapsto|i\rangle_{\textsf{A}}|x_{i}\rangle_{\textsf{T}}|x_{0}, \ldots,x_{i-1},b,x_{i+1},\ldots,x_{n-1}\rangle_{\textsf{M}},\] for all \(x_{0},\ldots,x_{n-1},b\in\{0,1\}\) and \(i=0,\ldots,n-1\). While QRAG does not enjoy the same level of publicity as QRAMs, its importance lies in its necessity for quantum algorithms for element distinctness and collision finding [1], as well as other quantum algorithms based on random walks on graphs [1, 2]. ### Multi-qubit "magic" gates Uniformly Controlled Gate and Fan-In gate.The \(f\)-Uniformly Controlled Gate (\(f\)-UCG or simply UCG) is the unitary \(\sum_{x\in\{0,1\}^{n}}|x\rangle\langle x|\otimes f(x)\), where \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) is a mapping from \(n\)-bit strings onto the set \(\mathcal{U}(\mathbb{C}^{2\times 2})\) of single-qubit unitaries. Well-known examples of \(f\)-UCGs can be found in quantum state preparation algorithms [13, 14], quantum Monte Carlo algorithms [16] in finance, and HHL-like algorithms [15] in quantum machine learning. The \(f\)-UCG is a generalization of many multi-qubit gates including, in particular, the \(f\)-Fan-In gate (\(f\)-FIN) defined by the mapping \(|x\rangle|b\rangle\mapsto|x\rangle|b\oplus f(x)\rangle\) for a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) where \(x\in\{0,1\}^{n}\) and \(b\in\{0,1\}\). Note that an \(f\)-FIN is simply an \(f^{\prime}\)-UCG with \(f^{\prime}(x)=\mathsf{X}^{f(x)}\). Special cases of \(f\)-FIN include OR, AND, PARITY, MAJORITY, and even QRAM, since it can be implemented with \(f:\{0,1\}^{n}\times\{0,\ldots,n-1\}\to\{0,1\}\), \(f(x,i)=x_{i}\). General constructions of \(f\)-UCGs and \(f\)-FINs using single and two-qubit gates can be framed as a unitary synthesis problem. There are several results in this direction for constructing a general \(n\)-qubit unitary [1, 1, 2, 3, 4, 5, 6]. Sun et al. [13] and Yuan and Zhang [14] proposed circuits specifically for \(f\)-UCGs using one and two-qubit gates. 
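To make these gate actions concrete, here is a minimal classical sketch (our own illustration, with illustrative function names) of how an \(f\)-FIN, a QRAM call, and a QRAG call transform a computational-basis state; on superpositions each gate acts by linearity.

```python
# Basis-state action of the gates defined above (illustration only).

def fin_call(f, x: tuple, b: int):
    """f-FIN: |x>|b> -> |x>|b XOR f(x)>."""
    return x, b ^ f(x)

def qram_call(i: int, b: int, x: list):
    """QRAM: |i>|b>|x> -> |i>|b XOR x_i>|x>   (read-only: memory unchanged)."""
    return i, b ^ x[i], list(x)

def qrag_call(i: int, b: int, x: list):
    """QRAG: |i>|b>|x> -> |i>|x_i>|x_0,...,x_{i-1},b,x_{i+1},...>   (read-write)."""
    y = list(x)
    y[i], b = b, y[i]
    return i, b, y

print(fin_call(lambda z: sum(z) % 2, (1, 0, 1, 1), 0))   # PARITY as an f-FIN: ((1, 0, 1, 1), 1)
mem = [1, 0, 1, 1]
print(qram_call(2, 0, mem))   # (2, 1, [1, 0, 1, 1])
print(qrag_call(2, 0, mem))   # (2, 1, [1, 0, 0, 1])
# A QRAM of memory size n is the f-FIN with f(x, i) = x_i acting on the joint input (x, i).
```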
Regarding constructions for controlled gates of the form \(|x\rangle\langle x|\otimes\mathsf{U}+\sum_{y\in\{0,1\}^{n}\setminus\{x\}}|y \rangle\langle y|\otimes\mathbb{I}_{m}\), where \(\mathsf{U}\) is an \(m\)-qubit gate, see [1] (using one and two-qubit gates) and [15, 6, 6] (using multi-qubit entangling gates defined below). While general sequential implementations for \(f\)-FINs are folklore, there have been proposals for specific Boolean functions [1] or based on different models of computation like measurement-based quantum computation [16]. The Fan-Out gate.The Fan-Out (FO) gate on \(n+1\) qubits implements the quantum operation \(|b\rangle|x_{0},\ldots,x_{n-1}\rangle\mapsto|b\rangle|x_{0}\oplus b,\ldots,x_{ n-1}\oplus b\rangle\) for all \(x_{0},\ldots,x_{n-1},b\in\{0,1\}\). In other words, it is a sequence of CNOT gates sharing a single control qubit. For this reason, unlike classical Fan-Out gates, the ability to implement quantum Fan-Out gates as a primitive is not usually taken for granted. Indeed, the Fan-Out gate is powerful in the sense that several interesting results follow from its use, especially connected to constant-depth complexity classes (more on this below). Moore [15] and Green et al. [1] proved that Fan-Out is equivalent to the PARITY gate. Hoyer and Spalek [17] proved that \(\mathsf{EXACT}[t]\) gates (which output \(1\) if the input's Hamming weight is \(t\) and \(0\) otherwise) can be approximated with a polynomially small error by Fan-Out and single-qubit gates in constant depth. These in turn can simulate \(\mathsf{AND},\mathsf{OR}\), and THRESHOLD\([t]\) gates. Later, Takahashi and Tani [18] managed to prove that \(\mathsf{EXACT}[t]\) can be simulated _exactly_ by Fan-Out and single-qubit gates in constant depth. Unbounded Fan-Out gates that can act on any number of qubits are used in quantum complexity theory (and in this work) to compile certain circuits in constant depth. Even though unbounded Fan-Out gates are just a theoretical construction, bounded Fan-Out gates are within the reach of next-generation quantum hardware [14, 15, 16, 17, 18, 19, 20, 21, 22] and can serve as building blocks in larger Fan-Out gates, since an \(n\)-size Fan-Out gate can be simulated by \(k\)-size Fan-Out gates in \(O(\log_{k}n)\)-depth, offering interesting trade-offs for hardware implementations. The Global Tunable gate.Another powerful and physically implementable gate is the Global Tunable (GT) gate. In its simplest form, it implements a product of two-qubit controlled-Z gates: \[\prod_{i\neq j\in S}\mathsf{C}_{i}\text{-}\mathsf{Z}_{\to j}\] for some subset \(S\) of the physical qubits, where \(\mathsf{C}_{i}\text{-}\mathsf{Z}_{\to j}\) denotes a Z gate applied to qubit \(j\) controlled on qubit \(i\) being in the \(|1\rangle\) state (for the general definition see Section 4.2). The first proposal for this kind of gate dates back to Molmer and Sorensen [11], and several experimental implementations have been reported [13, 14, 15, 16, 17, 18, 19, 20, 21]. A few studies have explored the use of GT gates in constructing \(n\)-qubit Clifford gates [13, 14, 15, 16, 17], the state-of-the-art construction requires 4GTs and \(n\) ancilla or 26GTs and no ancilla [1]. Similarly to the Fan-Out [15, 18], the GT gate has been used to implement the unbounded OR gate. Constructions for 4-AND gates using 7GT gates and no ancillae were reported in [17, 16, 18]. 
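For concreteness, here is a small numpy sketch (our own illustration, with our own qubit-ordering conventions) that builds a Fan-Out gate as CNOTs sharing one control and a simple GT gate as one controlled-Z per chosen (unordered) pair of qubits.

```python
# Illustrative constructions (not from the cited works) of the Fan-Out and GT unitaries.
import numpy as np
from itertools import combinations

def fan_out(n_targets: int) -> np.ndarray:
    """|b>|x_0...x_{m-1}> -> |b>|x_0 XOR b, ..., x_{m-1} XOR b>; control is qubit 0 (MSB)."""
    dim = 2 ** (1 + n_targets)
    U = np.zeros((dim, dim))
    for idx in range(dim):
        bits = [(idx >> (n_targets - k)) & 1 for k in range(n_targets + 1)]   # [b, x_0, ..., x_{m-1}]
        b, xs = bits[0], bits[1:]
        out = 0
        for bit in [b] + [x ^ b for x in xs]:
            out = (out << 1) | bit
        U[out, idx] = 1.0
    return U

def gt_gate(n_qubits: int, pairs=None) -> np.ndarray:
    """Diagonal GT gate: a controlled-Z on every pair in `pairs` (default: all pairs)."""
    pairs = list(combinations(range(n_qubits), 2)) if pairs is None else pairs
    diag = np.ones(2 ** n_qubits, dtype=complex)
    for idx in range(2 ** n_qubits):
        bits = [(idx >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        for i, j in pairs:
            if bits[i] and bits[j]:
                diag[idx] *= -1
    return np.diag(diag)

FO, GT = fan_out(2), gt_gate(3)
assert np.allclose(FO @ FO, np.eye(8))              # Fan-Out is its own inverse
assert np.allclose(GT @ GT.conj().T, np.eye(8))     # GT is unitary (diagonal phases)
```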
Regarding general \(n\)-arity \(\mathsf{AND}\), several constructions [18, 19, 20] have been proposed, and improved to the state-of-the-art implementation of [1] using \(O(\log^{*}n)\)\(\mathsf{GT}\) gates1 with \(O(\log n)\) ancillae or using \(4\)\(\mathsf{GT}\) gates with \(O(n)\) ancillae. Footnote 1: \(\log^{*}n\) is the iterated logarithm. ### Our results In this work, we propose new constant-depth quantum circuits, based on Fan-Out and \(\mathsf{GT}\) gates, for \(f\)-\(\mathsf{UCG}\)s, which include \(f\)-\(\mathsf{FIN}\)s and certain quantum memory devices as special cases (see Fig. 1). We use two different techniques: the first based on the one-hot encoding of the input, and the second based on Boolean analysis of the function \(f\). In Section 3, we formalize our model of quantum computers with quantum access to memory. A Quantum Memory Device (\(\mathsf{QMD}\)) of size \(n\) (assume \(n\) to be a power of \(2\)) comprises a \(\log n\)-qubit address register \(\mathtt{A}\), a single-qubit target register \(\mathtt{T}\), a \(\operatorname{poly}(n)\)-qubit auxiliary register \(\mathtt{Aux}\), and an \(n\)-qubit memory \(\mathtt{M}\) consisting of \(n\) single-qubit registers \(\mathtt{M}_{0},\ldots,\mathtt{M}_{n-1}\). A call to the \(\mathsf{QMD}\) implements \[|i\rangle_{\mathtt{A}}|b\rangle_{\mathsf{T}}|x_{i}\rangle_{\mathtt{M}_{i}}|0 \rangle^{\otimes\operatorname{poly}n}_{\mathtt{Aux}}\mapsto|i\rangle_{ \mathtt{A}}\big{(}\mathsf{V}(i)|b\rangle_{\mathsf{T}}|x_{i}\rangle_{\mathtt{ M}_{i}}\big{)}|0\rangle^{\otimes\operatorname{poly}n}_{\mathtt{Aux}},\] where \(\mathsf{V}:\{0,\ldots,n-1\}\to\mathcal{V}\) and \(\mathcal{V}\) is an \(O(1)\)-size subset of two-qubit gates. Our model is general enough to include \(\mathsf{QRAM}\) and \(\mathsf{QRAG}\) as subcases (by letting \(\mathsf{V}(i)\) equal \(\mathsf{CNOT}\) or \(\mathsf{SWAP}\) gates). It also includes \(\mathsf{QMD}\)s that we named \(f\)-\(\mathsf{QRAM}\) for which \(\mathsf{V}(i)=\mathbb{I}_{1}\otimes|0\rangle\langle 0|_{\mathtt{M}_{i}}+f(i) \otimes|1\rangle\langle 1|_{\mathtt{M}_{i}}\), where \(f:\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\). As far as we know, a general model for a \(\mathsf{QMD}\) has not been formally defined before. This model allows us to compare the power of different gates with quantum access to memory. In this direction, we show that a \(\mathsf{QRAG}\) can simulate a \(\mathsf{QRAM}\), but not vice-versa, and we discuss the similarities and differences between \(\mathsf{QMD}\) and \(f\)-\(\mathsf{UCG}\). In particular, even though \(f\)-\(\mathsf{UCG}\)s do not contain general \(\mathsf{QMD}\)s, since \(\mathsf{V}(i)\) can act non-trivially on two qubits, an \(f\)-\(\mathsf{QRAM}\) of memory size \(n\) (i.e., \(f:\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\)) can be seen as an \(f^{\prime}\)-\(\mathsf{UCG}\) for some function \(f^{\prime}\) on \(\{0,1\}^{n+\log n}\) (see Fig. 1). Figure 1: We give constant-depth circuits for \(f\)-\(\mathsf{UCG}\)s, which contain \(f\)-\(\mathsf{FIN}\)s and a subset of quantum memory devices (\(\mathsf{QMD}\)) including \(\mathsf{QRAM}\) (and its generalization we call \(f\)-\(\mathsf{QRAM}\)) as special cases. A refined analysis gives improved constructions for \(f\)-\(\mathsf{FIN}\)s compared with general \(f\)-\(\mathsf{UCG}\)s. Although \(\mathsf{QRAG}\) is not an \(f\)-\(\mathsf{FIN}\), our (one-hot encoding based) construction for \(\mathsf{QRAM}\) can be adapted to apply to it. 
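As a small illustration of the \(f\)-QRAM building block just described (a sketch under our own ordering convention: target qubit first, memory cell \(\mathsf{M}_{i}\) second), the two-qubit unitary \(\mathsf{V}(i)=\mathbb{I}_{1}\otimes|0\rangle\langle 0|_{\mathsf{M}_{i}}+f(i)\otimes|1\rangle\langle 1|_{\mathsf{M}_{i}}\) can be written down directly; choosing \(f(i)=\mathsf{X}\) recovers the QRAM case, while \(\mathsf{V}(i)=\mathsf{SWAP}\) gives the QRAG case.

```python
# The two-qubit QMD block V(i) acting on (target) x (memory cell M_i) -- illustration only.
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])          # |0><0| and |1><1| on the memory cell
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def f_qram_block(f_i: np.ndarray) -> np.ndarray:
    """V(i) for an f-QRAM: apply the single-qubit gate f(i) to the target iff the memory cell holds 1."""
    return np.kron(I2, P0) + np.kron(f_i, P1)

V_qram, V_qrag = f_qram_block(X), SWAP
for V in (V_qram, V_qrag):
    assert np.allclose(V @ V.conj().T, np.eye(4))           # each block is a valid two-qubit unitary
```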
In Section 4, we discuss the Fan-Out and \(\mathsf{GT}\) gates in more detail. In Section 5, we develop our quantum circuits based on one-hot encoding. The main idea is to use Fan-Out or \(\mathsf{GT}\) gates to compute, in parallel, the one-hot encoding \(e(x)\in\{0,1\}^{2^{n}}\) of the control register \(|x\rangle\), where \(e(x)_{j}=1\) if and only if \(j=x\), and to apply the single-qubit gate \(f(j)\) controlled on the qubit \(|e(x)_{j}\rangle\), for all \(j\in\{0,1\}^{n}\). By the definition of the one-hot encoding, the correct gate \(f(x)\) is selected. To perform all controlled single-qubit gates \(f(j)\) in parallel, we use the well-known \(\mathsf{Z}\)-decomposition of single-qubit gates stating the existence of functions \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\) such that \[f(j)=e^{i\pi\alpha(j)}\mathsf{Z}(\beta(j))\mathsf{HZ}(\gamma(j))\mathsf{HZ}( \delta(j)),\] for all \(j\in\{0,1\}^{n}\), where \(\mathsf{H}:=\frac{1}{\sqrt{2}}\left(\begin{smallmatrix}1&1\\ 1&-1\end{smallmatrix}\right)\) and \(\mathsf{Z}(\theta):=\left(\begin{smallmatrix}1&0\\ 0&e^{i\pi\theta}\end{smallmatrix}\right)\) for \(\theta\in[-1,1]\). By a result of Hoyer and Spalek [10], \(m\) commuting gates can be performed in parallel with the aid of \(m-1\) ancillae and \(2\) Fan-Out gates, or simply \(1\)\(\mathsf{GT}\) gate and no ancillae. All the \(\mathsf{Z}(\delta(j))\) gates can thus be performed in parallel (and similarly for \(\mathsf{Z}(\gamma(j))\), \(\mathsf{Z}(\beta(j))\), and \(e^{i\pi\alpha(j)}\)). Naively, one can compute the one-hot encoding of the whole input \(x\). However, if \(f\) is a junta, i.e., it depends on only a few coordinates, one only needs to compute the one-hot encoding of the coordinates on which it depends. More generally, this "compression" idea can be extended to a concept we introduce and call \((J,r)\)-junta, where \(J\subseteq[n]\) and \(r\in\mathbb{N}\). We say \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) is a \((J,r)\)-junta if, by fixing the coordinates in \(\overline{J}:=[n]\setminus J\) to any value, the resulting restriction of \(f\) to \(J\) is an \(r\)-junta, i.e., it depends on at most \(r\) of its input coordinates. A fine example of a \((J,r)\)-junta is \(\mathsf{QRAM}\), since by fixing the coordinates of input \(i\), the resulting restriction is a \(1\)-junta (as it depends only on \(x_{i}\)). It is possible to take advantage of this property and simplify our circuit construction: we partition the input \(x\) into sub-strings \(x_{\overline{J}}\) and \(x_{J}\) and compute the one-hot encoding of \(x_{\overline{J}}\) separately from the one-hot encoding of the coordinates in \(J\) that the restriction of \(f\) depends on. Both one-hot encodings are then used to select the correct \(f(x)\) gate as described above. The resources required for our constructions are as follows (see also Table 1). **Result 1** (Informal version of Theorem 26).: _Let \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) be a \((J,r)\)-junta, \(|\overline{J}|=t\). The \(f\)-\(\mathsf{UCG}\) can be implemented in constant depth using either \(O(2^{t+r}(t+r)\log(t+r))\) ancillae and \(0(2^{t+r}(t+r))\) Fan-Out gates or \(O(2^{t+r}(t+r))\) ancillae and \(9\)\(\mathsf{GT}\) gates. 
As a corollary, any \(f^{\prime}\)-\(\mathsf{QRAM}\) of size \(n\), \(f^{\prime}:\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), can be implemented in constant depth using either \(O(n\log n\log\log n)\) ancillae and \(O(n\log n)\) Fan-Out gates or \(O(n\log n)\) ancillae and \(9\)\(\mathsf{GT}\) gates._ We then tailor Result 1 to \(f\)-\(\mathsf{FIN}\)s specifically, given their simpler structure compared to \(f\)-\(\mathsf{UCGs}\). The number of ancillae and Fan-Out gates are asymptotically the same, and the number of \(\mathsf{GT}\) gates is reduced to \(6\) (see Table 2). In particular, we apply the \(f\)-\(\mathsf{FIN}\) results to \(\mathsf{QRAMs}\) and also show how to implement a \(\mathsf{QRAG}\) in constant depth, even though it is not an \(f\)-\(\mathsf{FIN}\) (see Table 3). **Result 2** (Informal version of Theorems 28 and 29).: _A \(\mathsf{QRAM}\) of size \(n\) can be implemented in constant depth using either \(O(n\log n\log\log n)\) ancillae and \(O(n\log n)\) Fan-Out gates or \(O(n\log n)\) ancillae and \(6\)\(\mathsf{GT}\) gates. A \(\mathsf{QRAG}\) of size \(n\) can be implemented in constant depth using either \(O(n\log n\log\log n)\) ancillae and \(O(n\log n)\) Fan-Out gates or \(O(n\log n)\) ancillae and \(9\)\(\mathsf{GT}\) gates._ In Section 6, we extend ideas from [11, 12] to implement \(f\)-\(\mathsf{UCGs}\) in constant depth using tools from the analysis of Boolean functions. We give three slightly different constructions based on different representations of a real-valued Boolean function \(g:\{0,1\}^{n}\to\mathbb{R}\). The first representation is the Fourier expansion (over the reals) \[g(x)=\sum_{S\subseteq[n]}\widehat{g}(S)\chi_{S}(x),\] where \(\widehat{g}(S)=\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}g(x)\chi_{S}(x)\), for \(S\subseteq[n]\), are the Fourier coefficients of \(g\) and \(\chi_{S}(x):=(-1)^{\sum_{i\in S}x_{i}}\) is a \(\mathsf{PARITY}\) function over \(\{-1,1\}\) (also called characteristic function). The second representation is based on the existence of a function \(p:\{0,1\}^{n}\to\mathbb{R}\) with a (potentially) sparse Fourier expansion that approximates \(g\) up to an additive error \(\epsilon>0\), \(\max_{x\in\{0,1\}^{n}}|p(x)-g(x)|\leq\epsilon\). Finally, the third representation is the Fourier expansion of \(g\) using \(\mathsf{AND}\) functions instead of \(\mathsf{PARITY}\) functions, which is sometimes called a real-polynomial representation over \(\{0,1\}\): \[g(x)=\sum_{S\subseteq[n]}\widetilde{g}(S)x^{S},\] for coefficients \(\widetilde{g}(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}g(T)\) (see Section 2.1 for more details) and \(x^{S}:=\prod_{i\in S}x_{i}\). In the case of Boolean functions \(g:\{0,1\}^{n}\to\{0,1\}\), the above representation over the reals can be "compressed" into a representation over the \(\mathbb{F}_{2}\) field, also known as algebraic normal form, as \[g(x)=\bigoplus_{S\subseteq[n]}\widetilde{g}_{\mathbb{F}_{2}}(S)x^{S},\] where \(\widetilde{g}_{\mathbb{F}_{2}}(S)\in\{0,1\}\) is given by \(\widetilde{g}_{\mathbb{F}_{2}}(S)=\widetilde{g}(S)\mod 2\). This is true since \(\widetilde{g}(S)\in\mathbb{Z}\) for \(g:\{0,1\}^{n}\to\{0,1\}\). The utility of each of the above representations depends on the Boolean properties of \(g\), e.g. 
its Fourier support \(\operatorname{supp}(g):=\{S\subseteq[n]:\widehat{g}(S)\neq 0\}\), (real) \(\{0,1\}\)-support \(\operatorname{supp}_{\{0,1\}}(g):=\{S\subseteq[n]:\widetilde{g}(S)\neq 0\}\), and, for Boolean functions \(g:\{0,1\}^{n}\to\{0,1\}\), its \(\mathbb{F}_{2}\)-support \(\operatorname{supp}_{\mathbb{F}_{2}}(g):=\{S\subseteq[n]:\widetilde{g}_{ \mathbb{F}_{2}}(S)\neq 0\}\). Other relevant properties of \(g\) are its Fourier support \(\operatorname{supp}^{>k}(g):=\{S\subseteq[n]:|S|>k,\widetilde{g}(S)\neq 0\}\) at degree greater than \(k\) (similarly for \(\operatorname{supp}^{=k}(g)\), \(\operatorname{supp}^{>k}_{\{0,1\}}(g)\), and \(\operatorname{supp}^{>k}_{\mathbb{F}_{2}}(g)\)), its real degree \(\deg(g):=\{|S|:S\in\operatorname{supp}(g)\}\), its Fourier 1-norm \(\hat{\|}g\hat{\|}_{1}:=\sum_{S\subseteq[n]}|\widehat{g}(S)|\), and \(\|g^{>k}\hat{\|}_{1}:=\sum_{S\subseteq[n]:|S|>k}|\widehat{g}(S)|\). We can generalize the above properties to operator-valued functions \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) in an indirect way by applying Boolean analysis to the functions \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\) arising from \(f\)'s \(\mathsf{Z}\)-decomposition and defining, for instance, \(\operatorname{supp}(f):=\operatorname{supp}(\alpha)\cup\operatorname{supp}( \beta)\cup\operatorname{supp}(\gamma)\cup\operatorname{supp}(\delta)\) and \(\deg(f):=\max\{\deg(\alpha),\deg(\beta),\deg(\gamma),\deg(\delta)\}\). Similar definitions apply to \(\operatorname{supp}^{>k}(f)\), \(\operatorname{supp}^{=k}(f)\), \(\operatorname{supp}_{\{0,1\}}(f)\), and \(\operatorname{supp}^{>k}_{\{0,1\}}(f)\). Note that other extensions of Boolean analysis exist in the literature and had been applied to problems in quantum computation [20, 21, 22, 23, 24, 25]. However, this extension based on \(\mathsf{Z}\)-decomposition may be of independent interest. The idea behind our constructions for \(f\)-\(\mathsf{UCS}\)s in Section 6 is to reconstruct the functions \(\alpha,\beta,\gamma,\delta\) using one of the aforementioned representations. Consider e.g. the Fourier expansion of \(\alpha,\beta,\gamma,\delta\). First we compute the terms \(\chi_{S}(x)\) in parallel using Fan-Out or \(\mathsf{GT}\) gates, since \(\chi_{S}(x)\) are \(\mathsf{PARITY}\) functions. Since \(\prod_{S\in\operatorname{supp}(\delta)}\mathsf{Z}(\widehat{\delta}(S)\chi_{S} (x))=\mathsf{Z}(\sum_{S\in\operatorname{supp}(\delta)}\widehat{\delta}(S) \chi_{S}(x))=\mathsf{Z}(\delta(x))\), it is possible to apply \(\mathsf{Z}(\delta(x))\) onto a target qubit by simply applying onto this target qubit a sequence of phases \(\mathsf{Z}(\widehat{\delta}(S))\) controlled on \(\chi_{S}(x)\), for \(S\in\operatorname{supp}(\delta)\). This sequence of controlled phases \(\prod_{S\in\operatorname{supp}(\delta)}\mathsf{Z}(\widehat{\delta}(S)\chi_{S} (x))\) can be performed in constant depth in the case of \(\mathsf{GT}\) gates by definition. In the case of Fan-Outs, it can be done by using techniques from Hoyer and Spalek [14]. More precisely, first compute a cat state \((|0\rangle^{\otimes m}+|1\rangle^{\otimes m})/\sqrt{2}\) from the target qubit using one Fan-Out, where \(m:=|\operatorname{supp}(\delta)|\), followed by applying the controlled phases \(\mathsf{Z}(\widehat{\delta}(S))\) onto _different_ qubits of the cat state. This yields \((|0\rangle^{\otimes m}+(-1)^{\sum_{S}\widehat{\delta}(S)\chi_{S}(x)}|1\rangle ^{\otimes m})/\sqrt{2}=\mathsf{Z}(\delta(x))(|0\rangle^{\otimes m}+|1\rangle^{ \otimes m})/\sqrt{2}\). 
Finally, uncompute the cat state with another Fan-Out. The same idea applies to \(\alpha,\beta,\gamma\) and the other two representations (for the real \(\{0,1\}\)-representation we compute \(x^{S}\) instead of \(\chi_{S}(x)\)). The resources required for our constructions are stated below (see Table 1). In the following, we say that a quantum circuit implements an \(f\)-\(\mathsf{UCG}\) with spectral norm error at most \(\epsilon\) if it implements an \(f^{\prime}\)-\(\mathsf{UCG}\) such that the spectral norm \(\|f^{\prime}(x)-f(x)\|\) is at most \(\epsilon\) for all \(x\in\{0,1\}^{n}\). **Result 3** (Informal version of Theorems 30, 32, 33).: _Let \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) with \(\mathsf{Z}\)-decomposition \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\). We propose constant-depth quantum circuits that implement \(f\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{--UCG}}}}}}\)_ * _exactly using_ * _either_ \(O\big{(}\sum_{S\in\operatorname{supp}(f)}|S|\big{)}\) _ancillae and_ \(O\big{(}|\operatorname{supp}^{>1}(f)|+\big{|}\bigcup_{S\in\operatorname{supp}^ {>1}(f)}S|\big{)}\) _Fan-Outs,_ * _or_ \(O(|\operatorname{supp}^{>1}(f)|)\) _ancillae and_ \(5\mathsf{\mathsf{\mathsf{GT}}}\) _gates;_ * _with spectral norm error at most_ \(\epsilon>0\) _using_ * _either_ \(O(s\deg(f)+|\operatorname{supp}^{=1}(f)|)\) _ancillae and_ \(O(s+|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|)\) _Fan-Outs,_ * _or_ \(O(s)\) _ancillae and_ \(5\mathsf{\mathsf{\mathsf{GT}}}\) _gates,_ _where_ \(s:=(n/\epsilon^{2})\sum_{\nu\in\{\alpha,\beta,\gamma,\delta\}}\|\nu^{>1}\|_{1}^ {2}\)_;_ * _exactly using_ * _either_ \(O\big{(}\sum_{S\in\operatorname{supp}_{\{0,1\}}(f)}|S|\log(1+|S|)\big{)}\) _ancillae and_ \(O\big{(}\sum_{S\in\operatorname{supp}^{>1}_{\{0,1\}}(f)}|S|\big{)}\) _Fan-Outs,_ * _or_ \(O\big{(}\sum_{S\in\operatorname{supp}^{>1}_{\{0,1\}}(f)}|S|\big{)}\) _ancillae and_ \(9\mathsf{\mathsf{\mathsf{GT}}}\) _gates._ Similarly to the one-hot-encoding-based constructions, we then simplify our Boolean-based constructions to \(f\mathsf{\mathsf{\mathsf{--FIN}}}\), which mainly reduces the number of \(\mathsf{\mathsf{\mathsf{GT}}}\) gates (see Table 2), and apply them to QRAMs, thus showing that it is possible to use fewer \(\mathsf{\mathsf{\mathsf{GT}}}\) gates at the price of more ancillary qubits (see Table 3). We say that a quantum circuit implements an \(f\mathsf{\mathsf{\mathsf{--FIN}}}\) with spectral norm error at most \(\epsilon\) if it implements an \(f^{\prime}\mathsf{\mathsf{--UCG}}\) such that \(\max_{x\in\{0,1\}^{n}}\|f^{\prime}(x)-\mathsf{X}^{f(x)}\|\leq\epsilon\). **Result 4** (Informal version of Theorem 38).: \(A\) QRAM _of size \(n\) can be implemented in constant depth using either \(O(n^{2}\log n)\) ancillae and \(O(n^{2})\) Fan-Out gates or \(O(n^{2})\) ancillae and \(2\mathsf{\mathsf{\mathsf{GT}}}\) gates._ Depending on the properties of \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), one construction can be more desirable compared to the others when it comes to implementing an \(f\mathsf{\mathsf{--UCG}}\) (similarly for \(f\mathsf{\mathsf{--FIN}}\)). A \((J,r)\)-junta for small \(|\overline{J}|\) and \(r\) might call for a one-hot-encoding-based construction, while a function with sparse Fourier expansion could be more easily implementable using a Boolean-based circuit. The four different constructions presented above are thus incomparable. 
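As a quick numerical illustration of the identity underlying the Fourier-expansion-based construction (a sketch of ours, not the constant-depth circuit itself): multiplying the phases \(\mathsf{Z}(\widehat{\delta}(S)\chi_{S}(x))\) over all \(S\) reproduces exactly \(\mathsf{Z}(\delta(x))\).

```python
# Check that prod_S exp(i*pi*hat{delta}(S)*chi_S(x)) = exp(i*pi*delta(x))   (illustration only).
import numpy as np
from itertools import product

n = 3
rng = np.random.default_rng(1)
delta = {x: rng.uniform(-1, 1) for x in product((0, 1), repeat=n)}     # an arbitrary delta: {0,1}^n -> [-1,1]

def chi(S, x):                                                          # chi_S(x) = (-1)^{sum_{i in S} x_i}
    return (-1) ** sum(x[i] for i in S)

subsets = [tuple(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]
fourier = {S: sum(delta[x] * chi(S, x) for x in delta) / 2 ** n for S in subsets}   # hat{delta}(S)

for x in delta:
    accumulated = np.prod([np.exp(1j * np.pi * fourier[S] * chi(S, x)) for S in subsets])
    assert np.isclose(accumulated, np.exp(1j * np.pi * delta[x]))
```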
Nonetheless, in the worst case, the Boolean-based implementation using the Fourier expansion (Theorems 30 and 34) requires fewer resources: either \(O(2^{n})\) Fan-Out gates and \(O(2^{n}n)\) ancillae, or \(5\mathsf{\mathsf{\mathsf{GT}}}\) gates and \(O(2^{n})\) ancillae. **Result 5**.: _Any \(f\mathsf{\mathsf{--UCG}}\) with \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) can be implemented in constant depth using either \(O(2^{n}n)\) ancillae and \(O(2^{n})\) Fan-Out gates, or \(O(2^{n})\) ancillae and \(5\mathsf{\mathsf{\mathsf{GT}}}\) gates._ \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Result} & \multicolumn{2}{c|}{Fan-Out construction} & \multicolumn{2}{c|}{\(\mathsf{\mathsf{\mathsf{GT}}}\) construction} \\ \cline{2-3} & \#Fan-Out & \#Ancillae & \#GT & \#Ancillae \\ \hline \(f\mathsf{\mathsf{--UCG}}\) (\(*\)) & \(O(n+2^{t+r}(t+r))\) & \(O(2^{t+r}(t+r)\log(t+r))\) & \(9\) & \(O(2^{t+r}(t+r))\) \\ \hline \(f\mathsf{\mathsf{--UCG}}\) & \(O\big{(}|\operatorname{supp}^{>1}(f)|+|\bigcup_{S\operatorname{supp}^{>1}(f)}S| \big{)}\) & \(O\left(\sum_{S\in\operatorname{supp}(f)}|S|\right)\) & \(5\) & \(O(|\operatorname{supp}^{>1}(f)|)\) \\ \hline \(f\mathsf{\mathsf{--UCG}}\) (\(\dagger\)) & \(O\left(s+|\bigcup_{S\operatorname{supp}^{>1}(f)}S|\right)\) & \(O(s\deg(f)+|\operatorname{supp}^{-1}(f)|)\) & \(5\) & \(O(s)\) \\ \hline \(f\mathsf{\mathsf{--UCG}}\) & \(O\Big{(}\sum_{S\in\operatorname{supp}^{>1}_{\{0,1\}}(f)}|S|\Big{)}\) & \(O\Big{(}\sum_{S\in\operatorname{supp}_{\{0,1\}}(f)}|S|\log(1+|S|)\Big{)}\) & \(9\) & \(O\Big{(}\sum_{S\in\operatorname{supp}^{>1}_{\{0,1\}}(f)}|S|\Big{)}\) \\ \hline \end{tabular} \end{table} Table 1: Main results for \(f\mathsf{\mathsf{--UCG}}\), where \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) has the \(\mathsf{Z}\)-decomposition \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\). In \((*)\), \(f\) is a \((J,r)\)-junta with \(|\overline{J}|=t\). In \((\ddagger)\), the gate is implemented with spectral norm error at most \(\epsilon\) and \(s:=(n/\epsilon^{2})\sum_{\nu\in\{\alpha,\beta,\gamma,\delta\}}\|\nu^{>1}\|_{1}^ {2}\). ### Related work Constant-depth complexity classes.We recall the main classical classes computed by constant-depth and polynomial-size circuits: * \(\mathsf{NC}^{0}\) with \(\mathsf{NOT}\) and bounded \(\mathsf{AND},\mathsf{OR}\) gates; * \(\mathsf{AC}^{0}\) with \(\mathsf{NOT}\) and unbounded \(\mathsf{AND},\mathsf{OR}\) gates; * \(\mathsf{TC}^{0}\) with \(\mathsf{NOT}\) and unbounded \(\mathsf{AND},\mathsf{OR},\mathsf{THRESHOLD}[t]\) gates for all \(t\); * \(\mathsf{AC}^{0}[q]\) with \(\mathsf{NOT}\) and unbounded \(\mathsf{AND},\mathsf{OR},\mathsf{MOD}[q]\) gates; * \(\mathsf{ACC}^{0}=\bigcup_{q}\mathsf{AC}^{0}[q]\). The study of shallow quantum circuit classes was initiated in [10, 10], which introduced a definition of \(\mathsf{QNC}^{0}\), the quantum analogue of the class \(\mathsf{NC}^{0}\). The remaining quantum analogs of the above circuit classes such as \(\mathsf{QAC}^{0},\mathsf{QTC}^{0},\mathsf{QAC}^{0}[q]\), and \(\mathsf{QACC}^{0}\) were later defined in [1]. In the same paper, the authors introduced expanded versions of the aforementioned classes in which Fan-Out gates are also allowed. 
For example, the class \(\mathsf{QAC}^{0}_{f}\) consists of problems solvable by constant-depth and polynomial-size quantum circuits composed by Fan-Out gates and unbounded \(\mathsf{AND},\mathsf{OR}\) gates (and similarly for the remaining classes \(\mathsf{QTC}^{0}_{f},\mathsf{QAC}^{0}_{f}[q],\mathsf{QACC}^{0}_{f}\)). Moore [11] and Green et al. [1] proved that, for any \(q>1\), \(\mathsf{QAC}^{0}_{f}=\mathsf{QACC}^{0}[q]=\mathsf{QACC}^{0}\). This result differs greatly from the classical result [22] that \(\mathsf{AC}^{0}[p]\neq\mathsf{AC}^{0}[q]\) for primes \(p\neq q\). The power of Fan-Out was further explored in [13] who proved that the bounded-error versions of \(\mathsf{QNC}^{0}_{f},\mathsf{QAC}^{0}_{f},\mathsf{QTC}^{0}_{f}\) are equal. Later, [14] managed to collapse the hierarchy of constant-depth exact quantum circuits: \(\mathsf{QNC}^{0}_{f}=\mathsf{QAC}^{0}_{f}=\mathsf{QTC}^{0}_{f}\). This is in sharp contrast to the classical result \(\mathsf{NC}^{0}\subset\mathsf{AC}^{0}\subset\mathsf{TC}^{0}\). Still, open problems abound, e.g. where \(\mathsf{QAC}^{0}\) and \(\mathsf{QAC}^{0}_{f}\) are equal or not. In this direction, see [13, 14, 15]. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Result} & \multicolumn{2}{c|}{Fan-Out construction} & \multicolumn{2}{c|}{\(\mathsf{GT}\) construction} \\ \cline{2-5} & \#Fan-Out & \#Ancillae & \#\(\mathsf{GT}\) & \#Ancillae \\ \hline \(f\)-FIN (\(*\)) & \(O(n+2^{t+r}(t+r))\) & \(O(2^{t+r}(t+r)\log(t+r))\) & 6 & \(O(2^{t+r}(t+r))\) \\ \hline \(f\)-FIN [10] & \(O\left(|\operatorname{supp}^{>1}(f)|+|\bigcup_{S\operatorname{supp}^{>1}(f)}S|\right)\) & \(O\left(\sum_{S\operatorname{supp}(f)}|S|\right)\) & 2 & \(O(|\operatorname{supp}^{>0}(f)|)\) \\ \hline \(f\)-FIN (\(\ddagger\)) & \(O\left(s+|\bigcup_{S\operatorname{supp}^{>1}(f)}S|\right)\) & \(O(s\deg(f)+|\operatorname{supp}^{=1}(f)|)\) & 2 & \(O(s+|\operatorname{supp}^{=1}(f)|)\) \\ \hline \(f\)-FIN [10] & \(O\left(\sum_{S\operatorname{supp}^{>1}_{2}(f)}|S|\right)\) & \(O\left(\sum_{S\operatorname{supp}^{>1}_{2}(f)}|S|\log(1+|S|)\right)\) & 6 & \(O\left(\sum_{S\operatorname{supp}^{>1}_{2}(f)}|S|\right)\) \\ \hline \end{tabular} \end{table} Table 2: Main results for \(f\)-FIN, where \(f:\{0,1\}^{n}\to\{0,1\}\). In (\(*\)), \(f\) is a \((J,r)\)-junta with \(|\overline{J}|=t\). In (\(\ddagger\)), the gate is implemented with spectral norm error at most \(\epsilon\) and \(s:=n\|f^{>1}\|_{1}^{2}/\epsilon^{2}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Result} & \multicolumn{2}{c|}{Fan-Out construction} & \multicolumn{2}{c|}{\(\mathsf{GT}\) construction} \\ \cline{2-5} & \#Fan-Out & \#Ancillae & \#\(\mathsf{GT}\) & \#Ancillae \\ \hline QRAM [10] & \(O(n\log n)\) & \(O(n\log n\log\log n)\) & 6 & \(O(n\log n)\) \\ \hline QRAM [10] & \(O(n^{2})\) & \(O(n^{2}\log n)\) & 2 & \(O(n^{2})\) \\ \hline QRAG [10] & \(O(n\log n)\) & \(O(n\log n\log\log n)\) & 9 & \(O(n\log n)\) \\ \hline \end{tabular} \end{table} Table 3: Main results for QRAM and QRAG with memory size \(n\). Regarding the class \(\mathsf{QNC}^{0}\) more specifically, it has been an object of great interest since its proposal. A series of works [14, 1, 1, 1, 15] gave evidence that sampling from the output distribution of shallow quantum circuits cannot be simulated by polynomial-time classical computers. Recently, a new line of research starting in [1] is focused in proving unconditional separation between the classical and quantum constant-depth circuits [11, 1, 15, 16, 17, 18, 19, 20, 21]. 
Quantum state preparation.Quantum state preparation (\(\mathsf{QSP}\)) is the problem of constructing an \(n\)-qubit quantum state \(|\psi\rangle\) starting from the initial state \(|0\rangle^{\otimes n}\) and classical knowledge of the amplitudes of \(|\psi\rangle\). To our knowledge, the first results for efficient state preparation are [1, 1], the latter using oracle access (which can be implemented with a \(\mathsf{QRAM}\)) to a set of precomputed partial integrals. Since then, several constructions have been proposed [1, 1, 1, 1, 1, 1, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. ## 2 Preliminaries Denote \(\mathbb{N}=\{1,2,\dots\}\) and \([n]:=\{0,\dots,n-1\}\). Let \([n]^{\leq m}\) be the set of sequences of size at most \(m\). We shall often equate the decimal and binary representations of a given number. Given \(x=x_{0}x_{1}\dots x_{n-1}\in\{0,1\}^{n}\), let \(|x|\) be its Hamming weight and \(\overline{x}\) its bit-wise negation, i.e., \(\overline{x}_{i}=x_{i}\oplus 1\) for all \(i\in[n]\). The one-hot encoding \(e(x)\in\{0,1\}^{2^{n}}\) of a string \(x\in\{0,1\}^{n}\) is defined such that \(e(x)_{j}=1\) if and only if \(j=x\), \(j\in\{0,1\}^{n}\), and can be calculated as \(e(x)_{j}=\bigwedge_{k\in[n]}(x\oplus\overline{j})_{k}\). We take logarithms to the base \(2\). Given \(\mathsf{A}\in\mathbb{C}^{n\times n}\), its spectral norm is \(\|\mathsf{A}\|:=\max_{v\in\mathbb{C}^{n}:\|v\|_{2}=1}\|\mathsf{A}v\|_{2}\). Let \(\mathcal{U}(\mathbb{C}^{n\times n})\) be the set of \(n\times n\) unitary matrices. Let \(\mathbb{I}_{n}\) be the \(2^{n}\times 2^{n}\) identity matrix, \(\mathsf{X},\mathsf{Y},\mathsf{Z}\) the usual Pauli matrices, and \(\mathsf{H}\) the Hadamard gate. For \(\theta\in[-1,1]\), define \(\mathsf{Z}(\theta):=\left(\begin{smallmatrix}1&0\\ 0&e^{i\pi\theta}\end{smallmatrix}\right)\). * Given an ordered sequence \(I\in[n]^{m}\) of \(m\) distinct elements and a unitary \(\mathsf{U}\in\mathcal{U}(\mathbb{C}^{2^{m}\times 2^{m}})\), let \(\mathsf{U}_{\to I}\in\mathcal{U}(\mathbb{C}^{2^{n}\times 2^{n}})\) be the unitary that applies \(\mathsf{U}\) onto qubits in \(I\) and the identity onto the remaining qubits, i.e., \(\mathsf{U}_{\to I}|x\rangle=(\mathsf{U}|x_{I}\rangle)|x_{\overline{I}}\rangle\). If \(I=(i)\in[n]\), write \(\mathsf{U}_{\to i}\). * Given an ordered sequence \(I\in[n]^{m}\) of \(m\) distinct elements, \(S\subseteq[n]\setminus I\), and a unitary \(\mathsf{U}\in\mathcal{U}(\mathbb{C}^{2^{m}\times 2^{m}})\), let \(\mathsf{C}_{S}\text{-}\mathsf{U}_{\to I}\in\mathcal{U}(\mathbb{C}^{2^{n} \times 2^{n}})\) be the unitary that applies \(\mathsf{U}\) onto qubits in \(I\) controlled on all qubits in \(S\) being in the \(|1\rangle\) state and the identity onto the remaining qubits (define \(\mathsf{C}_{\emptyset}\text{-}\mathsf{U}_{\to I}:=\mathsf{U}_{\to I}\) if \(S=\emptyset\)). As an example, \(\mathsf{C}_{S}\text{-}\mathsf{X}_{\to i}\) is the \(\mathsf{X}\) gate applied onto qubit \(i\) controlled on qubits in \(S\) being in the \(|1\rangle\) state (if \(|S|=1\), this is just a \(\mathsf{CNOT}\) gate). Let \(\mathsf{SWAP}_{i\leftrightarrow j}\) be the gate that swaps qubits \(i,j\in[n]\) and \(\mathsf{C}_{k}\text{-}\mathsf{SWAP}_{i\leftrightarrow j}\) its controlled version on qubit \(k\in[n]\setminus\{i,j\}\). In the present work, we use a one and two-qubit universal gate set \(\mathcal{G}\), e.g. 
\(\mathsf{H}\), \(\mathsf{CNOT}\), and \(\mathsf{Z}(\theta)\) for any \(\theta\in[-1,1]\), supplemented with the global interacting Fan-Out and Global Tunable gates formally defined in Section 4. **Fact 6** (\(\mathsf{Z}\)-decomposition, [11, Theorem 4.1]).: _Let \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) be a function onto single-qubit gates. Then there are functions \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\) such that_ \[f(x)=e^{i\pi\alpha(x)}\mathsf{Z}(\beta(x))\mathsf{HZ}(\gamma(x))\mathsf{HZ}(\delta(x)),\] _for all \(x\in\{0,1\}^{n}\). We say that the tuple \((\alpha,\beta,\gamma,\delta)\) is the \(\mathsf{Z}\)-decomposition of \(f\)._ The size or arity of a gate is the number of qubits on which it depends and/or acts, e.g. a \(\mathsf{C}_{S}\text{-}\mathsf{U}_{\to i}\) gate has arity \(|S|+1\). For clarity, we may explicitly denote the arity \(k\) of a gate \(\mathsf{U}\) by writing \(\mathsf{U}^{(k)}\). Circuit diagrams in this paper use the following convention for controlled gates. A black circle (\(\bullet\)) denotes a control that is active when the qubit is in the \(|1\rangle\) state, while a white circle (\(\circ\)) denotes a control that is active when the qubit is in the \(|0\rangle\) state (see Figure 3). ### Boolean analysis For an introduction to Boolean analysis, see [11, 12]. In the following, we identify a set \(S\subseteq[n]\) with its characteristic vector \(S\in\{0,1\}^{n}\) such that \(S_{i}=1\) if and only if \(i\in S\). Given a real-valued Boolean function \(f:\{0,1\}^{n}\to\mathbb{R}\), its (unique) real-polynomial representation, or Fourier expansion, is \[f(x)=\sum_{S\subseteq[n]}\widehat{f}(S)\chi_{S}(x),\] where \(\chi_{S}(x):=(-1)^{S\cdot x}=(-1)^{\sum_{i\in S}x_{i}}\) and its Fourier coefficients \(\widehat{f}:2^{[n]}\to\mathbb{R}\) are given by \(\widehat{f}(S)=\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}f(x)\chi_{S}(x)\). The Fourier support of \(f\) is \(\operatorname{supp}(f):=\{S\subseteq[n]:\widehat{f}(S)\neq 0\}\), while its sparsity is \(|\operatorname{supp}(f)|\). Define also \(\operatorname{supp}^{>k}(f):=\{S\subseteq[n]:|S|>k,\widehat{f}(S)\neq 0\}\) (and similarly for \(\operatorname{supp}^{\leq k}(f)\) and \(\operatorname{supp}^{=k}(f)\)). Let \(\deg(f):=\max\{|S|:S\in\operatorname{supp}(f)\}\) be the Fourier degree of \(f\). Let \(f^{>k}=\sum_{S\subseteq[n]:|S|>k}\widehat{f}(S)\chi_{S}\) be the part of \(f\) with degree greater than \(k\) (and similarly for \(f^{\leq k}\)). Let \(\|f\|_{1}:=\sum_{S\subseteq[n]}|\widehat{f}(S)|\) be the Fourier \(1\)-norm of \(f\). The Fourier expansion is a multilinear polynomial expansion over \(\{-1,1\}\), i.e., it uses PARITY functions. It is possible to represent a function over \(\{0,1\}\), i.e., using AND functions instead. Given \(f:\{0,1\}^{n}\to\mathbb{R}\), its (unique) real-polynomial \(\{0,1\}\)-representation is \[f(x)=\sum_{S\subseteq[n]}\widetilde{f}(S)x^{S},\] where \(x^{S}:=\prod_{i\in S}x_{i}\) and the coefficients \(\widetilde{f}:2^{[n]}\to\mathbb{R}\) are given by \(\widetilde{f}(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}f(T)\), where we identify \(T\subseteq[n]\) with its \(0\)-\(1\) indicator string (this formula is called Mobius inversion). The \(\{0,1\}\)-support of \(f\) is \(\operatorname{supp}_{\{0,1\}}(f):=\{S\subseteq[n]:\widetilde{f}(S)\neq 0\}\).
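To make the two real representations above concrete, the following small sketch (our own illustration) computes the Fourier coefficients \(\widehat{f}(S)\) and the \(\{0,1\}\)-coefficients \(\widetilde{f}(S)\) of the \(3\)-bit majority function via the formulas above, and checks that both expansions reconstruct it.

```python
# Fourier (PARITY-basis) and Mobius ({0,1}/AND-basis) coefficients of MAJ_3 -- illustration only.
from itertools import product

n = 3
def g(x):                                   # example function g: {0,1}^3 -> {0,1}
    return int(sum(x) >= 2)

points  = list(product((0, 1), repeat=n))
subsets = [tuple(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]

def chi(S, x):                              # chi_S(x) = (-1)^{sum_{i in S} x_i}
    return (-1) ** sum(x[i] for i in S)

def indicator(T):                           # 0-1 indicator string of T in {0,1}^n
    return tuple(1 if i in T else 0 for i in range(n))

def subsets_of(S):
    for m in range(2 ** len(S)):
        yield tuple(S[k] for k in range(len(S)) if (m >> k) & 1)

fourier = {S: sum(g(x) * chi(S, x) for x in points) / 2 ** n for S in subsets}
mobius  = {S: sum((-1) ** (len(S) - len(T)) * g(indicator(T)) for T in subsets_of(S)) for S in subsets}

for x in points:
    assert abs(sum(fourier[S] * chi(S, x) for S in subsets) - g(x)) < 1e-9       # Fourier expansion
    assert sum(mobius[S] * int(all(x[i] for i in S)) for S in subsets) == g(x)   # {0,1}-representation
```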
Define also \(\operatorname{supp}^{>k}_{\{0,1\}}(f):=\{S\subseteq[n]:|S|>k,\widetilde{f}(S) \neq 0\}\) (and similarly for \(\operatorname{supp}^{\leq k}_{\{0,1\}}(f)\) and \(\operatorname{supp}^{=k}_{\{0,1\}}(f)\)). It is possible to prove that the degree of \(f\) over the \(\{0,1\}\)-representation is the same as its Fourier degree, \(\deg_{\{0,1\}}(f):=\max\{|S|:S\in\operatorname{supp}_{\{0,1\}}(f)\}=\deg(f)\). The Fourier expansion (using PARITY or AND functions) is a representation over the _real_ field. In the special case of functions with codomain \(\{0,1\}\), it is possible to represent them over the field \(\mathbb{F}_{2}\) instead. Given \(f:\{0,1\}^{n}\to\{0,1\}\), its (unique) \(\mathbb{F}_{2}\)-polynomial representation (also called algebraic normal form) is \[f(x)=\bigoplus_{S\subseteq[n]}\widetilde{f}_{\mathbb{F}_{2}}(S)x^{S},\] where the coefficients \(\widetilde{f}_{\mathbb{F}_{2}}:2^{[n]}\to\{0,1\}\) are given by \(\widetilde{f}_{\mathbb{F}_{2}}(S)=\widetilde{f}(S)\mod 2=\bigoplus_{x: \operatorname{supp}(x)\subseteq S}f(x)\), with \(\operatorname{supp}(x):=\{i\in[n]:x_{i}\neq 0\}\). The above expansion can be obtained from the real \(\{0,1\}\)-representation by changing the summation over the reals to a summation over \(\mathbb{F}_{2}\) as indicated by \(\widetilde{f}_{\mathbb{F}_{2}}(S)=\widetilde{f}(S)\mod 2\). The \(\mathbb{F}_{2}\)-support of \(f\) is \(\operatorname{supp}_{\mathbb{F}_{2}}(f):=\{S\subseteq[n]:\widetilde{f}_{\mathbb{F} _{2}}(S)\neq 0\}\) and its \(\mathbb{F}_{2}\)-degree is \(\deg_{\mathbb{F}_{2}}(f):=\max\{|S|:S\in\operatorname{supp}_{\mathbb{F}_{2}}(f)\}\). It is possible to prove that \(\deg_{\mathbb{F}_{2}}(f)\leq\deg(f)\). Define \(\operatorname{supp}_{\mathbb{F}_{2}}^{>k}(f):=\{S\subseteq[n]:|S|>k,\widetilde{ f}_{\mathbb{F}_{2}}(S)\neq 0\}\) (similarly for \(\operatorname{supp}_{\mathbb{F}_{2}}^{\leq k}(f)\) and \(\operatorname{supp}_{\mathbb{F}_{2}}^{=k}(f)\)). Given \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), consider its \(\mathsf{Z}\)-decomposition \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\). We extend the above Boolean definitions to \(f\) by defining \(\operatorname{supp}(f):=\operatorname{supp}(\alpha)\cup\operatorname{supp}( \beta)\cup\operatorname{supp}(\gamma)\cup\operatorname{supp}(\delta)\) and \(\deg(f):=\max\{\deg(\alpha),\deg(\beta),\deg(\gamma),\deg(\delta)\}\). Similar definitions apply to \(\operatorname{supp}^{>k}(f)\), \(\operatorname{supp}^{\leq k}(f)\), \(\operatorname{supp}^{=k}(f)\), \(\operatorname{supp}_{\{0,1\}}(f)\), and \(\operatorname{supp}_{\{0,1\}}^{>k}(f)\). More generally, consider a function \(f:\{0,1\}^{n}\to V\), where \(V\) is a complex vector space. Given a partition \((J,\overline{J})\) of \([n]\) and \(z\in\{0,1\}^{|\overline{J}|}\), we write \(f_{J|z}:\{0,1\}^{|J|}\to V\) for the subfunction of \(f\) given by fixing the coordinates in \(\overline{J}\) to the bit values \(z\). We say that \(f:\{0,1\}^{n}\to V\) is an \(r\)-junta for \(r\in\mathbb{N}\) if it depends on at most \(r\) of its input coordinates, i.e., \(f(x)=g(x_{i_{1}},\ldots,x_{i_{r}})\) for some \(g:\{0,1\}^{r}\to V\) and \(i_{1},\ldots,i_{r}\in[n]\). We say that \(f:\{0,1\}^{n}\to V\) is a \((J,r)\)-junta for \(J\subseteq[n]\) and \(r\in\mathbb{N}\) if \(f_{J|z}:\{0,1\}^{|J|}\to V\) is an \(r\)-junta for any \(z\in\{0,1\}^{|\overline{J}|}\). ## 3 Quantum memory architectures In this section, we formally define a model of a quantum computer with quantum access to memory. 
A simplified model of classical computers can be thought of as (i) a central processing unit (CPU), (ii) a Random Access Memory (RAM) that serves as a temporary storage medium for the CPU to quickly retrieve data, and (iii) auxiliary permanent storage mediums. A RAM constitutes a memory array, an address/input register, and a target/bus/output register. Data is accessed or modified via address lines. When the CPU requires access to the memory, it sends the value from the address register down the address lines, and, depending on the read or write signal, the content of a memory cell is either copied into the target register or stored from the target register into the memory cell. To define a model of a quantum computer with quantum access to memory, it will first be helpful to formally define the quantum processing unit (\(\mathsf{QPU}\)). **Definition 7** (Quantum Processing Unit).: _A Quantum Processing Unit \((\mathsf{QPU})\) of size \(m\) is defined as a tuple \((\mathtt{I},\mathtt{W},\mathcal{G})\) consisting of_ 1. _an_ \(m_{\mathtt{I}}\)_-qubit Hilbert space called_ input register__\(\mathtt{I}\)_;_ 2. _an_ \((m-m_{\mathtt{I}})\)_-qubit Hilbert space called_ workspace__\(\mathtt{W}\)_;_ 3. _a constant-size universal gate set_ \(\mathcal{G}\subset\mathcal{U}(\mathbb{C}^{4\times 4})\)_._ _The qubits in the workspace_\(\mathtt{W}\) _are called ancillary qubits or simply ancillae. An input to the_ \(\mathsf{QPU}\)_, or quantum circuit, is a tuple_ \((T,|\psi_{\mathtt{I}}\rangle,C_{1},\ldots,C_{T})\) _where_ \(T\in\mathbb{N}\)_,_ \(|\psi_{\mathtt{I}}\rangle\in\mathtt{I}\)_, and, for each_ \(t\in\{1,\ldots,T\}\)_,_ \(C_{t}\in\mathcal{I}(\mathcal{G})\) _is an instruction from a set_ \(\mathcal{I}(\mathcal{G})\) _of possible instructions. Starting from the state_ \(|\psi_{0}\rangle:=|\psi_{\mathtt{I}}\rangle|0\rangle_{\mathtt{W}}^{\otimes(m-m_ {\mathtt{I}})}\)_, at each time step_ \(t\in\{1,\ldots,T\}\) _we obtain the state_ \(|\psi_{t}\rangle=C_{t}|\psi_{t-1}\rangle\in\mathtt{I}\otimes\mathtt{W}\)_. The instruction set_ \(\mathcal{I}(\mathcal{G})\subset\mathcal{U}(\mathbb{C}^{2^{m}\times 2^{m}})\) _consists of all_ \(m\)_-qubit unitaries on_ \(\mathtt{I}\otimes\mathtt{W}\) _of the form_ \[\prod_{i=1}^{k}(\mathsf{U}_{i})_{\rightarrow I_{i}}\] _for some_ \(k\in\mathbb{N}\)_,_ \(\mathsf{U}_{1},\ldots,\mathsf{U}_{k}\in\mathcal{G}\) _and pair-wise disjoint non-repeating sequences_ \(I_{1},\ldots,I_{k}\in[m]^{\leq 2}\) _of at most_ \(2\) _elements. We say that_ \(\sum_{i=1}^{k}|I_{i}|\) _is the_ size _of the corresponding instruction. We say that_ \(T\) _is the_ depth _of the input to the_ \(\mathsf{QPU}\)_, while its_ size _is the sum of the sizes of the instructions_ \(C_{1},\ldots,C_{T}\) The extension of this definition to incorporate a quantum memory device (QMD) is then: **Definition 8** (Quantum Processing Unit and Quantum Memory Device).: _We consider a model of computation comprising a \(\mathsf{QPU}\) of size \(\operatorname{poly}\log(n)\) and a Quantum Memory Device (QMD) of \(n\) memory registers, where each register is of \(\ell\)-qubit size (for \(n\) a power of \(2\)). A \(\mathsf{QPU}\) and a \(\mathsf{QMD}\) are collectively defined by a tuple \((\mathsf{I},\mathsf{W},\mathsf{A},\mathsf{T},\mathsf{Aux},\mathsf{M},\mathcal{G },\mathsf{V})\) consisting of_ 1. _two_ \((\operatorname{poly}\log n)\)_-qubit Hilbert spaces called_ input register__\(\mathsf{I}\) _and_ workspace__\(\mathsf{W}\) _owned solely by the_ \(\mathsf{QPU}\)_;_ 2. 
\(a\)__\((\log n)\)_-qubit Hilbert space called_ address register__\(\mathsf{A}\) _shared by both_ \(\mathsf{QPU}\) _and_ \(\mathsf{QMD}\)_;_ 3. _an_ \(\ell\)_-qubit Hilbert space called_ target register__\(\mathsf{T}\) _shared by both_ \(\mathsf{QPU}\) _and_ \(\mathsf{QMD}\)_;_ 4. \(a\)__\((\operatorname{poly}n)\)_-qubit Hilbert space called_ auxiliary register__\(\mathsf{Aux}\) _owned solely by the_ \(\mathsf{QMD}\)_;_ 5. _an_ \(n\ell\)_-qubit Hilbert space called_ memory__\(\mathsf{M}\) _comprising_ \(n\) _registers_ \(\mathsf{M}_{0},\ldots,\mathsf{M}_{n-1}\)_, each containing_ \(\ell\) _qubits, owned solely by the_ \(\mathsf{QMD}\)_;_ 6. _a constant-size universal gate set_ \(\mathcal{G}\subset\mathcal{U}(\mathbb{C}^{4\times 4})\)_;_ 7. _a function_ \(\mathsf{V}:[n]\to\mathcal{V}\)_, where_ \(\mathcal{V}\subset\mathcal{U}(\mathbb{C}^{2^{2\ell}\times 2^{2\ell}})\) _is a_ \(O(1)\)_-size subset of_ \(2\ell\)_-qubit gates._ _The qubits in \(\mathsf{W}\), \(\mathsf{A}\), \(\mathsf{T}\), and \(\mathsf{Aux}\) are called ancillary qubits or simply ancillae. An input to the \(\mathsf{QPU}\) with a \(\mathsf{QMD}\), or quantum circuit, is a tuple \((T,|\psi_{\mathsf{I}}\rangle,|\psi_{\mathsf{M}}\rangle,C_{1},\ldots,C_{T})\) where \(T\in\mathbb{N}\), \(|\psi_{\mathsf{I}}\rangle\in\mathsf{I}\), \(|\psi_{\mathsf{M}}\rangle\in\mathsf{M}\), and, for each \(t\in\{1,\ldots,T\}\), \(C_{t}\in\mathcal{I}(\mathcal{G},\mathsf{V})\) is an instruction from a set \(\mathcal{I}(\mathcal{G},\mathsf{V})\) of possible instructions. The instruction set \(\mathcal{I}(\mathcal{G},\mathsf{V})\) is the set \(\mathcal{I}(\mathcal{G})\) from Definition 7 of instructions on \(\mathsf{I}\otimes\mathsf{W}\otimes\mathsf{A}\otimes\mathsf{T}\) augmented with the call-to-the-\(\mathsf{QMD}\) instruction that implements the unitary_ \[|i\rangle_{\mathsf{A}}|b\rangle_{\mathsf{T}}|x_{i}\rangle_{\mathsf{M}_{i}}|0 \rangle_{\mathsf{Aux}}^{\otimes\operatorname{poly}n}\mapsto|i\rangle_{ \mathsf{A}}\big{(}\mathsf{V}(i)|b\rangle_{\mathsf{T}}|x_{i}\rangle_{\mathsf{ M}_{i}}\big{)}|0\rangle_{\mathsf{Aux}}^{\otimes\operatorname{poly}n},\qquad\forall i\in[n],b,x_{i} \in\{0,1\}^{\ell}.\] _Starting from \(|\psi_{0}\rangle|0\rangle_{\mathsf{Aux}}^{\otimes\operatorname{poly}n}\), where \(|\psi_{0}\rangle:=|\psi_{\mathsf{I}}\rangle|0\rangle_{\mathsf{W}}^{\otimes \operatorname{poly}\log n}|0\rangle_{\mathsf{A}}^{\otimes\log n}|0\rangle_{ \mathsf{T}}^{\otimes\ell}|\psi_{\mathsf{M}}\rangle\), at each time step \(t\in\{1,\ldots,T\}\) we obtain the state \(|\psi_{t}\rangle|0\rangle_{\mathsf{Aux}}^{\otimes\operatorname{poly}n}=C_{t}( |\psi_{t-1}\rangle|0\rangle_{\mathsf{Aux}}^{\otimes\operatorname{poly}n})\), where \(|\psi_{t}\rangle\in\mathsf{I}\otimes\mathsf{W}\otimes\mathsf{A}\otimes\mathsf{T} \otimes\mathsf{M}\)._ We depict the architecture of a quantum processing unit with access to a quantum memory device in Figure 2. The address register \(\mathsf{A}\) (shared by the \(\mathsf{QPU}\) and \(\mathsf{QMD}\)) is used to select a unitary from \(\mathcal{V}\) and apply it to the target and memory registers \(\mathsf{T}\) and \(\mathsf{M}\) with the help of the auxiliary register \(\mathsf{Aux}\). Even though a call to the \(\mathsf{QMD}\) might require gates from a universal gate set, we stress that the underlying quantum circuit implementing such a call is _fixed_, i.e., does not change throughout the execution of a quantum algorithm by the \(\mathsf{QPU}\), or even between different quantum algorithms. 
This allows for highly specialized circuits for the \(\mathsf{QMD}\). Note that our definition of circuit size in Definition 7 differs slightly from the standard notion of circuit size (number of gates from \(\mathcal{G}\)) up to a factor of at most \(2\). Note, moreover, that in our framework the address and target register locations are fixed. One could imagine a more general setting where the address and target registers are freely chosen from the workspace. This case can be handled by our model with minimal overhead, e.g. by performing \(\ell\) SWAP gates to move the desired workspace qubits into the address or target register locations. In this work, we focus on constant-depth circuits, and since the size of a constant-depth circuit is just a constant times the number of input qubits plus ancillary qubits, we shall specify only the input and the number of ancillae of a circuit, with its size thus being implicit. In the rest of the paper we assume that each memory cell has size \(\ell=1\). A call to the QMD is defined by the function V and we shall often equate the quantum memory device with the unitary that it implements. In many applications, one is interested in some form of reading a specific entry from the memory, which corresponds to the special cases where the V\((i)\) unitaries are made of controlled single-qubit gates, and to which the traditional QRAM belongs. **Definition 9** (\(f\)-QRAM).: _Let \(n\in\mathbb{N}\) be a power of \(2\) and \(f:\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\). An \(f\)-quantum random access memory (\(f\)-QRAM) of memory size \(n\) is a_ QMD _with_ \(\mathsf{V}(i)=\mathsf{C}_{\mathsf{M}_{i}}\text{-}f(i)_{\to\mathsf{T}}\)_,_ \(\forall i\in[n]\)_. Equivalently, it is a_ QMD _that maps_ \[|i\rangle_{\mathsf{A}}|b\rangle_{\mathsf{T}}|x_{0},\ldots,x_{n-1}\rangle_{\mathsf{M}}\mapsto|i\rangle_{\mathsf{A}}(f(i)^{x_{i}}|b\rangle_{\mathsf{T}})|x_{0},\ldots,x_{n-1}\rangle_{\mathsf{M}}\qquad\forall i\in[n],b,x_{0},\ldots,x_{n-1}\in\{0,1\}.\] _A special case, normally called simply_ QRAM_, is when \(f(i)=\mathsf{X}\) for all \(i\in[n]\), i.e., \(\mathsf{V}(i)=\mathsf{C}_{\mathsf{M}_{i}}\)-\(\mathsf{X}_{\to\mathsf{T}}\)._ Note that \(f\)-QRAMs are QMDs that can be implemented via UCG (see comment at the end of Section 3.1 and Fig. 1). Another case of interest is writing content from the workspace into memory using SWAP gates. **Definition 10** (QRAG).: _Let \(n\in\mathbb{N}\) be a power of \(2\). A quantum random access gate_ QRAG _of memory size \(n\) is a_ QMD _with_ \(\mathsf{V}(i)=\mathsf{SWAP}_{\mathsf{M}_{i}\leftrightarrow\mathsf{T}}\)_,_ \(\forall i\in[n]\)_. Equivalently, it is a_ QMD _that maps_ \[|i\rangle_{\mathsf{A}}|b\rangle_{\mathsf{T}}|x_{0},\ldots,x_{n-1}\rangle_{\mathsf{M}}\mapsto|i\rangle_{\mathsf{A}}|x_{i}\rangle_{\mathsf{T}}|x_{0},\ldots,x_{i-1},b,x_{i+1},\ldots,x_{n-1}\rangle_{\mathsf{M}}\quad\forall i\in[n],b,x_{0},\ldots,x_{n-1}\in\{0,1\}.\] Figure 2: The architecture of a Quantum Processing Unit (QPU) with access to a quantum memory device (QMD).
The QPU encompasses a (\(\operatorname{poly}\log n\))-qubit input register \(\mathsf{I}\) and workspace \(\mathsf{W}\), a (\(\log n\))-qubit address register \(\mathsf{A}\), and an \(\ell\)-qubit target register \(\mathsf{T}\), while the QMD encompasses the address register \(\mathsf{A}\), the target register \(\mathsf{T}\), an \(n\ell\)-qubit memory array \(\mathsf{M}\) composed of \(n\) cells \(x_{0},\ldots,x_{n-1}\in\{0,1\}^{\ell}\) of \(\ell\) qubits each, and a (\(\operatorname{poly}n\))-qubit auxiliary register \(\mathsf{Aux}\). The following lemma shows that a QRAG is at least as powerful as a QRAM. **Lemma 11** (Simulating QRAM with QRAG).: _A query to a QRAM of memory size \(n\) can be simulated using \(2\) queries to a QRAG of memory size \(n\), \(3\) two-qubit gates, and \(1\) workspace qubit._ Proof.: Start with the input \(|i\rangle_{\mathtt{A}}|0\rangle_{\mathtt{Tmp}}|b\rangle_{\mathtt{T}}|x_{0},\ldots,x_{n-1}\rangle_{\mathtt{M}}\) by using an ancillary qubit Tmp from the workspace. Use a \(\mathsf{SWAP}_{\mathtt{T}\leftrightarrow\mathtt{Tmp}}\) gate to obtain \(|i\rangle_{\mathtt{A}}|b\rangle_{\mathtt{Tmp}}|0\rangle_{\mathtt{T}}|x_{0},\ldots,x_{n-1}\rangle_{\mathtt{M}}\). A query to the QRAG then leads to \(|i\rangle_{\mathtt{A}}|b\rangle_{\mathtt{Tmp}}|x_{i}\rangle_{\mathtt{T}}|x_{0},\ldots,x_{i-1},0,x_{i+1},\ldots,x_{n-1}\rangle_{\mathtt{M}}\). Use a \(\mathsf{C}_{\mathtt{T}}\)-\(\mathsf{X}_{\rightarrow\mathtt{Tmp}}\) gate from register T to register Tmp, and query the QRAG again, followed by a \(\mathsf{SWAP}_{\mathtt{T}\leftrightarrow\mathtt{Tmp}}\) gate, to obtain the desired state \(|i\rangle_{\mathtt{A}}|b\oplus x_{i}\rangle_{\mathtt{T}}|x_{0},\ldots,x_{n-1}\rangle_{\mathtt{M}}\) after discarding the ancillary qubit. On the other hand, in our model, the converse is not true. It is possible, though, to simulate a QRAG using a constant number of QRAM queries in a model where single-qubit gates are allowed to be freely applied to the memory register M. The next lemma formalizes these results. **Lemma 12** (Simulating QRAG with QRAM).: * _In the model from Definition_ 8_, a query to a_ QRAG _cannot be simulated by any number of queries to a_ QRAM_._ * _Suppose that single-qubit gates can be freely applied onto the memory register_ M _of any_ QRAM_. Then a_ QRAG _of memory size_ \(n\) _can be simulated using_ \(3\) _queries to a_ QRAM _of memory size_ \(n\) _and_ \(2(n+1)\) _Hadamard gates._ Proof.: For the first statement, consider the simplest case of trying to implement a QRAG with zero address qubits (i.e., there is only one memory cell): given memory qubit M, target qubit T, and an arbitrary number of workspace qubits W. A single action of the QRAM followed by an arbitrary unitary U acting on the target and workspace maps \(|x_{0}\rangle_{\mathtt{M}}|b\rangle_{\mathtt{T}}|\psi\rangle_{\mathtt{W}}\mapsto|x_{0}\rangle_{\mathtt{M}}\mathsf{U}|b\oplus x_{0}\rangle_{\mathtt{T}}|\psi\rangle_{\mathtt{W}}=|x_{0}\rangle_{\mathtt{M}}|\Phi\rangle_{\mathtt{T},\mathtt{W}}\) and thus leaves the memory register invariant. As we cannot modify the memory register, it is not possible to swap the state of the memory with the contents of the target.
The second statement follows from the simple fact that three CNOTs can implement a SWAP, i.e., \(\mathsf{SWAP}_{\mathtt{B}\leftrightarrow\mathtt{D}}=\mathsf{C}_{\mathtt{B}} \mathsf{-X}_{\rightarrow\mathtt{D}}\cdot\mathsf{C}_{\mathtt{D}}\text{-} \mathsf{X}_{\rightarrow\mathtt{B}}\cdot\mathsf{C}_{\mathtt{B}}\text{-} \mathsf{X}_{\rightarrow\mathtt{D}}\), and that one can swap control and target registers of a CNOT as \((\mathsf{H}_{\rightarrow\mathtt{B}}\cdot\mathsf{H}_{\rightarrow\mathtt{D}}) \mathsf{C}_{\mathtt{B}}\mathsf{-X}_{\rightarrow\mathtt{D}}(\mathsf{H}_{ \rightarrow\mathtt{B}}\cdot\mathsf{H}_{\rightarrow\mathtt{D}})=\mathsf{C}_{ \mathtt{D}}\mathsf{-X}_{\rightarrow\mathtt{B}}\), for registers \(\mathtt{B},\mathtt{D}\). Then, starting from the input \(|i\rangle_{\mathtt{A}}|b\rangle_{\mathtt{T}}|x_{0},\ldots,x_{n-1}\rangle_{ \mathtt{M}}\), apply a QRAM followed by the \(n+1\) Hadamard gates \(\mathsf{H}_{\rightarrow\mathtt{T}}\cdot\prod_{j\in[n]}\mathsf{H}_{\rightarrow \mathsf{H}_{j}}\), and then another QRAM query followed by \(\mathsf{H}_{\rightarrow\mathtt{T}}\cdot\prod_{j\in[n]}\mathsf{H}_{\rightarrow \mathsf{M}_{j}}\), and a final QRAM query. Our model can be seen as a refined version of the one described in [1]. Similar to our Definition 8, the authors divide the qubits of a quantum computer into work and memory qubits. Given \(M\) memory qubits, their workspace consists of \(O(\log M)\) qubits, of which the address and target qubits are always the first \(\lceil\log M\rceil+1\) qubits. However, address and target qubits are not considered to be shared by the QMD, and there is no mention of ancillary qubits mediating a call to the QMD. The inner structure of the QMD is abstracted away by assuming access to the unitary of a QRAG as in Definition 10. Our model, in contrast, "opens" the quantum memory device, and allows for general fixed unitaries, including QRAM and QRAG. The first efficient architectures for QRAM were formalized and proposed in [11, 12], namely the Fan-Out and bucket-brigade architectures. These architectures can readily be used for QRAGs, with a simple modification: replacing the last layer of CNOT gates with SWAP gates. Both schemes access the memory cells through a binary tree of size \(O(n)\) and depth \(\log n\). Each qubit of the address register \(|i\rangle_{\mathtt{A}}\) specifies the direction to follow from the root to the correct memory cell, i.e., the \(k\)-th qubit of the address register tells whether to go left or right at a router (or bifurcation) on the \(k\)-th level of the binary tree. The target qubit is sent down the binary tree to the memory cell corresponding to the address register, and the information in the memory cell is copied (\(\mathsf{QRAM}\)) or swapped (\(\mathsf{QRAG}\)), and the target qubit is then sent back up the tree to the root. The Fan-Out and bucket-brigade architectures differ in how the target qubit is routed down the binary tree. In the Fan-Out architecture, the \(k\)-th address qubit controls all the \(2^{k}\) routers on the \(k\)-th level via a Fan-Out gate. The drawback of this scheme is that it requires simultaneous control of all \(n-1\) routers, even though only \(\log n\) routers (in each branch of the wavefunction) are necessary to route the target down the tree. This in turn makes the Fan-Out architecture highly susceptible to noise since each router is maximally entangled with the rest of the system. In the bucket-brigade architecture, on the other hand, all routers are initially in an "idle" state. 
Each address qubit is sequentially sent down the binary tree and its state is transferred to the first idle router it encounters. This creates a path for the following address qubits to the next idle router and, after all address qubits have been routed down the tree, a path for the target qubits to the correct memory cells. One main advantage of the bucket-brigade architecture is reducing the number of active routers down to \(\log n\) in each component of the superposition. Another advantage is its high resilience to noise due to limited entanglement between the memory components [12, 13, 14, 15]. Several other architectures for \(\mathsf{QRAM}\) have been proposed, including Flip-Flop \(\mathsf{QRAM}\)[26], Entangling Quantum Generative Adversarial Network \(\mathsf{QRAM}\)[16], approximate Parametric-Quantum-Circuit-based \(\mathsf{QRAM}\)[15], and others [17, 18, 19, 20]. Roughly speaking, one can classify the proposals for \(\mathsf{QRAM}\) with classical memory in two ways [13]. In a first way, the classical memory can be explicitly laid out in physical hardware at the end of the quantum circuit implementing a \(\mathsf{QRAM}\), e.g. at the end of the ancillary binary tree in the Fan-Out and bucket-brigade architectures, and then be copied via a \(\mathsf{CNOT}\) gate. The advantage of such "explicit" \(\mathsf{QRAMs}\) is that their underlying circuits must be optimized and compiled just once, while the contents of the memory array can be modified freely. The other way is to encode the memory implicitly in the quantum circuit. This can be achieved by employing multicontrolled \(\mathsf{CNOT}\) gates controlled by bits representing the memory address containing a \(1\). The advantage of such "implicit" \(\mathsf{QRAMs}\) is that in some cases they can be heavily optimized using techniques from Boolean circuits [13, 14]. Another way to distinguish between \(\mathsf{QMD}\)s is in the way the routing operation, i.e., the memory cell selection, is implemented: passively or actively. For example, the architecture in [17] is passive: when the routers (the ancillary qubits of the device) are configured, a photon gets absorbed into a cavity, and then subsequent incoming photons acquire a phase shift depending on the state of the cavity. Active architectures [15], on the other hand, are similar to a traditional gate-based quantum computer, where each \(\mathsf{SWAP}\) or controlled-\(\mathsf{SWAP}\) gate is executed by some control pulse. We point the reader to a few recent surveys on the state of the art of \(\mathsf{QRAMs}\) for more information [1, 13, 12]. ### Uniformly controlled gates An \(f\)-Uniformly Controlled Gate (\(f\)-\(\mathsf{UCG}\) or simply \(\mathsf{UCG}\)) is a unitary that, conditioned on the state of a set of control qubits, implements one of a set of single-qubit gates on a target qubit. **Definition 13** (\(f\)-Uniformly Controlled Gate).: _Let \(m,n\in\mathbb{N}\), \(n<m\). Let \(i\in[m]\). Consider a function \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), and let \(S\in([m]\setminus\{i\})^{n}\) be a sequence of \(n\) non-repeating elements from \([m]\setminus\{i\}\). The Uniformly Controlled Gate \(f\text{-}\mathsf{UCG}^{(n)}_{S\to i}\) of size \(n\) is defined as_ \[f\text{-}\mathsf{UCG}^{(n)}_{S\to i}|x_{0}\rangle|x_{1}\rangle\ldots|x_{m-1} \rangle=|x_{0}\rangle\ldots|x_{i-1}\rangle(f(x_{S})|x_{i}\rangle)|x_{i+1} \rangle\ldots|x_{m-1}\rangle,\quad\forall x_{0},\ldots,x_{m-1}\in\{0,1\},\] _where \(x_{S}=x_{S_{1}}\ldots x_{S_{n}}\). 
When it is clear from context, we shall omit either the superscript \((n)\), or the subscripts corresponding to the target \(i\) and/or control \(S\) from \(f\text{-}\mathsf{UCG}^{(n)}_{S\to i}\). By \(f\text{-}\mathsf{UCG}\) we mean a generic \(f\text{-}\mathsf{UCG}^{(n)}_{S\to i}\) for some \(n,S,i\)._ An \(f\text{-}\mathsf{UCG}\) is normally defined in the literature by listing a set \(\{\mathsf{U}_{0},\ldots,\mathsf{U}_{2^{n}-1}\}\) of single-qubit gates (corresponding to \(f(0^{n}),\ldots,f(1^{n})\)), and writing \[f\text{-}\mathsf{UCG}^{(n)}_{[n]\to n}=\sum_{x\in\{0,1\}^{n}}|x\rangle \langle x|\otimes f(x)=\sum_{x\in\{0,1\}^{n}}|x\rangle\langle x|\otimes\mathsf{ U}_{x},\] where we ignored the qubits on which \(f\text{-}\mathsf{UCG}^{(n)}_{S\to i}\) does not depend (so \(m=n+1\)) and took the target qubit \(i\) to be the last one. Equivalently, its matrix representation is \[f\text{-}\mathsf{UCG}^{(n)}_{[n]\to n}=\begin{pmatrix}\mathsf{U}_{0}&&&\\ &\mathsf{U}_{1}&&\\ &&\ddots&\\ &&&\mathsf{U}_{2^{n}-1}\end{pmatrix}\in\mathbb{C}^{2^{(n+1)}\times 2^{(n+1)}}.\] A possible way to implement \(f\text{-}\mathsf{UCG}^{(n)}_{[n]\to n}\) is shown in Figure 3(a), where each gate \(f(x)=\mathsf{U}_{x}\) is sequentially performed controlled on the state \(|x\rangle\). Well-known examples of \(f\text{-}\mathsf{UCG}\)s can be found in [11, 12, 13, 14]. These algorithms perform a set of controlled rotations \(|x\rangle|0\rangle\mapsto|x\rangle\big{(}\cos\theta(x)|0\rangle+\sin\theta(x)|1 \rangle\big{)}\) on a single qubit for a function \(\theta:\{0,1\}^{n}\to[0,2\pi]\). Another example is the special subclass of \(f\text{-}\mathsf{UCG}\)s known as Fan-In gates (\(f\text{-}\mathsf{FIN}\)), for which \(f:\{0,1\}^{n}\to\{\mathbb{I}_{1},\mathsf{X}\}\), i.e., the \(\mathsf{Z}\)-decomposition of \(f\) is simply \(f(x)=\mathsf{HZ}(\gamma(x))\mathsf{H}=\mathsf{X}^{\gamma(x)}\) for \(\gamma:\{0,1\}^{n}\to\{0,1\}\). Fan-In gates are thus equivalent to gates for which a Boolean function is computed on a subset of the registers and the result is added to a specified register \(|x_{i}\rangle\). Other \(f\text{-}\mathsf{UCG}\)s include phase oracles for which \(f:\{0,1\}^{n}\to\{\mathbb{I}_{1},\mathsf{Z}\}\). **Definition 14** (\(f\text{-}\mathsf{Fan-In gate}\)).: _Let \(m,n\in\mathbb{N}\), \(n<m\). Let \(i\in[m]\). Consider a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) on \(n\) bits, and let \(S\in([m]\setminus\{i\})^{n}\) be a sequence of \(n\) non-repeating elements from \([m]\setminus\{i\}\). The Fan-In gate \(f\text{-}\mathsf{FIN}^{(n)}_{S\to i}\) of size \(n\) is defined as_ \[f\text{-}\mathsf{FIN}^{(n)}_{S\to i}|x_{0}\rangle|x_{1}\rangle\ldots|x_{m-1} \rangle=|x_{0}\rangle\ldots|x_{i-1}\rangle|x_{i}\oplus f(x_{S})\rangle|x_{i+1} \rangle\ldots|x_{m-1}\rangle,\quad\forall x_{0},\ldots,x_{m-1}\in\{0,1\},\] _where \(x_{S}=x_{S_{1}}\ldots x_{S_{n}}\). When it is clear from context, we shall omit either the superscript \((n)\), or the subscripts corresponding to the target \(i\) and/or control \(S\) from \(f\text{-}\mathsf{FIN}^{(n)}_{S\to i}\). 
By \(f\text{-}\mathsf{FIN}\) we mean a generic \(f\text{-}\mathsf{FIN}^{(n)}_{S\to i}\) for some \(n,S,i\)._ Examples of Boolean functions \(f:\{0,1\}^{n}\to\{0,1\}\) include \[f(x)=1\text{ if and only if }\begin{cases}|x|>0&\mathsf{OR}^{(n)},\\ |x|=n&\mathsf{AND}^{(n)}\text{ (generalized Toffoli)},\\ |x|\geq n/2&\mathsf{MAJORITY}^{(n)},\\ |x|\geq t&\mathsf{THRESHOLD}^{(n)}[t],\\ |x|=t&\mathsf{EXACT}^{(n)}[t],\\ |x|\text{ is odd}&\mathsf{PARITY}^{(n)}.\end{cases}\] Another example of \(f\)-FIN is the QRAM itself. Indeed, QRAM is simply the \(f\)-FIN with \(f:\{0,1\}^{n}\times\{0,1\}^{\log n}\to\{0,1\}\) defined by \(f(x,i)=x_{i}\) (also known as selection function). The following simple fact is behind our constructions based on one-hot encoding in Section 5. **Fact 15**.: _Given \(x\in\{0,1\}^{n}\) and \(\mathsf{U}\in\mathcal{U}(\mathbb{C}^{2\times 2})\), the gate \(|x\rangle\langle x|\otimes\mathsf{U}+\sum_{j\in\{0,1\}^{n}\setminus\{x\}}|j \rangle\langle j|\otimes\mathbb{I}_{1}\) can be implemented using two \(\mathsf{AND}^{(n)}\) gates and one ancillary qubit._ Proof.: Given \(n\)-qubit register \(|k\rangle_{\mathsf{I}}=\bigotimes_{j\in[n]}|k_{j}\rangle_{\mathsf{I}_{j}}\) and single-qubit register \(|b\rangle_{\mathsf{T}}\), simply note that \[\left(|x\rangle\langle x|\otimes\mathsf{U}+\sum_{j\in\{0,1\}^{n} \setminus\{x\}}|j\rangle\langle j|\otimes\mathbb{I}_{1}\right)\otimes\mathbb{ I}_{1}|k\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}\mathsf{D}}\\ =\left(\prod_{j\in[n]}\mathsf{X}_{\to\mathsf{I}_{j}}^{\overline{ x}_{j}}\right)\mathsf{AND}_{\mathsf{I}\to\mathsf{Imp}}^{(n)}\cdot\mathsf{C}_{ \mathsf{Imp}}\mbox{-}\mathsf{U}_{\to\mathsf{T}}\cdot\mathsf{AND}_{\mathsf{I} \to\mathsf{Imp}}^{(n)}\left(\prod_{j\in[n]}\mathsf{X}_{\to\mathsf{I}_{j}}^{ \overline{x}_{j}}\right)|k\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}|0 \rangle_{\mathsf{Imp}}\] for all \(k\in\{0,1\}^{n}\) and \(b\in\{0,1\}\) (see Figure 3(b)). Relation between \(\mathsf{QMD}\) and \(f\)-Ucg.Uniformly controlled gates and quantum memory devices are similar but distinct concepts. Since \(\mathcal{V}\subset\mathcal{U}(\mathbb{C}^{4\times 4})\), i.e., \(\mathsf{V}(i)\) can act non-trivially on two qubits for all \(i\in\{0,1\}^{\log n}\) (registers \(\mathtt{T}\) and \(\mathtt{M}_{i}\)), it is clear that \(f\)-UCGs cannot simulate general QMDs. However, if, for all \(i\in\{0,1\}^{\log n}\), \(\mathsf{V}(i)\) is of the form \(f(i)\otimes\mathbb{I}_{1}\) for some \(f:\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), then such \(\mathsf{QMD}\) is simply the \(f\)-UCG\({}^{(\log n)}\). Similarly, an \(f\)-QRAM for \(f:\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) (which is a \(\mathsf{QMD}\) such that \(\mathsf{V}(i)=\mathbb{I}_{1}\otimes|0\rangle\langle 0|_{\mathtt{M}_{i}}+f(i) \otimes|1\rangle\langle 1|_{\mathtt{M}_{i}}\)) is an \(f^{\prime}\)-UCG\({}^{(n+\log n)}\) for some \(f^{\prime}:\{0,1\}^{n}\times\{0,1\}^{\log n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) that is a \((J=[n],1)\)-junta. In the other direction, the requirement that \(\mathcal{V}\) be a constant-size set2 limits the kind of \(f\)-UCG that can be simulated by a QMD to those where \(f\) has a constant range. Multi-qubit gates as building blocks ### The Fan-Out gate The Fan-Out gate copies a specific register \(|x_{i}\rangle\) into a subset of other registers. It can be thought of as a single-control multiple-target \(\mathsf{CNOT}\) gate. **Definition 16** (Fan-Out gate).: _Let \(m\in\mathbb{N}\). 
Let \(i\in[m]\) and \(S\subseteq[m]\setminus\{i\}\), with \(|S\cup\{i\}|=:n\). The \(n\)-arity Fan-Out gate \(\mathsf{FO}^{(n)}_{i\to S}\) is defined as_ \[\mathsf{FO}^{(n)}_{i\to S}|x_{0}\rangle|x_{1}\rangle\ldots|x_{m-1}\rangle= \bigotimes_{j\in[m]}\begin{cases}|x_{j}\oplus x_{i}\rangle&\text{ if }j\in S,\\ |x_{j}\rangle&\text{ if }j\notin S,\end{cases}\qquad\quad\forall x_{0}, \ldots,x_{m-1}\in\{0,1\},\] _which copies the bit \(x_{i}\) into the registers in \(S\). Similarly to \(f\)-\(\mathsf{UCG}\), we shall sometimes omit either the superscript \((n)\), or the subscripts corresponding to the control \(i\) and/or target \(S\) from \(\mathsf{FO}^{(n)}_{i\to S}\)._ The Fan-Out gate is known to be powerful, in that other multi-qubit gates can be efficiently implemented if one has access to Fan-Out. In particular, we have the following fact. **Fact 17** ([10, 11]).: _The Fan-Out gate is equivalent to the \(\mathsf{PARITY}\) gate up to a Hadamard conjugation, i.e., for \(i\in[n]\) and \(S\subseteq[n]\setminus\{i\}\),_ \[\mathsf{PARITY}_{S\to i}=\left(\prod_{j\in S\cup\{i\}}\mathsf{H}_{ \to j}\right)\mathsf{FO}_{i\to S}\left(\prod_{j\in S\cup\{i\}}\mathsf{H}_{ \to j}\right).\] It is known that the \(\mathsf{EXACT}^{(n)}\) gate (including \(\mathsf{OR}^{(n)}\) and \(\mathsf{AND}^{(n)}\)) can be simulated exactly in constant depth using Fan-Out and single-qubit gates [16]. Other known constructions with Fan-Out include MAJORITY and THRESHOLD[11, 16]. **Fact 18** ([16, Theorem 1]).: _The \(\mathsf{EXACT}^{(n)}[t]\) gate can be implemented in \(O(1)\)-depth using \(2n\log n+O(n)\) ancillae and \(6n+O(\log n)\) Fan-Out gates with arity at most \(2n\)._ The above result comes from a useful \(\mathsf{OR}\) reduction from \(n\) to \(\lceil\log(n+1)\rceil\) qubits developed in [11]. We include the proof for completeness, and explicitly count the resources required. **Fact 19** ([11, Lemma 5.1]).: _The \(\mathsf{OR}^{(n)}\) gate can be reduced to \(\mathsf{OR}^{(p)}\), \(p=\lceil\log(n+1)\rceil\), in \(O(1)\)-depth using \(2n\lceil\log(n+1)\rceil\) ancillae and \(2n+2\lceil\log(n+1)\rceil\) Fan-Out gates with arity at most \(n\). In other words, there is an \(O(1)\)-depth circuit that maps \(|x\rangle|0\rangle^{\otimes p}\mapsto|x\rangle|\psi_{x}\rangle\) for \(x\in\{0,1\}^{n}\), where \(|\psi_{x}\rangle\in\mathbb{C}^{2^{p}}\) is such that \(\langle 0^{p}|\psi_{x}\rangle=1\) if \(\mathsf{OR}(x)=0\) and \(\langle 0^{p}|\psi_{x}\rangle=0\) if \(\mathsf{OR}(x)=1\)._ Proof.: Given the input \(|x\rangle|0\rangle\), \(x\in\{0,1\}^{n}\), we first show how to compute \(|x\rangle|0\rangle\mapsto|x\rangle|\mu_{\theta}^{|x|}\rangle\) in constant depth, where \(|\mu_{\theta}^{|x|}\rangle:=\frac{1}{2}(1+e^{i\pi\theta|x|})|0\rangle+\frac{1}{ 2}(1-e^{i\pi\theta|x|})|1\rangle\), \(\theta\in[-1,1]\). Attach an ancillary register \(|0\rangle^{\otimes(n-1)}\) and apply a Hadamard gate on the first qubit of \(|0\rangle^{\otimes n}\) followed by a Fan-Out gate copying this first qubit onto the remaining \(n-1\) qubits. This leads to \[|x\rangle|0\rangle^{\otimes n}\mapsto|x\rangle\frac{|0\rangle+|1\rangle}{ \sqrt{2}}|0\rangle^{\otimes(n-1)}\mapsto|x\rangle\frac{|0\rangle^{\otimes n}+ |1\rangle^{\otimes n}}{\sqrt{2}}.\] Apply a \(\mathsf{Z}(\theta x_{i})\) gate on the \(i\)-th qubit of \(\frac{1}{\sqrt{2}}(|0\rangle^{\otimes n}+|1\rangle^{\otimes n})\) controlled on \(|x_{i}\rangle\), for \(i\in[n]\). 
Thus \[|x\rangle\frac{|0\rangle^{\otimes n}+|1\rangle^{\otimes n}}{\sqrt{2}}\mapsto|x \rangle\frac{|0\rangle^{\otimes n}+e^{i\pi\theta|x|}|1\rangle^{\otimes n}}{ \sqrt{2}}.\] Uncomputing the first step leads to \(|x\rangle|\mu_{\theta}^{|x|}\rangle\) as required. In total, we have used \(n-1\) ancillae and \(2\) Fan-Out gates with arity \(n\). The reduction works by computing in parallel the states \(|\psi_{k}\rangle=|\mu_{\theta_{k}}^{|x|}\rangle\) with \(\theta_{k}=1/2^{k}\), for all \(k\in[p]\), which requires copying the register \(|x\rangle\) a number of \(p-1\) times by using \(n\) Fan-Out gates with arity \(p\). The output \(|\psi\rangle=|\psi_{0}\rangle|\psi_{1}\rangle\ldots|\psi_{p-1}\rangle\) is the desired state. Indeed, if \(|x|=0\), then \(\langle 0^{p}|\psi\rangle=1\), since \(|\psi_{k}\rangle=|0\rangle\) for each \(k\in[p]\). On the other hand, if \(|x|\neq 0\), then \(\langle 0^{p}|\psi\rangle=0\), since at least one qubit \(|\psi_{k}\rangle\) is \(|1\rangle\) with certainty. Indeed, there are integers \(a\in[p]\) and \(b\geq 0\) such that \(|x|=2^{a}(2b+1)\). Then a direct calculation shows that \(\langle 1|\psi_{a}\rangle=1\). This proves the correctness of the reduction. Finally, the whole reduction uses \(n(p-1)+p(n-1)\leq 2np\) ancillae and \(2m+2p\) Fan-Out gates with arity at most \(n\). It is folklore that an \(n\)-arity Fan-Out can be implemented by a \(\mathsf{CNOT}\) circuit of depth \(O(\log n)\) and size \(O(n)\). If low-arity Fan-Out gates are available, it is possible to improve the number and depth of \(\mathsf{CNOT}\) gates required as follows. **Lemma 20**.: _For \(y\in\{0,1\}\), the unitary \(|y\rangle|0\rangle^{\otimes n}\mapsto|y\rangle^{\otimes(n+1)}\) can be implemented with \(\lceil n/(k-1)\rceil\)\(k\)-arity Fan-Out gates in depth \(\lceil\log_{k}(n+1)\rceil\)._ Proof.: Note that, starting from the initial state at depth \(d=0\) until the final state at maximum depth \(d=d_{c}\), the \(i\)-th Fan-Out layer maps the \(i\)-th state onto the \((i+1)\)-th state, for \(i\in[d_{c}].\) We prove by induction that, at depth \(d\), our state is \(|y\rangle^{\otimes k^{d}}|0\rangle^{\otimes(n+1-k^{d})}\). The case \(d=0\) is obvious. Assume the induction hypothesis for \(d\). Then, after applying one layer of \(k^{d}\)\(k\)-arity Fan-Out gates we obtain \[|y\rangle^{\otimes k^{d}}|0\rangle^{\otimes(n+1-k^{d})}\mapsto|y\rangle^{ \otimes k^{d+1}}|0\rangle^{\otimes(n+1-k^{d+1})},\] as wanted. The circuit depth \(d_{c}\) is the minimum \(d\) such that \(n+1-k^{d}\leq 0\), i.e., \(d_{c}=\lceil\log_{k}(n+1)\rceil\). Regarding the size, from depth \(d=0\) to \(d=d_{c}-1\) we require \(\sum_{j=0}^{d_{c}-2}k^{j}=(k^{d_{c}-1}-1)/(k-1)\) Fan-Out gates. In the final layer there are only \(n+1-k^{d_{c}-1}\) qubits left in the \(|0\rangle\) state, thus another \(\lceil\frac{n+1}{k}-k^{d_{c}-2}\rceil\) Fan-Out gates are required. In total, the number of Fan-Outs is at most \[\frac{k^{d_{c}-1}-1}{k-1}+\left\lceil\frac{n+1}{k}-k^{d_{c}-2}\right\rceil= \left\lceil\frac{n+1}{k}+\frac{k^{d_{c}-2}-1}{k-1}\right\rceil\leq\left\lceil \frac{n+1}{k}+\frac{(n+1)/k-1}{k-1}\right\rceil=\left\lceil\frac{n}{k-1}\right\rceil.\qed\] ### The Global Tunable gate **Definition 21** (Global Tunable gate).: _Let \(\Theta\in[-1,1]^{n\times n}\). 
The \(n\)-arity Global Tunable gate \(\mathsf{GT}_{\Theta}^{(n)}\) is the unitary operator_ \[\mathsf{GT}_{\Theta}^{(n)}=\prod_{1\leq i<j\leq n}\mathsf{C}_{i}\text{-} \mathsf{Z}(\Theta_{ij})_{\to j}.\] The \(\mathsf{GT}\) gate is powerful in that it can perform many Fan-Out gates in parallel. **Claim 22**.: _A number \(l\) of pair-wise commuting Fan-Out gates \(\mathsf{FO}^{(n_{0})},\ldots,\mathsf{FO}^{(n_{l-1})}\) can be performed in depth-\(3\) using one \(\mathsf{GT}\) gate with arity at most \(n\) and at most \(2(n-1)\) Hadamard gates, where \(n:=\sum_{j=0}^{l-1}n_{j}\)._ Proof.: Let \(T\subseteq[l]\). Without lost of generality, for each \(i\in[l]\), consider a Fan-Out gate \(\mathsf{FO}_{q_{i}\to S_{i}}\) controlled on qubit \(q_{i}\in T\) with target qubits in \(S_{i}\subseteq[n]\setminus T\). Note that all Fan-Out gates \(\mathsf{FO}_{q_{i}\to S_{i}}\) commute since the sets of target and control qubits are disjoint. Therefore \[\prod_{i=0}^{l-1}\mathsf{FO}_{q_{i}\to S_{i}}=\prod_{i=0}^{l-1}\prod_{j\in S_ {i}}\mathsf{C}_{q_{i}}\mbox{-}\mathsf{X}_{\to j}=\prod_{i=0}^{l-1}\prod_{j\in S _{i}}\mathsf{H}_{\to j}\cdot\mathsf{C}_{q_{i}}\mbox{-}\mathsf{Z}_{\to j} \cdot\mathsf{H}_{\to j}=\left(\prod_{j\in\bigcup_{i\in[l]}S_{i}}\mathsf{H}_{ \to j}\right)\mathsf{GT}_{\Theta}\left(\prod_{j\in\bigcup_{i\in[l]}S_{i}} \mathsf{H}_{\to j}\right).\] Here \(\Theta\in\{0,1\}^{n\times n}\) is the matrix \(\Theta=\bigoplus_{i\in[l]}\Theta_{q_{i}}\), where \(\Theta_{q_{i}}\in\{0,1\}^{n\times n}\) is the matrix whose \(q_{i}\)-th row is the characteristic vector of the set \(S_{i}\), while the remaining rows are zero (the parity is taken entry-wise). In other words, the \((i,j)\)-entry of \(\Theta\) is the parity of the number of sets \(S_{0},\ldots,S_{l-1}\) that contain \(j\in[n]\setminus T\) and are controlled on qubit \(i\in T\). This is because a \(\mathsf{Z}\) gate is applied onto a qubit \(j\in[n]\setminus T\) only if it is the target to an odd number of Fan-Out gates. The maximum arity happens when \((S_{i}\cup\{q_{i}\})\cap(S_{j}\cup\{q_{j}\})=\emptyset\) for every \(i\neq j\in[l]\), i.e., when each Fan-Out gate acts on a separate set of qubits. The maximum number of Hadamard gates happens when \(q_{0}=\cdots=q_{l-1}\) and \(S_{i}\cap S_{j}=\emptyset\) for every \(i\neq j\in[l]\), i.e., when all Fan-Out gates share the same control qubit but copy it into separate sets of qubits. Thus, up to conjugation by Hadamards, a single \(\mathsf{GT}\) gate can copy a control register into an arbitrary number of target registers. Moreover, from the above fact follows a simple yet interesting result concerning constant-depth quantum circuits. **Lemma 23**.: _Consider a constant-depth circuit that uses \(l\) Fan-Out gates \(\mathsf{FO}^{(n_{0})},\ldots,\mathsf{FO}^{(n_{l-1})}\). There is an equivalent constant-depth circuit that uses \(O(1)\)\(\mathsf{GT}^{(n)}\) gates, where \(n\leq\sum_{j=0}^{l-1}n_{j}\)._ Proof.: The circuit consists of a constant number of layers, each of which contains at most \(l\) disjoint Fan-Out gates. The result follows from Claim 22. An example of this is the following result concerning the \(\mathsf{EXACT}^{(n)}\) gate. 
**Fact 24** ([1]).: _The \(\mathsf{EXACT}^{(n)}[t]\) gate can be implemented in \(O(1)\)-depth using \(2n+O(\log n)\) ancillae and \(4\)\(\mathsf{GT}\) gates with arity at most \(n+O(\log n)\)._ Note that the original construction of [1] is for \(\mathsf{OR}^{(n)}\) and requires fewer than \(2n\) ancillae because they implement a slightly different gate, \(|x\rangle\mapsto(-1)^{\mathsf{OR}(x)}|x\rangle\). Their construction is similar to the one from [16] and uses the \(\mathsf{OR}\) reduction from [10] adapted to \(\mathsf{GT}\) gates. We include the proof of the \(\mathsf{OR}\) reduction for completeness and explicitly count the resources required. **Fact 25** ([1]).: _The \(\mathsf{OR}^{(n)}\) gate can be reduced to \(\mathsf{OR}^{(p)}\), \(p=\lceil\log(n+1)\rceil\), in \(O(1)\)-depth using \(1\)\(\mathsf{GT}\) gate with arity \(n+\lceil\log(n+1)\rceil\) and no ancillae._ Proof.: The general construction is the same as in Fact 19, the difference being the number of ancillary qubits. When constructing the states \(|\mu_{\theta}^{|x|}\rangle\), there is no need for the ancillary register \(|0\rangle^{\otimes(n-1)}\), since all the \(\mathsf{Z}(\theta x_{i})\) gates controlled on \(|x_{i}\rangle\), \(i\in[n]\), can be performed using a single \(\mathsf{GT}\) gate with arity \(n+1\) on the state \(\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\), i.e., the mapping \[|x\rangle\frac{|0\rangle+|1\rangle}{\sqrt{2}}\mapsto|x\rangle\frac{|0\rangle+e ^{i\pi\theta|x|}|1\rangle}{\sqrt{2}}\] requires only one \(\mathsf{GT}\) gate and no ancillae. This procedure actually scales to computing \(|x\rangle|0\rangle^{\otimes p}\mapsto|x\rangle|\psi_{0}\rangle\dots|\psi_{p-1}\rangle\), i.e., all states \(|\psi_{k}\rangle=|\mu^{|x|}_{\theta_{k}}\rangle\) can be computed in parallel with a single \(\mathsf{GT}\) gate with arity \(n+p\) and no ancillary qubits. As another example, it was shown in [14, Theorem 6] that every \(n\)-qubit state can be constructed by a \(\mathsf{QAC}^{0}_{f}\) circuit with \(\widetilde{O}(2^{n})\) ancillae. It follows from Lemma 23 that every \(n\)-qubit state can be constructed with a constant number of \(\mathsf{GT}\) gates and \(\widetilde{O}(2^{n})\) ancillae. Comment on physical implementation of multi-qubit gates.The constant-depth architectures we consider make use of the multi-qubit Fan-Out and \(\mathsf{GT}\) gates. However, the complexity and time required to implement such gates in practice may differ and may be both hardware and code-dependent. For example, if one considers logical qubits encoded via the surface code, then for a fixed code distance \(d\), Fan-Out gates can be performed in a constant number of surface code cycles via braiding [10]. On the other hand, in the non-error-corrected ion trap \(\mathsf{GT}\) gate implementation proposed in [11], each of the \(n\) qubits is simultaneously acted on by a separate sequence of at least \(2n\) constant-duration laser pulses. Assuming a practical lower bound on the duration of any pulse in this sequence, the wall-clock time required to implement a single \(\mathsf{GT}\) gate according to this scheme scales linearly with \(n\) (and uses a linear number of laser sources) and the constant-depth \(\mathsf{GT}\) gate constructions do not necessarily translate to constant-time constructions. This is not surprising, since the \(\mathsf{GT}\) gate is strictly more powerful than Fan-Out. 
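As a concrete illustration of Fact 17 (added here as a sketch; the helper functions, the qubit ordering with qubit \(0\) as the most significant tensor factor, and the brute-force matrix check are our own and not part of the constructions above), the following numpy snippet verifies on three qubits that conjugating \(\mathsf{FO}_{0\to\{1,2\}}\) by Hadamards on all involved qubits yields \(\mathsf{PARITY}_{\{1,2\}\to 0}\).

```python
# Minimal numpy check of Fact 17 on three qubits: PARITY = H^{(x)3} . FO . H^{(x)3}.
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def perm_unitary(n, f):
    """Unitary mapping |x> to |f(x)> for a permutation f of {0, ..., 2^n - 1}."""
    U = np.zeros((2**n, 2**n))
    for x in range(2**n):
        U[f(x), x] = 1
    return U

def bits(x, n):   # qubit 0 is the most significant bit / first tensor factor
    return [(x >> (n - 1 - k)) & 1 for k in range(n)]

def to_int(b):
    return int("".join(map(str, b)), 2)

n = 3

def fan_out(x):   # FO_{0 -> {1,2}}: XOR qubit 0 into qubits 1 and 2
    b = bits(x, n); b[1] ^= b[0]; b[2] ^= b[0]; return to_int(b)

def parity(x):    # PARITY_{{1,2} -> 0}: XOR qubits 1 and 2 into qubit 0
    b = bits(x, n); b[0] ^= b[1] ^ b[2]; return to_int(b)

FO = perm_unitary(n, fan_out)
PAR = perm_unitary(n, parity)
H3 = reduce(np.kron, [H, H, H])   # Hadamard on every qubit in S ∪ {i}
assert np.allclose(H3 @ FO @ H3, PAR)
print("PARITY = H^{⊗3} · FO · H^{⊗3}, as in Fact 17")
```

The same check scales directly to larger arities, at the cost of building \(2^{n}\times 2^{n}\) matrices explicitly.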
## 5 Constant-depth circuits based on one-hot encoding In this section, we provide constant-depth circuits for \(f\)-\(\mathsf{UCG}\)s via our first technique based one-hot encoding. We rely on a simple fact regarding the unitary \[\mathsf{C}_{\mathsf{E}_{n-1}\mbox{-}}(\mathsf{U}_{n-1})_{\to\mathsf{T}}\cdots \mathsf{C}_{\mathsf{E}_{1}\mbox{-}}(\mathsf{U}_{1})_{\to\mathsf{T}}\cdot \mathsf{C}_{\mathsf{E}_{0}\mbox{-}}(\mathsf{U}_{0})_{\to\mathsf{T}}\] that sequentially applies the gates \(\mathsf{U}_{0},\dots,\mathsf{U}_{n-1}\) onto a target qubit \(\mathsf{T}\) controlled on the single-qubit registers \(\mathsf{E}_{0},\dots,\mathsf{E}_{n-1}\), respectively, being in the \(|1\rangle\) state (see Figure 4(a)). Let \(|e\rangle_{\mathsf{E}}=\bigotimes_{j\in[n]}|e_{j}\rangle_{\mathsf{E}_{j}}\) be the state of the registers \(\mathsf{E}_{0},\dots,\mathsf{E}_{n-1}\). If \(e\in\{0,1\}^{n}\) has Hamming weight at most \(1\) (i.e., \(|e|\leq 1\)), then we can rearrange the gates from the \(\mathsf{Z}\)-decomposition of \(\mathsf{U}_{j}\) as \[\mathsf{C}_{\mathsf{E}_{n-1}\mbox{-}}(\mathsf{U}_{n-1})_{\to \mathsf{T}}\cdots\mathsf{C}_{\mathsf{E}_{1}\mbox{-}}(\mathsf{U}_{1})_{\to \mathsf{T}}\cdot\mathsf{C}_{\mathsf{E}_{0}\mbox{-}}(\mathsf{U}_{0})_{\to \mathsf{T}}|e\rangle_{\mathsf{E}}|b\rangle_{\mathsf{T}}=\] \[\left(\prod_{j\in[n]}\mathsf{Z}(\alpha_{j})_{\to\mathsf{E}_{j}} \right)\left(\prod_{j\in[n]}\mathsf{C}_{\mathsf{E}_{j}\mbox{-}}\mathsf{Z}( \beta_{j})_{\to\mathsf{T}}\right)\mathsf{H}_{\to\mathsf{T}}\left(\prod_{j\in[n] }\mathsf{C}_{\mathsf{E}_{j}\mbox{-}}\mathsf{Z}(\gamma_{j})_{\to\mathsf{T}} \right)\mathsf{H}_{\to\mathsf{T}}\left(\prod_{j\in[n]}\mathsf{C}_{\mathsf{E}_ {j}\mbox{-}}\mathsf{Z}(\delta_{j})_{\to\mathsf{T}}\right)|e\rangle_{\mathsf{E}} |b\rangle_{\mathsf{T}}.\] The above identity holds because at most one controlled gate \(\mathsf{C}\)-\(\mathsf{U}_{j}\) is "active" for the state \(|e\rangle_{\mathsf{E}}\). This allows us to group together the \(\mathsf{Z}\) operators from the decomposition of \(\mathsf{U}_{0},\dots,\mathsf{U}_{n-1}\) as shown in Figure 4(b). ### Constant-depth circuits for \(f\)-UcGs **Theorem 26** (One-hot-encoding implementation of \(f\)-Ucg).: _Let \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) be a \((J,r)\)-junta for \(J\subseteq[n]\) with \(|\overline{J}|=t\) and \(r\in\mathbb{N}\). There is an \(O(1)\)-depth circuit for \(f\)-_Ucg _that uses_ * _either_ \(2(t+r)2^{t+r}\log(t+r)+O((t+r)2^{t+r})\) _ancillae and_ \(2n+6(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) _Fan-Out gates with arity at most_ \(1+2^{t+r}\)_,_ * _or_ \(3(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) _ancillae and_ \(9\)__GT _gates with arity at most_ \(n+(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\)_._ Proof.: Given the initial state \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\) for \(x\in\{0,1\}^{n}\) and \(b\in\{0,1\}\), we wish to perform the mapping \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}f(x)| b\rangle_{\mathtt{T}}\). For each \(z\in\{0,1\}^{t}\), let \(J_{z}\subseteq J\), with \(|J_{z}|\leq r\), be the subset of coordinates that \(f_{J|z}\) depends on. For \(z\in\{0,1\}^{t}\), let \(g_{z}:\{0,1\}^{|J_{z}|}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) be such that \(f_{J|z}(x_{J})=g_{z}(x_{J_{z}})\). 
In the following, split the register \(\mathtt{I}\) into registers \(\mathtt{\overline{J}}\) and \(\mathtt{J}\) such that \(\mathtt{\overline{J}}\) contains the coordinates of \(x\) in \(\overline{J}\) and \(\mathtt{J}\) contains the coordinates of \(x\) in \(J\), i.e., \(|x\rangle_{\mathtt{I}}=|x\underline{\gamma}\rangle_{\mathtt{\overline{J}}}|x \rangle_{\mathtt{J}}\rangle_{\mathtt{J}}\). For \(i\in J\), let \(m_{i}:=\sum_{z\in\{0,1\}^{t}:i\in J_{z}}2^{|J_{z}|}\leq 2^{r}\cdot|\{z\in\{0,1\}^{t}:i \in J_{z}\}|\) and \(m:=\sum_{z\in\{0,1\}^{t}}2^{|J_{z}|}\leq 2^{t+r}\). Let \(f(x)=e^{i\pi\alpha(x)}\mathsf{Z}(\beta(x))\mathsf{HZ}(\gamma(x))\mathsf{HZ}( \delta(x))\) be the \(\mathsf{Z}\)-decomposition of each single-qubit gate \(f(x)\), \(x\in\{0,1\}^{n}\). Equivalently, write \(g_{z}(j)=e^{i\pi\alpha_{z}\rangle}\mathsf{Z}(\beta_{z}(j))\mathsf{HZ}(\gamma_ {z}(j))\mathsf{HZ}(\delta_{z}(j))\) for the \(\mathsf{Z}\)-decomposition of the single-qubit gate \(g_{z}(j)\), \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\). From \(f_{J|z}(x_{J})=g_{z}(x_{J_{z}})\) we can establish the correspondence that \(\alpha(x)=\alpha_{z}(j)\) for all \(z\in\{0,1\}^{t}\), \(j\in\{0,1\}^{|J_{z}|}\), and \(x\in\{0,1\}^{n}\) such that \(x_{\overline{J}}=z\) and \(x_{J_{z}}=j\) (and similarly for \(\beta,\gamma,\delta\)). The main idea of the circuits is to compute the one-hot encoding of a compressed version of \(x\). Naively, one could compute the one-hot encoding of the whole string \(x\). However, by breaking \(x\) into the sub-strings \(x_{\overline{J}}\) and \(x_{J}\) indexed by \(\overline{J}\) and \(J\), respectively, it is possible to compute the one-hot encoding of \(x_{\overline{J}}\) separately from the one-hot encoding of \(x_{J}\). In principle, this would not offer any advantage, but since \(f_{J|z}\) depends only on a few coordinates of \(x_{J}\), we can compute the one-hot encoding of the much shorter sub-string \(x_{J_{z}}\) for \(z=x_{\overline{J}}\) instead. Since we do not know \(x_{\overline{J}}\), we must compute the one-hot encoding of \(x_{J_{z}}\) for all \(z\in\{0,1\}^{t}\). We first consider the Fan-Out-based circuit (see Figure 5) and then the one based on \(\mathsf{GT}\) gates. The algorithm is as follows: 1. For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), attach a \(t\)-qubit ancillary register \(\overline{\mathsf{J}}_{z,j}\) and copy the contents of the \(|x_{\overline{J}}\rangle_{\overline{\mathsf{J}}}\) register onto it. Every qubit of \(|x_{\overline{J}}\rangle_{\overline{\mathsf{J}}}\) is thus copied \(m\) times. 2. For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), attach a \(|J_{z}|\)-qubit ancillary register \(\mathsf{J}_{z,j}\). The registers \(\mathsf{J}_{z,j}\), for \(j\in\{0,1\}^{|J_{z}|}\), will be used to compute the one-hot encoding of \(x_{J_{z}}\). For each \(i\in J\) in parallel, copy \(m_{i}\) number of times the qubit \(|x_{i}\rangle_{\mathsf{J}}\) onto the registers \(\{\mathsf{J}_{z,j}\}_{j}\), where \(j\in\{0,1\}^{|J_{z}|}\), for all \(z\) such that \(i\in J_{z}\). Steps 1 and 2 lead to \[|x\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}\mapsto|x\rangle_{\mathsf{I}}|b \rangle_{\mathsf{T}}\left(\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1 \}^{|J_{z}|}}|x_{\overline{J}}\rangle_{\overline{\mathsf{J}}_{z,j}}|x_{J_{z} }\rangle_{\mathsf{J}_{z,j}}\right).\] 3. 
For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), apply the gate \(\bigotimes_{k\in[t]}\mathsf{X}^{\overline{z}_{k}}\) to \(|x_{\overline{J}}\rangle_{\overline{\mathsf{J}}_{z,j}}\) and the gate \(\bigotimes_{k\in[|J_{z}|]}\mathsf{X}^{\overline{\mathsf{J}}_{k}}\) to \(|x_{J_{z}}\rangle_{\mathsf{J}_{z,j}}\). This leads to \[|x\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}\left(\bigotimes_{z\in\{0,1\}^{t} }\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|x_{\overline{J}}\oplus\overline{z}\rangle _{\overline{\mathsf{J}}_{z,j}}|x_{J_{z}}\oplus\overline{j}\rangle_{\mathsf{J} _{z,j}}\right).\] 4. For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), attach a single-qubit ancillary register \(\mathtt{E}_{z,j}\) and apply an \(\mathsf{AND}^{(t+|J_{z}|)}_{\{\overline{\mathsf{J}}_{z,j},\mathsf{J}_{z,j}\} \rightarrow\mathtt{E}_{z,j}}\) gate from registers \(\overline{\mathsf{J}}_{z,j}\) and \(\mathsf{J}_{z,j}\) onto register \(\mathtt{E}_{z,j}\) to obtain \[|x\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}\left(\bigotimes_{z\in\{0,1 \}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|x_{\overline{J}}\oplus\overline{z} \rangle_{\overline{\mathsf{J}}_{z,j}}|x_{J_{z}}\oplus\overline{j}\rangle_{ \mathsf{J}_{z,j}}|0\rangle_{\mathtt{E}_{z,j}}\right)\] \[\mapsto |x\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}\left(\bigotimes_{z \in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|x_{\overline{J}}\oplus \overline{z}\rangle_{\overline{\mathsf{J}}_{z,j}}|x_{J_{z}}\oplus\overline{j} \rangle_{\mathsf{J}_{z,j}}|\bigwedge_{k\in[t]}(x_{\overline{J}}\oplus\overline{ z})_{k}\cdot\bigwedge_{\mathsf{I}\in[|J_{z}|]}(x_{J_{z}}\oplus\overline{j})_{l} \rangle_{\mathtt{E}_{z,j}}\right)\] \[= |x\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}\left(\bigotimes_{z \in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|x_{\overline{J}}\oplus \overline{z}\rangle_{\overline{\mathsf{J}}_{z,j}}|x_{J_{z}}\oplus\overline{j} \rangle_{\mathsf{J}_{z,j}}|e(x_{\overline{J}})_{z}\cdot e(x_{J_{z}})_{j} \rangle_{\mathtt{E}_{z,j}}\right),\] where \(e(x_{\overline{J}})\in\{0,1\}^{2^{t}}\) and \(e(x_{J_{z}})\in\{0,1\}^{2^{|J_{z}|}}\) are the one-hot encodings of \(x_{\overline{J}}\) and \(x_{J_{z}}\), respectively. Note that \(e(x_{\overline{J}})_{z}\) is the \(z\)-th bit of \(e(x_{\overline{J}})\) and \(e(x_{J_{z}})_{j}\) is \(j\)-th bit of \(e(x_{J_{z}})\). 5a. For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), attach a single-qubit ancillary register \(\mathtt{T}_{z,j}\) and apply a \((1+m)\)-arity Fan-Out gate \(\mathsf{FO}^{(1+m)}_{\mathsf{T}\rightarrow\{\mathtt{T}_{z,j}\}_{z,j}}\) from register \(\mathsf{T}\) to registers \(\mathtt{T}_{z,j}\), \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\) (remember that \(m:=\sum_{z\in\{0,1\}^{t}}2^{|J_{z}|}\)). For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), apply a \(\mathsf{C}_{\mathtt{E}_{z,j}}\mbox{-}\mathsf{Z}(\delta_{z}(j))_{\rightarrow\mathtt{T }_{z,j}}\) gate controlled on register \(\mathtt{E}_{z,j}\) onto register \(\mathtt{T}_{z,j}\). Finally, apply Figure 5: The circuit for an \(f\)-\(\mathsf{UCG}^{(n)}\) using Fan-Out gates, where \(f\) is a \((J,r)\)-junta with \(|\overline{J}|=t\). For simplicity, we include targets from \(|x_{i}\rangle_{\mathrm{J}}\) onto all registers \(\mathsf{J}_{z,j}\), but in reality, \(x_{i}\) is copied onto the registers \(\{\mathsf{J}_{z,j}\}_{j}\) for all \(z\in\{0,1\}^{t}\) such that \(i\in J_{z}\), where \(J_{z}\) is the set of coordinated that \(f_{J|z}\) depends on. 
Moreover, we omit the indices in the parameters \(\alpha_{z}(j)\), \(\beta_{z}(j)\), \(\gamma_{z}(j)\), \(\delta_{z}(j)\). \(\mathsf{FO}^{(1+m)}_{\mathsf{T}\to\{\mathtt{T}_{z,j}\}_{z,j}}\) again. We shall omit the registers \(\overline{\mathtt{J}}_{z,j}\) and \(\mathtt{J}_{z,j}\) from now on for simplicity. This chain of operations leads to \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\left(\bigotimes_{z\in \{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|e(x_{\overline{\mathtt{J}}})_{ z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}|b\rangle_{\mathtt{T}_{z,j}}\right)\] \[\mapsto |x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\left(\bigotimes_{z \in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|e(x_{\overline{\mathtt{J}}})_ {z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\mathsf{Z}(\delta_{z}(j))^{e(x_{ \overline{\mathtt{J}}})_{z}e(x_{J_{z}})_{j}}|b\rangle_{\mathtt{T}_{z,j}}\right)\] \[\mapsto |x\rangle_{\mathtt{I}}\mathsf{Z}(\delta(x))|b\rangle_{\mathtt{T }}\left(\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|e(x_{ \overline{\mathtt{J}}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\right),\] where we have used the definition of one-hot encoding, i.e., \(e(x_{\overline{\mathtt{J}}})_{z}=1\) if and only if \(z=x_{\overline{\mathtt{J}}}\) and \(e(x_{J_{z}})_{j}=1\) if and only if \(j=x_{J_{z}}\), and also that \(\delta_{z}(j)=\delta(x)\) for \(z=x_{\overline{\mathtt{J}}}\) and \(j=x_{J_{z}}\). 5. Apply a \(\mathsf{H}_{\to\mathtt{T}}\) gate to register \(\mathtt{T}\) followed by a \((1+m)\)-arity Fan-Out gate \(\mathsf{FO}^{(1+m)}_{\mathsf{T}\to\{\mathtt{T}_{z,j}\}_{z,j}}\) from register \(\mathtt{T}\) to registers \(\{\mathtt{T}_{z,j}\}_{z,j}\). For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), apply a \(\mathsf{C}_{\mathtt{E}_{z,j}}\mathsf{Z}(\gamma_{z}(j))_{\to\mathtt{T}_{z,j}}\) gate controlled on register \(\mathtt{E}_{z,j}\) onto register \(\mathtt{T}_{z,j}\). Finally, apply \(\mathsf{FO}^{(1+m)}_{\mathsf{T}\to\{\mathtt{T}_{z,j}\}_{z,j}}\) again. For simplicity, write \(\mathsf{HZ}(\delta(x))|b\rangle_{\mathtt{T}}=r_{b,x}|0\rangle_{\mathtt{T}}+s_ {b,x}|1\rangle_{\mathtt{T}}\). 
This chain of operations yields \[|x\rangle_{\mathtt{I}}\mathsf{HZ}(\delta(x))|b\rangle_{\mathtt{T }}\left(\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|e(x_{ \overline{\mathtt{J}}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\right)\] \[\mapsto r_{b,x}|x\rangle_{\mathtt{I}}|0\rangle_{\mathtt{T}}\bigotimes _{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}\left(|e(x_{\overline{ \mathtt{J}}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}|0\rangle_{\mathtt{T }_{z,j}}\right)\] \[\mapsto r_{b,x}|x\rangle_{\mathtt{I}}|0\rangle_{\mathtt{T}}\bigotimes _{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}\left(|e(x_{\overline{ \mathtt{J}}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\mathsf{Z}(\gamma_{z }(j))^{e(x_{\overline{\mathtt{J}}})_{z}e(x_{J_{z}})_{j}}|0\rangle_{\mathtt{T} _{z,j}}\right)\] \[\qquad\qquad\qquad\qquad\qquad+s_{b,x}|x\rangle_{\mathtt{I}}|1 \rangle_{\mathtt{T}}\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z }|}}\left(|e(x_{\overline{\mathtt{J}}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E} _{z,j}}\mathsf{Z}(\gamma_{z}(j))^{e(x_{\overline{\mathtt{J}}})_{z}e(x_{J_{z}} )_{j}}|1\rangle_{\mathtt{T}_{z,j}}\right)\] \[\mapsto|x\rangle_{\mathtt{I}}\mathsf{Z}(\gamma(x))\mathsf{HZ}( \delta(x))|b\rangle_{\mathtt{T}}\left(\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j \in\{0,1\}^{|J_{z}|}}|e(x_{\overline{\mathtt{J}}})_{z}e(x_{J_{z}})_{j}\rangle_ {\mathtt{E}_{z,j}}\right).\] 5. Apply a \(\mathsf{H}_{\to\mathtt{T}}\) gate to register \(\mathtt{T}\) followed by a \((1+m)\)-arity Fan-Out gate \(\mathsf{FO}^{(1+m)}_{\mathsf{T}\to\{\mathtt{T}_{z,j}\}_{z,j}}\) from register \(\mathtt{T}\) to registers \(\{\mathtt{T}_{z,j}\}_{z,j}\). For each \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), apply a \(\mathsf{C}_{\mathtt{E}_{z,j}}\mathsf{Z}(\beta_{z}(j))_{\to\mathtt{T}_{z,j}}\) gate controlled on register \(\mathtt{E}_{z,j}\) onto register \(\mathtt{T}_{z,j}\) followed by a \(\mathsf{Z}(\alpha_{z}(j))_{\to\mathtt{E}_{z,j}}\) gate applied onto register \(\mathtt{E}_{z,j}\). Finally, apply \(\mathsf{FO}^{(1+m)}_{\mathsf{T}\to\{\mathtt{T}_{z,j}\}_{z,j}}\) again. Similarly to the previous step, we get \[|x\rangle_{\mathtt{I}}\mathsf{Z}(\gamma(x))\mathsf{HZ}(\delta(x))|b \rangle_{\mathtt{T}}\left(\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^ {|J_{z}|}}|e(x_{\overline{J}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\right)\] \[\mapsto|x\rangle_{\mathtt{I}}f(x)|b\rangle_{\mathtt{T}}\left( \bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|e(x_{\overline{ J}})_{z}e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\right)\,.\] 6. Uncompute Steps 4, 3, 2, and 1. This leads to the desired state \(|x\rangle_{\mathtt{I}}f(x)|b\rangle_{\mathtt{T}}\). We now consider the \(\mathsf{GT}\)-gate-based circuit (see Figure 6), which is basically the same as the Fan-Out-based circuit, but replacing Steps 5a, 5b, and 5c with the following Step 5: 1. 
Apply the gate \[\left(\prod_{z\in\{0,1\}^{t}}\prod_{j\in\{0,1\}^{|J_{z}|}}\mathsf{ Z}(\alpha_{z}(j))_{\to\mathtt{E}_{z,j}}\right)\left(\prod_{z\in\{0,1\}^{t}} \prod_{j\in\{0,1\}^{|J_{z}|}}\mathsf{C}_{\mathtt{E}_{z,j}}\mathsf{-Z}(\beta_{ z}(j))_{\to\mathtt{T}}\right)\mathsf{H}_{\to\mathtt{T}}\\ \cdot\left(\prod_{z\in\{0,1\}^{t}}\prod_{j\in\{0,1\}^{|J_{z}|}} \mathsf{C}_{\mathtt{E}_{z,j}}\mathsf{-Z}(\gamma_{z}(j))_{\to\mathtt{T}}\right) \mathsf{H}_{\to\mathtt{T}}\left(\prod_{z\in\{0,1\}^{t}}\prod_{j\in\{0,1\}^{|J_ {z}|}}\mathsf{C}_{\mathtt{E}_{z,j}}\mathsf{-Z}(\delta_{z}(j))_{\to\mathtt{T}}\right)\] using 3 \(\mathsf{GT}\) gates (one for each \(\prod_{z\in\{0,1\}^{t}}\prod_{j\in\{0,1\}^{|J_{z}|}}\mathsf{C}_{\mathtt{E}_{z,j}}\mathsf{-Z}(\cdot)_{\to\mathtt{T}}\)). Since \(|e(x_{\overline{J}})|=|e(x_{J_{z}})|=1\), this leads to (omit the registers \(\overline{\mathtt{J}}_{z,j}\) and \(\mathtt{J}_{z,j}\) for simplicity) (see Figure 4) \[|x\rangle_{\mathtt{I}}f(x)|b\rangle_{\mathtt{T}}\left(\bigotimes_{z\in\{0,1\}^ {t}}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|e(x_{\overline{J}})_{z}e(x_{J_{z}})_{j} \rangle_{\mathtt{E}_{z,j}}\right).\] We now analyse the resources required for each step: * Step 1: the registers \(\overline{\mathtt{J}}_{z,j}\), for \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), use at most \(t2^{t+r}\) ancillae and copying the register \(\overline{\mathtt{J}}\) requires either \(t\) Fan-Out gates with arity at most \(1+2^{t+r}\) or \(1\)\(\mathsf{GT}\) gate with arity at most \(t(1+2^{t+r})\); * Step 2: the registers \(\mathtt{J}_{z,j}\), for \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), use at most \(r2^{t+r}\) ancillae and copying the register \(\mathtt{J}\) requires either \(|J|=n-t\) Fan-Out gates with arity at most \(1+\max_{i\in J}m_{i}\leq 1+2^{t+r}\) or \(1\)\(\mathsf{GT}\) (which can be absorbed by the one from Step 1) with arity at most \(n-t+\sum_{i\in J}m_{i}\leq n-t+r2^{t+r}\); * Step 4: the registers \(\mathtt{E}_{z,t}\), for \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), use at most \(2^{t+r}\) ancillae. The \(m\leq 2^{t+r}\)\(\mathsf{AND}^{(t+|J_{z}|)}_{\{\mathtt{J}_{z,j}\}\to\mathtt{E}_{z,j}}\) gates require either \(2(t+r)2^{t+r}\log{(t+r)}+O((t+r)2^{t+r})\) ancillae by using the construction based on Fan-Out gates (Fact 18) or \(2(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) ancillae by using the construction based on \(\mathsf{GT}\) gates (Fact 24). Naively, one would expect the \(m\leq 2^{t+r}\)\(\mathsf{AND}^{(t+|J_{z}|)}_{\{\mathtt{J}_{z,j}\}\to\mathtt{E}_{z,j}}\) gates to require either \(6(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) Fan-Out gates with arity at most \(2(t+r)\) (Fact 18) or \(4\)\(\mathsf{GT}\) gates with arity at most \((t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) (Fact 24), but we can postpone their inner uncomputation part until Step 6 and carry over all the required ancillae. This means that Step 4 requires only \(3(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) Fan-Out gates or \(2\)\(\mathsf{GT}\) gates; Figure 6: The circuit for an \(f\)-\(\mathsf{UCG}^{(n)}\) using \(\mathsf{GT}\) gates, where \(f\) is a \((J,r)\)-junta with \(|\overline{J}|=t\). We highlight the \(\mathsf{GT}\) gates inside the dashed boxes. For simplicity, we include targets from \(|x_{i}\rangle_{\mathsf{J}}\) onto all registers \(\mathsf{J}_{z,j}\), but in reality, \(x_{i}\) is copied onto the registers \(\{\mathsf{J}_{z,j}\}_{j}\) for all \(z\in\{0,1\}^{t}\) such that \(i\in J_{z}\), where \(J_{z}\) is the set of coordinated that \(f_{J|z}\) depends on. 
Moreover, we omit the indices in the parameters \(\alpha_{z}(j)\), \(\beta_{z}(j)\), \(\gamma_{z}(j)\), \(\delta_{z}(j)\). * Step 5: the Fan-Out-based circuit requires at most \(2^{t+r}\) ancillae for the registers \(\mathtt{T}_{z,j}\), where \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\), and 6 Fan-Out gates with arity at most \(1+2^{t+r}\). The \(\mathsf{GT}\)-gate-based circuit does not require any ancillae, and only 3 \(\mathsf{GT}\) gates with arity at most \(1+2^{t+r}\); * Step 6: uncomputing requires either \(n+3(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) Fan-Out gates or 3 \(\mathsf{GT}\) gates. In total, we require either \(2(t+r)2^{t+r}\log(t+r)+O((t+r)2^{t+r})\) ancillae and \(2n+6(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) Fan-Out gates with arity at most \(1+2^{t+r}\), or \(3(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) ancillae and 9 \(\mathsf{GT}\) gates with arity at most \(n+(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\). ### Constant-depth circuits for \(f\)-Fin The circuits from the previous section can be used to implement an \(f\)-FIN, since they are a special case of \(f\)-UCGs, as explained before Definition 14. Nonetheless, the circuits from the previous section can be simplified due to their simpler structure, i.e., the \(\mathsf{Z}\)-decomposition of an \(f\)-FIN is simply \(\mathsf{HZ}(f(x))\mathsf{H}=\mathsf{X}^{f(x)}\). In particular, the controlled gates \(\mathsf{H}_{\rightarrow\mathtt{T}}\mathsf{C}_{\mathtt{E}_{z,j}}\)-\(\mathsf{Z}(\gamma_{z}(j))_{\rightarrow\mathtt{T}}\mathsf{H}_{ \rightarrow\mathtt{T}}=\mathsf{C}_{\mathtt{E}_{z,j}}\)-\(\mathsf{X}_{\rightarrow\mathtt{T}}^{\gamma_{z}(j)}\), where \(\gamma_{z}:\{0,1\}^{|J_{z}|}\rightarrow\{0,1\}\), that arise from the \(\mathsf{Z}\)-decomposition can be replaced by a single \(\mathsf{PARITY}\) gate (recall that the \(\mathsf{PARITY}\) gate can be implemented by a single Fan-Out gate (Fact 17)). We show how this can be done in the next result. **Theorem 27** (One-hot-encoding implementation of \(f\)-FIN).: _Let \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) be a \((J,r)\)-junta for \(J\subseteq[n]\) with \(|\overline{J}|=t\) and \(r\in\mathbb{N}\). There is an \(O(1)\)-depth circuit for \(f\)-FIN that uses_ * _either_ \(2(t+r)2^{t+r}\log(t+r)+O((t+r)2^{t+r})\) _ancillae and_ \(2n+6(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) _Fan-Out gates with arity at most_ \(1+2^{t+r}\)_,_ * _or_ \(3(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\) _ancillae and_ \(6\mathsf{GT}\) _gates with arity at most_ \(n+(t+r)2^{t+r}+O(2^{t+r}\log(t+r))\)_._ Proof.: Construct the state \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\left(\bigotimes_{z\in\{0,1\}^{t }}\bigotimes_{j\in\{0,1\}^{|J_{z}|}}|x_{\overline{J}}\oplus\overline{z}\rangle _{\mathtt{T}_{z,j}}|x_{J_{z}}\oplus\overline{j}\rangle_{\mathtt{J}_{z,j}}|e(x _{\overline{J}})_{z}\cdot e(x_{J_{z}})_{j}\rangle_{\mathtt{E}_{z,j}}\right)\] by following the same steps as in Theorem 26. 
To perform the \(f\)-FIN gate, we must apply a \(\mathsf{C}_{\mathtt{E}_{z,j}}\)-\(\mathsf{X}_{\rightarrow\mathtt{T}}\) gate for all \(z\in\{0,1\}^{t}\) and \(j\in g_{z}^{-1}(1)\), where \(g_{z}:\{0,1\}^{|J_{z}|}\rightarrow\{0,1\}\) is such that \(f_{J|z}(x_{J})=g_{z}(x_{J_{z}})\), since this leads to (consider only register \(\mathtt{T}\)) \[|b\rangle_{\mathtt{T}}\mapsto\big{|}b\oplus\bigoplus_{z\in\{0,1\}^{t}} \bigoplus_{j\in g_{z}^{-1}(1)}e(x_{\overline{J}})_{z}\cdot e(x_{J_{z}})_{j} \big{\rangle}_{\mathtt{T}}=|b\oplus g_{x_{\overline{J}}}(x_{J_{z_{\overline{J} }}})\rangle_{\mathtt{T}}=|b\oplus f(x)\rangle_{\mathtt{T}},\] where we used (i) the definition of one-hot encoding, \(e(x_{\overline{J}})_{z}=1\) if and only if \(z=x_{\overline{J}}\), and \(e(x_{J_{z}})_{j}=1\) if and only if \(j=x_{J_{z}}\); (ii) the fact that \(\bigoplus_{j\in g_{z}^{-1}(1)}e(y)_{j}=g_{z}(y)\) for any \(z\in\{0,1\}^{t}\) and \(y\in\{0,1\}^{|J_{z}|}\); (iii) the identity \(g_{x_{\overline{J}}}(x_{J_{z_{\overline{J}}}})=f_{J|x_{\overline{J}}}(x_{J})=f (x)\). There are two methods to apply the \(\mathsf{C}_{\mathtt{E}_{z,j}}\)-\(\mathsf{X}_{\rightarrow\mathtt{T}}\) gates in parallel. The first method is via \[\prod_{z\in\{0,1\}^{t}}\prod_{j\in g_{z}^{-1}(1)}\mathsf{C}_{\mathtt{E}_{z,j}} \mathsf{X}_{\rightarrow\mathtt{T}}=\mathsf{PARITY}_{\{\mathtt{E}_{z,j}\}_{z\in\{ 0,1\}^{t},j\in g_{z}^{-1}(1)}\rightarrow\mathtt{T}},\] i.e., applying a \(\mathsf{X}\) onto \(|b\rangle_{\mathsf{T}}\) controlled on \(|e(x\mathcal{J})_{z}\cdot e(x_{J_{z}})_{j}\rangle_{\mathsf{E}_{z,j}}\) for all \(z\in\{0,1\}^{t}\) and \(j\in g_{z}^{-1}(1)\) is equivalent to applying a PARITY gate onto \(|b\rangle_{\mathsf{T}}\) with input \(\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{j\in g_{z}^{-1}(1)}|e(x\mathcal{J})_{ z}e(x_{J_{z}})_{j}\rangle_{\mathsf{E}_{z,j}}\). The PARITY gate costs \(1\) Fan-Out or GT gate with arity \(1+\sum_{z\in\{0,1\}^{t}}|g_{z}^{-1}(1)|\). The second method is to use the \(\sum_{z\in\{0,1\}^{t}}|g_{z}^{-1}(1)|\leq 2^{t+r}\) registers \(\mathtt{T}_{z,j}\) for \(z\in\{0,1\}^{t}\) and \(j\in g_{z}^{-1}(1)\) (note that we do not require all registers \(\mathtt{T}_{z,j}\) from Theorem 26) via \[\prod_{z\in\{0,1\}^{t}}\prod_{j\in g_{z}^{-1}(1)}\mathsf{C}_{\mathsf{E}_{z,j} }\mathsf{X}_{\to\mathsf{T}}=\mathsf{H}_{\to\mathsf{T}}\mathsf{FO}_{\mathsf{T }\to\{\mathtt{T}_{z,j}\}_{z,j}}\left(\prod_{z\in\{0,1\}^{t}}\prod_{j\in g_{z}^ {-1}(1)}\mathsf{C}_{\mathsf{E}_{z,j}}\mathsf{Z}_{\to\mathtt{T}_{z,j}}\right) \mathsf{FO}_{\mathsf{T}\to\{\mathtt{T}_{z,j}\}_{z,j}}\mathsf{H}_{\to\mathsf{T }}.\] In more details, we have (consider only registers \(\mathtt{T}\) and \(\mathtt{T}_{z,j}\)) \[|b\rangle_{\mathsf{T}} \mapsto\frac{1}{\sqrt{2}}|0\rangle_{\mathsf{T}}\bigotimes_{z\in\{ 0,1\}^{t}}\bigotimes_{j\in g_{z}^{-1}(1)}|0\rangle_{\mathsf{T}_{x,j}}+\frac{(- 1)^{b}}{\sqrt{2}}|1\rangle_{\mathsf{T}}\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_{ j\in g_{z}^{-1}(1)}|1\rangle_{\mathsf{T}_{z,j}}\] \[\mapsto\frac{1}{\sqrt{2}}|0\rangle_{\mathsf{T}}\bigotimes_{z\in\{ 0,1\}^{t}}\bigotimes_{j\in g_{z}^{-1}(1)}|0\rangle_{\mathsf{T}_{z,j}}+\frac{(- 1)^{b}}{\sqrt{2}}|1\rangle_{\mathsf{T}}\bigotimes_{z\in\{0,1\}^{t}}\bigotimes_ {j\in g_{z}^{-1}(1)}(-1)^{e(x\mathcal{J})_{z}e(x_{J_{z}})_{j}}|1\rangle_{ \mathsf{T}_{z,j}}\] \[\mapsto\frac{|0\rangle_{\mathsf{T}}+(-1)^{b+f(x)}|1\rangle_{ \mathsf{T}}}{\sqrt{2}}\] \[\mapsto|b\oplus f(x)\rangle_{\mathsf{T}}.\] The above requires \(2\) Fan-Out or GT gates with arity at most \(1+2^{t+r}\). 
We crucially remark that the GT gates can be absorbed by the ones from computing and uncomputing registers \(\mathtt{E}_{z,j}\). The rest of the circuit is identical to Theorem 26: uncompute registers \(\overline{\mathtt{J}}_{z,j}\), \(\mathtt{J}_{z,j}\), and \(\mathtt{E}_{z,j}\) for \(z\in\{0,1\}^{t}\) and \(j\in\{0,1\}^{|J_{z}|}\). The number of ancillae and Fan-Out gates is asymptotically the same as in Theorem 26. The number of GT gates is reduced from \(9\) to \(6\) by using the second method. ### Constant-depth circuits for quantum memory devices via one-hot encoding In this section, we apply our previous circuit constructions based on one-hot encoding to the case of QRAM and QRAG. As mentioned before, QRAM is simply an \(f\)-FIN with the Boolean function \(f:\{0,1\}^{n}\times\{0,1\}^{\log n}\to\{0,1\}\) defined by \(f(x,i)=x_{i}\). Furthermore, this Boolean function is a \((J,1)\)-junta with \(J=[n]\) and \(|\overline{J}|=\log n\). Indeed, by fixing the coordinates of \(i\in[n]\), \(f_{[n]i}(x)=x_{i}\) depends only on one input coordinate. Theorem 27 thus immediately applies to any QRAM by setting \(r=1\) and \(t=|\overline{J}|=\log n\). (Actually, there is no need to compute the one-hot encoding of register J since \(r=1\). This means that we only require registers \(\mathtt{E}_{z}\) for \(z\in\{0,1\}^{t}\). The number of ancillae is thus halved). For completeness we depict the circuit in Figure 7. **Theorem 28** (One-hot-encoding implementation of QRAM).: _For every \(n\in\mathbb{N}\) a power of \(2\), a_ QRAM _of memory size \(n\) can be implemented in \(O(1)\)-depth using_ * _either_ \(2n\log n\log\log n+O(n\log n)\) _ancillae and_ \(6n\log n+O(n\log\log n)\) _Fan-Out gates with arity_ \(\leq n+1\)_,_ * _or_ \(3n\log n+O(n\log\log n)\) _ancillae and_ \(6\)__GT _gates with arity_ \(\leq n\log n+O(n\log\log n)\)_._ Even though QRAG is not an \(f\)-FIN or even an \(f\)-UCG, it is possible to use the one-hot encoding framework from the previous circuit constructions to implement QRAG in constant depth. **Theorem 29** (One-hot-encoding implementation of QRAG).: _For every \(n\in\mathbb{N}\) a power of \(2\), a_ QRAG _of memory size \(n\) can be implemented in \(O(1)\)-depth using_ * _either_ \(2n\log n\log\log n+O(n\log n)\) _ancillae and_ \(6n\log n+O(n\log\log n)\) _Fan-Out gates with arity_ \(\leq n+1\)_,_ * _or_ \(3n\log n+O(n\log\log n)\) _ancillae and_ \(9\)__GT _gates with arity_ \(\leq n\log n+O(n\log\log n)\)_._ Proof.: Given the state \(|i\rangle_{\texttt{A}}|b\rangle_{\texttt{T}}|x_{0},\ldots,x_{n-1}\rangle_{ \texttt{M}}\), we shall compute the one-hot encoding \(e(i)\in\{0,1\}^{n}\) of the address \(i\in\{0,1\}^{\log n}\) given by \(e(i)_{j}=\bigwedge_{k\in[\log n]}(i_{k}\oplus\overline{j}_{k})\), where \(j\in[n]\) and \(j_{k}\) is its \(k\)-th bit in binary encoding. Since \(e(i)_{j}=1\) if and only if \(j=i\), the one-hot encoding is then used to swap the correct entry \(x_{i}\) from the memory M onto an \(n\)-qubit ancillary register B. The swapped entry in register B is then mapped onto the target register T using a PARITY gate. At this point, both registers M and T are in the desired state. The final step is uncomputing register B with an additional ancillary register C. Consider the following circuit (see Figure 8): 1. 
Attach an \(((n-1)\log n)\)-qubit ancillary register \(\bigotimes_{j=1}^{n-1}|0\rangle_{\texttt{A}_{j}}^{\otimes\log n}\) and copy \(n-1\) times the register \(|i\rangle_{\texttt{A}}\) using either \(\log n\) Fan-Out gates with arity \(n\) or \(1\)__GT _gate with arity \(n\log n\). 2. Attach an \(n\)-qubit ancillary register \(|0\rangle_{\texttt{B}}^{\otimes n}=\bigotimes_{j\in[n]}|0\rangle_{\texttt{B}_ {j}}\) and apply an \((n+1)\)-arity Fan-Out gate \(\textsf{FO}_{\texttt{T}\rightarrow\texttt{B}}^{(n+1)}\) from register T to register B to copy \(n\) times the register \(|b\rangle_{\texttt{T}}\). 3. For each \(j\in[n]\), apply the gate \(\bigotimes_{k\in[\log n]}\mathsf{X}^{\overline{J}_{k}}\) to \(|i\rangle_{\mathtt{A}_{j}}\) (define \(\mathtt{A}_{0}:=\mathtt{A}\)). This leads to \[\left(\bigotimes_{j\in[n]}|i\rangle_{\mathtt{A}_{j}}|b\rangle_{\mathtt{B}_{j} }|x_{j}\rangle_{\mathtt{M}_{j}}\right)|b\rangle_{\mathtt{T}}\mapsto\left( \bigotimes_{j\in[n]}|i\oplus\overline{j}\rangle_{\mathtt{A}_{j}}|b\rangle_{ \mathtt{B}_{j}}|x_{j}\rangle_{\mathtt{M}_{j}}\right)|b\rangle_{\mathtt{T}}.\] 4. Attach a new \(n\)-qubit ancillary register \(|0\rangle_{\mathtt{E}}^{\otimes n}=\bigotimes_{j\in[n]}|0\rangle_{\mathtt{E}_ {j}}\) and apply an \(\mathsf{AND}^{(\log n)}_{\mathtt{A}_{j}\to\mathtt{E}_{j}}\) gate from register \(\mathtt{A}_{j}\) onto register \(\mathtt{E}_{j}\) for all \(j\in[n]\) to obtain \[\left(\bigotimes_{j\in[n]}|i\oplus\overline{j}\rangle_{\mathtt{A}_{j}}|b \rangle_{\mathtt{B}_{j}}|x_{j}\rangle_{\mathtt{M}_{j}}|0\rangle_{\mathtt{E}_ {j}}\right)|b\rangle_{\mathtt{T}}\mapsto\left(\bigotimes_{j\in[n]}|i\oplus \overline{j}\rangle_{\mathtt{A}_{j}}|b\rangle_{\mathtt{B}_{j}}|x_{j}\rangle_{ \mathtt{M}_{j}}|e(i)\rangle_{\mathtt{E}_{j}}\right)|b\rangle_{\mathtt{T}},\] where \(e(i)\in\{0,1\}^{n}\) is the one-hot encoding of \(i\). 5. Apply \(n\)\(\mathtt{C}_{\mathtt{E}_{j}}\)-\(\mathsf{SWAP}_{\mathtt{B}_{j}\leftrightarrow\mathtt{M}_{j}}\) gates in parallel for \(j\in[n]\), i.e., swap registers \(\mathtt{B}_{j}\) and \(\mathtt{M}_{j}\) controlled on \(\mathtt{E}_{j}\). Since \(e(i)_{j}=1\) if and only if \(j=i\), we obtain (ignore the register \(\bigotimes_{j\in[n]}|i\oplus\overline{j}\rangle_{\mathtt{A}_{j}}\) for clarity) \[|e(i)\rangle_{\tt E}|b,\ldots,b,x_{i},b,\ldots,b\rangle_{\tt B}|x_{0},\ldots,x_{i- 1},b,x_{i+1},\ldots,x_{n-1}\rangle_{\tt M}|b\rangle_{\tt T}.\] 6. Apply an \((n+1)\)-arity Fan-Out gate \(\mathsf{FO}^{(n+1)}_{\tt T\to B}\) from register T to register B to get \[|e(i)\rangle_{\tt E}|0,\ldots,0,b\oplus x_{i},0,\ldots,0\rangle_{\tt B}|x_{0}, \ldots,x_{i-1},b,x_{i+1},\ldots,x_{n-1}\rangle_{\tt M}|b\rangle_{\tt T}.\] 7. Apply a \(\mathsf{PARITY}_{\tt B\to T}\) gate from register B onto register T to obtain \[|e(i)\rangle_{\tt E}|0,\ldots,0,b\oplus x_{i},0,\ldots,0\rangle_{\tt B}|x_{0}, \ldots,x_{i-1},b,x_{i+1},\ldots,x_{n-1}\rangle_{\tt M}|x_{i}\rangle_{\tt T}.\] 8. 
Apply \(n\) \(\mathsf{C}_{\{\mathtt{E}_{j},\mathtt{M}_{j}\}}\)-controlled gates in parallel for \(j\in[n]\), with the help of an additional \(n\)-qubit ancillary register \(\mathtt{C}\), in order to uncompute register \(\mathtt{B}\), as anticipated above. Finally, uncompute registers \(\mathtt{C}\), \(\mathtt{E}_{j}\), and \(\mathtt{A}_{j}\) for \(j\in[n]\). Altogether, this requires either \(2n\log n\log\log n+O(n\log n)\) ancillae and \(6n\log n+O(n\log\log n)\) Fan-Out gates with arity \(\leq n+1\), or \(3n\log n+O(n\log\log n)\) ancillae and \(9\) \(\mathsf{GT}\) gates with arity \(\leq n\log n+O(n\log\log n)\), as stated.
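The addressing logic behind Theorems 28 and 29 is classical and easy to check in isolation. The sketch below (ours, purely illustrative; the function names and the bit-ordering convention are assumptions, not part of the constructions) evaluates the one-hot encoding \(e(i)_{j}=\bigwedge_{k\in[\log n]}(i_{k}\oplus\overline{j}_{k})\) and uses it to select \(x_{i}\) from a classical memory, mirroring what the \(\mathtt{E}\)-controlled swaps and the final PARITY achieve coherently.

```python
# Classical sanity check of the one-hot-encoding addressing used by the
# QRAM/QRAG constructions.  Assumption: bit k of an address is its k-th
# least significant bit; the quantum circuits act on all j in parallel.

def bits(value: int, width: int) -> list[int]:
    """Binary expansion of `value` on `width` bits (least significant first)."""
    return [(value >> k) & 1 for k in range(width)]

def one_hot(i: int, n: int) -> list[int]:
    """e(i)_j = AND_k (i_k XOR NOT j_k), which is 1 exactly when j == i."""
    log_n = n.bit_length() - 1  # n is assumed to be a power of 2
    i_bits = bits(i, log_n)
    e = []
    for j in range(n):
        j_bits = bits(j, log_n)
        e.append(int(all(ik ^ (1 - jk) for ik, jk in zip(i_bits, j_bits))))
    return e

def qram_read(x: list[int], i: int) -> int:
    """Select x_i through the one-hot encoding, as the E-controlled gates do."""
    e = one_hot(i, len(x))
    out = 0
    # A parity over the selected entries reproduces x_i, since only e(i)_i = 1.
    for ej, xj in zip(e, x):
        out ^= ej & xj
    return out

if __name__ == "__main__":
    x = [1, 0, 1, 1, 0, 0, 1, 0]          # memory of size n = 8
    assert all(qram_read(x, i) == x[i] for i in range(len(x)))
    print(one_hot(5, 8))                   # -> [0, 0, 0, 0, 0, 1, 0, 0]
```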
## 6 Constant-depth circuits based on Boolean analysis

In this section, we explore the Boolean analysis connection between constant-depth gates and Fan-Outs made by Takahashi and Tani [16] and propose several constructions to \(f\)-\(\mathsf{UCG}\)s. Given \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), consider its \(\mathsf{Z}\)-decomposition \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\). Recall that \(\operatorname{supp}(f):=\operatorname{supp}(\alpha)\cup\operatorname{supp}(\beta)\cup\operatorname{supp}(\gamma)\cup\operatorname{supp}(\delta)\) and \(\deg(f):=\max\{\deg(\alpha),\deg(\beta),\deg(\gamma),\deg(\delta)\}\). Similar definitions apply to \(\operatorname{supp}^{>k}(f)\), \(\operatorname{supp}^{\leq k}(f)\), \(\operatorname{supp}^{=k}(f)\), \(\operatorname{supp}_{\{0,1\}}(f)\), and \(\operatorname{supp}_{\{0,1\}}^{>k}(f)\).

### Constant-depth circuits for \(f\)-UcGs

**Theorem 30** (Real implementation of \(f\)-Ucg).: _Given \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), there is an \(O(1)\)-depth circuit for \(f\)-\(\mathsf{UCG}\) that uses_ * _either_ \(\sum_{S\in\operatorname{supp}(f)}|S|+2|\operatorname{supp}^{>1}(f)|\) _ancillae and_ \(2|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+2|\operatorname{supp}^{>1}(f)|+6\) _Fan-Outs with arity at most_ \(1+\max\{|\operatorname{supp}^{>0}(f)|,\deg(f)\}\)_,_ * _or_ \(|\operatorname{supp}^{>1}(f)|\) _ancillae and_ \(5\mathsf{GT}\) _gates with arity at most_ \(2\sum_{S\in\operatorname{supp}(f)}|S|\)_._ Proof.: Consider the initial state \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\) for \(x\in\{0,1\}^{n}\) and \(b\in\{0,1\}\). We wish to implement \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}f(x)|b\rangle_{\mathtt{T}}\). Let \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\) be the \(\mathsf{Z}\)-decomposition of \(f\). Let \(\alpha(x)=\sum_{S\subseteq[n]}\widehat{\alpha}(S)\chi_{S}(x)\) be the Fourier expansion of \(\alpha\), and similarly for \(\beta\), \(\gamma\), and \(\delta\). For ease of notation, write \(m:=|\operatorname{supp}^{>0}(f)|\). Let also \(m_{i}:=|\{S\in\operatorname{supp}^{>1}(f):i\in S\}|\) be the number of sets of size greater than \(1\) that contain the coordinate \(i\in[n]\). Consider first the Fan-Out-based circuit (see Figure 9): 1. Attach an ancillary register \(\bigotimes_{S\in\operatorname{supp}^{>1}(f)}|0\rangle_{\mathtt{R}_{S}}^{\otimes|S|}\). For each \(i\in[n]\) in parallel, copy \(m_{i}\) number of times the qubit \(|x_{i}\rangle_{\mathtt{I}}\) using a \((1+m_{i})\)-arity Fan-Out gate. This leads to \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{supp}^{>1}(f)}|x_{S}\rangle_{\mathtt{R}_{S}}.\] 2.
Attach an ancillary register \(|0\rangle_{\mathtt{P}}^{\otimes|\operatorname{supp}^{>1}(f)|}=\bigotimes_{S \in\operatorname{supp}^{>1}(f)}|0\rangle_{\mathtt{P}_{S}}\). For each \(S\in\operatorname{supp}^{>1}(f)\) in parallel, apply a \(\mathsf{PARITY}_{\mathtt{R}_{S}\to\mathtt{P}_{S}}^{(|S|)}\) gate using a \((1+|S|)\)-arity Fan-Out gate. We obtain the state \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{ supp}^{>1}(f)}|x_{S}\rangle_{\mathtt{R}_{S}}\mapsto|x\rangle_{\mathtt{I}}|b \rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{supp}^{>1}(f)}|x_{S}\rangle_ {\mathtt{R}_{S}}\big{|}\bigoplus_{i\in S}x_{i}\rangle_{\mathtt{P}_{S}}.\] 3. Attach an ancillary register \(|0\rangle_{\mathtt{T}^{\prime}}^{\otimes(m-1)}\) and apply an \(m\)-arity Fan-Out gate \(\mathsf{FO}_{\mathtt{T}\to\mathtt{T}^{\prime}}^{(m)}\) from register \(\mathtt{T}\) to register \(\mathtt{T}^{\prime}\). Apply a \(\mathsf{Z}(\delta(0^{n}))_{\to\mathtt{T}}\) gate onto register \(\mathtt{T}\). Notice that \(\delta(0^{n})=\sum_{S\subseteq[n]}\widehat{\delta}(S)\). Then, for each \(S\in\operatorname{supp}^{>0}(\delta)\) in parallel, apply a \(\mathsf{Z}(-2\widehat{\delta}(S))\) gate controlled on register \(\mathtt{P}_{S}\) onto the \(S\)-th qubit in register \(\mathtt{T}^{\prime}\) (if \(|S|=1\), the gate is controlled on \(|x_{S}\rangle_{\mathtt{I}}\)). Finally, apply \(\mathsf{FO}^{(m)}_{\mathtt{T}\rightarrow\mathtt{T}^{\prime}}\) again. This chain of operations leads to (omit registers \(\mathtt{R}_{S}\) and \(\mathtt{P}_{S}\) for simplicity) \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{ \mathtt{I}}|b\rangle_{\mathtt{T},\mathtt{T}^{\prime}}^{\otimes m} \mapsto|x\rangle_{\mathtt{I}}\mathsf{Z}\left(\sum_{S\subseteq[n] }\widehat{\delta}(S)\left(1-2\bigoplus_{i\in S}x_{i}\right)\right)|b\rangle_{ \mathtt{T},\mathtt{T}^{\prime}}^{\otimes m}\] \[=|x\rangle_{\mathtt{I}}\mathsf{Z}\left(\sum_{S\subseteq[n]} \widehat{\delta}(S)\chi_{S}(x)\right)|b\rangle_{\mathtt{T},\mathtt{T}^{\prime }}^{\otimes m}\mapsto|x\rangle_{\mathtt{I}}\mathsf{Z}(\delta(x))|b\rangle_{ \mathtt{T}}.\] 2. Apply a \(\mathsf{H}_{\rightarrow\mathtt{T}}\) gate onto register \(\mathtt{T}\) followed by an \(m\)-arity Fan-Out gate \(\mathsf{FO}^{(m)}_{\mathtt{T}\rightarrow\mathtt{T}^{\prime}}\) from register \(\mathtt{T}\) to register \(\mathtt{T}^{\prime}\). Apply a \(\mathsf{Z}(\gamma(0^{n}))_{\rightarrow\mathtt{T}}\) gate onto register \(\mathtt{T}\). Then, for each \(S\in\mathrm{supp}^{>0}(\gamma)\) in parallel, apply a \(\mathsf{Z}(-2\widehat{\gamma}(S))\) gate controlled on register \(\mathtt{P}_{S}\) to the \(S\)-th qubit in register \(\mathtt{T}^{\prime}\) (if \(|S|=1\), the gate is controlled on \(|x_{S}\rangle_{\mathtt{I}}\)). Finally, apply \(\mathsf{FO}^{(m)}_{\mathtt{T}\rightarrow\mathtt{T}^{\prime}}\) again. For simplicity, write \(\mathsf{FO}^{(m)}_{\mathtt{T}\rightarrow\mathtt{T}^{\prime}}\) as \(\mathsf{FO}^{(m)}_{\mathtt{T}\rightarrow\mathtt{T}^{\prime}}\). Figure 9: The circuit for an \(f\)-\(\mathsf{UCG}^{(n)}\) using Fan-Out gates. Here \(m:=|\,\mathrm{supp}^{>0}(f)|\). We highlight the \(\mathsf{PARITY}\) operations inside the dashed boxes. For simplicity, we write \(\widehat{\alpha}(S_{j})\) as \(\widehat{\alpha}_{j}\) (and similarly for \(\beta\), \(\gamma\), \(\delta\)). Moreover, we depict \(S_{1},\ldots,S_{m}\in\mathrm{supp}^{>0}(f)\), but in reality, there is no need to compute the parities of sets with size \(1\) (hence why the register \(\mathtt{P}\) is shown with size \(m\)). 
Moreover, we include targets onto all registers \(\mathtt{R}_{S}\) in the Fan-Out gates copying \(x_{0},\ldots,x_{n-1}\), but in reality \(x_{i}\) is copied only onto the registers such that \(S\ni i\). \(\mathsf{HZ}(\delta(x))|b\rangle_{\mathsf{T}}=r_{b,x}|0\rangle_{\mathsf{T}}+s_{b,x} |1\rangle_{\mathsf{T}}\). This chain of operations leads to \[|x\rangle_{\mathsf{I}}\mathsf{HZ}(\delta(x))|b\rangle_{\mathsf{T}} \mapsto|x\rangle_{\mathsf{I}}\big{(}r_{b,x}|0\rangle_{\mathsf{T}, \mathsf{T}^{\prime}}^{\otimes m}+s_{b,x}|1\rangle_{\mathsf{T},\mathsf{T}^{ \prime}}^{\otimes m}\big{)}\] \[\mapsto|x\rangle_{\mathsf{I}}\mathsf{Z}\left(\sum_{S\subseteq[n]} \widehat{\gamma}(S)\left(1-2\bigoplus_{i\in S}x_{i}\right)\right)\big{(}r_{b, x}|0\rangle_{\mathsf{T},\mathsf{T}^{\prime}}^{\otimes m}+s_{b,x}|1\rangle_{ \mathsf{T},\mathsf{T}^{\prime}}^{\otimes m}\big{)}\] \[\mapsto|x\rangle_{\mathsf{I}}\mathsf{Z}\left(\sum_{S\subseteq[n]} \widehat{\gamma}(S)\chi_{S}(x)\right)\mathsf{HZ}(\delta(x))|b\rangle_{ \mathsf{T}}\] \[=|x\rangle_{\mathsf{I}}\mathsf{Z}(\gamma(x))\mathsf{HZ}(\delta(x ))|b\rangle_{\mathsf{T}}.\] 2. Apply a \(\mathsf{H}_{\to\mathsf{T}}\) gate onto register \(\mathsf{T}\) followed by an \(m\)-arity Fan-Out gate \(\mathsf{FO}^{(m)}_{\mathsf{T}\to\mathsf{T}^{\prime}}\) from register \(\mathsf{T}\) to register \(\mathsf{T}^{\prime}\). Apply a \(\mathsf{Z}(\beta(0^{n}))_{\to\mathsf{T}}\) gate onto register \(\mathsf{T}\). Then, for each \(S\in\mathrm{supp}^{>0}(\beta)\) in parallel, apply a \(\mathsf{Z}(-2\widehat{\beta}(S))\) gate controlled on register \(\mathsf{P}_{S}\) to the \(S\)-th qubit in register \(\mathsf{T}^{\prime}\) (if \(|S|=1\), the gate is controlled on \(|x_{S}\rangle_{\mathsf{I}}\)). Finally, apply \(\mathsf{FO}^{(m)}_{\mathsf{T}\to\mathsf{T}^{\prime}}\) again. Similarly to the previous step, this chain of operations leads to \[|x\rangle_{\mathsf{I}}\mathsf{HZ}(\gamma(x))\mathsf{HZ}(\delta(x))|b\rangle_{ \mathsf{T}}\mapsto|x\rangle_{\mathsf{I}}\mathsf{Z}(\beta(x))\mathsf{HZ}( \gamma(x))\mathsf{HZ}(\delta(x))|b\rangle_{\mathsf{T}}.\] 3. Apply an overall \(e^{i\pi\alpha(0^{n})}\) phase. For each \(S\in\mathrm{supp}^{>0}(\alpha)\) in parallel, apply a \(\mathsf{Z}(-2\widehat{\alpha}(S))\) gate onto register \(\mathsf{P}_{S}\) (if \(|S|=1\), apply the gate onto \(|x_{S}\rangle_{\mathsf{I}}\)). This leads to \[|x\rangle_{\mathsf{I}}e^{i\pi\alpha(x)}\mathsf{Z}(\beta(x))\mathsf{HZ}(\gamma( x))\mathsf{HZ}(\delta(x))|b\rangle_{\mathsf{T}}=|x\rangle_{\mathsf{I}}f(x)|b \rangle_{\mathsf{T}}.\] 4. Uncompute Step 1. We now consider the \(\mathsf{GT}\)-gate-based circuit (see Figure 10), which is basically the same as the Fan-Out-based circuit, the main difference being that it is no longer necessary to copy the register \(|x\rangle_{\mathsf{I}}\) several times into \(\bigotimes_{S\in\mathrm{supp}^{>1}(f)}\bigotimes_{i\in S}|x_{i}\rangle_{ \mathsf{R}_{S}}\) as an intermediate step in order to compute the terms \(\bigoplus_{i\in S}x_{i}\) for all \(S\in\mathrm{supp}^{>1}(f)\). A single \(\mathsf{GT}\) gate can compute all the parity terms in parallel according to Claim 22. In the following, write register \(\mathsf{I}\) as \(|x\rangle_{\mathsf{I}}=\bigotimes_{i\in[n]}|x\rangle_{\mathsf{I}_{i}}\). 1. Apply Hadamard gates to an ancillary register \(|0\rangle_{\mathsf{P}}^{\otimes|\mathrm{supp}^{>1}(f)|}=\bigotimes_{S\in \mathrm{supp}^{>1}(f)}|0\rangle_{\mathsf{P}_{S}}\). 
Then, using one \(\mathsf{GT}\) gate with arity \(|\bigcup_{S\in\mathrm{supp}^{>1}(f)}S|+\sum_{S\in\mathrm{supp}^{>1}(f)}|S|\), apply, for each \(i\in[n]\), a \(\mathsf{Z}\) gate controlled on \(|x_{i}\rangle_{\mathsf{I}_{i}}\) to the registers \(\mathsf{P}_{S}\) indexed by the sets \(S\in\mathrm{supp}^{>1}(f)\) that contain \(i\). Finally, apply another layer of Hadamard gates to the ancillary register \(\mathsf{P}\). We obtain \[|x\rangle_{\mathsf{I}}|b\rangle_{\mathsf{T}}\mapsto|x\rangle_{\mathsf{I}}|b \rangle_{\mathsf{T}}\bigotimes_{S\in\mathrm{supp}^{>1}(f)}\big{|}\bigoplus_{i \in S}x_{i}\rangle_{\mathsf{P}_{S}}.\] 2. Apply the gate (write \(\mathsf{P}_{S}:=\mathsf{I}_{S}\) if \(|S|=1\)) \[\left(e^{i\pi\alpha(0^{n})}\prod_{S\in\mathrm{supp}^{>0}( \alpha)}\mathsf{Z}(\widehat{\alpha}(-2S))_{\to\mathsf{P}_{S}}\right)\left( \mathsf{Z}(\beta(0^{n}))_{\to\mathsf{T}}\prod_{S\in\mathrm{supp}^{>0}(\beta)} \mathsf{C}_{\mathsf{P}_{S}}\mathsf{-Z}(\widehat{\beta}(-2S))_{\to\mathsf{T}} \right)\mathsf{H}_{\to\mathsf{T}}\] \[\cdot\left(\mathsf{Z}(\gamma(0^{n}))_{\to\mathsf{T}}\prod_{S\in \mathrm{supp}^{>0}(\gamma)}\mathsf{C}_{\mathsf{P}_{S}}\mathsf{-Z}(\widehat{ \gamma}(-2S))_{\to\mathsf{T}}\right)\mathsf{H}_{\to\mathsf{T}}\left(\mathsf{Z}( \delta(0^{n}))_{\to\mathsf{T}}\prod_{S\in\mathrm{supp}^{>0}(\delta)}\mathsf{C}_{ \mathsf{P}_{S}}\mathsf{-Z}(\widehat{\delta}(-2S))_{\to\mathsf{T}}\right)\] using 3 \(\mathsf{GT}\) gates (one for each \(\prod_{S\in\mathrm{supp}^{>0}(\cdot)}\mathsf{C}_{\mathsf{P}_{S}}\mathsf{-Z}(\cdot) _{\to\mathsf{T}}\)). 3. Uncompute Step 1. We now analyse the resources for each step: * Step 1: in the Fan-Out-based construction we need to copy each \(x_{i}\), \(i\in[n]\), for every3\(S\in\operatorname{supp}^{>1}(f)\) such that \(i\in S\). Registers \(\mathtt{R}_{S}\) thus require \(\sum_{i=0}^{n-1}m_{i}=\sum_{S\in\operatorname{supp}^{>1}(f)}|S|\) ancillae. Such copying can be done with \(|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|\) Fan-Out gates with arity at most \(1+\max_{i\in[n]}m_{i}\). Moreover, all the parity terms \(\bigoplus_{i\in S}x_{i}\) in registers \(\mathtt{P}_{S}\) for \(S\in\operatorname{supp}^{>1}(f)\) require \(|\operatorname{supp}^{>1}(f)|\) ancillae and can be computed using either \(|\operatorname{supp}^{>1}(f)|\) Fan-Out gates with arity at most \(1+\deg(f)\) or \(1\)\(\mathsf{GT}\) gate with arity \(|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+\sum_{S\in\operatorname{supp}^{> 1}(f)}|S|\); Footnote 3: We do not need to copy \(|x_{i}\rangle\) for \(S\ni i\) if \(|S|=1\) since the state \(|\bigoplus_{j\in S}x_{j}\rangle\) already equals \(|x_{i}\rangle\). * Step 2: constructing \(f(x)\) requires either \(m-1\) ancillae and \(6\) Fan-Out gates with arity \(m\) or no ancillae and \(3\)\(\mathsf{GT}\) gates with arity \(m\); * Step 3: either \(|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+|\operatorname{supp}^{>1}(f)|\) Fan-Out gates or \(1\)\(\mathsf{GT}\) gate. In total, we require \(\sum_{S\in\operatorname{supp}^{>1}(f)}|S|+|\operatorname{supp}^{>0}(f)|+| \operatorname{supp}^{>1}(f)|=\sum_{S\in\operatorname{supp}(f)}|S|+2| \operatorname{supp}^{>1}(f)|\) ancillae and \(2|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+2|\operatorname{supp}^{>1}(f)|+6\) Fan-Outs with arity \(\leq 1+\max\{m,\deg(f)\}\), or \(|\operatorname{supp}^{>1}(f)|\) ancillae and \(5\)\(\mathsf{GT}\) gates with arity \(\leq\max\{m,|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+\sum_{S\in \operatorname{supp}^{>1}(f)}|S|\}\leq 2\sum_{S\in\operatorname{supp}(f)}|S|\). 
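The resource counts in Theorem 30 are governed entirely by the Fourier support and degree of the \(\mathsf{Z}\)-decomposition. As a purely classical aid (ours; exponential-time and only meant for small \(n\)), the sketch below computes the coefficients \(\widehat{\alpha}(S)\) of the expansion \(\alpha(x)=\sum_{S\subseteq[n]}\widehat{\alpha}(S)\chi_{S}(x)\), with \(\chi_{S}(x)=(-1)^{\sum_{i\in S}x_{i}}\), and reads off the quantities \(|\operatorname{supp}(\alpha)|\), \(|\operatorname{supp}^{>1}(\alpha)|\), and \(\deg(\alpha)\) that enter the bounds.

```python
# Brute-force Fourier expansion over {0,1}^n with the parity characters
# chi_S(x) = (-1)^{sum_{i in S} x_i}.  Illustrative only; exponential in n.
from itertools import combinations, product

def fourier_coefficients(alpha, n):
    """Return {S: alpha_hat(S)} with alpha_hat(S) = 2^{-n} sum_x alpha(x) chi_S(x)."""
    coeffs = {}
    for size in range(n + 1):
        for S in combinations(range(n), size):
            total = 0.0
            for x in product((0, 1), repeat=n):
                chi = (-1) ** sum(x[i] for i in S)
                total += alpha(x) * chi
            coeffs[S] = total / 2 ** n
    return coeffs

def support_and_degree(coeffs, tol=1e-12):
    supp = [S for S, c in coeffs.items() if abs(c) > tol]
    supp_gt1 = [S for S in supp if len(S) > 1]
    degree = max((len(S) for S in supp), default=0)
    return supp, supp_gt1, degree

if __name__ == "__main__":
    n = 3
    alpha = lambda x: 0.5 if (x[0] & x[1]) else -0.25 * x[2]   # toy phase function
    coeffs = fourier_coefficients(alpha, n)
    supp, supp_gt1, deg = support_and_degree(coeffs)
    # |supp|, |supp^{>1}| and deg are the quantities entering Theorem 30's counts.
    print(len(supp), len(supp_gt1), deg)
```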
Instead of exactly simulating an \(f\)-\(\mathsf{UCG}^{(n)}\) as in the previous result, it is possible to approximate it by a simpler \(\mathsf{UCG}^{(n)}\). For such, we can employ real polynomials that equal \(\alpha,\beta,\gamma,\delta\) on all inputs up to a small additive error. We shall use the next technical result. **Lemma 31** ([1, Theorem 5.12]).: _Let \(f:\{0,1\}^{n}\to\mathbb{R}\) be nonzero, \(\epsilon>0\), and \(s\geq 4n\|f\|_{1}^{2}/\epsilon^{2}\) an integer. Then there is a multilinear polynomial \(p:\{0,1\}^{n}\to\mathbb{R}\) of degree and sparsity at most \(\deg(f)\) and \(s\), respectively, such that \(\max_{x\in\{0,1\}^{n}}|f(x)-p(x)|<\epsilon\)._ **Theorem 32** (Approximate real implementation of \(f\)-\(\mathsf{UCG}\)).: _Let \(\epsilon>0\). Given \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), let \(\alpha,\beta,\gamma,\delta:\{0,1\}^{n}\to[-1,1]\) be its \(\mathsf{Z}\)-decomposition. For \(\nu\in\{\alpha,\beta,\gamma,\delta\}\), define Figure 10: The circuit for an \(f\)-\(\mathsf{UCG}^{(n)}\) using \(\mathsf{GT}\) gates. Here \(m:=|\operatorname{supp}^{>0}(f)|\). We highlight the \(\mathsf{GT}\) gates inside the dashed boxes. For simplicity, we write \(\widehat{\alpha}(S_{j})\) as \(\widehat{\alpha}_{j}\) (and similarly for \(\beta\), \(\gamma\), \(\delta\)). Moreover, we depict \(S_{1},\ldots,S_{m}\in\operatorname{supp}^{>0}(f)\), but in reality there is no need to compute the parities of sets with size \(1\) (hence why the register \(\mathtt{P}\) is shown with size \(m\)). Moreover, we apply \(\mathsf{Z}\) gates onto all registers \(\mathtt{P}_{S}\) controlled on \(x_{0},\ldots,x_{n-1}\) being in the \(|1\rangle\) state, but in reality the \(\mathsf{Z}\) gates controlled on \(x_{i}\) are only applied onto the registers indexed by sets \(S\) such that \(S\ni i\). \(s_{\nu}=\min\{\operatorname{supp}^{>1}(\nu),\left\lceil 64\pi^{2}n\|\nu^{>1}\|_{1}^{2}/ \epsilon^{2}\right\rceil\}\) and \(s=s_{\alpha}+s_{\beta}+s_{\gamma}+s_{\delta}\). There is an \(O(1)\)-depth circuit that implements an \(f^{\prime}\)-UCG with \(f^{\prime}:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) such that \(\max_{x\in\{0,1\}^{n}}\|f(x)-f^{\prime}(x)\|\leq\epsilon\) and uses_ * _either_ \(s(\deg(f)+2)+|\operatorname{supp}^{=1}(f)|\) _ancillae and_ \(2s+2|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+6\) _Fan-Out gates with arity at most_ \(1+\max\{s+|\operatorname{supp}^{=1}(f)|,\deg(f)\}\)_,_ * _or_ \(s\) _ancillae and_ \(5\)__GT _gates with arity at most_ \(2s\deg(f)+|\operatorname{supp}^{=1}(f)|\)_._ Proof.: Consider the initial state \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\) for \(x\in\{0,1\}^{n}\) and \(b\in\{0,1\}\). We wish to perform the operation \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}f^{ \prime}(x)|b\rangle_{\mathtt{T}}\) for some \(f^{\prime}:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) such that \(\max_{x\in\{0,1\}^{n}}\|f(x)-f^{\prime}(x)\|\leq\epsilon\). Consider, for \(\nu\in\{\alpha,\beta,\gamma,\delta\}\), a multilinear polynomial \(p_{\nu}:\{0,1\}^{n}\to\mathbb{R}\) of degree and sparsity at most \(\deg(f)\) and \(s_{\nu}\), respectively, such that \(\max_{x\in\{0,1\}^{n}}|p_{\nu}(x)-\nu^{>1}(x)|<\sqrt{2\epsilon}/\pi\) according to Lemma 31. Our construction is the same from Theorem 30, the only difference is that we now use \(p_{\alpha}\) in place of \(\alpha^{>1}\), so e.g. \(\alpha(x)\) is replaced with \(\alpha^{\leq 1}(x)+p_{\alpha}(x)\) (and similarly for \(\beta\), \(\gamma\), and \(\delta\)). 
This means that \(\operatorname{supp}^{>1}(f)\) is replaced with \(\operatorname{supp}(p_{\alpha})\cup\operatorname{supp}(p_{\beta})\cup \operatorname{supp}(p_{\gamma})\cup\operatorname{supp}(p_{\delta})\). The number of resources follows from Theorem 30 by replacing \[|\operatorname{supp}^{>1}(f)| \to|\operatorname{supp}(p_{\alpha})\cup\operatorname{supp}(p_{ \beta})\cup\operatorname{supp}(p_{\gamma})\cup\operatorname{supp}(p_{\delta})|,\] \[|\operatorname{supp}^{>0}(f)| \to|\operatorname{supp}^{=1}(f)|+|\operatorname{supp}(p_{\alpha}) \cup\operatorname{supp}(p_{\beta})\cup\operatorname{supp}(p_{\gamma})\cup \operatorname{supp}(p_{\delta})|,\] \[\Big{|}\bigcup_{S\in\operatorname{supp}^{>1}(f)}S\Big{|} \to\Big{|}\bigcup_{S\in\operatorname{supp}(p_{\alpha})\cup\operatorname{ supp}(p_{\beta})\cup\operatorname{supp}(p_{\gamma})\cup\operatorname{supp}(p_{\delta})}S \Big{|},\] \[\sum_{S\in\operatorname{supp}(f)}|S| \to|\operatorname{supp}^{=1}(f)|+\sum_{S\in\operatorname{supp}(p_{ \alpha})\cup\operatorname{supp}(p_{\beta})\cup\operatorname{supp}(p_{\gamma}) \cup\operatorname{supp}(p_{\delta})}|S|,\] and bounding \[|\operatorname{supp}(p_{\alpha})\cup\operatorname{supp}(p_{\beta })\cup\operatorname{supp}(p_{\gamma})\cup\operatorname{supp}(p_{\delta})| \leq s,\] \[\Big{|}\bigcup_{S\in\operatorname{supp}(p_{\alpha})\cup \operatorname{supp}(p_{\beta})\cup\operatorname{supp}(p_{\gamma})\cup \operatorname{supp}(p_{\delta})}S\Big{|} \leq\Big{|}\bigcup_{S\in\operatorname{supp}^{>1}(f)}S\Big{|},\] \[\sum_{S\in\operatorname{supp}(p_{\alpha})\cup\operatorname{supp}( p_{\beta})\cup\operatorname{supp}(p_{\gamma})\cup\operatorname{supp}(p_{\delta})}|S |\leq s\deg(f).\] To show correctness of the circuit, define \(\overline{p}_{\alpha}:=\alpha^{\leq 1}+p_{\alpha}\) for simplicity (and similarly for \(\overline{p}_{\beta}\), \(\overline{p}_{\gamma}\), and \(\overline{p}_{\delta}\)). 
Then (omit the \(x\) dependence for clarity) \[\|e^{i\pi\alpha}\mathsf{Z}(\beta)\mathsf{HZ}(\gamma)\mathsf{HZ}( \delta)-e^{i\pi\overline{p}_{\alpha}}\mathsf{Z}(\overline{p}_{\beta})\mathsf{HZ }(\overline{p}_{\gamma})\mathsf{HZ}(\overline{p}_{\delta})\|\] \[\leq\|(e^{i\pi\alpha}-e^{i\pi\overline{p}_{\alpha}})\mathsf{Z}( \beta)\mathsf{HZ}(\gamma)\mathsf{HZ}(\delta)\|+\|e^{i\pi\overline{p}_{\alpha}} \mathsf{Z}(\beta)\mathsf{HZ}(\gamma)\mathsf{HZ}(\delta)-e^{i\pi\overline{p}_{ \alpha}}\mathsf{Z}(\overline{p}_{\beta})\mathsf{HZ}(\overline{p}_{\gamma}) \mathsf{HZ}(\overline{p}_{\delta})\|\] \[=2|\sin(\pi(\alpha-\overline{p}_{\alpha})/2)|+\|\mathsf{Z}(\beta) \mathsf{HZ}(\gamma)\mathsf{HZ}(\delta)-\mathsf{Z}(\overline{p}_{\beta})\mathsf{ HZ}(\overline{p}_{\gamma})\mathsf{HZ}(\overline{p}_{\delta})\|\] \[\leq 2|\sin(\pi(\alpha-\overline{p}_{\alpha})/2)|+\|\mathsf{Z}(\beta) -\mathsf{Z}(\overline{p}_{\beta})\mathsf{HZ}(\gamma)\mathsf{HZ}(\delta)\|+\| \mathsf{Z}(\overline{p}_{\beta})\mathsf{H}\big{(}\mathsf{Z}(\gamma)\mathsf{HZ}( \delta)-\mathsf{Z}(\overline{p}_{\gamma})\mathsf{HZ}(\overline{p}_{\delta}) \big{)}\|\] \[=2|\sin(\pi(\alpha-\overline{p}_{\alpha})/2)|+2|\sin(\pi(\beta- \overline{p}_{\beta})/2)|+\|\mathsf{Z}(\gamma)\mathsf{HZ}(\delta)-\mathsf{Z}( \overline{p}_{\gamma})\mathsf{HZ}(\overline{p}_{\delta})\|\] \[\leq 2|\sin(\pi(\alpha-\overline{p}_{\alpha})/2)|+2|\sin(\pi(\beta- \overline{p}_{\beta})/2)|+\|\mathsf{Z}(\gamma)-\mathsf{Z}(\overline{p}_{\gamma}) \mathsf{HZ}(\delta)\|+\|\mathsf{Z}(\overline{p}_{\gamma})\mathsf{H}\big{(}\mathsf{ Z}(\delta)-\mathsf{Z}(\overline{p}_{\delta})\big{)}\|\] \[=2|\sin(\pi(\alpha-\overline{p}_{\alpha})/2)|+2|\sin(\pi(\beta- \overline{p}_{\beta})/2)|+2|\sin(\pi(\gamma-\overline{p}_{\gamma})/2)|+2|\sin( \pi(\delta-\overline{p}_{\delta})/2)|\] \[\leq 8|\sin(\epsilon/8)|\leq\epsilon.\qed\] Our final construction uses the real-polynomial \(\{0,1\}\)-representation based on AND functions which can be computed using Facts 18 or 24. **Theorem 33** (Real \(\{0,1\}\)-implementation of \(f\)-Ucg).: _Given \(f:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\), there is an \(O(1)\)-depth circuit for \(f\)-Ucg that uses_ * _either_ \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>0}(f)}\left(2|S|\log|S|+O(|S|)\right)\) _ancillae and_ \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(8|S|+O(\log|S|)\right)\) _Fan-Outs with arity_ \(\leq\max\{1+|\operatorname{supp}_{\{0,1\}}^{>0}(f)|,2\deg(f)\}\)_,_ * _or_ \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(3|S|+O(\log|S|)\right)\) _ancillae and_ \(9\)__GT _gates with arity_ \(\leq 2\sum_{S\in\operatorname{supp}_{\{0,1\}}(f)}|S|\)_._ Proof.: Consider the initial state \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\) for \(x\in\{0,1\}^{n}\) and \(b\in\{0,1\}\). We wish to implement \(|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}f(x) |b\rangle_{\mathtt{T}}\). Let \(\alpha(x)=\sum_{S\subseteq[n]}\widetilde{\alpha}(S)x^{S}\) be the real-polynomial \(\{0,1\}\)-representation of \(\alpha\), and similarly for \(\beta,\gamma,\delta\). Write \(m:=|\operatorname{supp}_{\{0,1\}}^{>0}(f)|\) and \(m_{i}:=|\{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f):i\in S\}|\) for the number of sets of size greater than \(1\) that contain the coordinate \(i\in[n]\). Consider first the Fan-Out-based circuit: 1. Attach an ancillary register \(\bigotimes_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}|0\rangle_{\mathtt{R}_{ S}}^{\otimes|S|}\). 
For each \(i\in[n]\) in parallel, copy \(m_{i}\) number of times the qubit \(|x_{i}\rangle_{\mathtt{I}}\) using a \((1+m_{i})\)-arity Fan-Out to obtain \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}|b \rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}|x _{S}\rangle_{\mathtt{R}_{S}}.\] 2. Attach an ancillary register \(\bigotimes_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}|0\rangle_{\mathtt{P}_{ S}}\). For each \(S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)\) in parallel, apply an \(\mathsf{AND}_{\mathtt{R}_{S}\to\mathtt{P}_{S}}^{(|S|)}\) gate to get \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{ supp}_{\{0,1\}}^{>1}(f)}|x_{S}\rangle_{\mathtt{R}_{S}}\mapsto|x\rangle_{\mathtt{I}}|b \rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}|x _{S}\rangle_{\mathtt{R}_{S}}|x^{S}\rangle_{\mathtt{P}_{S}}.\] 3. Attach an ancillary register \(|0\rangle_{\mathtt{T}}^{\otimes(m-1)}\) and apply an \(m\)-arity Fan-Out gate \(\mathsf{FO}_{\mathtt{T}\to\mathtt{T}^{\prime}}^{(m)}\) from register \(\mathtt{T}\) to register \(\mathtt{T}^{\prime}\). Apply a \(\mathsf{Z}(\widetilde{\delta}(\emptyset))_{\rightarrow\mathtt{T}}\) gate to register \(\mathtt{T}\). Then, for each \(S\in\operatorname{supp}_{\{0,1\}}^{>0}(\delta)\) in parallel, apply a \(\mathsf{Z}(\widetilde{\delta}(S))\) gate controlled on register \(\mathtt{P}_{S}\) onto the \(S\)-th qubit in register \(\mathtt{T}^{\prime}\) (if \(|S|=1\), apply the gate onto \(|x_{S}\rangle_{\mathtt{I}}\)). Finally, apply \(\mathsf{FO}_{\mathtt{T}\to\mathtt{T}^{\prime}}^{(m)}\) again. This chain of operations leads to (omit registers \(\mathtt{P}_{S}\) and \(\mathtt{R}_{S}\) for simplicity) \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\mapsto|x\rangle_{\mathtt{I}}|b \rangle_{\mathtt{T},\mathtt{T}^{\prime}}^{\otimes m}\mapsto|x\rangle_{\mathtt{ I}}\mathsf{Z}\left(\sum_{S\subseteq[n]}\widetilde{\delta}(S)x^{S}\right)|b\rangle_{ \mathtt{T},\mathtt{T}^{\prime}}^{\otimes m}\mapsto|x\rangle_{\mathtt{I}} \mathsf{Z}(\delta(x))|b\rangle_{\mathtt{T}}.\] 4. Apply a \(\mathsf{H}_{\rightarrow\mathtt{T}}\) gate onto register \(\mathtt{T}\) followed by an \(m\)-arity Fan-Out gate \(\mathsf{FO}_{\mathtt{T}\to\mathtt{T}^{\prime}}^{(m)}\) from register \(\mathtt{T}\) to register \(\mathtt{T}^{\prime}\). Apply a \(\mathsf{Z}(\widetilde{\gamma}(\emptyset))_{\rightarrow\mathtt{T}}\) gate to register \(\mathtt{T}\). Then, for each \(S\in\operatorname{supp}_{\{0,1\}}^{>0}(\gamma)\) in parallel, apply a \(\mathsf{Z}(\widetilde{\gamma}(S))\) gate controlled on register \(\mathtt{P}_{S}\) to the \(S\)-th qubit in register \(\mathtt{T}^{\prime}\) (if \(|S|=1\), apply the gate onto \(|x_{S}\rangle_{\mathtt{I}}\)). Finally, apply \(\mathsf{FO}_{\mathtt{T}\to\mathtt{T}^{\prime}}^{(m)}\) again. We obtain \[|x\rangle_{\mathtt{I}}\mathsf{HZ}(\delta(x))|b\rangle_{\mathtt{T}}\mapsto|x \rangle_{\mathtt{I}}\mathsf{Z}\left(\sum_{S\subseteq[n]}\widetilde{\gamma}(S)x ^{S}\right)\mathsf{HZ}(\delta(x))|b\rangle_{\mathtt{T}}=|x\rangle_{\mathtt{I}} \mathsf{Z}(\gamma(x))\mathsf{HZ}(\delta(x))|b\rangle_{\mathtt{T}}.\] 3. Apply a \(\mathsf{H}_{\rightarrow\mathsf{T}}\) gate onto register \(\mathsf{T}\) followed by an \(m\)-arity Fan-Out gate \(\mathsf{FO}_{\mathsf{T}\rightarrow\mathsf{T}^{\prime}}^{(m)}\) from register \(\mathsf{T}\) to register \(\mathsf{T}^{\prime}\). Apply a \(\mathsf{Z}(\widetilde{\beta}(\emptyset))_{\rightarrow\mathsf{T}}\) gate to register \(\mathsf{T}\). 
Then, for each \(S\in\operatorname{supp}_{\{0,1\}}^{>0}(\beta)\) in parallel, apply a \(\mathsf{Z}(\widetilde{\beta}(S))\) gate controlled on register \(\mathsf{P}_{S}\) to the \(S\)-th qubit in register \(\mathsf{T}^{\prime}\) (if \(|S|=1\), apply the gate onto \(|x_{S}\rangle_{\mathsf{T}}\)). Finally, apply \(\mathsf{FO}_{\mathsf{T}\rightarrow\mathsf{T}^{\prime}}^{(m)}\) again. Similarly to the previous steps, this chain of operations leads to \[|x\rangle_{\mathsf{T}}\mathsf{HZ}(\gamma(x))\mathsf{HZ}(\delta(x))|b\rangle_{ \mathsf{T}}\mapsto|x\rangle_{\mathsf{T}}\mathsf{Z}(\beta(x))\mathsf{HZ}(\gamma (x))\mathsf{HZ}(\delta(x))|b\rangle_{\mathsf{T}}.\] 4. Apply an overall phase \(e^{i\pi\widetilde{\alpha}(\emptyset)}\). Then, for each \(S\in\operatorname{supp}_{\{0,1\}}^{>0}(\alpha)\) in parallel, apply a \(\mathsf{Z}(\widetilde{\alpha}(S))\) gate onto register \(\mathsf{P}_{S}\) (if \(|S|=1\), apply the gate onto \(|x_{S}\rangle_{\mathsf{T}}\)). This yields \[|x\rangle_{\mathsf{T}}e^{i\pi\alpha(x)}\mathsf{Z}(\beta(x))\mathsf{HZ}(\gamma (x))\mathsf{HZ}(\delta(x))|b\rangle_{\mathsf{T}}=|x\rangle_{\mathsf{T}}f(x)|b \rangle_{\mathsf{T}}.\] 5. Uncompute Steps 1 and 2. We now consider the \(\mathsf{GT}\)-gate-based circuit. Steps 3a-d are replaced with the following Step 3. In the following, write register \(\mathsf{I}\) as \(|x\rangle_{\mathsf{I}}=\bigotimes_{i\in[n]}|x_{i}\rangle_{\mathsf{I}_{i}}\). 1. Apply the gate (write \(\mathsf{P}_{S}:=\mathsf{I}_{S}\) if \(|S|=1\) and \(\mathsf{P}_{\emptyset}:=\emptyset\)) \[\left(e^{i\pi\widetilde{\alpha}(\emptyset)}\prod_{S\in \operatorname{supp}_{\{0,1\}}^{>0}(\alpha)}\mathsf{Z}(\widetilde{\alpha}(S))_ {\rightarrow\mathsf{P}_{S}}\right)\left(\prod_{S\in\operatorname{supp}_{\{0,1 \}}(\beta)}\mathsf{C}_{\mathsf{P}_{S}}\mathsf{-Z}(\widetilde{\beta}(S))_{ \rightarrow\mathsf{T}}\right)\mathsf{H}_{\rightarrow\mathsf{T}}\\ \cdot\left(\prod_{S\in\operatorname{supp}_{\{0,1\}}(\gamma)} \mathsf{C}_{\mathsf{P}_{S}}\mathsf{-Z}(\widetilde{\gamma}(S))_{ \rightarrow\mathsf{T}}\right)\mathsf{H}_{\rightarrow\mathsf{T}}\left(\prod_ {S\in\operatorname{supp}_{\{0,1\}}(\delta)}\mathsf{C}_{\mathsf{P}_{S}}\mathsf{ -Z}(\widetilde{\delta}(S))_{\rightarrow\mathsf{T}}\right)\] using 3 \(\mathsf{GT}\) gates (one for each \(\prod_{S\in\operatorname{supp}_{\{0,1\}}(\cdot)}\mathsf{C}_{\mathsf{P}_{S}} \mathsf{-Z}(\cdot)_{\rightarrow\mathsf{T}}\)). We now analyse the resources for each step: * Step 1: we need to copy each \(x_{i}\), \(i\in[n]\), for every \(S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)\) such that \(i\in S\). Thus registers \(\mathsf{R}_{S}\) require \(\sum_{i=0}^{n-1}m_{i}=\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}|S|\) ancillae. 
Such copying can be done with either \(|\bigcup_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}S|\leq\sum_{S\in \operatorname{supp}_{\{0,1\}}^{>1}(f)}|S|\) Fan-Out gates, each with arity at most \(1+\max_{i\in[n]}m_{i}\), or 1 \(\mathsf{GT}\) gate with arity \(|\bigcup_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}S|+\sum_{S\in \operatorname{supp}_{\{0,1\}}^{>1}(f)}|S|\leq 2\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)} |S|\); * Step 2: the \(|\operatorname{supp}_{\{0,1\}}^{>1}(f)|\)\(\mathsf{AND}_{\mathsf{R}_{S}\rightarrow\mathsf{P}_{S}}^{(|S|)}\) gates can be performed with either \[\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(2|S|\log|S|+O(|S|)\right)\] ancillae and \[\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(3|S|+O(\log|S|)\right)\] Fan-Out gates with arity at most \(2\deg(f)\) (Fact 18), or \[\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(2|S|+O(\log|S|)\right)\] ancillae and \(2\)\(\mathsf{GT}\) gates with arity at most \(2\deg(f)+O(\log\deg(f))\) (Fact 24) if we postpone their inner uncomputation part until Step 5; * Step 3: constructing \(f(x)\) requires either \(m-1\) ancillae and \(6\) Fan-Out gates with arity \(m\) or no ancillae and \(3\)\(\mathsf{GT}\) gates with arity \(m\); * Step 4: uncomputing Steps 1 and 2 requires either \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(4|S|+O(\log|S|)\right)\) Fan-Out gates or \(3\)\(\mathsf{GT}\) gates. We require either \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>0}(f)}\left(2|S|\log|S|+O(|S|)\right)\) ancillae and \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(8|S|+O(\log|S|)\right)\) Fan-Out gates with arity \(\leq\max\{1+|\operatorname{supp}_{\{0,1\}}^{>0}(f)|,2\deg(f)\}\) or \(\sum_{S\in\operatorname{supp}_{\{0,1\}}^{>1}(f)}\left(3|S|+O(\log|S|)\right)\) ancillae and \(9\)\(\mathsf{GT}\) gates with arity \(\leq 2\sum_{S\in\operatorname{supp}_{\{0,1\}}(f)}|S|\). ### Constant-depth circuits for \(f\)-Fins Similarly to Section 5.2, we now show how the circuits from the previous section can be simplified and used to implement \(f\)-FINs. **Theorem 34** (Real implementation of \(f\)-Fin).: _Given \(f:\{0,1\}^{n}\to\{0,1\}\), there is an \(O(1)\)-depth circuit for \(f\)-FIN that uses_ * _either_ \(\sum_{S\in\operatorname{supp}(f)}|S|+2|\operatorname{supp}^{>1}(f)|\) _ancillae and_ \(2|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+2|\operatorname{supp}^{>1}(f)|+2\) _Fan-Outs with arity at most_ \(1+\max\{|\operatorname{supp}^{>0}(f)|,\deg(f)\}\)_,_ * _or_ \(2|\operatorname{supp}^{>0}(f)|\) _ancillae and_ \(2\)__\(\mathsf{GT}\) _gates with arity at most_ \(3\sum_{S\in\operatorname{supp}(f)}|S|\)_._ Proof.: Since an \(f\)-FIN is simply an \(f^{\prime}\)-UCG whose \(f^{\prime}\)\(\mathsf{Z}\)-decomposition is \(\mathsf{HZ}(f(x))\mathsf{H}\), Step 2 in Theorem 30 only requires \(2\) Fan-Out gates or \(1\)\(\mathsf{GT}\) gate. This gives the resource count for the Fan-Out-based construction and the number of \(\mathsf{GT}\) gates is reduced to \(3\). It is possible to further reduce the number of \(\mathsf{GT}\) gates to \(2\) by using the \((|\operatorname{supp}^{>0}(f)|-1)\)-qubit register \(\mathsf{T}^{\prime}\) similarly to the Fan-Out-based construction. This increases the number of ancillae to \(|\operatorname{supp}^{>0}(f)|+|\operatorname{supp}^{>1}(f)|\leq 2| \operatorname{supp}^{>0}(f)|\) and the arity to \(2\sum_{S\in\operatorname{supp}(f)}|S|+|\operatorname{supp}^{>0}(f)|\leq 3 \sum_{S\in\operatorname{supp}(f)}|S|\). 
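Theorem 33 (and its \(\mathbb{F}_{2}\) analogue, Theorem 36 below) is phrased in terms of the \(\{0,1\}\)-representation \(\alpha(x)=\sum_{S\subseteq[n]}\widetilde{\alpha}(S)x^{S}\), whose monomials \(x^{S}=\prod_{i\in S}x_{i}\) are exactly the AND terms prepared in the registers \(\mathtt{P}_{S}\). As a small classical illustration (ours, not part of the constructions), the coefficients can be recovered by Möbius inversion over subsets; the sketch assumes the standard formula \(\widetilde{\alpha}(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}\alpha(1_{T})\), where \(1_{T}\) denotes the indicator input of \(T\).

```python
# Coefficients of the multilinear {0,1}-representation alpha(x) = sum_S alpha_tilde(S) x^S,
# obtained by Moebius inversion over subsets.  Illustrative, exponential in n.
from itertools import chain, combinations, product

def subsets(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def zero_one_coefficients(alpha, n):
    """Return {S: alpha_tilde(S)} via alpha_tilde(S) = sum_{T subset S} (-1)^{|S|-|T|} alpha(1_T)."""
    coeffs = {}
    for S in subsets(tuple(range(n))):
        acc = 0.0
        for T in subsets(S):
            point = tuple(1 if i in T else 0 for i in range(n))
            acc += (-1) ** (len(S) - len(T)) * alpha(point)
        coeffs[S] = acc
    return coeffs

def evaluate(coeffs, x):
    """Check: sum_S alpha_tilde(S) * prod_{i in S} x_i reproduces alpha(x)."""
    return sum(c * all(x[i] for i in S) for S, c in coeffs.items())

if __name__ == "__main__":
    n = 3
    alpha = lambda x: 1.0 if (x[0] and not x[2]) else 0.25 * x[1]
    coeffs = zero_one_coefficients(alpha, n)
    assert all(abs(evaluate(coeffs, x) - alpha(x)) < 1e-9 for x in product((0, 1), repeat=n))
    # The nonzero coefficients give supp_{0,1}(alpha), which drives Theorem 33's counts.
    print({S: c for S, c in coeffs.items() if abs(c) > 1e-12})
```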
To obtain the next result, the modifications that we do to Theorem 32 are the same that were conducted in the previous theorem. Recall that an \(f\)-FIN is an \(f^{\prime}\)-UCG such that \(f^{\prime}(x)=\mathsf{X}^{f(x)}\). Therefore, an approximate circuit for an \(f\)-FIN implements an \(f^{\prime}\)-UCG with \(f^{\prime}(x)\) close to \(\mathbb{I}_{1}\) or \(\mathsf{X}\). **Theorem 35** (Approximate real implementation of \(f\)-Fin).: _Let \(\epsilon>0\), \(f:\{0,1\}^{n}\to\{0,1\}\), and \(s=\min\{\operatorname{supp}^{>1}(f),\left[4\pi^{2}n\hat{\|}f^{>1}\hat{\|}_{1}^{ 2}/\epsilon^{2}\right]\}\). There is an \(O(1)\)-depth circuit that implements an \(f^{\prime}\)-UCG with \(f^{\prime}:\{0,1\}^{n}\to\mathcal{U}(\mathbb{C}^{2\times 2})\) such that \(\max_{x\in\{0,1\}^{n}}\|f^{\prime}(x)-\mathsf{X}^{f(x)}\|\leq\epsilon\) and uses_ * _either_ \(s(\deg(f)+2)+|\operatorname{supp}^{=1}(f)|\) _ancillae and_ \(2s+2|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+2\) _Fan-Out gates with arity_ \(\leq s+|\operatorname{supp}^{=1}(f)|\)_,_ * _or_ \(2s+|\operatorname{supp}^{=1}(f)|\) _ancillae and_ \(2\)__\(\mathsf{GT}\) _gates with arity_ \(\leq s(\deg(f)+1)+|\bigcup_{S\in\operatorname{supp}^{>1}(f)}S|+|\operatorname{ supp}^{=1}(f)|\)_._ Finally, Theorem 33 can be considerably simplified in the case of \(f\)-FINs since we can use the \(\mathbb{F}_{2}\)-polynomial representation of \(f(x)\) instead of its real \(\{0,1\}\)-representation. **Theorem 36** (\(\mathbb{F}_{2}\)-implementation of \(f\)-Fin).: _Given \(f:\{0,1\}^{n}\to\{0,1\}\), there is an \(O(1)\)-depth circuit for \(f\)-FIN that uses_ * _either_ \(\sum_{S\in\operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}\left(2|S|\log|S|+O(|S|)\right)\) _ancillae and_ \(\sum_{S\in\operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}\left(8|S|+O(\log|S|)\right)\) _Fann-Out gates with arity_ \(\leq\max\{1+|\operatorname{supp}^{>0}_{\mathbb{F}_{2}}(f)|,2\deg_{\mathbb{F}_ {2}}(f)\}\)_,_ * _or_ \(\sum_{S\in\operatorname{supp}^{>0}_{\mathbb{F}_{2}}(f)}\left(4|S|+O(\log|S|)\right)\) _ancillae and_ \(6\)__\(\mathsf{GT}\) _gates with arity_ \(\leq 3\sum_{S\in\operatorname{supp}_{\mathbb{F}_{2}}(f)}|S|\)_._ Proof.: After constructing the state \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\bigotimes_{S\in\operatorname{ supp}^{\geq 1}_{\mathbb{F}_{2}}(f)}|x_{S}\rangle_{\mathtt{R}_{S}}|x^{S}\rangle_{ \mathtt{P}_{S}}\] as in Theorem 33, apply a \(\mathsf{X}^{\widetilde{f}_{\mathbb{F}_{2}}(\emptyset)}_{\rightarrow\mathtt{T}}\) gate onto register \(\mathtt{T}\), a \(\mathsf{PARITY}_{\{\mathtt{I}_{j}\}_{j\in\operatorname{supp}^{=1}_{\mathbb{F}_ {2}}(f)}\rightarrow\mathtt{T}}\) gate from registers \(\{\mathtt{I}_{j}\}_{j\in\operatorname{supp}^{=1}_{\mathbb{F}_{2}}(f)}\) onto register \(\mathtt{T}\) (\(\mathtt{I}_{j}\) contains \(x_{j}\)), and finally a \(\mathsf{PARITY}_{\{\mathtt{P}_{S}\}_{S\in\operatorname{supp}^{>1}_{\mathbb{F}_ {2}}(f)}\rightarrow\mathtt{T}}\) gate from registers \(\{\mathtt{P}_{S}\}_{S\in\operatorname{supp}^{\geq 1}_{\mathbb{F}_{2}}(f)}\) onto register \(\mathtt{T}\) (both \(\mathsf{PARITY}\) gates can be performed together). 
We get \[|x\rangle_{\mathtt{I}}|b\rangle_{\mathtt{T}}\bigotimes_{S\in \operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}|x_{S}\rangle_{\mathtt{R}_{S}}|x^ {S}\rangle_{\mathtt{P}_{S}} \mapsto |x\rangle_{\mathtt{I}}|b\oplus\bigoplus_{S\subseteq[n]} \widetilde{f}_{\mathbb{F}_{2}}(S)x^{S}\rangle_{\mathtt{T}}\bigotimes_{S\in \operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}|x_{S}\rangle_{\mathtt{R}_{S}}|x^ {S}\rangle_{\mathtt{P}_{S}}\] \[= |x\rangle_{\mathtt{I}}|b\oplus f(x)\rangle_{\mathtt{T}}\bigotimes _{S\in\operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}|x_{S}\rangle_{\mathtt{R}_{S }}|x^{S}\rangle_{\mathtt{P}_{S}}.\] Uncomputing registers \(\mathtt{R}_{S}\) and \(\mathtt{P}_{S}\) gives the desired state. As in Theorem 33, the total cost of computing and uncomputing registers \(\mathtt{R}_{S}\) and \(\mathtt{P}_{S}\) is either \(\sum_{S\in\operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}\left(2|S|\log|S|+O(|S|)\right)\) ancillae and \(\sum_{S\in\operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}\left(8|S|+O(\log|S|)\right)\) Fan-Out gates with arity \(\leq\max\{1+|\operatorname{supp}^{>0}_{\mathbb{F}_{2}}(f)|,2\deg_{\mathbb{F}_ {2}}(f)\}\) or \(\sum_{S\in\operatorname{supp}^{>1}_{\mathbb{F}_{2}}(f)}\left(3|S|+O(\log|S|)\right)\) ancillae and \(6\)__\(\mathsf{GT}\) gates with arity \(\leq 2\sum_{S\in\operatorname{supp}_{\mathbb{F}_{2}}(f)}|S|\). The \(\mathsf{PARITY}\) gates cost another \((1+|\operatorname{supp}^{>0}_{\mathbb{F}_{2}}(f)|)\)-arity Fan-Out or \(\mathsf{GT}\) gate. It is possible to reduce the number of \(\mathsf{GT}\) gates from \(7\) to \(6\) by using \(|\operatorname{supp}^{>0}_{\mathbb{F}_{2}}(f)|\) extra ancillae similarly to Theorem 34. ### Constant-depth circuits for quantum memory devices via Boolean analysis In this section, we apply our Boolean-based circuit constructions to the case of \(\mathsf{QRAM}\). We first compute the Fourier coefficients of \(f(x,i)=x_{i}\). **Lemma 37**.: _Let \(n\in\mathbb{N}\) be a power of \(2\) and let \(f:\{0,1\}^{n}\times\{0,1\}^{\log n}\to\{0,1\}\) be the Boolean function \(f(x,i)=x_{i}\). The Fourier coefficients of \(f\) are_ \[\widehat{f}(S,T)=\begin{cases}\frac{1}{2}&\text{if }(S,T)=(\emptyset,\emptyset), \\ \frac{-\chi_{T}(k)}{2n}&\forall(S,T)\subseteq[n]\times[\log n],S=\{k\},\\ 0&\text{otherwise}.\end{cases}\] Proof.: By a straightforward calculation, \[\widehat{f}(\emptyset,T)=\frac{1}{2^{n}n}\sum_{i\in\{0,1\}^{\log n}}\chi_{T}(i) \sum_{x\in\{0,1\}^{n}}x_{i}=\frac{1}{2n}\sum_{i\in\{0,1\}^{\log n}}\chi_{T}(i)= \begin{cases}\frac{1}{2}&\text{if $T=\emptyset$},\\ 0&\text{otherwise}.\end{cases}\] Moreover, \[\widehat{f}(S,T)=\frac{1}{2^{n}n}\sum_{i\in\{0,1\}^{\log n}}\chi_{T}(i)\sum_{x \in\{0,1\}^{n}}x_{i}(-1)^{\sum_{j\in S}x_{j}}=\begin{cases}\frac{-\chi_{T}(k)}{ 2n}&\text{if $S=\{k\}$},\\ 0&\text{if $|S|\geq 2$},\end{cases}\] for every \(T\subseteq[\log n]\), since \(\sum_{x\in\{0,1\}^{n}}x_{i}(-1)^{x_{j}}\) equals \(0\) if \(i\neq j\) and \(-2^{n-1}\) if \(i=j\). **Theorem 38** (Real implementation of \(\mathsf{QRAM}\)).: _Let \(n\in\mathbb{N}\) be a power of \(2\). A \(\mathsf{QRAM}\) of memory size \(n\) can be implemented in \(O(1)\)-depth using_ * _either_ \(\frac{1}{2}n^{2}\log n+O(n^{2})\) _ancillae and_ \(2n^{2}+O(n\log n)\) _Fan-Out gates with arity at most_ \(1+n^{2}\)_,_ * _or_ \(2n^{2}\) _ancillae and_ \(2\)__\(\mathsf{GT}\) _gates with arity at most_ \(\frac{1}{2}n^{2}\log n+O(n^{2})\)_._ Proof.: By Lemma 37, \(\operatorname{supp}(f)=\{(S,T)\subseteq[n]\times[\log n]:|S|=1\}\cup\{( \emptyset,\emptyset)\}\). 
Thus \(|\operatorname{supp}^{>0}(f)|=n^{2}\), \(|\operatorname{supp}^{>1}(f)|=n^{2}-n\), \(|\bigcup_{(S,T)\in\operatorname{supp}^{>1}(f)}(S,T)|=n+\log n\), \(\deg(f)=1+\log n\), and \[\sum_{(S,T)\in\operatorname{supp}(f)}|S|+|T|=n\sum_{k=0}^{\log n}\binom{\log n }{k}(1+k)=n^{2}+\frac{n^{2}\log n}{2}.\] By Theorem 34, there is an \(O(1)\)-depth circuit for \(\mathsf{QRAM}\) that uses either \(\frac{1}{2}n^{2}\log n+O(n^{2})\) ancillae and \(2n^{2}+O(n\log n)\) Fan-Out gates with arity at most \(1+n^{2}\), or \(2n^{2}\) ancillae and \(2\)\(\mathsf{GT}\) gates with arity at most \(\frac{1}{2}n^{2}\log n+O(n^{2})\). **Remark 39**.: _Since the \(\mathbb{F}_{2}\)-support of \(f:\{0,1\}^{n}\times\{0,1\}^{\log n}\to\{0,1\}\), \(f(x,i)=x_{i}\), is quite dense, Theorem 36 is not suited for constructing an efficient \(\mathsf{QRAM}\). Moreover,_ \[\hat{\|}f^{>1}\hat{\|}_{1}=\sum_{(S,T)\in\operatorname{supp}^{>1}(f)}|\widehat {f}(S,T)|=\frac{1}{2n}n(2^{\log n}-1)=\frac{n}{2}-O(1).\] _Therefore, \(s=\lceil 4\pi^{2}n\hat{\|}f^{>1}\hat{\|}_{1}^{2}/\epsilon^{2}\rceil=\lceil\pi ^{2}n^{3}/\epsilon^{2}\rceil-O(n^{2})\) and the approximate real constructions in Theorem 35 require many more ancillae compared to Theorem 38._ ## 7 Acknowledgement We thank Rainer Dumke, Yvonne Gao, Wenhui Li, and Daniel Weiss for useful discussions on physical implementations of \(\mathsf{QRAM}\), Arthur Rattew for interesting conversations on the feasibility of \(\mathsf{QRAMs}\), Patrick Rebentrost for initial discussions, and Shengyu Zhang for general discussions and for clarifying some points in [13, 14]. This research is supported by the National Research Foundation, Singapore and A*STAR under its CQT Bridging Grant and its Quantum Engineering Programme under grant NRF2021-QEP2-02-P05. This work was done in part while JFD, AL, and MS were visiting the Simons Institute for the Theory of Computing.
2304.08834
Non-normalizable quasi-equilibrium states under fractional dynamics
We study non-normalizable quasi-equilibrium states (NNQE) arising from anomalous diffusion. Initially, particles in contact with a thermal bath are released from an asymptotically flat potential well, with dynamics that is described by fractional calculus. For temperatures that are sufficiently low compared to the potential depth, the properties of the system remain almost constant in time. We use the fractional-time Fokker-Planck equation (FTFPE) and continuous-time random walk approaches to calculate the ensemble averages of observables. We obtain analytical estimates of the duration of NNQE, depending on the fractional order, from approximate theoretical solutions of the FTFPE. We study and compare two types of observables, the mean square displacement typically used to characterize diffusion, and the thermodynamic energy. We show that the typical time scales for stagnation depend exponentially on the activation energy in units of temperature multiplied by a function of the fractional exponent.
Lucianno Defaveri, Maike A. F. dos Santos, David A. Kessler, Eli Barkai, Celia Anteneodo
2023-04-18T08:57:23Z
http://arxiv.org/abs/2304.08834v1
# Non-normalizable quasi-equilibrium states under fractional dynamics ###### Abstract We study non-normalizable quasi-equilibrium states (NNQE) arising from anomalous diffusion. Initially, particles in contact with a thermal bath are released from an asymptotically flat potential well, with dynamics that is described by fractional calculus. For temperatures that are sufficiently low compared to the potential depth, the properties of the system remain almost constant in time. We use the fractional-time Fokker-Planck equation (FTFPE) and continuous-time random walk approaches to calculate the ensemble averages of observables. We obtain analytical estimates of the duration of NNQE, depending on the fractional order, from approximate theoretical solutions of the FTFPE. We study and compare two types of observables, the mean square displacement typically used to characterize diffusion, and the thermodynamic energy. We show that the typical time scales for stagnation depend exponentially on the activation energy in units of temperature multiplied by a function of the fractional exponent. **Keywords**: fractional diffusion; non-confining fields; non-normalizable quasi-equilibrium ## I Introduction When Brownian particles are subjected to a confining one-dimensional potential \(V(x)\), the probability density function (PDF) \(P(x,t)\) of a single particle attains a normalizable equilibrium state. In fact, the Fokker-Planck equation (FPE) in one-dimension [1] \[\frac{\partial}{\partial t}P(x,t)\;=\;\mathcal{K}_{1}\frac{\partial}{\partial x }\left\{-\frac{F(x)}{k_{B}T}+\frac{\partial}{\partial x}\right\}P(x,t)\,, \tag{1}\] where \(\mathcal{K}_{1}\) is the diffusion coefficient, \(T\) the temperature, \(k_{B}\) the Boltzmann constant and \(F(x)=-V^{\prime}(x)\) is the force, admits the equilibrium solution \(P_{\text{eq.}}(x)=Z^{-1}\exp\left[-V(x)/(k_{B}T)\right]\), where \(Z=\int_{-\infty}^{\infty}\exp\left[-V(x)/(k_{B}T)\right]dx\) is the normalizing partition function. However, if the potential is asymptotically flat, as in Lennard-Jones, Coulomb and gravitational fields, \(Z\) is divergent and this picture breaks down [2; 3]. We will consider even potentials (i.e., \(V(-x)=V(x)\)) that behave asymptotically as \(V(x)\propto 1/x^{\mu}\), with \(\mu>1\), specifically of the form \[V(x)=-\frac{V_{0}}{\left(1+(x/x_{0})^{2}\right)^{\mu/2}}\,, \tag{2}\] where the length-scale \(x_{0}\) represents the effective region of the potential well. For this kind of potential, which is locally confining, it has been shown that a kind of quasi-equilibrium state emerges, where dynamical and thermodynamical observables remain nearly constant [4; 5], with a lifetime that is controlled by the ratio between the temperature and the depth of the potential well. These quasi-equilibrium states are different from those studied by other authors, produced by traps [6] or defects [7], related to aging properties. Although \(Z\) is divergent, the average of observables, in these so-called non-normalizable quasi-equilibrium (NNQE) states, can be predicted through an appropriate regularization of the partition function \(Z\). All this is well studied for the FPE (1) [4; 5], which describes normal diffusion and relaxation in a force field. 
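The breakdown of the standard equilibrium picture can also be seen numerically. The sketch below (ours, for illustration only; the parameter values are arbitrary choices) evaluates the truncated integral \(\int_{-L}^{L}e^{-V(x)/k_{B}T}dx\) for the potential of Eq. (2) and shows that it grows linearly with the cutoff \(L\): the integrand tends to \(1\) far from the well, so \(Z\) diverges.

```python
# Numerical illustration that Z = int exp(-V(x)/kBT) dx diverges for the
# asymptotically flat potential V(x) = -V0 / (1 + (x/x0)^2)^(mu/2).
# Illustrative sketch; V0, x0, kBT and the cutoffs L are arbitrary choices.
import numpy as np

def V(x, V0=1.0, x0=1.0, mu=4.0):
    return -V0 / (1.0 + (x / x0) ** 2) ** (mu / 2)

def truncated_Z(L, kBT=0.2, num=200001):
    x = np.linspace(-L, L, num)
    integrand = np.exp(-V(x) / kBT)
    # Trapezoidal rule; the result behaves as 2L + const for large L.
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

if __name__ == "__main__":
    for L in (10.0, 100.0, 1000.0):
        print(L, truncated_Z(L))
```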
However, anomalous diffusion, where the mean square displacement (MSD) scales nonlinearly with time as \(\left\langle(x-\langle x\rangle)^{2}\right\rangle\sim t^{\alpha}\), with \(\alpha\neq 1\), for the force free case, is ubiquitous in nature, and as such has been reported in many theoretical and experimental works [8; 9]. The MSD was shown to exhibit stagnation, similarly to NNQE in experiments tracking probe particles in micellar solutions, with sub-diffusive dynamics [10; 11; 12], in the cytosol of bacteria, yeast, and human cells [13; 14]. Another example, which exhibits anomalous diffusion, is the complex motion of excitons in semiconductors [15] and perovskites [16], the latter also exhibiting stagnation in the MSD. Then, the question emerges: how does anomalous diffusion affect the averages and other features of NNQE states? To describe the phenomenology of anomalous diffusion, different generalizations of the diffusion equation have been proposed, with each generalization connected to different mechanisms [8; 9], such as, memory effects, non-locality, and disordered or heterogeneous media. Several methods have been proposed to unravel these mechanisms behind anomalous diffusion in experimental data sets [17; 18]. Three main phenomenological approaches are nonlinear diffusion [19; 20; 21], fractional Brownian motion [22], and the fractional-time Fokker-Planck approach [23; 24; 25; 26; 27; 28; 29; 30; 31]. In this manuscript, we will focus on the latter, based on fractional calculus, and identified as a useful tool for investigating anomalous diffusion [32]. In fractional calculus, different classes of fractional derivatives (integro-differential operators) can be defined, with applications in several fields of science and engineering [33]. We introduce a fractional dynamics by replacing the integer-order temporal derivative in Eq. (1) by the fractional Caputo derivative \[\frac{\partial}{\partial t}P\to{}^{C}D_{t}^{\alpha}P\equiv\frac{1}{\Gamma(1- \alpha)}\int_{0}^{t}\frac{1}{(t-t^{\prime})^{\alpha}}\frac{dP}{dt^{\prime}}dt^ {\prime}\,, \tag{3}\] for \(0<\alpha<1\), where \(\Gamma(x)\) is the Gamma function. This fractional dynamics is particularly interesting as it can be connected with a continuous time random walk (CTRW) description. In the limit \(\alpha\to 1\), the integer-order partial derivative is recovered. Then, the fractional time FPE (FTFPE), which generalizes Eq. (1), is \[{}^{C}D_{t}^{\alpha}P(x,t) = \mathcal{K}_{\alpha}\frac{\partial}{\partial x}\left\{-\frac{F(x )}{k_{B}T}+\frac{\partial}{\partial x}\right\}P(x,t), \tag{4}\] where \(\mathcal{K}_{\alpha}\) is the generalized diffusion coefficient. Alternatively, it can be rewritten as [33; 34] \[\frac{\partial}{\partial t}P(x,t)=\mathcal{K}_{\alpha}\ ^{RL}D_{t}^{1-\alpha}\frac{ \partial}{\partial x}\left\{-\frac{F(x)}{k_{B}T}+\frac{\partial}{\partial x} \right\}P(x,t), \tag{5}\] where \[{}^{RL}D_{t}^{1-\alpha}\psi(t)\equiv\frac{1}{\Gamma(\alpha)}\frac{d}{dt}\int_ {0}^{t}\frac{\psi(t^{\prime})}{(t-t^{\prime})^{1-\alpha}}dt^{\prime} \tag{6}\] is the fractional derivative of a function \(\psi(t)\) in the Riemann-Liouville sense. In the free case (\(F(x)=0\)), Eq. (5) produces subdiffusion as \(\left<(x-\langle x\rangle)^{2}\right>\sim t^{\alpha}\)[24] with an exponent that coincides with the fractional order \(\alpha\). For the class of potentials of interest, which are asymptotically flat, nearly free subdiffusion occurs at large distances. We complete the model given by Eq. 
(5) using the family of asymptotically flat potentials in Eq. (2). Re-scaling \[x/x_{0}\to x,\ \ V(x)/V_{0}\to v(x),\ \ t/t_{0}\to t, \tag{7}\] with \(t_{0}=(x_{0}^{2}/\mathcal{K}_{\alpha})^{1/\alpha}\), and defining \(\xi=k_{B}T/V_{0}\), we finally obtain the dimensionless form of Eq. (5), namely, \[\frac{\partial}{\partial t}P(x,t) = \ ^{RL}D_{t}^{1-\alpha}\frac{\partial}{\partial x}\left\{-\frac{f( x)}{\xi}+\frac{\partial}{\partial x}\right\}P(x,t), \tag{8}\] where \(f(x)=-v^{\prime}\), with \[v(x)=-\frac{1}{\left(1+x^{2}\right)^{\mu/2}}. \tag{9}\] Initially the particles are placed at the minimum of the potential, and we expect that for long times they will diffuse. In Fig. 1 we illustrate the behavior of the MSD as a function of time, for \(\alpha=0.6\) and different values of \(\xi\), where full lines were obtained by numerical integration of the FTFPE for the initial condition \(P(x,0)=\delta(x)\). For short times, the packet initially localized at the origin (sub)diffuses almost freely, with the MSD growing as \(\langle x^{2}\rangle\sim t^{\alpha}\). A plateau emerges, for sufficiently low \(\xi\), signaling a NNQE where the MSD becomes stagnated for some time. The duration of the NNQE increases with \(1/\xi\), with its end due to the escape of the particles. Clearly, beyond the MSD, different observables will have plateaus with different lifespans. Our purpose is to characterize the NNQE states that emerge at some length and time scales for the class of potentials (9), with a well at the origin and asymptotically flat, under the chosen fractional dynamics. For example, what are the values of the MSD or the energy of the system in stagnation, namely during the NNQE time span? Let us mention that escape problems in the scenario of fractional dynamics have been explored before for different potential and boundary conditions [35; 36; 37; 24], but not for flat potentials as those here considered. The flatness of the attractive potential is important as it makes it easier for some particles to return to the well, unlike the typical escape problem where the particle must overcome a potential barrier and, once it escapes the well, is repelled away. Therefore, we deal with a phenomenon that has not been treated before. The remaining of the manuscript is organized as follows. In Sec. II, we obtain the solution of the fractional problem, focusing on the intermediate long-time regime, corresponding to NNQE states. In Sec. III, we present results for the microscopic counterpart of the FTFPE provided by a CTRW, in good agreement with the FTFPE description. The impact of the fractional order \(\alpha\) on the NQE states is studied in Sec. IV, for the mean square displacement (MSD) and the energy, as examples of dynamic and thermodynamic observables, respectively. Sec. V contains final remarks. Figure 1: MSD vs. time, different values of \(\xi\) indicated in the legend. In all cases, the fractional order is \(\alpha=0.6\), and the decay of the potential is ruled by \(\mu=4\). Solid lines represent the exact result obtained from the numerical integration of the FTFPE (8), as described in Sec. II. Symbols correspond to results obtained from CTRW simulations, over \(5\times 10^{5}\) trajectories (symbols), using \(\sigma=10^{-2}\) and \(\tau=\{\sigma^{2}/(2\Gamma[1-\alpha])\}^{1/\alpha}\simeq 1.8\times 10^{-8}\) (see Sec. III). The NNEQ level, predicted by Eq. (24), is plotted by dotted lines. 
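The reduction to dimensionless variables in Eqs. (7)-(9) can be summarized as a short sketch (ours; all numerical values are placeholders chosen only for illustration): it maps the physical parameters \((x_{0},\mathcal{K}_{\alpha},V_{0},k_{B}T)\) to the time unit \(t_{0}=(x_{0}^{2}/\mathcal{K}_{\alpha})^{1/\alpha}\) and the relative temperature \(\xi=k_{B}T/V_{0}\), and checks that the scaled force satisfies \(f(x)=-v^{\prime}(x)\) for the potential of Eq. (9).

```python
# Dimensionless units of Eqs. (7)-(9): x -> x/x0, t -> t/t0 with
# t0 = (x0^2 / K_alpha)^(1/alpha) and xi = kB*T / V0.  The scaled potential is
# v(x) = -(1 + x^2)^(-mu/2).  All numerical parameter values are placeholders.
import numpy as np

def scales(x0, K_alpha, V0, kBT, alpha):
    t0 = (x0 ** 2 / K_alpha) ** (1.0 / alpha)
    xi = kBT / V0
    return t0, xi

def v(x, mu=4.0):
    return -(1.0 + x ** 2) ** (-mu / 2)

def f(x, mu=4.0):
    # f(x) = -v'(x); the force is attractive towards the bottom of the well.
    return -mu * x * (1.0 + x ** 2) ** (-mu / 2 - 1.0)

if __name__ == "__main__":
    t0, xi = scales(x0=1e-6, K_alpha=1e-13, V0=1.0, kBT=0.05, alpha=0.6)
    x = np.linspace(-5, 5, 1001)
    # Sanity check that f = -dv/dx (central differences on the interior points).
    num_f = -np.gradient(v(x), x)
    assert np.max(np.abs(num_f[1:-1] - f(x)[1:-1])) < 1e-3
    print(t0, xi)
```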
Solving the fractional-time FPE For the initial condition \(P(x,0)=\delta(x)\), the solution of the FTFPE is expected to present three main distinct temporal regimes, as we observed in Fig. 1. First, for very short times, subdiffusion is nearly free at the bottom of the well, and anomalous, with the MSD increasing as \(\langle x^{2}\rangle\sim t^{\alpha}\). Second, at intermediate long-time scales, the NNQE state occurs. Third, at very long times, when most of the particles have escaped the well, anomalous diffusion with exponent \(\alpha\) should be predominant again, since \(P(x,t)\) will be dominated by the \(x\gg 1\) limit, where \(v(x)\to 0\). Although it is possible to derive the solutions of the FTFPE rigorously using an eigenfunction expansion, analogously to what was done for \(\alpha=1\)[5], here we will focus on a much simpler procedure that maps the already known \(\alpha=1\) solution onto the fractional problem. ### Mapping procedure The solution of the FTFPE, \(P_{\alpha}(x,t)\), for any \(0<\alpha\leq 1\), for the initial condition \(P_{\alpha}(x,0)=\delta(x)\), can be expressed, using a subordination technique, as used by several authors in similar contexts [38; 39; 40; 41; 42; 43; 44]. Namely, we use the following transform of the solution for the non-fractional case (\(\alpha=1\)) [45] \[P_{\alpha}(x,t)\ =\ \int_{0}^{\infty}n_{\alpha}(q,t)P_{1}(x,q)dq\,, \tag{10}\] with \[n_{\alpha}(q,t)=\frac{t}{\alpha\;q^{1+1/\alpha}}L_{\alpha}\left(t/q^{1/\alpha }\right), \tag{11}\] where \(L_{\beta}(z)\) is the one-sided Levy PDF, whose Laplace \(z\to s\) transform is \(\tilde{L}_{\beta}=\exp(-s^{\beta})\)[46]. Within the framework of continuous time random walks (CTRW), the function \(n_{\alpha}(q,t)\) can be interpreted as the probability of \(q\) steps in the interval \((0,t)\)[45]. When \(\alpha\to 1\), \(n_{\alpha}(q,t)\to\delta(q-t)\). In the opposite case \(\alpha\to 0\), we have \(n_{\alpha}(q,t)\to\exp(-q)\), independently of \(t\), implying that the initial density does not evolve. For intermediate values of \(\alpha\), the shape of \(n_{\alpha}(q,t)\) is represented in Fig. 2, in terms of the scaled variable \(q/t^{\alpha}\), for which the curves for any \(t\) collapse, as it can be straightforwardly derived from Eq. (11). In particular, for \(\alpha=1/2\), \(n_{1/2}(q,t)\) corresponds to half a Gaussian. Some succeeding calculations can be simplified by using the Laplace transform of Eq. (10), namely, \[\tilde{P}_{\alpha}(x,s) = \int_{0}^{\infty}\tilde{n}_{\alpha}(q,s)P_{1}(x,q)dq \tag{12}\] \[= \int_{0}^{\infty}s^{\alpha-1}e^{-s^{\alpha}q}P_{1}(x,q)dq\] \[= s^{\alpha-1}\tilde{P}_{1}(x,s^{\alpha})\,.\] Then, the PDF is obtained by inversion of the Laplace transform, namely, \[P_{\alpha}(x,t)=\mathcal{L}^{-1}\{s^{\alpha-1}\tilde{P}_{1}(x,s^{\alpha})\}\,. \tag{13}\] Furthermore, the mapping (10) can be applied straightforwardly to averaged quantities. Namely, for an observable \(O\), we have \[\langle O\rangle_{\alpha}\ =\ \int_{0}^{\infty}n_{\alpha}(q,t)\langle O \rangle_{1}(q)dq\,, \tag{14}\] where \(\langle O\rangle_{1}=\int_{-\infty}^{\infty}OP_{1}(x,t)dx\). ### Solution for the integer-order FPE (\(\alpha=1\)) For the FPE (1) with integer-order time derivative (\(\alpha=1\)), an approximate solution was found in previous work [5], based on the eigenfunction expansion of the time-dependent solution of the FPE, with free boundary conditions. 
Let us summarize the approximate solution found for intermediate timescales, where time-independence emerges, and for potentials that decay with exponent \(\mu>1\). For the central region (**C**), \(x\ll t^{\frac{1}{2}}\), \[P_{1}^{\textbf{C}}(x,t)\ \simeq\ \frac{e^{-v(x)/\xi}}{\mathcal{Z}}+\mathcal{O}(t^{-1/2}), \tag{15}\] while, for the region of the tails (**T**), \(x\gg t^{\frac{1}{2}}\), \[P_{1}^{\textbf{T}}(x,t)\ \simeq\ \frac{1}{\mathcal{Z}}\mathrm{erfc}\left(\frac{|x|}{2\sqrt{t}}\right), \tag{16}\] where \[\mathcal{Z}\ \equiv\ 2\int_{0}^{\infty}(e^{-v(x)/\xi}-1)dx\approx\sqrt{\frac{2\pi\xi}{\mu}}\ e^{1/\xi}, \tag{17}\] which plays the role of a regularized BG partition function, does not depend on \(\alpha\), and its approximate value was obtained for small \(\xi\) [5]. Figure 2: Transformation function \(n_{\alpha}\) vs. \(q/t^{\alpha}\), scaled such that curves for different times collapse, according to Eq. (11), for different values of \(\alpha\) (increasing from darker to lighter). Note that, as expected, the central region is dominated by the Boltzmann factor. However, since it is non-normalizable, Eq. (15) is only valid for a range of positions inside and near the well (that is, \(x\sim O(1)\)). Meanwhile, for large values of \(x\), the tails are given by the complementary error function that governs the free diffusion (the force in that region being negligible) of the very small number of particles that have managed to escape the deep well into the intermediate regime. ### Solution for the fractional-time FPE (\(0<\alpha<1\)) Let us use the method described at the beginning of subsection II.1 to obtain the PDF \(P_{\alpha}(x,t)\) in regions \(\mathbf{C}\) and \(\mathbf{T}\). For region \(\mathbf{C}\), \(x\ll t^{\alpha/2}\), using Eq. (13), we have \[P_{\alpha}^{\mathbf{C}}(x,t) = \mathcal{L}^{-1}\left\{s^{\alpha-1}\tilde{P}_{1}^{\mathbf{C}}(x,s^{\alpha})\right\} \tag{18}\] \[\simeq \mathcal{L}^{-1}\left\{\frac{e^{-v(x)/\xi}}{\mathcal{Z}}\frac{1}{s}\right\}+\mathcal{O}(t^{-\alpha/2})\] \[\simeq \frac{e^{-v(x)/\xi}}{\mathcal{Z}}+\mathcal{O}(t^{-\alpha/2}).\] Let us highlight that, for sufficiently long times (but not so long that the tails of the distribution dominate), this central region of the PDF remains time-independent, independently of \(\alpha\), with the BG shape normalized by the regularized partition function \(\mathcal{Z}\). Analogously, for region \(\mathbf{T}\), \(x\gg t^{\alpha/2}\), \[P_{\alpha}^{\mathbf{T}}(x,t) \simeq \mathcal{L}^{-1}\left\{s^{\alpha-1}\tilde{P}_{1}^{\mathbf{T}}(x,s^{\alpha})\right\} \tag{19}\] \[= \mathcal{L}^{-1}\left\{\frac{1}{\mathcal{Z}s}e^{-s^{\frac{\alpha}{2}}\left|x\right|}\right\}\] \[= \frac{\int_{0}^{t}L_{\frac{\alpha}{2}}\left(t^{\prime}/|x|^{\frac{2}{\alpha}}\right)dt^{\prime}}{\mathcal{Z}\left|x\right|^{\frac{2}{\alpha}}}\,.\] Since \(L_{\beta}\) is the one-sided Levy density, the integral is the corresponding cumulative distribution. This expression represents a generalization of the complementary error function, decaying for large \(|x|\). The crossover position \(\ell\) between both regions, \(\mathbf{C}\) and \(\mathbf{T}\), scales as \(\ell\sim t^{\alpha/2}\), due to subdiffusion. In Fig. 3(a), we plot the exact numerical solution (solid lines) for \(\alpha=0.5\) at different times \(t\), with the central region matching the Boltzmann factor (inset) and the tails in the main plot approaching the region \(\mathbf{T}\) solution for larger \(x\) and \(t\).
For comparison we also show the results for \(\alpha=1\) in panel (b). Actually, there is a shift that can be exactly calculated [5], producing a perfect match of the tails, but for simplicity we neglected it as it does not affect the physical conclusions regarding NNQE values and scaling laws. Although other procedures exist [47; 48], the numerical solution of the FTFPE was obtained through the mapping procedure given by Eq. (10), using the numerical solutions of the integer-order problem (\(\alpha=1\)) obtained with a standard Crank-Nicolson integration scheme. Figure 3: Numerical solution of the FTFPE (solid lines) at different times \(t\) for \(\alpha=0.5\) (a) and \(1.0\) (b). The inset shows the exact solutions around the central region (solid lines) at different \(t\), together with the Boltzmann factor \(e^{-v(x)/\xi}/\mathcal{Z}\) (blue dotted line), which dominates the approximate solution for region \(\mathbf{C}\), given by Eq. (18). In the main plot, we highlight the tails (only the positive semi-axis), together with the approximate solution for region \(\mathbf{T}\), given by Eq. (19), in dashed lines. The potential has exponent \(\mu=4\), and the relative temperature is \(\xi=0.05\). Let us find the condition under which most of the probability is concentrated in the time-independent region \(\mathbf{C}\). To find out the time at which this assumption fails, we calculate how much probability has flowed from region \(\mathbf{C}\) to region \(\mathbf{T}\) by a given time \(t\). By considering a point \(x=\ell\) in the overlap interval, the whole probability in region \(\mathbf{T}\), from Eq. (19), scales as \[\int_{\ell}^{\infty}P_{\alpha}^{\mathbf{T}}(x,t)dx = \mathcal{L}^{-1}\left\{\frac{1}{\mathcal{Z}s}\int_{\ell}^{\infty}e^{-s^{\frac{\alpha}{2}}\left|x\right|}dx\right\} \tag{20}\] \[= \frac{t^{\frac{\alpha}{2}}}{\mathcal{Z}\Gamma(\frac{\alpha}{2}+1)}+\mathcal{O}(1),\] which recovers the known result when \(\alpha=1\). The time dependence is negligible for small values of \(\int_{\ell}^{\infty}P_{\alpha}^{\mathbf{T}}(x,t)dx\), which occurs for times such that \[t\ll t^{*}\equiv\left\{\mathcal{Z}\,\Gamma\left(\frac{\alpha}{2}+1\right)\right\}^{\frac{2}{\alpha}}\sim e^{\frac{2}{\alpha\xi}}\,. \tag{21}\] The scaling of \(t^{*}\) quantifies how decreasing \(\alpha\) delays the departure from the time-independent regime. Note that the scaling exponent is different from that of the escape time to overcome a potential barrier, with a repelling force driving the particle away from the well at large distances, namely \(t\sim e^{1/(\alpha\xi)}\)[24]. The information on the shape of the potential is embodied in \(\mathcal{Z}\), given by Eq. (17), implying that the faster the potential flattens (i.e., the larger \(\mu\)), the shorter the window of time-independence, which is expected since for larger values of \(\mu\) the force decays faster and becomes negligible at shorter distances. ## III CTRW approach A microscopic counterpart of the FTFPE (8) can be simulated using a continuous time random walk (CTRW) [34]. In this process, a particle (the walker) moves through a one-dimensional lattice by jumping either left or right with a given probability. The waiting times between consecutive jumps are chosen from a probability density distribution given by \(\psi(t)=\alpha\tau^{-1}(\tau/t)^{\alpha+1}\), for \(t>\tau\), and zero otherwise. The available positions for the walker on the lattice are \(x=j\sigma\), where \(j\) is an integer.
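The waiting-time density \(\psi(t)\) above is a Pareto law, which can be sampled by inverse-transform sampling, \(t=\tau u^{-1/\alpha}\) with \(u\) uniform in \((0,1]\). The snippet below (a minimal Python/NumPy sketch, not taken from the original work) illustrates this and checks the sample median against the analytical value \(\tau\,2^{1/\alpha}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_waiting_times(n, alpha, tau):
    """Draw n waiting times from psi(t) = alpha * tau**alpha / t**(alpha+1), t > tau,
    via inverse-transform sampling."""
    u = 1.0 - rng.random(n)            # uniform in (0, 1]
    return tau * u ** (-1.0 / alpha)

alpha, tau = 0.6, 1.8e-8
t = sample_waiting_times(10**6, alpha, tau)
print(np.median(t), tau * 2.0 ** (1.0 / alpha))   # the two values should agree closely
```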
At time \(t=t_{n}=\sum_{i=1}^{n}t_{i}\), with \(t_{i}\) drawn from \(\psi(t_{i})\), the position of the random walker is given by \(x_{n}=x_{n-1}\pm\sigma\), that is, the walker jumps from a site \(j\) to site \(j\pm 1\), which occurs with probability \[p_{j}=\frac{1}{2}(1\pm g_{j})\,, \tag{22}\] where \(g_{j}\) (such that \(|g_{j}|\leq 1\)) is given by [49] \[g_{j} = \frac{\sigma}{2\xi}f(j\sigma)=-\frac{\mu}{2\xi}\frac{j\sigma^{2}}{(1+(j\sigma)^{2})^{\frac{\mu}{2}+1}}.\] In Eq. (22), we have used thermal detailed balance and hence the jump probabilities depend on temperature as usual. Note that small enough \(\sigma\), suitable for the continuum FTFPE [49], requires \(|g_{j}|\ll 1\). The bias \(g_{j}\) renders the jump probability space-dependent, emulating the effect of the potential, and vanishes under free subdiffusion where the walker moves to the left or to the right with equal probability \(1/2\). Let us note that in the free case, Eq. (8) yields \(\langle x^{2}\rangle=(2/\Gamma[\alpha+1])t^{\alpha}\), when \(0<\alpha<1\). Figure 1 displays results obtained from simulations of trajectories based on the CTRW model for \(\alpha=0.6\) (symbols), for a packet of particles starting at the origin. These results are in good agreement with the solid lines obtained from the numerical integration of the FTFPE, described in Sec. II. To establish a connection between the microscopic description and the FTFPE, we must have \(\sigma\to 0\) and time \(\tau\to 0\) such that \(\mathcal{K}_{\alpha}\) remains well defined according to the rescaling in Eq. (7). This is equivalent to setting the generalized diffusion constant \(\mathcal{K}_{\alpha}=1\), which in turn means that \(\sigma\) and \(\tau\) are not independent, and we have \(\sigma^{2}=2\Gamma[1-\alpha]\tau^{\alpha}\)[34]. The effects of the fractional order \(\alpha\) on the NNQE states will be shown in the next sections based on the analytical results, as well as numerical integration of the FTFPE, which are less time-demanding than simulations of numerical trajectories. ## IV NNQE lifespan In this section, we analyze the effect of the fractional order \(\alpha\) on the characteristics of quasi-equilibrium, taking the MSD and the energy of the particle as representative dynamical and thermodynamical quantities, respectively. ### Mean-square displacement In Fig. 4, we show the behavior of the MSD vs. time, for different values of \(\alpha\), and for \(\xi=0.08\) (a) and \(0.05\) (b). Besides the outcomes from the exact numerical integration (solid lines), we plot the approximate analytical expression valid at intermediate-to-long times (dashed) and the analytical prediction of the NNQE level (dotted), which will be presented below. The very good agreement between our analytical expressions and the exact numerical results validates the formulas that we derive below using the approximate analytical expressions. By means of the approximate solutions derived in Sec. II.3, valid at intermediate-to-long timescales (see derivation in Appendix A), we have \[\langle x^{2}\rangle_{\alpha}(t) \simeq \langle x^{2}\rangle^{\mathbf{C}}+\langle x^{2}\rangle_{\alpha}^{\mathbf{T}}(t) \tag{23}\] \[\simeq \langle x^{2}\rangle^{\mathbf{C}}+\frac{4t^{\frac{3\alpha}{2}}}{\mathcal{Z}\,\Gamma(\frac{3\alpha}{2}+1)}\,,\] where, within the considered timescales, the second term is negligible compared to the first, which is time-independent.
Moreover, \(\langle x^{2}\rangle^{\mathbf{C}}\) is the same for any \(\alpha\), hence equal to the expression already obtained for \(\alpha=1\)[4], namely, for any \(\mu>1\), \[\langle x^{2}\rangle^{\mathbf{C}}\simeq\frac{\int_{0}^{\infty}x^{2}(e^{-v(x)/\xi}-h(x))dx}{\int_{0}^{\infty}(e^{-v(x)/\xi}-1)dx}\,, \tag{24}\] where \(h(x)=\sum_{k=0}^{\lfloor 3/\mu\rfloor}(-1)^{k}[v(x)/\xi]^{k}/k!\)[5]. That is, \(h(x)\) has only one term for \(\mu=4\) and two terms for \(\mu=2\). This auxiliary function is responsible for regularizing the integrand, ensuring that the integral converges to a finite value. Time-independence occurs, according to Eq. (23), for timescales such that \[t\ll t^{**}\equiv\left\{\langle x^{2}\rangle^{\mathbf{C}}\,\frac{\mathcal{Z}}{4}\,\Gamma\Big{(}\frac{3\alpha}{2}+1\Big{)}\right\}^{\frac{2}{3\alpha}}\sim e^{\frac{2}{3\alpha\xi}}, \tag{25}\] recalling that both \(\mathcal{Z}\) and \(\langle x^{2}\rangle^{\mathbf{C}}\) depend on \(\xi\) and \(\mu\), but not on \(\alpha\). This inequality is more restrictive than the condition found in Eq. (21), since \(t^{**}<t^{*}\). Then, the lifetime of the quasi-stationary MSD increases with \(1/\xi\), as can be seen by comparing both panels of Fig. 4. It also increases with \(1/\alpha\), as seen by comparing the different curves in each panel. In the limit \(\alpha\to 0\), \(t^{**}\sim e^{2/(3\xi\alpha)}\to\infty\), meaning that the stagnation tends to last forever. However, if \(\alpha=0\), the NNQE is trivial, as it simply means that the initial condition remains frozen, similarly to the limit of zero temperature \(\xi\). In Fig. 5, we display how the duration time \(t_{d}\), during which the MSD remains in NNQE, depends on \(\alpha\). This duration was estimated as the instant at which the MSD exceeds the NNQE value by \(\delta\)% (symbols), where \(\delta=0.1\). This estimate can be obtained by identifying the last term in Eq. (23) with the excess \((\delta/100)\langle x^{2}\rangle^{\mathbf{C}}\), which provides the prediction \[t_{d}=t^{**}\,(\delta/100)^{\frac{2}{3\alpha}}\sim(e^{\frac{1}{\xi}}\delta/100)^{\frac{2}{3\alpha}}. \tag{26}\] The effect of the shape of the potential is embodied in \(t^{**}\) through \(\mathcal{Z}\) and \(\langle x^{2}\rangle^{\mathbf{C}}\). Notice in Fig. 5 that \(t_{d}\) increases exponentially with \(1/\alpha\) following the predicted law. It is important to remark that we are considering the nondimensionalized variables, while in the transformation \(t/t_{0}\to t\), the scaling factor \(t_{0}=(x_{0}^{2}/\mathcal{K}_{\alpha})^{1/\alpha}\), in Eq. (7), depends on \(\alpha\). Hence, the observation that decreasing \(\alpha\) prolongs the duration of NNQE, as observed for the scaled time (equivalent to setting \(t_{0}=1\)), is still true for the real time if \(x_{0}^{2}/\mathcal{K}_{\alpha}\gtrsim 1\), but it can be inverted otherwise. In fact, to recover the original (nonscaled) variables, the time in Eqs. (21) and (25) must be multiplied by \(t_{0}=(x_{0}^{2}/\mathcal{K}_{\alpha})^{1/\alpha}\). Therefore, if \(x_{0}^{2}/\mathcal{K}_{\alpha}\lesssim\exp[-2/(3\xi)]\), which can occur for a relatively narrow potential well or a large diffusivity coefficient, the lifetime of the MSD plateau increases with \(\alpha\). ### Energy In this section we discuss the time behavior and NNQE duration for a thermodynamical observable, the energy. The time evolution of the average energy is shown in Fig. 6, for \(\xi=0.1\) (a) and \(0.08\) (b), with different values of \(\alpha\).
Also in this case, the lifespan increases, diverging exponentially with \(1/\alpha\), as shown in Fig. 5. Figure 4: MSD vs. time for a potential with \(\mu=4\), being \(\xi=0.08\) (a) and \(0.05\) (b). The solid lines are the exact numerical solution for the fractional cases. Dashed lines represent the averages performed over the approximate PDFs, according to Eq. (23). Notice that the NNQE level (dotted line) does not depend on \(\alpha\). Figure 5: Duration \(t_{d}\) of NNQE for the MSD (filled symbols) and for the energy (open symbols). The duration was obtained from the respective timeseries, by measuring the time at which the averaged observable departs from the NNQE value by \(\delta=0.1\)%. The full lines correspond to the estimates given by Eq. (26) for the MSD and dotted lines to exponential fits for the energy data. The values of the relative temperature are \(\xi=0.05\) (circles), \(\xi=0.08\) (squares) and \(\xi=0.1\) (triangles). Potentials with \(\mu=2\) (light purple) and \(\mu=4\) (dark blue) were considered. Since the energy vanishes asymptotically, the tails have a smaller effect on the average energy than they do for the moments of the PDF. As a first consequence, the plateau of the energy can emerge for higher relative temperature \(\xi\) than for the MSD. Note in Fig. 6, where we plot the average energy versus time, that a plateau appears for \(\xi=0.1\), while not for the MSD (see Fig. 4). Moreover, the lifetime for the average energy is much larger than for the MSD at the same relative temperature. This is because the MSD is more sensitive to the large \(x\) behavior of the density than the energy observable. The NNQE value of the energy can be predicted by averaging the potential energy with the regularized procedure, yielding for \(\mu>1\)[4] \[\langle u\rangle^{\mathbf{C}}\simeq\frac{\int_{0}^{\infty}v(x)e^{-v(x)/\xi}dx} {\int_{0}^{\infty}(e^{-v(x)/\xi}-1)dx}\,, \tag{27}\] represented by dotted lines in Fig. 6, in good agreement with the corresponding numerical solutions. As another consequence of the asymptotic behavior of the observable, only the denominator needs to be regularized. Through an heuristic reasoning, we can estimate that the correction to the stagnation value, \(\langle u\rangle^{\mathbf{T}}(t)\), which describes the departure from the plateau, is \(\mathcal{O}(t^{\alpha/2})\), the same law as in Eq. (20), which will lead to a departure time following a law similar to Eq. (25). In the central region, the PDF is proportional to the BG factor \(e^{-v_{\mu}(x)/\xi}\), and since the energy only shows non-negligible values in this central region, the approximation is expected to hold as long as our approximation for the PDF holds. The departure times \(t_{d}\), observed numerically, for the energy are plotted in Fig. 5 (open symbols), showing exponential dependence with \(1/\alpha\), as for the MSD. ## V Discussion and final remarks We have introduced a fractional time derivative of Caputo type in the FPE (actually, a Riemann Liouville integral equation) for an asymptotically flat potential with a well at the origin. We have shown that this potential leads to NNQE states, in which the averaged dynamical and thermodynamical observables can be obtained by a regularized procedure of BG statistics, valid for the whole range of anomalous processes, \(0<\alpha\leq 1\), since this index acts only on the time domain and not in the spatial one. 
The reason for this universal behavior is that the quasi-equilibrium is controlled by the BG factor, and not by \(\alpha\), as the latter is responsible for dynamical effects only. Hence, in that sense, the regularized approach obtained for the integer-order case [4; 5] is general. With regard to the lifetime of the NNQE states for the MSD, for fixed \(\alpha\), it increases exponentially with the ratio between the potential depth at the origin and the temperature, \(1/\xi\), as \(\exp[2/(3\alpha\xi)]\), as given by Eq. (25). The fractional exponent \(\alpha\) contributes to this exponential similarly to \(\xi\). The lifetime is typically longer the more subdiffusive the dynamics (the smaller \(\alpha\)). However, the opposite may happen, in the original (non-scaled) variable, when the width of the potential \(x_{0}\) is sufficiently small or the diffusivity constant \(\mathcal{K}_{\alpha}\) sufficiently large, as discussed in Sec. IV. Looking forward, it would be interesting to explore other generalizations of the FPE, with both time and space fractional derivatives. It may also be worth considering systems driven by non-Gaussian and colored heat baths, as well as the behavior at very long times, after the NNQE regime, once enough particles have escaped the well. In the limit of large \(\xi\), that is, \(k_{B}T\gtrsim V_{0}\), the particles will escape and return to the origin many times. This represents another direction of research, where we expect the statistics to be also related to the Boltzmann-Gibbs distribution, and infinite ergodic theory to hold [50]. Finally, ergodic properties of time averages and their relation to the Boltzmann-Gibbs measure could be studied as well. Figure 6: Average energy vs. time, for \(\mu=4\), and different values of \(\alpha\), when \(\xi=0.1\) (a) and \(\xi=0.08\) (b). The solid lines correspond to the exact numerical solution. The NNQE level (dotted line), given by Eq. (27), does not depend on \(\alpha\). **ACKNOWLEDGMENTS:** We acknowledge partial financial support from CAPES (code 001), CNPq and FAPERJ, in Brazil. The support of Israel Science Foundation's grant 1614/21 is acknowledged. ## Appendix A Theoretical averages For sufficiently large \(t\), the part of the solution \(P^{\mathbf{C}}(x,t)\) becomes time-independent, hence, Eq. (18) captures the behavior of the majority of the particles. Then, when the system is at intermediate-to-long timescales, we can determine the MSD by the procedure that will be detailed below. We will omit the subscript \(\alpha\), to simplify the notation. To calculate the average of an observable, we split the integration as follows \[\langle\mathcal{O}\rangle\simeq 2\int_{0}^{\ell}\mathcal{O}(x)P^{\mathbf{C}}(x,t)dx+2\int_{\ell}^{\infty}\mathcal{O}(x)P^{\mathbf{T}}(x,t)dx.\] First note that the first integral is nearly constant in time. The integral corresponding to the region of the tails, \(\ell\leq x\), can be calculated via the Laplace transform in the time variable.
For the MSD, we obtain \[\widetilde{\langle x^{2}\rangle}^{\mathbf{T}}(s) \simeq 2\int_{\ell}^{\infty}x^{2}\widetilde{P}^{\mathbf{T}}(x,s)dx\] \[\simeq \frac{2}{\mathcal{Z}s}\int_{\ell}^{\infty}x^{2}e^{-s^{\frac{ \alpha}{2}}|x|}dx\] \[= \left(\frac{\ell^{2}}{2s^{\frac{\alpha}{2}+1}}+\frac{\ell}{s^{ \alpha+1}}+\frac{1}{s^{\frac{3\alpha}{2}+1}}\right)\frac{4}{\mathcal{Z}}e^{-s ^{\frac{\alpha}{2}}\ell}\,,\] which, for long time, i.e., \(s\to 0\), becomes \[\widetilde{\langle x^{2}\rangle}^{\mathbf{T}}(s) \sim \frac{4}{\mathcal{Z}\,s^{\frac{3\alpha}{2}+1}}\,, \tag{20}\] and after applying the inverse Laplace transform, we arrive at \[\langle x^{2}\rangle^{\mathbf{T}}(t) \simeq \frac{4}{\mathcal{Z}\,\Gamma(\frac{3\alpha}{2}+1)}t^{\frac{3 \alpha}{2}}+\mathcal{O}(t^{\frac{\alpha}{2}})\,. \tag{21}\] That is, in the large-\(t\) limit, we obtain \[\langle x^{2}\rangle(t) \simeq \langle x^{2}\rangle^{\mathbf{C}}+\langle x^{2}\rangle^{\mathbf{ T}}(t) \tag{22}\] \[\simeq \langle x^{2}\rangle^{\mathbf{C}}+\frac{4}{\mathcal{Z}\,\Gamma( \frac{3\alpha}{2}+1)}t^{\frac{3\alpha}{2}}\,.\]
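For completeness, the regularized quantities entering the NNQE predictions, \(\mathcal{Z}\) of Eq. (17), \(\langle x^{2}\rangle^{\mathbf{C}}\) of Eq. (24) and \(\langle u\rangle^{\mathbf{C}}\) of Eq. (27), can be evaluated by direct numerical quadrature. The sketch below (Python with SciPy) is an illustrative transcription of those formulas under that tooling assumption, not the authors' code.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def v(x, mu):
    return -1.0 / (1.0 + x**2) ** (mu / 2.0)

def nnqe_observables(xi, mu):
    """Regularized partition function Z (exact integral of Eq. 17) and the NNQE plateau
    values of the MSD (Eq. 24) and of the energy (Eq. 27)."""
    boltz = lambda x: np.exp(-v(x, mu) / xi)
    half_Z, _ = quad(lambda x: boltz(x) - 1.0, 0.0, np.inf)   # integral over x > 0 only
    Z = 2.0 * half_Z
    kmax = int(np.floor(3.0 / mu))                            # number of regularizing terms in h(x)
    h = lambda x: sum((-v(x, mu) / xi) ** k / factorial(k) for k in range(kmax + 1))
    num_x2, _ = quad(lambda x: x**2 * (boltz(x) - h(x)), 0.0, np.inf)
    num_u, _ = quad(lambda x: v(x, mu) * boltz(x), 0.0, np.inf)
    return Z, num_x2 / half_Z, num_u / half_Z

print(nnqe_observables(xi=0.05, mu=4))
```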
2309.08610
Do the Frankenstein, or how to achieve better out-of-distribution performance with manifold mixing model soup
The standard recipe applied in transfer learning is to finetune a pretrained model on the task-specific dataset with different hyperparameter settings and pick the model with the highest accuracy on the validation dataset. Unfortunately, this leads to models which do not perform well under distribution shifts, e.g. when the model is given graphical sketches of the object as input instead of photos. In order to address this, we propose the manifold mixing model soup, an algorithm which mixes together the latent space manifolds of multiple finetuned models in an optimal way in order to generate a fused model. We show that the fused model gives significantly better out-of-distribution performance (+3.5 % compared to best individual model) when finetuning a CLIP model for image classification. In addition, it provides also better accuracy on the original dataset where the finetuning has been done.
Hannes Fassold
2023-08-28T06:13:32Z
http://arxiv.org/abs/2309.08610v1
Do the Frankenstein, or how to achieve better out-of-distribution performance with manifold mixing model soups ###### Abstract The standard recipe applied in transfer learning is to finetune a pretrained model on the task-specific dataset with different hyperparameter settings and pick the model with the highest accuracy on the validation dataset. Unfortunately, this leads to models which do not perform well under distribution shifts, e.g. when the model is given graphical sketches of the object as input instead of photos. In order to address this, we propose the _manifold mixing model soup_, an algorithm which mixes together the latent space manifolds of multiple finetuned models in an optimal way in order to generate a fused model. We show that the fused model gives significantly better out-of-distribution performance (+3.5 % compared to best individual model) when finetuning a CLIP model for image classification. In addition, it provides also better accuracy on the original dataset where the finetuning has been done. **Keywords:** Latent space manifold, transfer learning, finetuning, distribution shift, image classification ## 1 Introduction Large pretrained visual foundation models like CLIP [14] or CoCa [27] got very popular recently due to their great performance for a variety of computer vision tasks, either as zero-shot learner (without finetuning) or serving as a base for task-specific finetuning on a smaller dataset. Typically, multiple models are finetuned with different hyperparameters (like learning rate, weight decay or data augmentation strategy), using the same pretrained model as initialization. From those, the model with the best accuracy on the validation dataset is selected. Unfortunately, this procedure leaves out important information which has been learned in the latent space manifolds (individual layers or a collection of layers) of the remaining finetuned models. As shown in [27], even fusing multiple finetuned models in a very straightforward way by averaging them makes the fused model already significantly more robust to distribution shifts in the data. Motivated by this, we propose the _manifold mixing model soup_ (_ManifoldMixMS_) algorithm. Instead of simple averaging, it uses a more sophisticated strategy to generate the fused model. Specifically, it partitions a neural network model into several latent space manifolds (which can be individual layers or a collection of layers). Afterwards, from the pool of finetuned models available after hyperparameter tuning, the most promising ones are selected and their latent space manifolds are mixed together individually. The optimal mixing coefficient for each latent space manifold is calculated automatically via invoking an optimization algorithm. The fused model we retrieve with this procedure can be thought as sort of a "Frankenstein" model, as it integrates (parts of) individual model components from multiple finetuned models into one model. The remainder of the work is organized as follows. In section 2 we revise related work. Section 3 presents the proposed manifold mixing model soup algorithm. Section 4 presents the experiments and evaluation, which show the advantage of the proposed algorithm with respect to the state of the art, especially with respect to distribution shifts in the data. Finally, section 5 concludes the paper. 
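To make the simple averaging baseline mentioned above concrete, fusing several finetuned models by straightforward weight averaging can be written in a few lines. The sketch below (PyTorch-style Python, an illustration rather than code from this paper) averages the state dicts of models that share one architecture.

```python
import torch

def average_models(state_dicts):
    """Average the weights of several finetuned models with identical architecture.
    All state dicts are assumed to contain the same parameter names and shapes."""
    fused = {}
    for name in state_dicts[0]:
        fused[name] = torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
    return fused

# usage: fused_state = average_models([m.state_dict() for m in finetuned_models])
```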
## 2 State of the Art A variety of methods has been proposed recently for merging several models into one fused model, with the aim of increasing the generalization capability and robustness to distribution shifts of the fused model. A classical method is _stochastic weight averaging_ (SWA) [17], which produces the fused model by averaging the weights of a set of models sampled from the final stages of a single training run. The authors show that SWA leads to solutions of the optimization problem that are wider than the optima found by standard SGD, which in turn leads to a better generalization of the fused model. The authors of [21] propose to replicate and learn in parallel a subset of weights (e.g. the batch-norm and classification layers) in a late phase of neural network learning. These late-phase weights define an ensemble of models which share every other weight. These parameters are then optimized independently and subsequently averaged. In [14] an algorithm called _population parameter averaging_ (PAPA) is presented, which trains a population of models in parallel (with different learning rate, data augmentation strategies etc.). It improves overall performance by infrequently replacing the weights with the population average and frequently pushing all model weights slightly towards the population average. A disadvantage of this method is the high memory consumption, as the gradients for _several_ parallel training runs have to be kept in memory. The work of [13] shows that averaging the weights of multiple models finetuned with different hyperparameter configurations often improves accuracy and out-of-distribution performance of the averaged model. They propose two different averaging algorithms (which they call "souping"), _uniform soup_ and _greedy soup_. The uniform soup is a very simple procedure, as it simply averages all finetuned models. In contrast, the greedy soup is constructed by sequentially adding each model as a potential ingredient to the soup, and only keeping the model if it improves the performance of the averaged model. Our proposed _manifold mixing model soup_ algorithm (see section 3) is inspired by their greedy soup algorithm. In contrast to it, however, we partition the model into several components (latent space manifolds) and perform an optimization to calculate the optimal mixing factor for each component. In [12] it is shown that uniform averaging of several finetuned models corresponds to making an isotropic Gaussian approximation to their posteriors. The authors propose an alternative merging procedure based on the Laplace approximation, where each model's posterior is approximated as a Gaussian distribution whose precision matrix (inverse of the covariance matrix) corresponds to its Fisher information. The authors of [13] found that while fine-tuning a pretrained vision model improves performance on the downstream task, it also tends to decrease accuracy on the original pretraining task. They therefore propose a robust finetuning procedure called _WiSE-FT_ that computes a weighted average of the original pretrained parameters and the finetuned parameters. Different weighting values produce different trade-offs between pretraining and finetuning task performance. ## 3 Manifold mixing model soup In the following, we outline the proposed algorithm for generating a fused model - the _manifold mixing model soup_ - from its ingredients (the finetuned models after hyperparameter tuning). The algorithm pseudocode can be seen in Algorithm 1.
The inputs to the algorithm are the finetuned models \(\theta_{i}\), \(i=0,...,n-1\), resulting from hyperparameter tuning, a partitioning of a model into \(m\) components (latent space manifolds), where a component can be an individual layer or a group of several layers, and a function \(ValAcc(\cdot)\) which calculates the validation accuracy of a model on the dataset used for finetuning. The finetuned models are sorted in descending order of their validation accuracy, so that \(\theta_{0}\) denotes the best individual model. The motivation for grouping several layers into one component is to reduce the number of variables during optimization, which makes it easier for the optimizer to find a good optimum. For each model, of course the same partitioning is employed. The fused model \(\Psi\) is now calculated in a sequential way, by iteratively mixing promising ingredient models with it. At first, the fused model is set to the best finetuned model via \(\Psi=\theta_{0}\), and the variable \(k\), which counts the number of models which have been mixed so far into the fused model, is set to 1.
In each iteration (for \(i=1,...,n-1\)), we try now to mix the candidate model \(\theta_{i}\) with the current fused model \(\Psi\) in an optimal way, with the aim of increasing the validation accuracy of the updated fused model \(\Psi^{\prime}\) (which includes \(\theta_{i}\)) on the original dataset. In order to save computation time, we skip the optimization step for a candidate model \(\theta_{i}\) for which it is unlikely that we get an increase in the validation accuracy by mixing \(\theta_{i}\) into the current fused model \(\Psi\). For that, we generate the "approximate average" model \(\bar{\Psi}\) via \[\bar{\Psi}=\frac{k}{k+1}\cdot\Psi+\frac{1}{k+1}\cdot\theta_{i} \tag{1}\] and test whether the condition \(ValAcc(\bar{\Psi})>\tau\cdot ValAcc(\Psi)\) is fulfilled. If so, we continue with this iteration. If it is not fulfilled, we skip the following steps of this iteration, so candidate model \(\theta_{i}\) will not be taken into account. The motivation for the specific combination provided in Equation (1) is that \(\bar{\Psi}\) calculated in this way corresponds approximately to the _average_ of all candidate models (like in [20]) which have been mixed so far into the fused model (including \(\theta_{i}\)), _if we assume_ that the optimization did not change the mixing coefficients drastically from their provided initial values. We set the constant \(\tau\) to 0.998. Having identified \(\theta_{i}\) as a promising candidate model, in the next step we determine the optimal factors for mixing its latent space manifolds into the current fused model \(\Psi\). For this, we define the updated fused model \(\Psi^{\prime}(\lambda)\) as a _component-wise_ convex combination of \(\Psi\) and \(\theta_{i}\) via \[\Psi^{\prime}(\lambda)^{j}=\lambda^{j}\cdot\Psi^{j}+(1-\lambda^{j})\cdot \theta_{i}^{j} \tag{2}\] for all components \(j=1,...,m\). Note that \(\Psi^{\prime}(\lambda)\) is a function of the mixing vector \(\lambda\). The mixing factor \(\lambda^{j}\in[0,1]\) determines how much of the \(j-th\) component (latent space manifold) of the candidate model \(\theta_{i}\) is mixed into the current fused model \(\Psi\). The component-wise convex combination of the two models allows an optimizer to explore the latent space manifolds of the models \(\Psi\) and \(\theta_{i}\) in a very flexible way, in order to find the optimal mixing vector \(\lambda^{*}\in\mathbb{R}^{m}\) which gives the highest validation accuracy for the updated fused model \(\Psi^{\prime}\). For the subsequent optimization step, we set up the optimization problem to solve as \[\lambda^{*}=\operatorname*{arg\,max}_{\lambda\in[0,1]^{m}}\left(ValAcc\left( \Psi^{\prime}(\lambda)\right)\right) \tag{3}\] where \([0,1]^{m}\) is the \(m-\)dimensional unit interval. Via the constraint \(\lambda\in[0,1]^{m}\) we ensure that a convex combination is done for each component, so we are in fact _interpolating linearly_ between the latent space manifolds \(\Psi^{j}\) and \(\theta^{j}\). The model \(\Psi^{\prime}(\lambda)\) can be calculated via Equation (2). For solving this optimization problem, we employ the _Nevergrad_1 optimization package. It provides a large variety of black-box _derivative-free_ optimization algorithms together with a sophisticated heuristic [15] to select the best optimizer based on the characteristic (number of variables, allowed budget for function evaluations etc.) of the optimization problem. 
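A compact sketch of the component-wise mixing of Eq. (2) and the search of Eq. (3) is given below (Python, using PyTorch-style state dicts and the Nevergrad API). Here `components` maps each parameter name to its component index and `val_acc` evaluates validation accuracy for a given state dict; both are placeholders the user must supply, and the snippet illustrates the idea rather than reproducing the exact implementation used in the paper.

```python
import nevergrad as ng

def mix(psi, theta_i, lam, components):
    """Eq. (2): for a parameter belonging to component j, blend with factor lam[j]."""
    return {name: float(lam[components[name]]) * psi[name]
                  + (1.0 - float(lam[components[name]])) * theta_i[name]
            for name in psi}

def optimal_mixing(psi, theta_i, components, m, val_acc, budget=250):
    """Eq. (3): search lambda in [0,1]^m that maximizes validation accuracy with a
    derivative-free optimizer (NGOpt delegates to a suitable method, e.g. Cobyla)."""
    param = ng.p.Array(shape=(m,)).set_bounds(0.0, 1.0)
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=budget)
    best = optimizer.minimize(lambda lam: -val_acc(mix(psi, theta_i, lam, components)))
    return best.value   # the optimal mixing vector lambda*
```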
As the initial value for the mixing factors, we set \(\lambda^{j}=k/(k+1)\) for \(j=1,...,m\), with a similar motivation as explained earlier for Equation (1). Footnote 1: [https://facebookresearch.github.io/nevergrad/](https://facebookresearch.github.io/nevergrad/) We now invoke the optimizer in order to calculate the optimal mixing vector \(\lambda^{*}\) which gives the highest validation accuracy on the dataset used for finetuning. The optimal updated fused model can now be calculated via \(\Psi^{\prime\,*}=\Psi^{\prime}(\lambda^{*})\). We now check whether the condition \(ValAcc(\Psi^{\prime\,*})>ValAcc(\Psi)\) is fulfilled. If so, we have found a better fused model \(\Psi^{\prime\,*}\) by mixing \(\theta_{i}\) into it. Consequently, we replace \(\Psi\) by \(\Psi^{\prime\,*}\) and increase \(k\) by 1. If this is not the case, we keep the current fused model \(\Psi\) and \(k\) as they are. After iterating over all candidate models \(\theta_{i}\) for \(i=1,...,n-1\), we retrieve a final fused model \(\Psi\) (the _manifold mixing model soup_), which mixes together the \(k\) selected candidate models / ingredients in an optimal way.

```
Require: finetuned models {θ_0, ..., θ_{n-1}} as result of hyperparameter tuning
Require: partitioning of a model ζ into m components (latent space manifolds) ζ^j for j = 1, ..., m
Require: function ValAcc(ζ) which calculates validation accuracy for ζ on the dataset used for finetuning

{θ_0, ..., θ_{n-1}} ← sort({θ_0, ..., θ_{n-1}})    ▷ sort in descending order based on ValAcc(θ_i)
k ← 1                                              ▷ number of candidate models mixed into fused model
Ψ ← θ_0                                            ▷ set initial fused model to best finetuned model θ_0
τ ← 0.998                                          ▷ tolerance factor for promising candidate models
for i = 1, ..., n-1 do                             ▷ iterate over all candidate models θ_i
    Ψ̄ = k/(k+1) · Ψ + 1/(k+1) · θ_i                ▷ generate "approximate average" model
    if ValAcc(Ψ̄) > τ · ValAcc(Ψ) then              ▷ do optimization only if candidate is promising
        Ψ'(λ)^j = λ^j · Ψ^j + (1 - λ^j) · θ_i^j    ▷ define updated fused model for components j = 1, ..., m
        λ* = argmax_{λ ∈ [0,1]^m} ValAcc(Ψ'(λ))    ▷ calculate optimal mixing factors
        Ψ'* = Ψ'(λ*)                               ▷ calculate optimal updated fused model
        if ValAcc(Ψ'*) > ValAcc(Ψ) then
            k ← k + 1
            Ψ ← Ψ'*                                ▷ mix candidate model θ_i into current fused model
        end if
    end if
end for
return Ψ                                           ▷ return final fused model
```
**Algorithm 1** Manifold mixing model soup algorithm

## 4 Experiments and Evaluation The setup for our experiments is very similar to the one for the vision models given in the _model soup_ paper [20]. We summarize it in the following for clarity and completeness. The model employed for finetuning is the _CLIP_ model [16]. CLIP is a powerful multi-modal zero-shot neural network, which has been pretrained with contrastive learning on a huge dataset of image-text pairs. Specifically, we use the _CLIP ViT-B/32_ variant specified in Table 20 of [16] and provided in the _OpenCLIP_ package 2.
Finetuning of the pretrained model is performed end-to-end (all parameters are modified), as it typically leads to better performance than training only the final linear layer. Before finetuning, the final layer is initialized with a linear probe as described in [17]. The loss function employed for finetuning is the cross-entropy loss. Footnote 2: [https://github.com/mlfoundations/open_clip](https://github.com/mlfoundations/open_clip) The original dataset employed for finetuning is _ImageNet_[4]. Since the official ImageNet validation dataset is typically used as the test dataset, we use roughly 2% of the ImageNet training dataset as a held-out validation dataset for calculating the validation accuracy in our proposed algorithm (see section 3 and the pseudocode provided in Algorithm 1). For measuring the out-of-distribution performance (robustness to distribution shifts) of our proposed algorithm, we employ five datasets derived from ImageNet with natural (not synthetically generated) distribution shifts. They correspond to datasets with naturally occurring variations of the data samples due to different lighting, viewpoint, geographic location, image style (e.g. sketch instead of photo), crowdsourcing and more. The five datasets with distribution shifts we use are:

* ImageNet-V2 (IN-V2) [18] is a reproduction of the ImageNet test set with distribution shift. The dataset was collected by closely following the original labelling protocol.
* ImageNet-R (IN-R) [15] contains renditions (e.g., sculptures, paintings) for 200 ImageNet classes.
* ImageNet-Sketch (IN-Sketch) [22] contains sketches instead of natural images. It contains only sketches in a black-and-white color scheme.
* ObjectNet [1] provides objects in various scenes with 113 classes overlapping with ImageNet.
* ImageNet-A (IN-A) [1] is a test set of natural images misclassified by a ResNet-50 model for 200 ImageNet classes.

See Figure 1 for an illustration of samples for each of the datasets with natural distribution shifts. For all datasets (the original used for finetuning and the ones with distribution shifts), we take the top-1 accuracy on the respective test set for measuring the performance of a model. We calculate the overall out-of-distribution performance of a model as the average of its test accuracy over all five datasets with distribution shifts. We partition the CLIP ViT-B/32 model into 8, 15 and 26 components. A too fine partitioning (e.g. one component for each layer of the model) makes the optimization much more difficult, whereas a too coarse partitioning does not provide enough flexibility for mixing the latent space manifolds individually in an optimal way. The partitioning is structured roughly according to the hierarchy of the building blocks of the CLIP model. We denote the respective variant of our proposed algorithm with 8, 15 and 26 components as ManifoldMixMS-C8, ManifoldMixMS-C15 and ManifoldMixMS-C26. We parametrize the Nevergrad optimizer with a maximum budget of roughly 250 evaluations of the objective function for all ManifoldMixMS variants. The employed optimizer is automatically selected by the Nevergrad optimization package (see [11]). For our cases, it always selects the _Cobyla_[13] optimization algorithm. The Cobyla algorithm is one of the best derivative-free algorithms for optimization of continuous variables with bound constraints, especially when the allowed number of function evaluations is quite small.
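To illustrate how the partitioning into components described above might be expressed in code, the sketch below assigns each model parameter to one of 8 groups. The regular expression and parameter-name patterns (e.g. `visual.transformer.resblocks.7.mlp.c_fc.weight`) are assumptions about typical OpenCLIP-style naming, and the grouping is only indicative of the idea, not the exact split used in the experiments.

```python
import re

def component_index(name, n_components=8):
    """Map a parameter name to one of n_components groups: embeddings/head parameters of each
    tower get their own group, and the transformer blocks of each tower are bucketed into
    contiguous ranges, roughly following the block hierarchy of the model."""
    match = re.search(r"resblocks\.(\d+)\.", name)
    if match is None:                         # embeddings, projections, layer norms outside blocks
        return 0 if name.startswith("visual") else n_components - 1
    block = int(match.group(1))
    offset = 1 if name.startswith("visual") else n_components // 2
    return offset + min(block // 4, 2)        # three buckets of blocks per tower (ViT-B/32 has 12)

# components = {name: component_index(name) for name in model.state_dict()}
```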
For the evaluation of our proposed manifold mixing model soup algorithm, we compare mainly with the _greedy soup_ and _uniform soup_ algorithms which have been proposed in [14]. Additionally, we compare our proposed algorithm also against ensemble models. We compare against the same ensemble models as done in [14] and take also the accuracy numbers reported there for them. Of course, one should take into account that the computational cost for inference of an ensemble model is much higher - \(K\) times higher for an ensemble model consisting of \(K\) individual models - than for our proposed ManifoldMixMS algorithm which produces only a single fused model. Figure 1: Samples for class _lemon_, from the original ImageNet dataset and the five datasets with natural distribution shifts. Image courtesy of [14] The scatterplot in Figure 2 shows how our proposed ManifoldMixMS-C8 algorithm (the overall best variant) performs compared to the greedy soup and uniform soup algorithm from [Wortsman et al., 2022a] and to the individual finetuned models. Furthermore, Table 1 gives a detailed evaluation of our proposed variants of the manifold mixing soup algorithm with 8, 15 and 26 on the five datasets with distribution shifts (ImageNet-V2, ImageNet-R, ImageNet-Sketch, ObjectNet, ImageNet-A) as well as on the original dataset used for finetuning (ImageNet). One can see clearly from the scatterplot that our proposed manifold mixing model soup (especially the preferred variant with 8 components) algorithm combines the best properties of the uniform model soup and greedy soup algorithm. Specifically, it has practically the same good out-of-distribution accuracy as the uniform soup algorithm and still keeps the good accuracy of the greedy soup algorithm on the original ImageNet dataset. In contrast, the uniform soup algorithm performs on the original ImageNet dataset even worse than the best individual finetuned model. It is significantly better with respect to the best finetuned model both on the datasets with distribution shifts (+3.5%), but also on the original ImageNet dataset (+0.6%). The difference grows even bigger when comparing with the second-best finetuned model. Surprisingly, it has also a significantly better out-of-distribution accuracy than both Ensemble methods, although its accuracy on the original ImageNet dataset is worse especially when compared with the greedy ensemble method. As already mentioned, one should take into account that Ensemble methods have a much higher computational cost. Figure 2: Comparison of our proposed manifold mixing soup algorithm (with 8 components) against greedy soup and uniform soup algorithm from [Wortsman et al., 2022a] and the individual finetuned models. ## 5 Conclusion We propose the _manifold mixing model soup_ algorithm, which mixes together the latent space manifolds of multiple finetuned models in an optimal way in order to generate a fused model. Experiments show that the fused model gives significantly better out-of-distribution performance (+3.5 % compared to best finetuned model) when finetuning a CLIP model for image classification. In the future, we plan to evaluate the proposed algorithm on other neural network architectures, for both computer vision as well as natural language processing tasks. Furthermore, we plan to do a theoretical analysis of the properties of the proposed algorithm in order to get a better insight why it provides a better out-of-distribution performance. 
## Acknowledgment This work was supported by European Union's Horizon 2020 research and innovation programme under grant number 951911 - AI4Media.
2303.13519
Learning and Verification of Task Structure in Instructional Videos
Given the enormous number of instructional videos available online, learning a diverse array of multi-step task models from videos is an appealing goal. We introduce a new pre-trained video model, VideoTaskformer, focused on representing the semantics and structure of instructional videos. We pre-train VideoTaskformer using a simple and effective objective: predicting weakly supervised textual labels for steps that are randomly masked out from an instructional video (masked step modeling). Compared to prior work which learns step representations locally, our approach involves learning them globally, leveraging video of the entire surrounding task as context. From these learned representations, we can verify if an unseen video correctly executes a given task, as well as forecast which steps are likely to be taken after a given step. We introduce two new benchmarks for detecting mistakes in instructional videos, to verify if there is an anomalous step and if steps are executed in the right order. We also introduce a long-term forecasting benchmark, where the goal is to predict long-range future steps from a given step. Our method outperforms previous baselines on these tasks, and we believe the tasks will be a valuable way for the community to measure the quality of step representations. Additionally, we evaluate VideoTaskformer on 3 existing benchmarks -- procedural activity recognition, step classification, and step forecasting -- and demonstrate on each that our method outperforms existing baselines and achieves new state-of-the-art performance.
Medhini Narasimhan, Licheng Yu, Sean Bell, Ning Zhang, Trevor Darrell
2023-03-23T17:59:54Z
http://arxiv.org/abs/2303.13519v1
# Learning and Verification of Task Structure in Instructional Videos ###### Abstract Given the enormous number of instructional videos available online, learning a diverse array of multi-step task models from videos is an appealing goal. We introduce a new pre-trained video model, VideoTaskformer, focused on representing the semantics and structure of instructional videos. We pre-train VideoTaskformer using a simple and effective objective: predicting weakly supervised textual labels for steps that are randomly masked out from an instructional video (masked step modeling). Compared to prior work which learns step representations locally, our approach involves learning them globally, leveraging video of the entire surrounding task as context. From these learned representations, we can verify if an unseen video correctly executes a given task, as well as forecast which steps are likely to be taken after a given step. We introduce two new benchmarks for detecting mistakes in instructional videos, to verify if there is an anomalous step and if steps are executed in the right order. We also introduce a long-term forecasting benchmark, where the goal is to predict long-range future steps from a given step. Our method outperforms previous baselines on these tasks, and we believe the tasks will be a valuable way for the community to measure the quality of step representations. Additionally, we evaluate VideoTaskformer on 3 existing benchmarks--procedural activity recognition, step classification, and step forecasting--and demonstrate on each that our method outperforms existing baselines and achieves new state-of-the-art performance. ## 1 Introduction Picture this: you're trying to build a bookshelf by watching a YouTube video with several intricate steps. You're annoyed by the need to repeatedly hit pause on the video and you're unsure if you have gotten all the steps right so far. Fortunately, you have an interactive assistant that can guide you through the task at your own pace, verifying each step as you perform it and interrupting you if you make a mistake. A composite task such as "_making a bookshelf_" involves multiple fine-grained activities such as "_drilling holes_" and "_adding support blocks_." Accurately categorizing these activities requires not only recognizing the individual steps that compose the task but also understanding the task structure, which includes the temporal ordering of the steps and multiple plausible ways of executing a step (e.g., one can beat eggs with a fork or a whisk). An ideal interactive assistant has both a high-level understanding of a broad range of tasks, as well as a low-level understanding of the intricate steps in the tasks, their temporal ordering, and the multiple ways of performing them. As seen in Fig. 1, prior work [12, 13] models step representations of a single step independent of the overall task context. This might not be the best strategy, given that steps for a task are related, and the way a step is situated in an overall task may contain important information about the step. To address this, we pre-train our model with a masked modeling objective that encourages the step representations to capture the _global context_ of the entire video. Prior work lacks a benchmark for detecting mistakes in videos, which is a crucial component of verifying the quality of instructional video representations. Figure 1: Prior work [13, 12] learns step representations from single short video clips, independent of the task, thus lacking knowledge of task structure.
Our model, VideoTaskformer, learns step representations for masked video steps through the global context of all surrounding steps in the video, making our learned representations aware of task semantics and structure. We introduce a mistake detection task and dataset for verifying if the task in a video is executed correctly--i.e. if each step is executed correctly and in the right order. Our goal is to learn representations for the steps in the instructional video which capture the semantics of the task being performed, such that each step representation contains information about the surrounding context (other steps in the task). To this end, we train a model, VideoTaskformer, using a masked step pre-training approach for learning step representations in instructional videos. We learn step representations jointly for a whole video, by feeding multiple steps to a transformer, and masking out a subset. The network learns to predict labels for the masked steps given just the visual representations of the remaining steps. The learned contextual representations improve performance on downstream tasks such as forecasting steps, classifying steps, and recognizing procedures. Our approach of modeling steps further enables a new method for mistake identification. Recall that our original goal was to assist a user following an instructional video. We synthetically generate a mistakes dataset for evaluation using the step annotations in COIN [25]. We consider two mistake types: mistakes in the steps of a task, and mistakes in the ordering of the steps of a task. For the first, we randomly replace the steps in a video with steps from a similar video. For the second, we re-order the steps in a task. We show that our network is capable of detecting both mistake types and outperforms prior methods on these tasks. Additionally, we evaluate representations learned by VideoTaskformer on three existing benchmarks: step classification, step forecasting, and procedural activity recognition on the COIN dataset. Our experiments show that learning step representations through a masked pre-training objective improves performance on the downstream tasks. We will release code, models, and the mistake detection dataset and benchmark to the community. ## 2 Related Works **Instructional Video Datasets and Tasks.** Large-scale narrated instructional video datasets [6, 17, 25, 30, 31] have paved the way for learning joint video-language representations and task structure from videos. More recently, datasets such as Assembly-101 [21] and Ikea ASM [3] provide videos of people assembling and disassembling toys and furniture. Assembly-101 also contains annotations for detecting mistakes in the video. Some existing benchmarks for evaluating representations learned on instructional video datasets include step localization in videos [6, 25], step classification [6, 25, 31], procedural activity recognition [25], and step forecasting [13]. In our work, we focus on a broad range of instructional videos found in HowTo100M [17] and evaluate the learned representations on the downstream tasks in the COIN [25] dataset. We additionally introduce 3 new benchmarks for detecting mistakes in instructional videos and forecasting long-term activities. **Procedure Learning from Instructional Videos.** Recent works have attempted to learn procedures from instructional videos [2, 5, 13, 19, 27].
Most notably, [5] generates a sequence of actions given a start and a goal image. [2] finds temporal correspondences between key steps across multiple videos while [19] distinguishes pairs of videos performing the same sequence of actions from negative ones. [13] uses distant supervision from WikiHow to localize steps in instructional videos. In contrast to prior works, our step representations are aware of the task structure, as we learn representations globally for all steps in a video jointly rather than locally. **Video Representation Learning.** There has been significant improvement in video action recognition models over the last few years [1, 9, 10, 14]. All of the above methods look at trimmed videos and focus on learning short-range atomic actions. In this work, we build a model that can learn longer and more complex actions, or steps, composed of multiple short-range actions. For example, the first step in Fig. 1, _"Make batter"_, is composed of several atomic actions such as _"pour flour"_ and _"whisk"_. There have also been works [13, 16, 20, 23, 29] which learn representations for longer video clips containing semantically more complex actions. Our work falls into this latter line of work. ## 3 Learning Task Structure through Masked Modeling of Steps Our goal is to learn task-aware step representations from a large corpus of instructional videos. To this end, we develop VideoTaskformer, a video model pre-trained using a BERT [7] style masked modeling loss. In contrast to BERT and VideoBERT [23], we perform masking at the step level, which encourages the network to learn step embeddings that encapsulate the semantics and temporal ordering of steps within the task. Our framework consists of two steps: pre-training and fine-tuning. During pre-training, VideoTaskformer is trained on weakly labeled data on the pre-training task. For fine-tuning, VideoTaskformer is first initialized with the pre-trained parameters, and a subset of the parameters is fine-tuned using labeled data from the downstream tasks. Each downstream task yields a separate fine-tuned model. We first provide an overview of the pre-training approach before delving into details of the individual components. **Overview.** Our approach for pre-training VideoTaskformer is outlined in Fig. 2. Consider an instructional video \(V\) consisting of \(K\) video clips \(v_{i},i\in[1,\dots,K]\) corresponding to \(K\) steps in the video. A step \(v_{i}\in\mathbb{R}^{L\times H\times W\times 3}\) is a sequence of \(L\) consecutive frames depicting a step, or semantic component of the task. For example, for the task "_Making a french toast_", examples of steps include "_Whisk the batter_", and "_Dip bread in batter_." We train a video model VideoTaskformer \(f_{\text{VT}}\) to learn step representations. We mask out a few clips in the input \(V\) and feed it to \(f_{\text{VT}}\) which learns to predict step labels for the masked-out clips. We evaluate the embeddings learned by our pre-training objective on 6 downstream tasks: step classification, procedural activity recognition, step forecasting, mistake step detection, mistake ordering detection, and long-term forecasting. Below, we provide more details on how we pre-train VideoTaskformer using a masked step modeling loss, followed by fine-tuning details on the downstream tasks. ### Pre-training VideoTaskformer with Masked Step Modeling We extend masked language modeling techniques used in BERT and VideoBERT to learn step representations for instructional videos.
While BERT and VideoBERT operate on language and visual tokens respectively, VideoTaskformer operates on clips corresponding to steps in an instructional video. By predicting weakly supervised natural language step labels for masked out clips in the input video, VideoTaskformer learns semantics and long-range temporal interactions between the steps in a task. Unlike prior works wherein step representations are learned from local short video snippets corresponding to the step, our step representations are computed from the entire video, with all the steps as input, and capture the _global context_ of the video. **Masked Step Modeling.** Let \(V=\{v_{1},\dots,v_{K}\}\) denote the visual clips corresponding to \(K\) steps in video \(V\). The goal of our Masked Step Modeling pre-training setup is to encourage VideoTaskformer to learn representations of clips \(v_{i}\) that are aware of the semantics of the corresponding step and the context of the surrounding task. To this end, the task for pre-training is to predict categorical natural language step labels for the masked out steps. While we do not have ground truth step labels, we use the weak supervision procedure proposed by [13] to map each clip \(v_{i}\) to a distribution over step labels \(p(y_{i}\mid v_{i})\) by leveraging the noisy ASR annotations associated with each clip. The distribution \(p(y_{i}\mid v_{i})\) is a categorical distribution over a finite set of step labels \(Y\). More details are provided in Sec. 3.3. Let \(M\subseteq[1,\dots,K]\) denote some subset of clip indices (where each index is included in \(M\) with some masking probability \(r\), a hyperparameter). Let \(V_{\setminus M}\) denote a partially masked-out sequence of clips: the same sequence as \(V\) except with clips \(v_{i}\) masked out for all \(i\in M\). Let \(f_{\text{VT}}\) represent our VideoTaskformer model with parameters \(\theta\). \(f_{\text{VT}}\) is composed of a video encoder model \(f_{\text{vid}}\) which encodes each clip \(v_{i}\) independently, followed by a step transformer \(f_{\text{trans}}\) operating over the sequence of clip representations, and finally a linear layer \(f_{\text{head}}\) (which includes a softmax). The input to the model is an entire video (of size \(K\times L\times H\times W\times 3\)) and the output is of size \(K\times S\) (where \(S\) is the output dimension of the linear layer). We pre-train \(f_{\text{VT}}\) by inputting a masked video \(V_{\setminus M}\) and predicting step labels \(y_{i}\) for each masked-out clip \(v_{i}\), as described below. For the downstream tasks, we extract step-aware representations using \(f_{\text{VT}}\) by feeding an unmasked video \(V\) to the model. We then extract the intermediate outputs of \(f_{\text{trans}}\) (which are of size \(K\times D\), where \(D\) is the output embedding size). To predict step labels for masked-out steps at pre-training time, we consider two training objectives: (1) step classification, and (2) distribution matching. Figure 2: **VideoTaskformer Pre-training (Left). VideoTaskformer \(f_{\text{VT}}\) learns step representations for the masked out video clip \(v_{i}\), while attending to the other clips in the video. It consists of a video encoder \(f_{\text{vid}}\), a step transformer \(f_{\text{trans}}\), and a linear layer \(f_{\text{head}}\), and is trained using weakly supervised step labels. Downstream Tasks (Right). We evaluate step representations learned from VideoTaskformer on 6 downstream tasks.** 
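For concreteness, the following is a minimal sketch of the masked forward pass through \(f_{\text{vid}}\), \(f_{\text{trans}}\), and \(f_{\text{head}}\) described above. It is our own PyTorch-style illustration under simplifying assumptions (the class name, the learned mask token, the transformer hyperparameters, and the stand-in `clip_encoder` are ours), not the authors' released implementation.

```python
import torch
import torch.nn as nn

class VideoTaskformerSketch(nn.Module):
    """Illustrative f_VT = f_head(f_trans(f_vid(.))) with step-level masking."""
    def __init__(self, clip_encoder, d_model=768, n_steps=10588, n_layers=2):
        super().__init__()
        self.f_vid = clip_encoder          # per-clip encoder, e.g. a TimeSformer stand-in
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # replaces masked clip embeddings
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.f_trans = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.f_head = nn.Linear(d_model, n_steps)  # S = |Y| step labels (10,588 WikiHow steps here)

    def forward(self, clips, mask_idx=None):
        # clips: (B, K, L, H, W, 3); encode each of the K step clips independently
        B, K = clips.shape[:2]
        z = torch.stack([self.f_vid(clips[:, k]) for k in range(K)], dim=1)  # (B, K, D)
        if mask_idx is not None:           # pre-training: hide the clips indexed by M
            z[:, mask_idx] = self.mask_token
        z = self.f_trans(z)                # contextualized step representations (B, K, D)
        return self.f_head(z)              # (B, K, S) logits; the softmax is folded into the loss
```

The two pre-training objectives below then compare the logits at the masked positions either against the single best weak-supervision label or against the full label distribution.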
We describe them below in the context of Masked Step Modeling. **Step classification loss.** We use the outputs of \(f_{\text{VT}}\) to represent an \(S\)-dimensional prediction distribution over steps, where \(S=|Y|\). We form the target distribution by placing all probability mass on the best textual step description \(y_{i}^{*}\) for each clip \(v_{i}\) according to the weak supervision process. That is, \[y_{i}^{*}=\operatorname*{argmax}_{y\in Y}p(y\mid v_{i}). \tag{1}\] We calculate the cross entropy between the predicted and target distributions for each masked out clip, yielding the following expression: \[-\log([f_{\text{VT}}(V_{\setminus M})]_{j}) \tag{2}\] where \(j\) is the index of \(y_{i}^{*}\) in \(Y\), i.e., such that \(y_{i}^{*}=Y_{j}\). To get the final training objective for a single masked video \(V_{\setminus M}\), we sum over all indices \(i\in M\), and minimize with respect to \(\theta\). **Distribution matching loss.** For this objective, we treat the distribution of step labels \(p(y_{i}\mid v_{i})\) from weak supervision as the target distribution for each clip \(v_{i}\). We then compute the KL Divergence between the prediction distribution \(f_{\text{VT}}(V_{\setminus M})\) and the target distribution \(p(y_{i}\mid v_{i})\) as follows: \[\sum_{j^{\prime}=1}^{S}p(Y_{j^{\prime}}\mid v_{i})\log\frac{p(Y_{j^{\prime}} \mid v_{i})}{[f_{\text{VT}}(V_{\setminus M})]_{j^{\prime}}} \tag{3}\] We sum over all \(i\in M\) and minimize with respect to \(\theta\). Following [13], we use only the top-\(k\) steps in \(p(y_{i}\mid v_{i})\) and set the probability of the remaining steps to 0. Lin _et al._[13] show that the distribution matching loss results in a slight improvement over step classification loss. For VideoTaskformer, we find both objectives to have similar performance and step classification outperforms distribution matching on some downstream tasks. We use \(f_{\text{VT}}\) as a feature extractor (layer before softmax) to extract step representations for new video segments. ### Downstream Tasks To show that the step representations learned by VideoTaskformer capture task structure and semantics, we evaluate the representations on 6 downstream tasks--3 new tasks which we introduce (mistake step detection, mistake ordering detection, and long-term step forecasting) and 3 existing benchmarks (step classification, procedural activity recognition, and short-term step forecasting). We describe the dataset creation details for our 3 new benchmarks in Sec. 4. **Mistake Detection.** A critical aspect of step representations that are successful at capturing the semantics and structure of a task is that, from these representations, _correctness_ of task execution can be verified. We consider two axes of correctness: content (what steps are portrayed in the video) and ordering (how the steps are temporally ordered). We introduce 2 new benchmark tasks to test these aspects of correctness. \(\bullet\)**Mistake step detection.** The goal of this task is to identify which step in a video is incorrect. More specifically, each input consists of a video \(V=\{v_{1},\dots,v_{K}\}\) with \(K\) steps. \(V\) is identical to some unaltered video \(V_{1}\) that demonstrates a correctly executed task, except that step \(v_{j}\) (for some randomly selected \(j\in[1,\dots,K]\)) is replaced with a random step from a different video \(V_{2}\). The goal of the task is to predict the index \(j\) of the incorrect step in the video. 
\(\bullet\)**Mistake ordering detection.** In this task, the goal is to verify if the steps in a video are in the correct temporal order. The input consists of a video \(V=\{v_{1},\dots,v_{K}\}\) with \(K\) steps. There is a 50% probability that \(V\) is identical to some (correctly ordered) video \(V_{1}=\{v_{1}^{1},\dots,v_{K}^{1}\}\), and there is a 50% probability that the steps are randomly permuted. That is, \(v_{i}=v_{\pi_{i}}^{1}\) for some random permutation \(\pi\) of indices \([1,\dots,K]\). The goal of the task is to predict whether the steps are ordered correctly or are permuted. **Step Forecasting.** As another way to evaluate how learned step representations capture task structure, we test the capabilities of our model in anticipating future steps given one or more clips of a video. \(\bullet\)**Short-term forecasting.** Consider a video \(V=\{v_{1},\dots,v_{n},v_{n+1},\dots v_{K}\}\) where \(v_{i}\) denotes a step, and \(V\) has step labels \(\{y_{1},\dots,y_{K}\}\), where \(y_{i}\in Y\), the finite set of all step labels in the dataset. Short-term forecasting involves predicting the step label \(y_{n+1}\) given the previous \(n\) segments \(\{v_{1},\dots,v_{n}\}\)[13]. \(\bullet\)**Long-term step forecasting.** We introduce the challenging task of long-term step forecasting. Given a single step \(v_{i}\) in a video \(V=\{v_{1},\dots,v_{K}\}\) with step labels \(\{y_{1},\dots,y_{K}\}\), the task is to predict the step labels for the next 5 steps, i.e. \(\{y_{i+1},y_{i+2},\dots,y_{i+5}\}\). This task is particularly challenging since the network receives very little context--just a single step--and needs to leverage task information learned during training from watching multiple different ways of executing the same task. **Procedural Activity Recognition.** The goal of this task is to recognize the procedural activity (i.e., task label) from a long instructional video. The input to the network is all the \(K\) video clips corresponding to the steps in a video, \(V=\{v_{1},\dots,v_{K}\}\). The task is to predict the video task label \(t\in\mathcal{T}\) where \(\mathcal{T}\) is the set of all task labels for all the videos in the dataset. **Step Classification.** In this task, the goal is to predict the step label \(y_{i}\in Y\) given the video clip corresponding to step \(v_{i}\) from a video \(V=\{v_{1},\dots,v_{K}\}\). No context other than the single clip is given. Therefore, this task requires fine grained recognition capability, which would benefit from representations that contain information about the context in which a step gets performed. For all of the above tasks, we use the step and task label annotations as supervision. We show the "zero-shot" performance of VideoTaskformer by keeping the video model \(f_{\text{vid}}\) and the transformer layer \(f_{\text{trans}}\) fixed and only fine-tuning a linear head \(f_{\text{head}}\) on top of the output representations. Additionally, we also show fine-tuning results where we keep the base video model \(f_{\text{vid}}\) fixed and fine-tune the final transformer \(f_{\text{trans}}\) and the linear layer \(f_{\text{head}}\) on top of it. The network is fine-tuned using cross-entropy loss with supervision from the step labels for all downstream tasks. ### Implementation Details **Step labels from Weak Supervision.** To train VideoTaskformer, we require step annotations, i.e., step labels with start and end timestamps in the video, for a large corpus of instructional videos. 
Unfortunately, this is difficult to obtain manually, and datasets that provide these annotations, like COIN and CrossTask, are small in size (\(\sim\)10K videos). To overcome this issue, the video speech transcript can be mapped to steps in WikiHow and used as a weak form of supervision [13]. The intuition behind this is that WikiHow steps are less noisy than transcribed speech. The WikiHow dataset contains a diverse array of articles with step-by-step instructions for performing a range of tasks. Denote the steps across all \(T\) tasks in WikiHow as \(s=\{s_{1},\dots,s_{N}\}\), where \(s_{n}\) represents the natural language title of the \(n\)th step in \(s\), and \(N\) is the number of steps across all tasks in WikiHow. Each step \(s_{n}\) contains a lengthy language-based description which we denote as \(y_{n}\). Consider a video with \(K\) sentences in the automatic speech transcript denoted as \(\{a_{1},\dots,a_{K}\}\). Each sentence is accompanied by a \(\{start,end\}\) timestamp to localize it in the video. This yields \(K\) corresponding video segments denoted as \(\{v_{1},\dots,v_{K}\}\). Each video segment \(v_{i}\) is a sequence of \(F\) RGB frames having spatial resolution \(H\times W\). To obtain the step label for a segment \(v_{i}\), the corresponding sentence in the transcript \(a_{i}\) is used to find the distribution of the nearest steps in the WikiHow repository. Following [13], we approximate this distribution using a textual similarity measure sim between \(y_{n}\) and \(a_{i}\): \[P(y_{n}|v_{i})\approx\frac{\exp{(\text{sim}(a_{i},y_{n}))}}{\sum_{n^{\prime}} \exp{(\text{sim}(a_{i},y_{n^{\prime}}))}}. \tag{4}\] The authors of [13] found it best to search over all the steps across all tasks (i.e., all \(y_{n}\)), rather than the set of steps for the specific task referenced in the video. The similarity function sim is formulated as a dot product between language embeddings obtained from a pre-trained language model. **Language model.** To compare WikiHow steps to the transcribed speech via the sim function, we follow the same setup as in Lin _et al._[13]. For a fair comparison to the baseline, we use MPNet (paraphrase-mpnet-base-v2) to extract sentence embeddings \(\in\mathbb{R}^{768}\). **Video model.** VideoTaskformer is a TimeSformer model with a two-layer transformer. Following [13], the TimeSformer is initialized with ViT [8] pre-trained on ImageNet-21K, and is trained on subsampled clips from HowTo100M (with 8 frames sampled uniformly from each 8-second span). We include additional implementation details in the Supplemental. ## 4 Datasets and Evaluation Metrics **Pre-training.** For pre-training, we use videos and transcripts from the HowTo100M (HT100M) [17] dataset and steps from the WikiHow dataset [4]. HT100M contains 136M video clips from 1.2M long narrated instructional videos, spanning 23k activities such as "gardening" and "personal care." The WikiHow dataset contains 10,588 steps collected from 1059 WikiHow articles which are sourced from the original dataset [11]. **Evaluation.** All evaluation benchmarks use videos and step annotations from the COIN dataset [25]. COIN consists of 11,827 videos related to 180 different tasks and provides step labels with start and end timestamps for every video. We use a subset of 11,104 videos that were available to download. As described in Sec. 3.2, we introduce 3 new benchmark tasks in this work: mistake step detection, mistake ordering detection, and long-term step forecasting. 
_Mistake Step Detection._ For creating the mistake step detection dataset, for every video in the COIN dataset, we randomly replace one step with a step from a different video. The network predicts the index of the mistake step. We use the same train/validation/test splits as in COIN and report average accuracy of predicting the mistake step index on the test set. _Mistake Ordering Detection._ We synthetically create the mistake ordering detection dataset by randomly shuffling the ordering of the steps in a given video for 50% of the videos, and train the network to predict whether the steps are in the right order or not. While creating the dataset, we repeatedly shuffle the input steps until the shuffled "mistake" order is different from the original valid order. Additionally, we compare the shuffled "mistake" order across all the videos in the same task, to ensure it doesn't match any other video's correct ordering of steps. Despite these two pre-processing checks, there might be noise in the dataset. We report average prediction accuracy on the test split. _Long-term step forecasting._ Given a video clip corresponding to a single step, long-term step forecasting involves predicting the step class label for the next 5 consecutive steps. If there are fewer than 5 next steps, we append NULL tokens to the sequence. We compute classification accuracy as the number of correct predictions out of the total number of predictions, ignoring NULL steps. We again use the same splits in the COIN dataset. Additionally, we evaluate on 3 existing benchmarks: _step classification_[25] - predicts the step class label from a single video clip containing one step, _procedural activity recognition_[25] - predicts the procedure/task label given all the steps in the input video, and _short-term step forecasting_[13] - predicts the class of the step in the next segment given as input the sequence of observed video segments up to that step (excluded). ## 5 Experiments We evaluate VideoTaskformer (VideoTF) and compare it with existing baselines on 6 downstream tasks: step classification, procedural activity recognition, step forecasting, mistake step detection, mistake ordering detection, and long-term forecasting. Results are reported on the datasets described in Sec. 4. ### Baselines We compare our method to state-of-the-art video representation learning models for action/step recognition. We fine-tune existing models in a similar fashion to ours on the 6 downstream tasks. We briefly describe the best performing baseline, Learning with Distant Supervision (LwDS) [13]. \(\bullet\)**TimeSformer (LwDS) [13].** In this baseline model, the TimeSformer backbone is pre-trained on HowTo100M using the Distribution Matching loss (but without any masking of steps as in our model). Next, a single-layer transformer is fine-tuned on top of the pre-trained representations from the base model for each downstream task. \(\bullet\)**TimeSformer w/ KB transfer (LwDS) [13].** For procedural activity recognition and step forecasting, the LwDS baseline is modified to include knowledge base transfer via retrieval of the most relevant facts from the knowledge base to assist the downstream task. We also include results by adding the same KB transfer component to our method, referenced as w/ KB Transfer. \(\bullet\)**Steps from clustering ASR text.** As an alternative to the weak supervision from WikiHow, we introduce an unsupervised baseline that relies only on the transcribed speech (ASR text) to obtain steps. 
[18] introduced an approach to segment a video into steps by clustering visual features along the time axis. It divides the video into non-overlapping segments and groups adjacent video segments together based on a similarity threshold. We adopt a similar approach but in the text space. We compute sentence embeddings for the ASR sentences and group adjacent sentences if their similarity exceeds the average similarity of all sentences across the entire video. We include ablations with different thresholds in the Supplemental. ### Ablations We evaluate our design choices by ablating different components of our model. \(\bullet\)**Base model.** We report results for different base video models for pre-training: S3D [16], SlowFast [10], TimeSformer [4] trained on HT100M, and TimeSformer trained on Kinetics. For short-term step forecasting, procedural activity recognition, and step classification, the results are from [13]. \(\bullet\)**Loss function.** For pre-training VideoTF, we test both the loss functions, Step Classification (SC), and Distribution Matching (DM) described in Sec. 3. \(\bullet\)**Modalities.** For mistake step detection and long-term forecasting tasks, we tried replacing video features with ASR text during fine-tuning. The base model is a language model for embedding sentences in the ASR text and is kept fixed. The ASR text embeddings for all the segments of the video are fed as input to the downstream model, a basic single-layer transformer, which is fine-tuned to each of the tasks. \(\bullet\)**Task label.** For mistake detection and long-term forecasting tasks, we include the task name, e.g. _"Install a Ceiling Fan"_, as input to the downstream model. We compute the sentence embedding of the task label and append it to the list of video tokens fed as input to the model. This domain knowledge provides additional context which boosts the performance on these challenging downstream tasks. \(\bullet\)**Linear-probe vs Fine-tuning.** In linear-probe evaluation, only the \(f_{\text{head}}\) layer is fine-tuned to each downstream task and in the fine-tuning setting, all the layers of the segment transformer \(f_{\text{trans}}\) are fine-tuned. ### Results **Quantitative Results.** We compare our approach to several baselines on all downstream tasks. For all the downstream tasks, the downstream segment transformer is fine-tuned, except for linear-probe where we keep our pre-trained model fixed and only train a linear head on top of it for each downstream task. On the step classification task in Tab. 1, VideoTF with step classification loss outperforms LwDS [13] by 2%, indicating that step representations learned with global context also transfer well to a task that only looks at local video clips. In procedural activity recognition (Tab. 2), we see that distribution matching loss works slightly better than step classification loss and our fine-tuned model achieves 1% improvement over the best baseline. For short-term forecasting in Tab. 
3, we achieve a 3% improvement over LwDS, and our unsupervised pre-training using NN with ASR outperforms previous unsupervised methods. \begin{table} \begin{tabular}{l l l l} \hline \hline Model & Pre-training Supervision & Pre-training Dataset & Acc (\%) \\ \hline TSN (RGB+Flow) [26] & Supervised: action labels & Kinetics & 36.5* \\ S3D [16] & Unsupervised: MIL-NCE on ASR & HT100M & 37.5* \\ \hline ClipBERT [12] & Supervised: captions & COCO + Visual Genome & 30.8 \\ VideoCLIP [28] & Unsupervised: NCE on ASR & HT100M & 39.4 \\ SlowFast [10] & Supervised: action labels & Kinetics & 32.9 \\ TimeSformer [4] & Supervised: action labels & Kinetics & 48.3 \\ LwDS: TimeSformer [4] & Unsupervised: \(k\)-means on ASR & HT100M & 46.5 \\ LwDS: TimeSformer & Distant supervision & HT100M & 54.1 \\ \hline VideoTF (SC) & Unsupervised: NN on ASR & HT100M & 47.0 \\ **VideoTF (DM)** & **Distant supervision** & HT100M & **54.8** \\ **VideoTF (SC)** & **Distant supervision** & HT100M & **56.5** \\ \hline \hline \end{tabular} \end{table} Table 1: **Step classification. We compare to the accuracy scores for all baselines. VideoTF (SC) pre-trained with step classification loss on distant supervision from WikiHow achieves state-of-the-art performance on the downstream step classification task. We report baseline results from [13]. * indicates results by fine-tuning on COIN** \begin{table} \begin{tabular}{l l l l l} \hline \hline Downstream Model & Base Model & Pre-training Supervision & Pre-training Dataset & Acc (\%) \\ \hline Transformer & S3D [16] & Unsupervised: MIL-NCE on ASR & HT100M & 28.1 \\ Transformer & SlowFast [10] & Supervised: action labels & Kinetics & 25.6 \\ Transformer & TimeSformer [4] & Supervised: action labels & Kinetics & 34.7 \\ LwDS: Transformer & TimeSformer [4] & Unsupervised: \(k\)-means on ASR & HT100M & 34.0 \\ LwDS: Transformer w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 39.4 \\ \hline VideoTF (SC; fine-tuned) w/ KB Transfer & TimeSformer & Unsupervised: NN on ASR & HT100M & 35.1 \\ VideoTF (SC) w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 39.2 \\ VideoTF (DM; linear-probe) w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 40.1 \\ VideoTF (SC) w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 41.5 \\ **VideoTF (DM) w/ KB Transfer** & **TimeSformer** & Distant supervision & **HT100M** & **42.4** \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy of different methods on the **short-term step forecasting** dataset. \begin{table} \begin{tabular}{l l l l l} \hline \hline Downstream Model & Base Model & Pre-training Supervision & Pre-training Dataset & Acc (\%) \\ \hline Transformer (ASR text) w/ Task label & MPNet & & & 39.0 \\ Transformer & SlowFast [10] & Supervised: action labels & Kinetics & 15.2 \\ Transformer & TimeSformer [4] & Supervised: action labels & HT100M & 17.0 \\ Transformer w/ Task label & TimeSformer [4] & Supervised: action labels & HT100M & 40.1 \\ LwDS: Transformer w/ Task label & TimeSformer & Distant supervision & HT100M & 41.3 \\ \hline VideoTF (DM) & TimeSformer & Distant supervision & HT100M & 40.2 \\ **VideoTF (DM) w/ Task label** & **TimeSformer** & Distant supervision & **HT100M** & **46.4** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy of different methods on the **long-term step forecasting** dataset. 
\begin{table} \begin{tabular}{l l l l l} \hline \hline Model & Base Model & Pre-training Supervision & Pre-training Dataset & Acc (\%) \\ \hline TSN (RGB+Flow) [26] & Inception [24] & Supervised: action labels & Kinetics & 73.4* \\ Transformer & S3D [16] & Unsupervised: MIL-NCE on ASR & HT100M & 70.2* \\ Transformer & ClipBERT [12] & Supervised: captions & COCO + Visual Genome & 65.4 \\ Transformer & VideoCLIP [28] & Unsupervised: NCE on ASR & HT100M & 72.5 \\ Transformer & SlowFast [10] & Supervised: action labels & Kinetics & 71.6 \\ Transformer & TimeSformer [4] & Supervised: action labels & Kinetics & 83.5 \\ LwDS: Transformer & TimeSformer [4] & Supervised: action labels & Kinetics & 85.3 \\ LwDS: Transformer & TimeSformer [4] & Unsupervised: \(k\)-means on ASR & HT100M & 85.3 \\ LwDS: Transformer & TimeSformer & Distant supervision & HT100M & 88.9 \\ LwDS: Transformer w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 90.0 \\ \hline VideoTF (SC; fine-tuning) w/ KB Transfer & TimeSformer & Unsupervised: NN on ASR & HT100M & 81.2 \\ VideoTF (SC; linear-probe) w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 83.1 \\ VideoTF (DM; linear-probe) w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 85.7 \\ VideoTF (SC) w/ KB Transfer & TimeSformer & Distant supervision & HT100M & 90.5 \\ **VideoTF (DM) w/ KB Transfer** & **TimeSformer** & Distant supervision & **HT100M** & **91.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy of different methods on the **procedural activity recognition** dataset. We also note that linear-probe performance is competitive in Tab. 2 and outperforms baselines in Tab. 3. VideoTF achieves a strong improvement of 5% over LwDS on the long-term forecasting task, 4% on mistake step detection, and 4% on mistake ordering detection. Adding task labels improves performance on all three tasks. Additionally, we evaluate our approach on the activity recognition task in EPIC Kitchens-100 and include results in the Supplemental. We also report our model's performance on the step localization task in COIN. **Qualitative Results.** Fig. 3 shows qualitative results of our model VideoTF on the mistake detection tasks. Fig. 3 (A) shows a result on mistake step detection, where our model's input is the sequence of video clips on the left and it correctly predicts the index of the mistake step "2" as the output. In (B), the order of the first two steps is swapped and our model classifies the sequence as incorrectly ordered. In (C), for the long-term forecasting task, the next 5 steps predicted by our model match the ground truth and in (D), for the short-term forecasting task, the model predicts the next step correctly given the past 2 steps. In Fig. 4 we show an example result of our method compared to the baseline LwDS on the short-term forecasting task. Our method correctly predicts the next step as "remove air nozzle" since it has acquired knowledge of task structure whereas the baseline predicts the next step incorrectly as "install valve cap." 
Figure 3: **Qualitative results. We show qualitative results of our method on 4 tasks. The step labels are not used during training and are only shown here for illustrative purposes.** Figure 4: **Qualitative comparison. We compare results from our method VideoTF to the baseline LwDS on the short-term forecasting task. Step labels are not passed to the model as input and are only for reference.** ## 6 Conclusion In this work, we introduce a new video model, VideoTaskformer, for learning contextualized step representations through masked modeling of steps in instructional videos. We also introduce 3 new benchmarks: mistake step detection, mistake ordering detection, and long-term forecasting. We demonstrate that VideoTaskformer improves performance on 6 downstream tasks, with particularly strong improvements in detecting mistakes in videos and long-term forecasting. Our method opens the possibility of learning to execute a variety of tasks by watching instructional videos; imagine learning to cook a complicated meal by watching a cooking show. **Acknowledgements.** We would like to thank Suvir Mirchandani for his help with experiments and paper writing. This work was supported in part by DoD including DARPA's LwLL, PTG and/or SemaFor programs, as well as BAIR's industrial alliance programs.
2310.13421
Tensorized Pauli decomposition algorithm
This paper introduces a novel general-purpose algorithm for Pauli decomposition that employs matrix slicing and addition rather than expensive matrix multiplication, significantly accelerating the decomposition of multi-qubit matrices. In a detailed complexity analysis, we show that the algorithm admits the best known worst-case scaling and more favorable runtimes for many practical examples. Numerical experiments are provided to validate the asymptotic speed-up already for small instance sizes, underscoring the algorithm's potential significance in the realm of quantum computing and quantum chemistry simulations.
Lukas Hantzko, Lennart Binkowski, Sabhyata Gupta
2023-10-20T11:15:23Z
http://arxiv.org/abs/2310.13421v4
# Tensorized Pauli decomposition algorithm ###### Abstract This paper introduces a novel general-purpose algorithm for Pauli decomposition that employs matrix slicing instead of matrix multiplication. This approach significantly accelerates the decomposition of multi-qubit matrices. Numerical experiments are provided to validate the observed speedup, underscoring the algorithm's potential significance in the realm of quantum computing and quantum chemistry simulations. ## I Introduction Pauli matrices are ubiquitous in the realm of quantum physics and hold a fundamental importance in many other fields like quantum information and quantum computing. Along with the \(2\times 2\) identity matrix Pauli matrices form a complete basis, spanning the space of all \(2\times 2\) matrices. The Pauli group generated by \(\sigma^{\{0,1,2,3\}}\coloneqq\{I,X,Y,Z\}\) constitutes the elements of the Clifford group, which define the fundamental gate operations in the circuit model of quantum computers [1]. They also play a crucial role in quantum error correction due to their pivotal importance in the theory of stabilizer codes. Pauli matrices and their tensorized products - the Pauli strings - are used for describing errors in quantum computers and for generating stabilizer codes to detect and correct errors [2]. They are essential in Hamiltonian simulation too as they are used in description of Hamiltonians of many physical systems that can be mapped onto spin models in quantum many body physics [3]. They are also widely used in quantum chemistry for description of electronic or molecular Hamiltonians [4]. In principle, any Hamiltonian - as it can be expressed as a linear combination of tensor product of Pauli matrices - can be simulated using a quantum simulator [5]. This operation of Pauli decomposition, however, often comes at a significant computational cost, particularly when dealing with multi-qubit operators. In the quest to accelerate this task, this paper introduces an innovative approach - the Tensorized Pauli Decomposition (TPD) algorithm. The TPD algorithm, as presented in this paper, takes a novel path by sidestepping the resource-intensive matrix multiplication typically used for Pauli decomposition. Instead, it leverages matrix slicing techniques to determine the Pauli weights in a recursive and iterative manner, promising a remarkable boost in the efficiency of multi-qubit matrix decomposition. The rest of the article is organized as follows: In Section II, we first introduce the formulation of Pauli decomposition and review the existing algorithms that tackle this task. In Section III, we give an in-depth formulation of two variants of the TPD algorithm. In Section IV, we present our results from benchmark tests we perform to gauge performance of the TPD algorithm compared to existing algorithms. Finally in Section V, we conclude that TPD outperforms all the other algorithms in the benchmark tests and discuss advantages that arise from the speedup offered by the TPD algorithm. We also discuss approaches to further enhance the performance of our algorithm. ## II Preliminaries ### Problem formulation We briefly collect all the relevant basic properties of the Pauli matrices. Throughout this paper, we denote with \(\operatorname{Mat}(d)\) the set of all complex \(d\times d\) matrices. The Pauli matrices \(\sigma^{\{0,1,2,3\}}\coloneqq\{I,X,Y,Z\}\) are a set of hermitian, involutory, and unitary \(2\times 2\) matrices. Tensorizing Pauli matrices with each other leads to _Pauli strings_. 
In order to shorten notation, we set \(\operatorname{Q}\coloneqq\{0,1,2,3\}\) and define a Pauli string of length \(n\) via a corresponding quaternary string \(\mathbf{t}\in\operatorname{Q}^{n}\) as \[\sigma^{\mathbf{t}}\coloneqq\sigma^{\mathsf{t}_{1}}\otimes\sigma^{\mathsf{t}_{2}}\otimes\cdots\otimes\sigma^{\mathsf{t}_{n}}. \tag{1}\] The set of Pauli strings \(\{\sigma^{\mathbf{t}}\,:\,\mathbf{t}\in\operatorname{Q}^{n}\}\) again constitutes hermitian, involutory, and unitary matrices which form an orthonormal basis of \(\operatorname{Mat}(2^{n})\) with respect to the _Frobenius inner product_ \(\langle A,B\rangle\coloneqq\frac{1}{2^{n}}\operatorname{tr}(A^{*}B)\). We address the objective of computing the Pauli decomposition of a given matrix \(A\in\operatorname{Mat}(2^{n})\), that is \[A=\sum_{\mathbf{t}\in\operatorname{Q}^{n}}\omega_{\mathbf{t}}\sigma^{\mathbf{t}},\quad\omega_{\mathbf{t}}\coloneqq\tfrac{1}{2^{n}}\operatorname{tr}\bigl{(}\sigma^{\mathbf{t}}A\bigr{)}. \tag{2}\] In the worst case, all \(|\operatorname{Q}^{n}|=4^{n}\) terms contribute, which clearly dictates the worst-case scaling for every Pauli decomposition algorithm. Moreover, the direct calculation of \(\omega_{\mathbf{t}}\) involves multiplying together two \(2^{n}\times 2^{n}\) matrices. ### Existing algorithms Due to the ubiquity of Pauli decomposition, numerous algorithms addressing this task are already in existence. We provide a brief summary and elucidation of those algorithms, all of which are subjected to numerical testing in Section IV. The H2ZIXY[6] algorithm straightforwardly calculates all \(4^{n}\) Pauli strings in advance and then formulates a system of \(4^{n}\) linear equations, one for each matrix element, to be solved.[7] In contrast, Qiskit's[8] internal Pauli decomposing method SparsePauliOp.from_operator directly calculates the Frobenius inner products of the input matrix with each Pauli string. In both cases, however, the Pauli strings are generated by forming the usual Kronecker product of Pauli matrices. The Pauli composer[9] algorithm offers a substantial speed-up in generating the Pauli strings. This method relies on the insight that Pauli strings have exactly one non-zero entry per row and column, making it possible to construct them more efficiently in a well-suited sparse format. By replacing the standard Kronecker product with the Pauli composer method and harnessing the enhanced efficiency of sparse matrix multiplication, it becomes possible to markedly expedite the computation of Frobenius inner products. Finally, PennyLane's[10] implementation of a Pauli decomposition routine follows an entirely distinct, quantum-inspired approach: The weights \(\omega_{\mathbf{t}}\) are inferred via Bell measurements on the pure Choi matrix of the superoperator \(\rho\mapsto A\rho A^{*}\). 
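For reference, the direct trace-based evaluation of the weights in (2) can be written in a few lines of NumPy. This is our own naive sketch in the spirit of the baselines above (not any of the cited implementations), and it scales poorly because every Pauli string is built explicitly via Kronecker products.

```python
import itertools
import numpy as np

PAULIS = {  # sigma^0, ..., sigma^3
    0: np.eye(2, dtype=complex),
    1: np.array([[0, 1], [1, 0]], dtype=complex),
    2: np.array([[0, -1j], [1j, 0]], dtype=complex),
    3: np.array([[1, 0], [0, -1]], dtype=complex),
}

def naive_pauli_decomposition(A):
    """Return {t: omega_t} with omega_t = tr(sigma^t A) / 2^n, cf. Eq. (2)."""
    n = int(np.log2(A.shape[0]))
    weights = {}
    for t in itertools.product(range(4), repeat=n):
        sigma_t = np.eye(1, dtype=complex)
        for k in t:                        # build the Pauli string by Kronecker products
            sigma_t = np.kron(sigma_t, PAULIS[k])
        w = np.trace(sigma_t @ A) / 2**n
        if w != 0:
            weights[t] = w
    return weights
```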
## III Methods Our approach is centered around expanding the decomposition (2) in its tensor factors and calculating their weights via matrix partitioning rather than multiplication.[11] Namely, by equally partitioning a given input matrix \(A\in\operatorname{Mat}(2^{n})\) into four blocks of dimension \(2^{n-1}\times 2^{n-1}\), we obtain the essential relation \[A=\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix}=\sum_{\mathbf{t}\in\operatorname{Q}}\Omega_{\mathbf{\ast t}}\otimes\sigma^{\mathbf{t}}, \tag{3}\] with the _cumulative matrix weights_ (CMW) \[\begin{split}\Omega_{\mathbf{\ast 0}}&=\tfrac{1}{2}(A_{11}+A_{22}),\ \Omega_{\mathbf{\ast 1}}=\tfrac{1}{2}(A_{21}+A_{12}),\\ \Omega_{\mathbf{\ast 2}}&=\tfrac{i}{2}(A_{12}-A_{21}),\ \Omega_{\mathbf{\ast 3}}=\tfrac{1}{2}(A_{11}-A_{22}).\end{split} \tag{4}\] The calculation of \(\Omega_{\mathbf{\ast t}}\) does not involve any matrix multiplication, is therefore comparatively cheap, and entails, by construction, the weighted sum of all Pauli strings having \(\sigma^{\mathbf{t}}\) in their last tensor factor, i.e., \[\Omega_{\mathbf{\ast t}}=\sum_{\mathbf{s}\in\operatorname{Q}^{n-1}}\omega_{\mathbf{st}}\sigma^{\mathbf{s}}. \tag{5}\] Most importantly, if \(\Omega_{\mathbf{\ast t}}=0\), the multi-linearity of the tensor product yields that \(\omega_{\mathbf{t}^{\prime}}=0\) for all \(\mathbf{t}^{\prime}\in\operatorname{Q}^{n}\) with \(\mathbf{t}^{\prime}_{n}=\mathbf{t}\). Hence, in this case we would have already determined the values of \(\left|\operatorname{Q}^{n-1}\times\{\mathbf{t}\}\right|=4^{n-1}\) weights. This also means that additional input structure like diagonality or symmetry, which rules out a large subset of Pauli strings, is detected early. In an iterative manner, we now equally partition each nonzero CMW into four blocks as well, yielding analogues of (3) and (4) with new CMWs \[\Omega_{\mathbf{\ast st}}=\sum_{\mathbf{r}\in\operatorname{Q}^{n-2}}\omega_{\mathbf{rst}}\sigma^{\mathbf{r}} \tag{6}\] which can be further partitioned to yield new CMWs and so on. The scheme continues until the remainder strings \(\ast\) are entirely expanded into concrete strings \(\mathbf{t}\in\operatorname{Q}^{n}\). Figure 1: Two variants of TPD. The recursive version inputs a matrix \(A\), a list \(L\) for storing the quaternary strings and associated weights, and a current (possibly incompletely expanded) quaternary string \(\mathbf{s}\). If \(A\) already is a scalar it entails, by construction, the weight \(\omega_{\mathbf{s}}\) for the current string \(\mathbf{s}\), which is therefore appended to \(L\). Otherwise the CMWs are computed via matrix slicing, and \(\mathtt{TPD}(\Omega_{\mathbf{\ast t}},\,L,\,\mathbf{t}\mathbf{s})\) is called for each nonzero CMW \(\Omega_{\mathbf{\ast t}}\). In the iterative version an initial set of strings \(S^{1}\) exclusively contains the wildcard string \(\mathbf{\ast}\). In the following we iterate over all qubit positions \(i\) except the last one. For each such \(i\) we initialize an empty set \(S^{i+1}\), expand the \(\mathbf{r}_{n-(i+1)}\) position of every string \(\mathbf{r}\) in the predecessor \(S^{i}\) with a concrete \(\mathbf{s}\in\operatorname{Q}\), calculate the respective CMW, and finally add the altered string \(\tilde{\mathbf{r}}\) to \(S^{i+1}\) if the corresponding CMW \(\Omega_{\tilde{\mathbf{r}}}\) is nonzero. The returned final set \(S^{n}\) therefore includes all entirely expanded quaternary strings \(\mathbf{t}\) which contribute nonzero weights \(\omega_{\mathbf{t}}\). 
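A compact recursive rendering of this scheme is sketched below. This is our own simplified NumPy illustration (dense arrays, no sparse format, exact zero tests, and a string convention in which the top-level partition fixes the first tensor factor of (1), which may differ superficially from the remainder-string notation above); it is not the authors' released implementation.

```python
import numpy as np

def tpd(A, string=(), out=None):
    """Recursive Tensorized Pauli Decomposition sketch: slice and add, never multiply."""
    if out is None:
        out = {}
    if A.shape == (1, 1):                  # fully expanded string: the CMW is the weight itself
        if A[0, 0] != 0:
            out[string] = A[0, 0]
        return out
    m = A.shape[0] // 2                    # equal partition into four 2^(n-1) x 2^(n-1) blocks
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]
    cmws = {                               # cumulative matrix weights, cf. Eq. (4)
        0: (A11 + A22) / 2,
        1: (A21 + A12) / 2,
        2: 1j * (A12 - A21) / 2,
        3: (A11 - A22) / 2,
    }
    for t, omega in cmws.items():
        if np.any(omega):                  # a vanishing CMW prunes 4^(n-1) weights at once
            tpd(omega, string + (t,), out)
    return out
```

Because a vanishing CMW prunes an entire branch of the recursion, structured inputs such as diagonal or symmetric matrices are detected early and handled with far fewer recursive calls, in line with the discussion above.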
Correspondingly, in the last iteration step, the CMWs become scalar-valued and yield the remaining individual weights \(\omega_{\mathbf{t}}\). ## IV Results We numerically investigate the performance of TPD against the H2ZIXY, Qiskit's implementation, the Pauli composer method, and PennyLane's implementation on instances from 2 to 10 qubits. The results are depicted in Figure 2 and include input matrices of specific types: symmetric, diagonal, random, sparse, and hermitian matrices as well as the unit matrix and the Transverse Field Ising Model Hamiltonian (TFIM) [12], respectively. The TPD is implemented once for sparse matrices and for matrices stored as numpy arrays. All matrices except the TFIM Hamiltonian as well as the unit matrix are created using matrix elements drawn uniformly at random from the interval \([0,1]\). Symmetric and Hermitian matrices are manipulated accordingly to guarantee their properties. In a comprehensive set of numerical experiments, with the exception of diagonal and unit matrices, the H2ZIXY algorithm exhibits the poorest performance. The implementations in PennyLane and Qiskit consistently exhibit identical behavior across all scenarios, showcasing the methods' insensitivity to pre-defined characteristics of the input data. Conversely, the PauliComposer exhibits runtime performance comparable to the aforementioned algorithms but significantly surpasses them in the case of diagonal and unit matrices. Lastly, both variants of the TPD consistently outperform all their competitors, with a substantial performance advantage becoming evident as the number of qubits increases. Except for the sparse matrices and the unit matrices, both TPD variants perform about equally well. Small differences are most likely due to transforming between sparse and nonsparse formats. ## V Conclusion and Outlook The task of computing the Pauli decomposition of multi-qubit operators is at the core of many applications from quantum computing, including hamiltonian simulation and quantum error correction. In this article we have proposed a simple, yet powerful method for accelerating this ubiquitous task. Our method, the Tensorized Pauli Decomposition (TPD) algorithm actively avoids expensive matrix multiplication and, instead, calculates the Figure 2: Measured execution times of the various Pauli decomposition of different matrix types. In each plot, the execution times are drawn for one matrix type, showing qubit number and the execution times on a logarithmic axis. Pauli weights in an recursive/iterative manner via matrix slicing. We have tested the TPD against the most popular alternatives on various instances from \(2-10\) qubits and observed a decreased runtime in favor of the TPD in all cases. Based on the observed trend in our numerical experiments, we anticipate improved algorithmic efficiency also for larger instances across domains. By detecting the input's structure early on, TPD achieves speed ups even in cases where its competitors receive additional flags, lowering their worst-case runtime. However, empirical verification remains essential. Additionally, it is important to note that, as of the current implementation, we have not explored parallelization of our algorithm. Nevertheless, given that the evaluation of the intermediate CMWs (4) relies on the analysis of disjoint blocks within our initial matrix, we are optimistic that any potential communication overhead incurred when employing multiple processors will be manageable. 
Future investigations into parallelization strategies may further enhance the scalability of our approach. ###### Acknowledgements. We thank Gereon Kossmann, Tobias J. Osborne, Luis Santos, and Timo Ziegler for helpful discussions. Especially, we thank Jake Lishman for testing and improving our code and pointing out inconsistencies in our initial plots. LB acknowledges financial support by the Quantum Valley Lower Saxony and the BMBF project QuBRA. **Data and code availability statement.** The depicted data is available upon reasonable request from the authors. The source code is available on GitHub: [https://github.com/HANTLUK/PauliDecomposition](https://github.com/HANTLUK/PauliDecomposition).
2308.06500
On the Isomorphic Means and Comparison Inequalities
Based on collection of bijections, variable and function are extended into ``isomorphic variable'' and ``dual-variable-isomorphic function'', then mean values such as arithmetic mean and mean of a function are extended to ``isomorphic means''. 7 sub-classes of isomorphic mean of a function are distinguished. Comparison problems of isomorphic means are discussed. A sub-class(class V) of isomorphic mean of a function related to Cauchy mean value is utilized for generation of bivariate means e.g. quasi-Stolarsky means. Demonstrated as an example of math related to ``isomorphic frames'', this paper attempts to unify current common means into a better extended family of means.
Yuan Liu
2023-08-12T08:25:51Z
http://arxiv.org/abs/2308.06500v1
# On the Isomorphic Means and Comparison Inequalities ###### Abstract Based on collection of bijections, variable and function are extended into "isomorphic variable" and "dual-variable-isomorphic function", then mean values such as arithmetic mean and mean of a function are extended to "isomorphic means". 7 sub-classes of isomorphic mean of a function are distinguished. Comparison problems of isomorphic means are discussed. A sub-class(class V) of isomorphic mean of a function related to Cauchy mean value is utilized for generation of bivariate means e.g. quasi-Stolarsky means. Demonstrated as an example of math related to "isomorphic frames", this paper attempts to unify current common means into a better extended family of means. Keywords: Isomorphic frame; Dual-variable-isomorphic function; Isomorphic mean; Elastic mean; Cauchy mean value; Stolarsky mean, etc. MSC2010 Subject Classification: 26A24 ## 0 Introduction The quasi-arithmetic mean [1] or generalized \(f\)-mean \(f^{-1}\big{(}\frac{1}{n}\sum_{i=1}^{n}f(x_{i})\big{)}\) is a generalization of simple means (of numbers) using a function \(f\). Typical special cases are the arithmetic mean, geometric mean and power mean, etc. As a byproduct of the study of a generalization of convex function in article [7] in the early 2000s, the author claims to have independently discovered an analogous "2-D" version, the so-called "generalized \(g,h\)-mean of a function \(F\) defined on \([a,b]\)" (denoted by \(M_{F}\)), which typically takes the form: \[M_{F}=h^{-1}\Big{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}h\big{(}F(g^{-1}(u)) \big{)}\mathrm{d}u\Big{)}, \tag{0.1}\] where \(g,h\), comparable to the above \(f\), are 2 bijections, each applied to one of the 2 variables of \(F\) respectively (i.e. its independent variable and dependent variable, henceforth called "dual-variable-" throughout this work). With its essential intermediate value property (IVP) (Theorem 3.5) applicable to an ordinary function \(F\), this complex form of mean is uncovered as a huge class of means of a function. It is defined as the "isomorphic mean of a function" (Definition 3.1) and 7 sub-classes of it are distinguished in Section 3.5, so its special cases and derivations are abundant and they cross with many existing concepts of mean values. This focal "2-D" version of mean of a function, along with the "1-D" generalized-\(f\) mean, is unified into a sweeping class of "Isomorphic Means" (as in the title) due to the same nature explained below. In Algebra, two algebraic systems based on 2 sets are mapped by a bijection under which two operations, one in each system, have the same structure, so that they are "isomorphic" and these 2 operations are "images" of each other. The author borrows the concepts and terms of "isomorphic" and "image" in naming, renaming and explaining the above 2 objects of mathematical analysis (MA), and many others, in a unified way. 1. We introduce the "isomorphic number" as a new concept whose structure is mapped from that of a variable by a bijection \(f\). Then the generalized \(f\)-mean is the image, under the inverted bijection, of the mean of several isomorphic number instances. Thus it is redefined as the "isomorphic mean (of numbers) generated by \(f\)". 2. Extending to the scope of 2 variables (dual-variable), we introduce the "dual-variable-isomorphic (DVI) function of \(f\)" generated by a pair of bijections, whose structure is "co-mapped" by the pair from that of the original \(f\) (of 1 independent variable). 
With regard to \(f\colon D\to M\), it has the form \[\varphi\colon=(h\circ f\circ g^{-1})\colon E\to N\quad(E=g(D),\ N=h(M)). \tag{0.2}\] Then (0.1) is rightly the image of the mean of the DVI function of \(F\) therein; thus it is named the "isomorphic mean of \(F\) generated by \(g,\ h\)". 3. As a matter of fact, the "isomorphic mean of a function" can be directly derived from the "isomorphic mean of numbers", as in Section 3.3. 4. Naturally these structures can be further generalized to \(n\) dimensions for functions of \((n-1)\) variables. See the very last formula (6.1). For efficient discussion and evolution, the related \(n\) monotone bijections, treated as a single mapping needed by the definitions, are refined into the basic concept of an "isomorphic frame", so that the isomorphic means elaborated in this paper as a theory of generalization are just one example of mathematics related to isomorphic frames. Besides the basic properties and subtle classifications of isomorphic means, this paper features the following findings and results: 1. The abundant special cases of isomorphic means introduced in the paper. Among others are the "elastic mean (E)" derived from Economics, and the less well-known "geometric mean of a function (G)" as the sibling of the former. Because isomorphic means of a function may derive mean values of numbers (e.g. when \(g,h\) are power functions and \(f(x)=x\)), there is obtained a class of "quasi-Stolarsky means" of the following form: \[Q_{p,q}(a,b)=\bigg{(}\frac{p(b^{p+q}-a^{p+q})}{(p+q)(b^{p}-a^{p})}\bigg{)}^{1 /q}, \tag{0.3}\] in the sense that this mother form yields all major child forms (see Section 3.9.2) of the Stolarsky means ([11], pp629). 2. This paper discusses the relations and differentiations of isomorphic means to (from) the outstanding "Cauchy mean values" and their conversions (Section 4). The major finding is that the IVP of isomorphic means works with ordinary functions, while the IVP of Cauchy mean values applies to derivable functions; and the Cauchy mean values can only be confidently converted to and from the so-called class V of isomorphic means of derivable generators. Especially for the mean of a function, the isomorphic mean is a concept of better origin and perspective, better identification and classification, more coverage and more natural generalization. 3. The comparison problems of isomorphic means are roughly solved; they take up a major part of this paper. The main results are Theorem 5.3 and Theorem 5.4, derived from Losonczi's theorems in [8], for class V comparison scenarios; and Theorem 5.15, which is for comparison of the delicate class II and features a very original proof based on the very properties of the isomorphic mean itself; along with those theorems derived with the help of monotonicity and convexity conditions for specific comparison scenarios. Typical examples are the comparisons between the geometric mean (G) and the elastic mean (E) of a function, among many others. There are also Lemma 2.8, quoted from article [7], and Theorem 2.7, which delicately compare 2 isomorphic means of numbers (generalized-\(f\) means). In the general sense, the above-mentioned "isomorphic mean of a function" has the formal name of "the dual-variable-isomorphic (DVI) mean of a function", as defined in Definition 3.1. ## 1 Basics and preliminaries There are 3 basic concepts to be introduced, namely "Isomorphic frame", "Isomorphic number and Isomorphic variable", and "Dual-variable-isomorphic function". 
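As a small numerical illustration of (0.1) and (0.3) (our own sketch, not part of the paper's development; the helper names and the use of SciPy quadrature are assumptions), one can check that the generalized \(g,h\)-mean with power-function generators and \(f(x)=x\) reproduces the quasi-Stolarsky value:

```python
from scipy.integrate import quad

def isomorphic_mean(F, a, b, g, g_inv, h, h_inv):
    """Generalized g,h-mean of F on [a,b] as in Eq. (0.1)."""
    integral, _ = quad(lambda u: h(F(g_inv(u))), g(a), g(b))
    return h_inv(integral / (g(b) - g(a)))

def quasi_stolarsky(a, b, p, q):
    """Q_{p,q}(a,b) from Eq. (0.3), assuming p, q, p+q nonzero and 0 < a < b."""
    return (p * (b**(p + q) - a**(p + q)) / ((p + q) * (b**p - a**p)))**(1 / q)

# With g(x) = x^p, h(y) = y^q and F(x) = x, Eq. (0.1) reduces to Eq. (0.3):
a, b, p, q = 1.0, 4.0, 2.0, 3.0
direct = isomorphic_mean(lambda x: x, a, b,
                         g=lambda x: x**p, g_inv=lambda u: u**(1 / p),
                         h=lambda y: y**q, h_inv=lambda v: v**(1 / q))
print(direct, quasi_stolarsky(a, b, p, q))  # the two values agree numerically
```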
### Isomorphic frame #### 1.1.1 Definition **Lemma 1.1**.: Let \(X_{1},...,X_{n},U_{1},...,U_{n}\subseteq\mathbb{R}\), \(X=X_{1}\times...\times X_{n}\), \(U=U_{1}\times...\times U_{n}\) and \(g_{i}\colon X_{i}\to U_{i}(i=1,...,n)\) be \(n\) bijections. The ordered set \(\{g_{1},...,g_{n}\}\) that can map \(\forall x=(x_{1},...,x_{n})\in X\) to \(u=(g_{1}(x_{1}),...,g_{n}(x_{n}))\in U\) is a bijection, if it is treated as a function with \(X\) being its domain and \(U\) being the range(the image). Proof.: Firstly \(\forall x=(x_{1},...,x_{n})\in X,\ y=(y_{1},...,y_{n})\in X\) satisfying \((g_{1}(x_{1}),...,g_{n}(x_{n}))=(g_{1}(y_{1}),...,g_{n}(y_{n}))\), then due to the \(n\) bijections, \(x_{1}=y_{1},\...,\ x_{n}=y_{n}\Rightarrow x=y(\)injective). Secondly \(\forall u=(u_{1},...,u_{n})\in U\), there always be an \(x=(g_{1}^{-1}(u_{1}),...,g_{n}^{-1}(u_{n}))\in X\) such that \(\{g_{1},...,g_{n}\}(x)=u(\)surjective). **Definition 1.2**.: The ordered set \(\{g_{1},...,g_{n}\}\) given by Lemma 1.1 as a bijection as well as a collection, denoted by \(\mathscr{I}\{g_{1},...,g_{n}\}\), is called an \(n\)-dimensional isomorphic frame. It is further expressed as the following notational forms: \[\begin{split}\mathscr{I}\{g_{1},...,g_{n}\}:X\to U& =[X\sharp\ U]_{g_{1},...,g_{n}}\\ &=[X_{1}\times...\times X_{n}\sharp\ U_{1}\times...\times U_{n}] _{g_{1},...,g_{n}}\\ &=[X_{1},...,X_{n}\sharp\ U_{1},...,U_{n}]_{g_{1},...,g_{n}}. \end{split} \tag{1.1}\] \(X\) is called the base frame of the isomorphic frame(or "the base" for short), and \(U\) is called the image frame of the isomorphic frame(or "the image"). The bijection \(g_{i}(i=1,...,n)\) is called a (the \(i\)th) dimensional mapping(DM) of \(\mathscr{I}\{g_{1},...,g_{n}\}\). **Notation 1.3**.: We also write \(\mathscr{I}^{-1}\{g_{1},...,g_{n}\}\) for the inversion of \(\mathscr{I}\{g_{1},...,g_{n}\}\). **Theorem 1.4**.: \(\mathscr{I}\{g_{1}^{-1},...,g_{n}^{-1}\}=(\mathscr{I}\{g_{1},...,g_{n}\})^{-1} (=\mathscr{I}^{-1}\{g_{1},...,g_{n}\})\)_._ (Proof omitted.) #### 1.1.2 Embedding in isomorphic frame **Notation 1.5**.: Let \(\mathscr{I}\{g_{1},...,g_{n}\}=[X_{1}\times...\times X_{n}\sharp\ U_{1}\times...\times U_{n}]_{g_{1},...,g_{n}}\) and \(D\subseteq X_{1}\times...\times X_{n}\), \(E\subseteq U_{1}\times...\times U_{n}\). (i) \(D\) is said to be embedded in the base of \(\mathscr{I}\{g_{1},...,g_{n}\}\), for which way we use the sign "\(\vee\)" to write \(D\vee\mathscr{I}\{g_{1},...,g_{n}\}\), or \(D\vee\big{(}\mathscr{I}\{g_{1},...,g_{n}\}=[X_{1}\times...\times X_{n}\sharp \ U_{1}\times...\times U_{n}]_{g_{1},...,g_{n}}\big{)}\), etc. (ii) \(E\) is said to be embedded in the image of \(\mathscr{I}\{g_{1},...,g_{n}\}\), and for this we write \(E\vee\mathscr{I}\{g_{1}^{-1},...,g_{n}^{-1}\}\). **Theorem 1.6**.: \(\mathscr{I}\{g_{1},...,g_{n}\}(D)\vee\mathscr{I}\{g_{1}^{-1},...,g_{n}^{-1}\}\) if \(D\vee\mathscr{I}\{g_{1},...,g_{n}\}\)_._ This is because the image of \(D\) is a subset of the image of the isomorphic frame. #### 1.1.3 Bonding on isomorphic frame **Notation 1.7**.: Let \(\mathscr{I}\{g_{1},...,g_{n}\}=[X_{1}\times...\times X_{n}\sharp\ U_{1}\times... \times U_{n}]_{g_{1},...,g_{n}}\). (i) If function \(f:D\to M\) of \((n-1)\) variables is such that \(D\subseteq X_{1}\times...\times X_{n-1}\)\((D\vee\mathscr{I}\{g_{1},...,g_{n-1}\})\) and the range \(M\subseteq X_{n}\)\((M\vee\mathscr{I}\{g_{n}\})\), then \(f\) is said to be bonded on the base of \(\mathscr{I}\{g_{1},...,g_{n}\}\). 
For this we use the sign "\(\wedge\)" to write \((f:D\to M)\wedge\mathscr{I}\{g_{1},...,g_{n}\}\), or the alike. (ii) If \(D\subseteq U_{1}\times...\times U_{n-1}\) and \(M\subseteq U_{n}\), then \(f\) is said to be bonded on the image of \(\mathscr{I}\{g_{1},...,g_{n}\}\), and for this we write \((f:D\to M)\wedge\mathscr{I}\{g_{1}^{-1},...,g_{n}^{-1}\}\), \(f\wedge\mathscr{I}^{-1}\{g_{1},...,g_{n}\}\) or the alike. **Remark 1.1**.: In the extreme case \(n=1\), the above notation applies to a single variable, e.g. \(x\in M\subseteq X\), or \(x\in M\subseteq U\), which are treated as functions of \(0\) variables bonded on (the base or the image of) the isomorphic frame. **Notation 1.8**.: Let \(\mathscr{I}\{g\}=[X\sharp\ U]_{g}\) be \(1\) dimensional, and \(k\)-tuple \(\underline{x}\in X\), then \(\underline{x}\) is said to be either embedded in or bonded on the base of \(\mathscr{I}\{g\}\), written as e.g. \(\underline{x}\vee[X\sharp\ U]_{g}\), or \(\underline{x}\wedge[X\sharp\ U]_{g}\). #### 1.1.4 About embedding and bonding Embedding and bonding are \(2\) basic styles the objects of MA "attach or tie" to isomorphic frames. A set embedded in an isomorphic frame is a part of the latter while a function bonded on an isomorphic frame is not that way strictly. Most definitions in the paper will be based on these \(2\) concepts. From now on, all the isomorphic frames are with \(\mathit{strictly\ monotone\ bijections}\), i.e. strictly monotone (invertible) real functions, as their dimensional mappings unless otherwise specified. These isomorphic frames are denoted as \(\mathscr{I}_{m}\{g_{1},...,g_{n}\}\). ### Isomorphic number and isomorphic variable **Definition 1.9**.: With \(1\) dimensional \(\mathscr{I}_{m}\{g\}=[X\sharp\ U]_{g}\), \(\forall x\in X\), \(u=g(x)\in U\) is called the isomorphic number of \(x\) generated by mapping \(g\)(or by \(\mathscr{I}_{m}\{g\}\)); In terms of variables, \(u\) is called the isomorphic variable of \(x\) generated by mapping \(g\). \(u\) is specially denoted as \(u=\varphi(x:g)\) or \(u=\varphi(x:\mathscr{I}_{m}\{g\})\). **Remark 1.2**.: Any real number \(x\) is an isomorphic number of itself generated by identity mapping (\(y=x\)). An example of isomorphic number in Physics is Conductance G with regards to Resistance R, since \(G=1/R\). With Definition 1.9\(u\) is already called the "isomorphic number of \(x\)" without an operation defined, because with such \(u\) it is ready and easy to introduce \(2\) genuine operations with "isomorphism". For examples: 1. If there is an operation of arithmetic "\(+\)" in \(U\), that \(\forall u_{1},u_{2}\in U\), \(\exists u_{1}+u_{2}\in U\), then we can define a binary operation denoted by \([\.+\.\ ]_{g}\) on \(X\), such that \(\forall x_{1},x_{2}\in X\), \(\exists x_{3}\in X\) which holds \(x_{3}=[x_{1}+x_{2}]_{g}=g^{-1}(g(x_{1})+g(x_{2}))\). 2. Similarly, if there's an operation on \(U\) that computes the mean of \(u_{1}\), \(u_{2}\), there is also a mapped operation on \(X\), which computes a "special mean": \(\bar{x}\) of \(x_{1},x_{2}\). More generalized, \(u\) and \(x\) are "2 conjugate variables embedded in \([X\not\sharp U]_{g}\)" or "2 imaging functions of 0 variables bonded on \([X\not\sharp U]_{g}\)". ### Dual-variable-isomorphic function With a function of 1 variable bonded on the base of a 2-D isomorphic frame, on the image of the latter bonded is the so called "dual-variable-isomorphic function". 
#### 1.3.1 Definition **Definition 1.10**.: Let \((f\colon D\to M)\wedge\big{(}\mathscr{I}_{m}\{g,h\}=[X,Y\not\sharp U,V]_{g,h} \big{)}\) and \(E=g(D),\ N=h(M)\), * Function \((h\circ f\circ g^{-1})\colon E\to N\) is defined to be the dual-variable-isomorphic(DVI) function of \(f\) generated by mapping \(g,h(\)or generated by \(\mathscr{I}_{m}\{g,h\})\). It is denoted by \(\varphi(f:g,h)\), or \(\varphi(f:\mathscr{I}_{m}\{g,h\})\), \[\varphi(f:g,h)\colon\ =(h\circ f\circ g^{-1})\colon E\to N.\] (1.2) * \(g\) and \(h\) are called the independent variable's generator mapping(function) or dimensional mapping(IVDM), and dependent variable's generator mapping(function) or dimensional mapping(PVDM) of \(\varphi\) respectively. * \(E\subseteq U\) is called the isomorphic domain of \(D\) generated by mapping \(g\); \(N\subseteq V\) is called the isomorphic range of \(M\) generated by mapping \(h\). **Theorem 1.11**.: \(\varphi(f:g,h)\wedge\big{(}\mathscr{I}_{m}\{g^{-1},h^{-1}\}=[U,V\not\sharp X,Y] _{g^{-1},h^{-1}}\big{)}\)_._ That is, \(\varphi(f:g,h)\) is bonded on the image of \(\mathscr{I}_{m}\{g,h\}\). Proof omitted. Illustration 1.1 is the visual impression of the definition and the theorem, where \(f\wedge\mathscr{I}_{m}\{g,h\}\) and \(\varphi\wedge\mathscr{I}_{m}\{g^{-1},h^{-1}\}\). A hidden mapping between \(f\) and \(\varphi\) is formed by \(\mathscr{I}_{m}\{g,h\}\) under which these 2 functions are deemed as either (i) 2 similar elements in different spaces, (ii) 2 sets of relations of same cardinal number, or (iii) 2 unary operations of same structure but each spanning across a pair of number sets. In traditional terms, the range \(M\) here is the image of \(f\), and \(Y\) is the codomain. The following expression sometimes may also be used for DVI functions, with which the range \(N\subseteq V\) is implied, where \(V\) is also the codomain. \[\varphi(f:g,h)\colon\ =(h\circ f\circ g^{-1})\colon E\to V. \tag{1.3}\] #### 1.3.2 Sub-classing of DVI function **Definition 1.12**.: In view of Definition 1.10, the following special cases can be considered: 1. Let \(g\) be identity mapping, \(\varphi\colon\ =(h\circ f)\colon D\to N\) is called the dependent-variable-isomorphic(PVI) function of \(f\) generated by \(h\). 2. Let \(h\) be identity mapping, \(\varphi\colon\,=(f\circ g^{-1})\colon E\to M\) is called the independent-variable-isomorphic(IVI) function of \(f\) generated by \(g\). 3. Let \(Y=X\), \(h=g\), \(\varphi\colon\,=(g\circ f\circ g^{-1})\colon E\to N\) is called the same-mapping dual-variable-isomorphic function of \(f\) generated by \(g\). 4. In general case, \(h\neq g\), \(\varphi\colon\,=(h\circ f\circ g^{-1})\colon E\to N\) is called the (general) dual-variable-isomorphic function of \(f\) generated by \(g,h\). 5. Let \(f(x)=x\), \(\varphi\colon\,=(h\circ g^{-1})\colon E\to N\) is called the dual-variable-isomorphic function of identity generated by \(g,h\). 6. Let \(g\), \(h\) both be identity mappings, \(\varphi\colon\,=(h\circ f\circ g^{-1})\colon E\to N\) is equivalent to \(f\colon D\to M\). Both domains are \(D\), and both ranges are \(M\). i.e. \(f\) is a special dual-variable-isomorphic function of itself generated by identities. 7. For monotone function \(f\colon D\to M(range\ M)\), \(f^{-1}\) is the dual-variable-isomorphic function of \(f\) generated by \(f\), \(f^{-1}\). Above special cases also could be treated as 7 sub-classes of DVI function. 
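A concrete, purely illustrative sketch of Definition 1.10 is given below: the DVI function is simply the composition \(\varphi=h\circ f\circ g^{-1}\), and it satisfies \(\varphi(g(x))=h(f(x))\) on the base frame. The particular choices \(f(x)=x^{2}\), \(g(x)=\ln x\), \(h(y)=\sqrt{y}\) are assumptions made for the example, not cases singled out by the paper.

```python
import math

# A minimal sketch (not part of the formal development) of Definition 1.10:
# the DVI function of f generated by g, h is phi = h o f o g^{-1}.

def dvi(f, g_inv, h):
    """Return phi(u) = h(f(g^{-1}(u)))."""
    return lambda u: h(f(g_inv(u)))

if __name__ == "__main__":
    f = lambda x: x ** 2             # f : (0, inf) -> (0, inf), bonded on the base
    g, g_inv = math.log, math.exp    # IVDM u = ln x and its inverse
    h = math.sqrt                    # PVDM v = sqrt(y)

    phi = dvi(f, g_inv, h)

    # phi is bonded on the image frame and satisfies phi(g(x)) = h(f(x)):
    for x in (0.5, 1.0, 3.0):
        assert abs(phi(g(x)) - h(f(x))) < 1e-12

    print(phi(1.0))                  # phi(u) = e^u, so this prints 2.718281828...
```

Here \(\varphi(u)=\sqrt{(e^{u})^{2}}=e^{u}\), i.e. the DVI function of \(f(x)=x^{2}\) generated by these 2 mappings is the exponential function.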
#### 1.3.3 Anti-dual-variable-isomorphic function **Definition 1.13**.: Let \(\varphi\colon\,=(h\circ f\circ g^{-1})\colon E\to N\) be the DVI function of \(f\colon D\to M\). Then \(f\) is called the anti-dual-variable-isomorphic function of \(\varphi\). **Remark 1.3**.: With respect to its DVI function \(\varphi\) generated by mapping \(g,\,h,\ f\colon D\to M\) can be represented by \(f=(h^{-1}\circ\varphi\circ g):D\to M\). Obviously \(f\) is the DVI function of \(\varphi\) generated by \(g^{-1}\), \(h^{-1}\). This can be observed in Illustration 1.1. The following are 3 useful special cases of DVI functions. #### 1.3.4 V-scaleshift: Vertical scale and shift of a function **Definition 1.14**.: For a real \(f\colon D\to M\) and 2 constants \(k\neq 0\), \(C\), define the function \[V_{ss}\big{(}f:k,C\big{)}\colon\ =\ \ v=kf(x)+C\ \ \big{(}v\in k(M)+C\big{)} \tag{1.4}\] a V-scaleshift of \(f\) with scale \(k\) and shift \(C\). Define the set \[\mathbb{V}f=\{V_{ss}\big{(}f:k,C\big{)}:k,C\in\mathbb{R},k\neq 0\} \tag{1.5}\] the V-scaleshift space of \(f\). **Lemma 1.15**.: _1).\(f\in\mathbb{V}f\); 2).\(\mathbb{V}g=\mathbb{V}f\), if \(g\in\mathbb{V}f\)._ **Remark 1.4**.: A V-scaleshift of \(y=f(x)\) is a special case of dependent-variable-isomorphic function of \(f\), where the PVDM is \(v=ky+C\). **Lemma 1.16**.: Suppose \(f\colon D\to M\) be (strictly) convex or (strictly) concave on \(D\), then \(V_{ss}\big{(}f:k,C\big{)}\) and \(f\) are of the same (strict) convexity if \(k>0\) or of the opposite (strict) convexity if \(k<0\). Proof is omitted. **Notation 1.17**.: In this paper, \(\mathbb{V}x(\mathbb{V}y)\) is denoting the V-scaleshift space of identity mapping \(g(x)=x(h(y)=y)\). #### 1.3.5 H-scaleshift: Horizontal scale and shift of a function **Definition 1.18**.: For a real \(f\colon D\to M\) and 2 constants \(k\neq 0\), \(C\), define the function \[H_{ss}\big{(}f:k,C\big{)}\colon\ =\ \ y=f\big{(}\frac{1}{k}(u-C)\big{)}\ \ \big{(}u\in k(D)+C\big{)} \tag{1.6}\] an H-scaleshift of \(f\) with scale \(k\) and shift \(C\). Define the set \[\mathbb{H}f=\{H_{ss}\big{(}f:k,C\big{)}:k,C\in\mathbb{R},k\neq 0\} \tag{1.7}\] the H-scaleshift space of \(f\). **Lemma 1.19**.: _1).\(f\in\mathbb{H}f\); 2).\(\mathbb{H}g=\mathbb{H}f\), if \(g\in\mathbb{H}f\)._ **Remark 1.5**.: An H-scaleshift of \(y=f(x)\) is a special case of independent-variable-isomorphic function of \(f\), where the IVDM is \(u=kx+C\). **Lemma 1.20**.: Suppose \(f\colon D\to M\) be (strictly) convex or (strictly) concave on \(D\), then \(H_{ss}\big{(}f:k,C\big{)}\) is of the same (strict) convexity on \(k(D)+C\) as \(f\) on \(D\). Proof.: Suppose \(f\) is convex, then \(\forall u_{1},u_{2}\in k(D)+C\), \(\forall\lambda\in[0,1]\)\(\exists x_{1}=(u_{1}-C)/k,x_{2}=(u_{2}-C)/k\in D\), such that \(f(\lambda x_{1}+(1-\lambda)x_{2})\leq\lambda f(x_{1})+(1-\lambda)f(x_{2})\). This \(\Rightarrow\)\(f(\lambda(u_{1}-C)/k+(1-\lambda)(u_{2}-C)/k)=f((\lambda u_{1}+(1-\lambda)u_{2})/k-C) \leq\lambda f((u_{1}-C)/k)+(1-\lambda)f((u_{2}-C)/k)\), which means \(H_{ss}\big{(}f:k,C\big{)}\) is also convex. While \(f\) has other (strict) convexities, \(H_{ss}\big{(}f:k,C\big{)}\) will also copy. **Lemma 1.21**.: Let \(g,h\) be both invertible. 1).If \(g\in\mathbb{H}h\), then \(g^{-1}\in\mathbb{V}(h^{-1})\). 2).If \(g\in\mathbb{V}h\), then \(g^{-1}\in\mathbb{H}(h^{-1})\). Proof is omitted. 
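Lemma 1.21 is easy to check numerically on sample points; the sketch below is illustrative only, with \(h=\exp\) and the constants \(k\), \(C\) chosen as assumptions for the example.

```python
import math

# An illustrative numerical check of Lemma 1.21, case 1): if g is an
# H-scaleshift of h, g(x) = h((x - C)/k), then g^{-1}(u) = k*h^{-1}(u) + C,
# i.e. g^{-1} is a V-scaleshift of h^{-1}.

k, C = 3.0, -2.0
h, h_inv = math.exp, math.log

def g(x: float) -> float:
    """H-scaleshift of h with scale k and shift C."""
    return h((x - C) / k)

def g_inv(u: float) -> float:
    """The claimed inverse: a V-scaleshift of h^{-1} with the same k, C."""
    return k * h_inv(u) + C

if __name__ == "__main__":
    for x in (-1.0, 0.0, 2.5, 7.0):
        assert abs(g_inv(g(x)) - x) < 1e-9   # g_inv really inverts g
    print("Lemma 1.21, case 1) holds on the sampled points.")
```

Since \(g(x)=h((x-C)/k)\) gives \(g^{-1}(u)=k\,h^{-1}(u)+C\), the inverse of an H-scaleshift of \(h\) is indeed a V-scaleshift of \(h^{-1}\), as case 1) of the lemma states.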
#### 1.3.6 HV-scaleshift: Horizontal and vertical scale and shift of a function **Definition 1.22**.: For a real \(y=f(x):D\to M\) and constants \(p\neq 0\), \(Q\), \(k\neq 0\), \(L\), define the function \[\begin{array}{c}HV_{ss}\big{(}f:p,Q;k,L\big{)}\colon\ =\ \ v=kf\big{(}\frac{1}{p}(u-Q) \big{)}+L\\ \big{(}u\in p(D)+Q,\ v\in k(M)+L\big{)}\end{array} \tag{1.8}\] an HV-scaleshift of \(f\) with scale \(p,k\) and shift \(Q,L\). Define the set \[\mathbb{H}\mathbb{V}f=\{HV_{ss}\big{(}f:p,Q;k,L\big{)}:p,Q,k,L\in\mathbb{R},p \neq 0,k\neq 0\} \tag{1.9}\] the HV-scaleshift space of \(f\). **Lemma 1.23**.: _1).\(f\in\mathbb{H}\mathbb{V}f\); 2).\(\mathbb{H}\mathbb{V}g=\mathbb{H}\mathbb{V}f\), if \(g\in\mathbb{H}\mathbb{V}f\)._ **Remark 1.6**.: An HV-scaleshift of \(y=f(x)\) is a special case of DVI function of \(f\): \[HV_{ss}\big{(}f:p,Q;k,L\big{)}=\varphi(f:g,h), \tag{1.10}\] where the DMs \(g,h\) are defined by \(\ u=g(x)=px+Q,\ v=h(y)=ky+L\). ## 2 Isomorphic weighted mean of numbers After preparation, the generalized \(f\)-mean will be re-defined as the "isomorphic (weighted) mean" with isomorphic frame involved. ### Isomorphic weighted mean **Definition 2.1**.: Let \(\mathscr{I}_{m}\{g\}=[X\ \sharp\ U]_{g}\) be 1 dimensional, \(n\)-tuple\((n\geq 2)\ \underline{x}\wedge[X\ \sharp\ U]_{g}\) and \(\underline{u}\) being their respective isomorphic numbers. With positive \(\underline{p}\) satisfying \(\sum_{i=1}^{n}p_{i}=1\) if \(\sum_{i=1}^{n}p_{i}u_{i}\in U\), then \(g^{-1}\big{(}\sum_{i=1}^{n}p_{i}u_{i}\big{)}\) is called the isomorphic weighted mean of \(n\)-tuple (numbers) \(\underline{x}\) generated by \(g\)(or by \(\mathscr{I}_{m}\{g\}\)). Here it is denoted by \(\overline{x,p_{R}}|_{g}\) (or \(\overline{x_{\{i\}},p_{R}}|_{g}\), or simpler \(\overline{x},\overline{p}|_{g}\)), \[\overline{x,p_{R}}|_{g}=g^{-1}\big{(}\sum_{i=1}^{n}p_{i}g(x_{i})\big{)}. \tag{2.1}\] \(g\) is called the generator mapping, or dimensional mapping of the isomorphic weighted mean. **Remark 2.1**.: The subscript \(R\) of \(p\) indicates \(\underline{p}\) are the "relative" weights (fractions always add up to 1), as comparing to another type of weights known as Frequency Numbers, for which case the definition and formula will be slightly different as in [1]. For simplicity, in this paper we use \(\overline{x,p}|_{g}\) which agrees series \(\underline{p}\) are relative (fractional) weights. **Definition 2.2**.: Especially when \(p_{1}=\ldots=p_{n}=1/n\), the isomorphic weighted mean is called the isomorphic mean of \(n\)-tuple \(\underline{x}\) generated by \(g\), it is denoted by \(\overline{x_{\{i\}}}|_{g}\) or \(\overline{x}|_{g}\), \[\overline{x}|_{g}=g^{-1}\big{(}\frac{1}{n}\sum_{i=1}^{n}g(x_{i})\big{)}. \tag{2.2}\] **Remark 2.2**.: In simple words, isomorphic (weighted) mean is the inverse image of the (weighted) mean of \(n\) number of \(\varphi(x_{i}:g)\). Among many already known properties, the following are some key properties of the mean. #### 2.1.1 Property of mean value **Theorem 2.3**.: With \(n\)-tuple \(\underline{x}\wedge(\mathscr{I}_{m}\{g\}=[X\sharp\ U]_{g})\) that are not all equal, where \(g\) is continuous on interval \(X\), and with weights \(\underline{p}\), there is an unique \(\xi\in(\min\{\underline{x}\},\max\{\underline{x}\})\subseteq X\) such that \[\xi=\overline{x,p}|_{g}=g^{-1}\big{(}\sum_{i=1}^{n}p_{i}g(x_{i})\big{)}. \tag{2.3}\] This theorem reflects a basic property of above defined isomorphic weighted mean. 
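As a small numerical illustration of Definition 2.1 and of this mean value property, the sketch below evaluates \(g^{-1}\big(\sum_{i=1}^{n}p_{i}g(x_{i})\big)\) for a few common dimensional mappings; it is illustrative only, and the helper name `iso_weighted_mean` together with the sample data are assumptions made for the example.

```python
import math

# An illustrative sketch of Definition 2.1: the isomorphic weighted mean
#     g^{-1}( sum_i p_i * g(x_i) ).

def iso_weighted_mean(xs, ps, g, g_inv):
    assert abs(sum(ps) - 1.0) < 1e-12, "relative weights must sum to 1"
    return g_inv(sum(p * g(x) for p, x in zip(ps, xs)))

if __name__ == "__main__":
    xs = [1.0, 4.0, 16.0]
    ps = [0.5, 0.25, 0.25]

    arithmetic = iso_weighted_mean(xs, ps, lambda x: x, lambda u: u)
    geometric = iso_weighted_mean(xs, ps, math.log, math.exp)
    harmonic = iso_weighted_mean(xs, ps, lambda x: 1.0 / x, lambda u: 1.0 / u)

    print(arithmetic)   # 5.5
    print(geometric)    # 4^0.25 * 16^0.25 = 2.8284...
    print(harmonic)     # 1 / (0.5/1 + 0.25/4 + 0.25/16) = 1.7297...

    # Theorem 2.3: each mean lies strictly between min and max of the x_i.
    for m in (arithmetic, geometric, harmonic):
        assert min(xs) < m < max(xs)
```

With \(g(x)=x\), \(g(x)=\ln x\) and \(g(x)=1/x\) the isomorphic weighted mean reduces to the weighted arithmetic, geometric and harmonic means respectively, and each value lies strictly between \(\min\{\underline{x}\}\) and \(\max\{\underline{x}\}\), in agreement with Theorem 2.3.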
However in the definition of quasi-arithmetic mean in pp266 of [1], \(g\) being continuous is prerequisite thus the existence of the mean is ensured. The proof is omitted. #### 2.1.2 Property of monotonicity **Theorem 2.4**.: Any participating number\((x_{i})\)'s value increasing will result in the isomorphic weighted mean's increasing. Thus any one's decreasing results in the mean's decreasing. This property is obvious with (2.1) as \(g\) and \(g^{-1}\) are always of the same strict monotonicity. #### 2.1.3 Invariant value with vertical scale and shift of DM **Theorem 2.5**.: \(\overline{x,p}|_{h}=\overline{x,p}|_{g}\)_, for \(h\in\mathbb{V}g\)._ Proof.: \(\overline{x,p}|_{h}=g^{-1}\bigg{(}\Big{(}-C+\sum_{i=1}^{n}p_{i}\big{(}kg(x_{i} )+C\Big{)}\Big{)}/k\bigg{)}\)\(=\overline{x,p}|_{g}\). \(\blacksquare\) ### Comparison of 2 isomorphic (weighted) means of numbers For same \(n\)-tuple\((n\geq 2)\)\(\underline{x}\) and weights \(\underline{p}\), 2 sets of methods are proposed to compare these 2 means: \[\overline{x,p}|_{g}=g^{-1}\big{(}\sum_{i=1}^{n}p_{i}g(x_{i})\big{)},\quad \overline{x,p}|_{h}=h^{-1}\big{(}\sum_{i=1}^{n}p_{i}h(x_{i})\big{)}.\] Note: In this paper regarding the convexity of a function, "convex" means convex to lower, and "concave" means convex to upper. #### 2.2.1 Comparison method derived from Jensen's inequality The following theorem may already be available, though we reproduce it here with our own simple proof, also serving to the purpose of easier cross-reference. **Theorem 2.6**.: Suppose with \(n\)-tuple \(\underline{x}\wedge[X\sharp U]_{g}\), \([X\sharp V]_{h}\), and weights \(\underline{p}\), there exists \(\overline{x,p}|_{g}\), \(\overline{x,p}|_{h}\). Let \(D=[\min\{\underline{y}\},\ \max\{\underline{y}\}]\) where tuple \(\underline{y}=h(\underline{x})\). Then 1). \(\overline{x,p}|_{g}\geq\overline{x,p}|_{h}\), if \(g\) is increasing on \(X\) and \(g(h^{-1})\) is convex on \(D\); 2). \(\overline{x,p}|_{g}\leq\overline{x,p}|_{h}\), if \(g\) is increasing on \(X\) and \(g(h^{-1})\) is concave on \(D\); 3). \(\overline{x,p}|_{g}\leq\overline{x,p}|_{h}\), if \(g\) is decreasing on \(X\) and \(g(h^{-1})\) is convex on \(D\); 4). \(\overline{x,p}|_{g}\geq\overline{x,p}|_{h}\), if \(g\) is decreasing on \(X\) and \(g(h^{-1})\) is concave on \(D\). For all cases, the equality holds only if \(n\)-tuple \(\underline{x}\) are all equal. Proof.: Assume \(g\) is strictly increasing, the inequality between \(\overline{x,p}|_{g}\) and \(\overline{x,p}|_{h}\) is equivalent to the inequality between \(\sum_{i=1}^{n}p_{i}g(h^{-1}(y_{i}))\) and \(g(h^{-1}(\sum_{i=1}^{n}p_{i}y_{i}))\). According to Jensen's inequality [4], if \(g(h^{-1})\) is a convex function(convex to lower) on \([\min\{\underline{y}\},\ \max\{\underline{y}\}]\) then \[\sum_{i=1}^{n}p_{i}g(h^{-1}(y_{i}))\geq g(h^{-1}(\sum_{i=1}^{n}p_{i}y_{i})). \tag{2.4}\] At the same time, \(\overline{x,p}|_{g}\geq\overline{x,p}|_{h}\). While \(g\) may also be decreasing which reverses above-mentioned equivalence of inequalities, or \(g(h^{-1})\) may be concave which reverses above Jensen's inequality, there totals 4 cases by the combinations, which summarize as the theorem. As for equality, it corresponds to how Jensen's inequality behaves similarly. \(\blacksquare\) For instance of Theorem 2.6, let \(g(x)=\sin x,\ y=h(x)=\ln x\), \(x_{i}\in(0,\pi/2]\). \(g\) is strictly increasing, \(g(h^{-1}(y))=sin(e^{y})\). 
Simple calculation concludes that when \(1/x\geq\tan x\), \(\mathrm{d}^{2}g(h^{-1}(y))/\mathrm{d}y^{2}\geq 0\), which indicates \(g(h^{-1}(y))\) is convex on \(D\). Solved the inequality with iterative method we get \(x\in(0,0.8603...)\). Then according to Theorem 2.6 case 1), \(\overline{x,p}|_{\sin x}\geq\overline{x,p}|_{\ln x}\) when \(x_{i}\in(0,0.8603...)\). And contrarily with Theorem 2.6 case 2), \(\overline{x,p}|_{\sin x}\leq\overline{x,p}|_{\ln x}\) when \(x_{i}\in(0.8603...,\pi/2)\). #### 2.2.2 Comparison method of differential criteria **Theorem 2.7**.: With \(n\)-tuple \(\underline{x}\wedge[X\sharp U]_{g}\), \(\underline{x}\wedge[X\sharp V]_{h}\), and weights \(\underline{p}\), where \(X\) is an interval on which \(g,h\) are monotone and derivable, \(g^{\prime}\neq 0,h^{\prime}\neq 0\), 1). \(\overline{x,p}|_{g}\geq\overline{x,p}|_{h}\), if \(|g^{\prime}/h^{\prime}|\) is increasing on \(X\); 2). \(\overline{x,p}|_{g}\leq\overline{x,p}|_{h}\), if \(|g^{\prime}/h^{\prime}|\) is decreasing on \(X\). For both cases, the equality holds only if \(n\)-tuple \(\underline{x}\) are all equal. Proof.: Let \(y=h(x)\), then \(g(h^{-1}(y))^{\prime}=(g^{\prime}/h^{\prime})(x)\circ h^{-1}(y)\). With Theorem 2.6 to distinguish 8 cases: 1). In the case \(g^{\prime}>0\) thus \(g\) is increasing, and \((g^{\prime}/h^{\prime})(x)\) is increasing with \(h^{\prime}>0\) thus \(g(h^{-1}(y))\) is convex, which implies \(|g^{\prime}/h^{\prime}|=|g^{\prime}|/|h^{\prime}|=g^{\prime}/h^{\prime}\) is increasing, then \(\overline{x,p}|_{g}\geq\overline{x,p}|_{h}\). 2). In the case \(g^{\prime}>0\) thus \(g\) is increasing, and \((g^{\prime}/h^{\prime})(x)\) is increasing with \(h^{\prime}<0\) thus \(g(h^{-1}(y))\) is concave, which implies \(|g^{\prime}/h^{\prime}|=|g^{\prime}|/|h^{\prime}|=g^{\prime}/(-h^{\prime})\) is decreasing, then \(\overline{x,p}|_{g}\leq\overline{x,p}|_{h}\). 3). In the case \(g^{\prime}<0\) thus \(g\) is decreasing, and \((g^{\prime}/h^{\prime})(x)\) is increasing with \(h^{\prime}>0\) thus \(g(h^{-1}(y))\) is convex, which implies \(|g^{\prime}/h^{\prime}|=|g^{\prime}|/|h^{\prime}|=(-g^{\prime})/h^{\prime}\) is decreasing, then \(\overline{x,p}|_{g}\leq\overline{x,p}|_{h}\). 4). through 8)....(omitted) All the omitted 5 cases are analogous and 8 cases are finally merged without overlapping or confliction into these 2 cases of the theorem. \(\blacksquare\) #### 2.2.3 The nature of the comparisons from a new perspective Also based on the DVI function, article [7] is yet another theory on extension of convex function. Its Theorem 2 is about how to use the monotonicity of a derivative-like function \((h\circ f)^{\prime}/g^{\prime}\) to compare an isomorphic-weighted-mean-involved inequality, so as to identify the extended convexity of \(f\). A special case (where \(f(x)=x\)) proposed as its Corollary 1, is quoted as below Lemma 2.8 (some letters are changed): **Lemma 2.8**.: Let \(g,h\) be strictly monotone continuous and derivable functions on an open interval \(X\), and \(g^{\prime}\neq 0,\ h^{\prime}\neq 0\). \(h^{\prime}/g^{\prime}\) is monotone. There are \(n\)-tuple(\(n\geq 2\)) \(\underline{x}\in X\), and \(n\)-tuple positive \(\underline{p}\) adding up to 1. Among functions \(h^{\prime}/g^{\prime},h,g\), 1). if odd numbers (1 or 3) are increasing then \(\overline{x,p}|_{g}\leq\overline{x,p}|_{h}\); 2). if odd numbers (1 or 3) are decreasing then \(\overline{x,p}|_{g}\geq\overline{x,p}|_{h}\). 
For both cases, the inequalities are equal only if \(n\)-tuple \(\underline{x}\) are all equal. It's easy to see Theorem 2.7 is just an improved version of the Lemma 2.8. In article [7], with simple steps Lemma 2.8 facilitates the proof of the power mean inequality: _For positive \(n\)-tuple(\(n\geq 2\)) \(\underline{x}\) and real number \(p>q\),_ \[\overline{x}|_{x^{p}}\geq\overline{x}|_{x^{q}}, \tag{2.5}\] _the inequality is equal only when \(n\)-tuple \(\underline{x}\) are all equal._ Another example by Lemma 2.8 in pp85 of article [7] is: \(g(x)=\sinh(x),\ h(x)=\cosh(x)\). On \(\mathbb{R}^{+}\ g,h\) are increasing, and \(h^{\prime}/g^{\prime}=(e^{x}-e^{-x})/(e^{x}+e^{-x})=1-2/(e^{2x}+1)\) is increasing, thus \[\overline{x,p}|_{\sinh x}\leq\overline{x,p}|_{\cosh x}\ (x_{i}>0). \tag{2.6}\] On the other hand, in Analysis the method of using the monotonicity of \(f^{\prime}\) (to compare an inequality) to determine the convexity of \(f\) is also a special case of Theorem 2 in [7], since convexity is actually the basic case of the "extended convexity" in article [7]. And the Lemma 2.8 is a "cousin" method of the former. (Similarly in [7] is a third cousin method for "geometrically convexity".) Correspondingly \(h^{\prime}/g^{\prime}\) in Lemma 2.8 is a special form of \((h\circ f)^{\prime}/g^{\prime}\) when \(f\) is identity, as "cousin" to the derivative of \(y=x\) which is also a special form of \((h\circ f)^{\prime}/g^{\prime}\) when \(h,g\) are identities. But being not always 1 the former does have monotonicity indeed. **In this sense, the inequality between 2 isomorphic means generated by \(g\) and by \(h\) is merely the indication of the "extended convexity generated by \(g,h\)" of \(y=x\).** #### 2.2.4 An impression of comparisons The author has used the methods to compare isomorphic means pairs among 5 common generator mappings: \(y=\sin x,\ y=1/x,\ y=\ln x,\ y=x\), and \(y=e^{x}\) for \(x_{i}\in[-\pi/2,\pi/2]\). A graphical view of the results is as below: **Illustration 2.1**.: _An impression of comparison of 5 types of isomorphic means_ Taking the 2nd vertical partition from left for example, it shows that for \(x_{i}\in(-1.0768...,-\pi/4)\), \(\overline{x}|_{1/x}\geq\overline{x}|_{\sin x}\geq\overline{x}|_{e^{x}}\geq \overline{x}|_{x}\). An imaginary falling slope is drawn for \(\overline{x}|_{\sin x}\), which displays a rightward trend of "being weaker", reflecting an interesting property of the sine function. While other inequalities such as arithmetic mean being stronger than geometric mean can also be observed. In the next section, one can see that the redefinition of "generalized \(f\)-mean" to isomorphic mean of numbers is in harmony with "isomorphic mean of a function", and the former is also deemed as a basis of the latter. For this reason, here we give an informal classification to the isomorphic (weighted) mean of numbers: "the isomorphic mean class 0".(as more classes of isomorphic mean will come up next.) ## 3 Isomorphic mean of a function When the mean values of a function bonded on an isomorphic frame is concerned, the dual-variable-isomorphic(DVI) mean of a function can be similarly introduced. ### The definition of DVI mean of a function **Definition 3.1**.: Let \(f:D\to M\) be bounded on interval \(D\) and \(f\wedge\big{(}\mathscr{I}_{m}\{g,h\}=[X,Y\nmid U,V]_{g,h}\big{)}\). \(g\) is continuous on \(D\), interval \(E=g(D)\), and \(h\) is continuous on \([\inf M,\sup M]\). 
If there exists \(M_{\varphi}\in V\), being the mean value of \(\varphi\colon=(h\circ f\circ g^{-1})\colon E\to V\), then \(h^{-1}(M_{\varphi})\in Y\) is called the (dual-variable-) isomorphic mean(DVI mean) of \(f\) on \(D\) generated by \(g,h\) (or generated by \(\mathscr{I}_{m}\{g,h\}\)), denoted by \(\overline{f_{D}}\big{|}_{g,h}\), \(\overline{f}\big{|}_{g,h}\) or \(M_{f}|_{g,h}\). \[\overline{f_{D}}\big{|}_{g,h} =h^{-1}\bigg{(}\frac{\int_{E}h\big{(}f(g^{-1}(u))\big{)}\mathrm{d}u} {\int_{E}\mathrm{d}u}\bigg{)}. \tag{3.1}\] \[\bigg{(} =h^{-1}\bigg{(}\frac{\int_{E}h\big{(}f(g^{-1}(u))\big{)}\mathrm{d} (-u)}{\int_{E}\mathrm{d}(-u)}\bigg{)}.\ \ \bigg{)}\] It's stipulated that \(\forall a\in D\), \(\overline{f_{[a,a]}}\big{|}_{g,h}\!\!=f(a)\). \(g,h\) are called the independent variable's generator mapping or dimensional mapping(IVDM), and the dependent variable's generator mapping or dimensional mapping(PVDM) of the isomorphic mean respectively. **Remark 3.1**.: If \(M_{\varphi}\) exists, then there must be \(M_{\varphi}\in h([\inf M,\sup M])\subseteq V\), because \(h([\inf M,\sup M])=[\inf N,\sup N]\) (\(N=h(M)\)) in which \(M_{\varphi}\) must resides. **Remark 3.2**.: In simple words, isomorphic mean of a function is the inverse image of the mean of \(\varphi(f:g,h)\). Intended to be the extension of mean of a function on an interval, this article defines the isomorphic mean of an "ordinary" function rather on an interval, than on a general real number set. Another ad-hoc requirement in addition to the "bonding" is \([\inf M,\sup M]\subseteq Y\). **Notation 3.2**.: Taking \([a,b]\) as the form of \(D\), the formula (3.1) turns into \[\overline{f_{[a,b]}}\big{|}_{g,h} =h^{-1}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}h\big{(}f(g^{ -1}(u))\big{)}\mathrm{d}u\bigg{)}, \tag{3.2}\] \[\bigg{(} =h^{-1}\bigg{(}\frac{1}{g(a)-g(b)}\int_{g(b)}^{g(a)}h\big{(}f(g^{ -1}(u))\big{)}\mathrm{d}u\bigg{)}\ \ \bigg{)}\] with which whether \(g(b)>g(a)\) is disregarded since the result of the formula is the same with that having \(g(a)\) and \(g(b)\) exchanged their positions. And if interval \(D\) has infinite endpoint(s), the limit form of (3.2) may be considered, with \(a\) and/or \(b\) approaching infinity. Whereas if \(f\) is not bounded, which falls out of scope of Definition 3.1, as long as the value of (3.1) exists, it could be considered the generalized isomorphic mean of an unbounded function. ### The existence of \(M_{f}|_{g,h}\) and tolerance of the definition With Definition 3.1, the existence of \(M_{f}|_{g,h}\) depends on the existence of the mean of \(\varphi(f:g,h)\). The following theorem is sufficient but not necessary. **Theorem 3.3**.: The \(\overline{f_{D}}\big{|}_{g,h}\) exists with any applicable \(g,\ h\), if \(f\) is continuous on a close interval \(D\). Proof.: Such \(f,g,h\) imply a continuous \(\varphi(f:g,h)\) on a close interval \(E\) with a convex range \(h(M)\). This means the numerator integral in the formula (3.1) exists and the dominator is non-zero real, such \(M_{\varphi}\) exists in \(h(M)\subseteq V\). These lead to an unique \(\overline{f_{D}}\big{|}_{g,h}\) in \(M\subseteq Y\). \(\blacksquare\) While one expects for \(f\) being continuous, the range \(M\) being convex(an interval) and \(M_{f}|_{g,h}\in M\), among others the definition however allows for 1. \(f\) is not continuous with jump discontinuity, but \(M_{f}|_{g,h}\) exists. 
e.g.: \[\begin{split} f(x)&=\begin{cases}1,\ \ (x\in[0,1])\\ 3.\ \ (x\in(1,2])\end{cases}\\ g(x)&=2x,\ h(y)=y+2.\end{split}\] With above, \(M_{f}|_{g,h}=2\). 2. \(f\) is not continuous with essential discontinuity, but \(M_{f}|_{g,h}\) exists. e.g. If \(f\) is a discontinuous but bounded Darboux function, especially being derivative of another continuous function \(F\), and being Riemann integrable, then with the same \(g,h\) as of above, \(M_{f}|_{g,h}\) exists. 3. \(M\) is not convex, and \(M_{f}|_{g,h}\notin M\); but \(M_{f}|_{g,h}\in Y\) as with above case 1). 4. \(M\) is not convex, but \(M_{f}|_{g,h}\in M\subseteq Y\). 5. The definition is not contradict with: \(f\) is not continuous, \(M_{f}|_{g,h}\) does not exist. e.g. \(f(x)=\pi,x\in[0,2]\) and \(x\) is irrational; \(f(x)=3.14,x\in[0,2]\) and \(x\) is rational, with the same \(g,h\) as of above. While it does not allow for \(M_{f}|_{g,h}\in Y\) and \(M_{f}|_{g,h}\notin[\inf M,\sup M]\), which will be proved later. Another sufficient condition for the existence of isomorphic mean is: **Theorem 3.4**.: The \(\overline{f_{D}}\big{|}_{g,h}\) exists if \(f\) is Riemann integrable on a close interval\(D=[a,b]\) with an applicable \(\mathscr{I}_{m}\{g,h\}\) such \(g\) satisfies one of the following: 1. \(g\) is derivable on \((a,b)\); 2. \(g\) is a convex or concave function on \([a,b]\); 3. \(g\) is absolutely continuous on any close sub-interval of \((a,b)\). Proof.: Any of the 3 additional conditions adding to the strictly monotone and continuous \(g\) ensures \(f\circ g^{-1}\) is also integrable on \(E\). With \(h\) being continuous, \(h\circ f\circ g^{-1}\) is Riemann integrable. Meanwhile the dominator is non-zero real, hence \(M_{\varphi}\) exists in \(h([\inf M,\sup M])\subseteq V\). This leads to an unique \(\overline{f_{D}}\big{|}_{g,h}\) in \([\inf M,\sup M]\subseteq Y\). Regarding the Riemann integrability of composite functions, the discussions can be founded in [9] and [3] which are applicable to e.g. \(h\circ f\circ g^{-1}\) here. According to [3] any of additional conditions 1) 2) applying to \(g\) will lead to condition 3) that will map the set of discontinuity points of \(f\) on \([a,b]\) of Lebesgue measure 0 to a counterpart of \(f\circ g^{-1}\) on \(E\) of Lebesgue measure 0. With above we claim that isomorphic means of a function are generally available in 2-D isomorphic frames with derivable DMs, for ORDINARY functions on close intervals, which (i) are Riemann integrable; (ii) may be somewhere discontinuous; (iii) have not to be monotone. ### An equivalent derivation of DVI mean of a function Isomorphic mean of a function can also be derived through the limit of the isomorphic weighted mean of numbers expressed in integral form. In Definition 3.1 assuming \(D\) is a close interval \([a,\ b]\), then due to monotone and continuous \(g\), \(E\) is a close interval \([g(a),\ g(b)]\)(disregarding whether \(g(b)>g(a)\)). Let \(\tau=f\circ g^{-1}\). Do a \(n\)-tuple partitions of \(E\) similar to is done with a definite integral. With each partition \(i\) (\(i\in 1,2,...,n\)), there is a corresponding value \(\tau(\xi_{i})\) for its tagged point \(\xi_{i}\). Then the ratio of \(\Delta u_{i}(=u_{i}-u_{i-1}\), where \(u_{i}\) is the end point of each partition towards \(g(b)\), \(u_{i-1}\) is the other end point of the same partition) over \(g(b)-g(a)\), is taken as \(\tau(\xi_{i})\)'s weight \(w_{i}\). 
We compute the isomorphic weighted mean of \(n\)-tuple \(\tau(\xi_{i})\) generated by \(h\), denoted by \(M_{\tau(\xi_{i})}\), \[\begin{split} M_{\tau(\xi_{i})}&=h^{-1}\bigg{(} \sum_{i=1}^{n}\frac{u_{i}-u_{i-1}}{g(b)-g(a)}h(\tau(\xi_{i}))\bigg{)}\\ &=h^{-1}\bigg{(}\frac{1}{g(b)-g(a)}\sum_{i=1}^{n}h(\tau(\xi_{i})) \Delta u_{i}\bigg{)}.\end{split} \tag{3.4}\] In order for \(\tau(\xi_{i})\) to enumerate all possible values of \(\tau\) on \(E\)(such \(f\) enumerates all on \([a,b]\)), let \(\|\Delta\|(=\max\{\Delta u_{i}\})\to 0\), i.e. the partitions become infinitely thin, such \[\begin{split}\lim_{\|\Delta\|\to 0}M_{\tau(\xi_{i})}&=h^{-1} \bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}h\big{(}\tau(u)\big{)}\mathrm{d}u \bigg{)}\\ &=h^{-1}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}h\big{(}f(g^ {-1}(u))\big{)}\mathrm{d}u\bigg{)}.\end{split} \tag{3.5}\] This limit value is deemed as "function \(\tau\)'s isomorphic weighted mean generated by mapping \(h\)". It is just the same in form as (3.2), that is function \(f\)'s dual-variable-isomorphic mean generated by \(g,h\). A special case of above is, the partitions are of all equal size, which is \(\frac{1}{n}|g(b)-g(a)|\), such the weights are all \(1/n\). Then (3.5) can be deemed as an integral evolution of isomorphic mean of numbers, which has the same value if exists. If \(E\) is infinite or \(D\) is open or half open, the limit forms of (3.5) will be included cases of (3.1). ### Basic properties of DVI mean of a function #### 3.4.1 Property of intermediate value(IVP) **Theorem 3.5**.: For a bounded \(f\colon D\to M\)(\(M\) is range) with its isomorphic mean \(\overline{f_{D}}\big{|}_{g,h}\) in the applicable isomorphic frame \(\mathscr{I}_{m}\{g,h\}\), it holds \[\inf M\leq\overline{f_{D}}\big{|}_{g,h}\leq\sup M; \tag{3.6}\] especially if exists \(\min\{M\}=\inf M\), \(\max\{M\}=\sup M\), then \[\min\{M\}\leq\overline{f_{D}}\big{|}_{g,h}\leq\max\{M\}. \tag{3.7}\] Proof.: Let \(N=h(M)\). (i) if \(M_{\varphi}\in N\subseteq V\), then due to the bijection \(h\), \(h^{-1}(M_{\varphi})\in M\subseteq[\inf M,\ \sup M]\); (ii) if in general case \(M_{\varphi}\in N\) is not to be considered, with (3.4), \(h\) being continuous on \([\inf M,\sup M]\) by definition, and Theorem 2.3, it holds \[\inf M\leq\min\{\tau(\xi_{i})\}\leq M_{\tau(\xi_{i})}\leq\max\{\tau(\xi_{i})\} \leq\sup M \tag{3.8}\] while \(\|\Delta\|\to 0\). Thus \(\inf M\leq\overline{f_{D}}\big{|}_{g,h}\leq\sup M\). For all cases, if there exists \(\min\{M\},\max\{M\}\), it holds \(\min\{M\}\leq\overline{f_{D}}\big{|}_{g,h}\leq\max\{M\}\). Generally this Intermediate Value Property(IVP) or mean value property of an isomorphic mean does not require the corresponding intermediate value(s) within the domain of the function. This IVP holds with ordinary functions. It could be fundamentally accepted as an attribute of ordinary functions on the background of isomorphic frames of continuous DMs, which will make isomorphic means have more coverage of extended unique means of a function than those concepts of IVP only holding with continuous or derivable functions. The transforming nature of the background as a single bijection also makes the isomorphic mean an extended mean value of more straightforward and genuine origin, as directly from the mean value of an transformed function. As a fact, it naturally refines the development of mean values from simpler arithmetic mean into complicated ones, so that the isomorphic mean is a natural generalization of the simple mean. 
In view of these, the isomorphic mean of a function is rather a differentiated concept of IVP than a repeated one with some others. (Also see Section 4.4.) #### 3.4.2 Property of monotonicity **Theorem 3.6**.: Let \(f,m\) be defined on interval \(D\), interval \(D_{2}\subseteq D\). If \[m(x)\geq f(x)\ \ \forall x\in D_{2}, \tag{3.9}\] and \(m(x)=f(x)\ \forall x\notin D_{2}\), then for any applicable \(\mathscr{I}_{m}\{g,h\}\), it holds \[M_{m}|_{g,h}\geq M_{f}|_{g,h}. \tag{3.10}\] If (3.9) is strict, then (3.10) is strict. Proof.: While there are several cases according to the ways \(D_{2}\) is located in \(D\), we first assume both be close intervals, such \(D\) can be expressed as \([a,b]\ \ (a<b)\). And arrange \(D_{2}\) such \(D_{2}=[a,c]\), \((a<c\leq b)\), i.e. \(D_{2}\) locates to the left of \(D\). Let \(\tau=f\circ g^{-1}\), \(\mu=m\circ g^{-1}\), with the same way in Section 3.3, do a set of \(n\)-tuple partitions and tagged \(\xi_{i}\) for \(D_{2}=[a,c]\), another set of \(k\)-tuple partitions and tagged \(\eta_{j}\) for \((c,b]\). Let \[\begin{split} M_{\tau(\xi,\eta)}&=h^{-1}\biggl{(} \sum_{i=1}^{n}\frac{u_{i}-u_{i-1}}{g(b)-g(a)}h(\tau(\xi_{i}))+\sum_{j=1}^{k} \frac{v_{j}-v_{j-1}}{g(b)-g(a)}h(\tau(\eta_{j}))\biggr{)}\\ M_{\mu(\xi,\eta)}&=h^{-1}\biggl{(}\sum_{i=1}^{n} \frac{u_{i}-u_{i-1}}{g(b)-g(a)}h(\mu(\xi_{i}))+\sum_{j=1}^{k}\frac{v_{j}-v_{j- 1}}{g(b)-g(a)}h(\mu(\eta_{j}))\biggr{)}\end{split} \tag{3.11}\] Above each putting in \(h^{-1}\) are 2 partial sums. For convenience let's denote them \(S_{1}\), \(S_{2}\) for those inside \(M_{\tau(\xi,\eta)}\) resp., and \(T_{1}\), \(T_{2}\) for those inside \(M_{\mu(\xi,\eta)}\) resp., such \[M_{\tau(\xi,\eta)}=h^{-1}\big{(}S_{1}+S_{2}\big{)},\ \ M_{\mu(\xi,\eta)}=h^{-1} \big{(}T_{1}+T_{2}\big{)} \tag{3.12}\] In the case \(c=b\), \(D_{2}=D\), \(S_{2},T_{2}\) do not exist. As \(\forall x\notin D_{2}(i.e.\ x\in(c,b])\), \(m(x)=f(x)\), such \(S_{2}\equiv T_{2}\). Meanwhile \(\forall x\in D_{2}\), \(m(x)\geq f(x)\), this makes \(T_{1}\geq S_{1}\Rightarrow T_{1}+T_{2}\geq S_{1}+S_{2}\) with increasing \(h\), or \(T_{1}\leq S_{1}\Rightarrow T_{1}+T_{2}\leq S_{1}+S_{2}\) with decreasing \(h\). In both cases \(M_{\mu(\xi,\eta)}\geq M_{\tau(\xi,\eta)}\). When the partitions become infinitely thin, the inequality holds, thus it holds \(M_{m}|_{g,h}\geq M_{f}|_{g,h}\). For cases where a close \(D_{2}\) is located other ways in a close \(D\), the proof is analogous; Then at most we have 3 partial sums each in (3.12). For at least one of \(D_{2}\),\(D\) and interval(s) of \(D-D_{2}\) being open/half open intervals including above, the proofs always need to threat the endpoints carefully not to let tagged points be the endpoints that do not belongs, as fortunately finite points exclusion of tagged points does not affected the integral calculations; For the cases \(D_{2}\) and/or \(D\) being infinite intervals, the proofs shall further consider the holding inequality on a selected partial finite interval, and let it hold under a limit of the endpoint(s). Finally if (3.9) is strict, then the corresponding inequalities within the proof including the result are strict. \(\blacksquare\) **Corollary 3.7**.: Let \(f,m\) be defined on interval \(D\), there are finite disjoint intervals \(D_{1},...,D_{n}\subseteq D\). If \[m(x)\geq f(x)\ \ \forall x\in D_{1}\cup...\cup D_{n}, \tag{3.13}\] and \(m(x)=f(x)\ \forall x\notin D_{1}\cup...\cup D_{n}\), then for any applicable \(\mathscr{I}_{m}\{g,h\}\), it holds \[M_{m}|_{g,h}\geq M_{f}|_{g,h}. 
\tag{3.14}\] If (3.13) is strict, then (3.14) is strict. This can be proved via Theorem 3.6 and construction of \((n-1)\) bridging functions and passing the same inequality down total \((n+1)\) isomorphic means: \(M_{m}|_{g,h}\geq...\geq M_{f}|_{g,h}\). #### 3.4.3 Property of symmetry with endpoints of interval **Proposition 3.1**.: (As implied by the definition,) The value of an isomorphic mean of a function on a close interval \([a,b]\) is invariant with \(a,b\) exchanging their values in the formulae (3.2). As a result, any function of the pair \((a,b)\) derived by (3.2), maybe of different forms but of the equivalence of (3.2), is symmetrical with \(a,b\). **Notation 3.8**.: For an existing \(\overline{f_{[a,b]}}\big{|}_{g,h}\), also denote \(\overline{f_{[b,a]}}\big{|}_{g,h}\) for the same isomorphic mean. Therefore future for such \([a,b]\), \(b\geq a\) is no longer a compulsory requirement. #### 3.4.4 Invariant value with vertical scale and shift of dimensional mappings 3.4.4.1 Invariant value with V-scaleshift of IVDM **Theorem 3.9**.: \(M_{f}|_{m,h}=M_{f}|_{g,h}\) _for \(m\in\mathbb{V}g\)._ _Proof._ Let \(u=m(x)\), \(v=g(x)\), then \(m^{-1}(u)=g^{-1}(v)\), \(u=kv+C\), taking \([a,b]\) as \(D\), thus \[M_{f}|_{m,h} =h^{-1}\bigg{(}\frac{1}{kg(b)+C-kg(a)-C}\int_{kg(a)+C}^{kg(b)+C}h \big{(}f(m^{-1}(u))\big{)}\mathrm{d}u\bigg{)}\] \[=h^{-1}\bigg{(}\frac{1}{kg(b)-kg(a)}\int_{g(a)}^{g(b)}h\big{(}f(g^ {-1}(v))\big{)}\mathrm{d}(kv+C)\bigg{)}\ \ (v=\frac{u-C}{k})\] \[=h^{-1}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}h\big{(}f(g^ {-1}(v))\big{)}\mathrm{d}v\bigg{)}\!\!=M_{f}|_{g,h}.\] While cases with other forms of \(D\) are treated as holding limit forms of above. \(\blacksquare\) #### 3.4.4.2 Invariant value with V-scaleshift of PVDM **Theorem 3.10**.: \(M_{f}|_{g,l}=M_{f}|_{g,h}\) _for \(l\in\mathbb{V}h\)._ _Proof._ Taking \([a,b]\) as \(D\), then \[M_{f}|_{g,l} =h^{-1}\bigg{(}\bigg{(}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b )}\big{(}kh\big{(}f(g^{-1}(u))\big{)}+C\big{)}\mathrm{d}u\bigg{)}-C\bigg{)}/k \bigg{)}\] \[=h^{-1}\bigg{(}\bigg{(}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g( b)}kh\big{(}f(g^{-1}(u))\big{)}\mathrm{d}u+C\bigg{)}-C\bigg{)}/k\bigg{)}\] \[=h^{-1}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}h\big{(}f(g^ {-1}(u))\big{)}\mathrm{d}u\bigg{)}\!\!=M_{f}|_{g,h}.\] While cases with other forms of \(D\) are treated as holding limit forms of above. \(\blacksquare\) #### 3.4.4.3 Invariant value with V-scaleshifts of both DMs **Corollary 3.11**.: \(M_{f}|_{m,l}=M_{f}|_{g,h}\) _for \(m\in\mathbb{V}g\) and \(l\in\mathbb{V}h\)._ It's a result of two-step process by previous 2 theorems. **Notation 3.12**.: Also denote \(M_{f}|_{\forall g,\forall h}\) for \(M_{f}|_{g,h}\), \(M_{f}|_{\forall g}^{II}\) for \(M_{f}|_{g}^{II}\), etc., i.e. for any IVDM or PVDM \(g\), \(\mathbb{V}g\Leftrightarrow g\) in denoting of isomorphic means of a function. ### Sub-classing of DVI mean of a function There are 7(seven) typical sub-classes of isomorphic mean of a function, in correspondence with 7 special cases of DVI function: **Definition 3.13**.: With Definition 3.1, consider the following cases due to special \(g,h,f\): 1. Let \(g\) be identity, then \(\varphi\colon\,=(h\circ f)\colon D\to V\). The isomorphic mean of \(f\) is called the dependent-variable-isomorphic mean(PVI mean) of \(f\) on \(D\) generated by mapping \(h\), or the isomorphic mean class I of \(f\) on \([a,b]\) generated by \(h\). 
It is denoted by \(\overline{f_{D}}\big{|}_{h}\), or \(M_{f}|_{h}\), \[\overline{f_{D}}\big{|}_{h}=h^{-1}\Bigg{(}\frac{\int_{{}_{D}}h\big{(}f(x)\big{)} \mathrm{d}x}{\int_{{}_{D}}\mathrm{d}x}\Bigg{)}\ (=\overline{f_{D}}\big{|}_{\forall x,\forall h}).\] (3.15) If taking \([a,b]\) as \(D\), \[\overline{f_{[a,b]}}\big{|}_{h}=h^{-1}\Bigg{(}\frac{1}{b-a}\int_{a}^{b}h\big{(} f(x)\big{)}\mathrm{d}x\Bigg{)}.\] (3.16) 2. Let \(h\) be identity, then \(\varphi\colon\,=(f\circ g^{-1})\colon E\to M\). The isomorphic mean of \(f\) is called the independent-variable-isomorphic mean(IVI mean) of \(f\) on \(D\) generated by mapping \(g\), or the isomorphic mean class II of \(f\) on \(D\) generated by \(g\). It is denoted by \(\overline{f_{D}}\big{|}_{g}^{II}\), or \(M_{f}|_{g}^{II}\), \[\overline{f_{D}}\big{|}_{g}^{II}=\frac{\int_{{}_{E}}f(g^{-1}(u))\mathrm{d}u}{ \int_{{}_{E}}\mathrm{d}u}\ (=\overline{f_{D}}\big{|}_{\forall g,\forall y}).\] (3.17) If taking \([a,b]\) as \(D\), \[\overline{f_{[a,b]}}\big{|}_{g}^{II}=\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}f \big{(}g^{-1}(u)\big{)}\mathrm{d}u.\] (3.18) If \(g\) is differentiable, \[\overline{f_{[a,b]}}\big{|}_{g}^{II}=\frac{1}{g(b)-g(a)}\int_{a}^{b}f(x) \mathrm{d}g(x).\] (3.19) 3. Let \(Y=X,\ h=g\), then \(\varphi\colon\,=(g\circ f\circ g^{-1})\colon E\to V\). The isomorphic mean of \(f\) is called the same-mapping (dual-variable-)isomorphic mean(SDVI mean) of \(f\) on \(D\) generated by mapping \(g\), or the isomorphic mean class III of \(f\) on \(D\) generated by \(g\). It is denoted by \(\overline{f_{D}}\big{|}_{g}^{III}\), \(M_{f}|_{g,g}\), or \(M_{f}|_{g}^{III}\), \[\overline{f_{D}}\big{|}_{g}^{III}=g^{-1}\Bigg{(}\frac{\int_{{}_{E}}g\big{(}f(g ^{-1}(u))\big{)}\mathrm{d}u}{\int_{{}_{E}}\mathrm{d}u}\Bigg{)}\ (=\overline{f_{D}}\big{|}_{ \forall g,\forall g}).\] (3.20) If taking \([a,b]\) as \(D\), \[\overline{f_{[a,b]}}\big{|}_{g}^{III}=g^{-1}\Bigg{(}\frac{1}{g(b)-g(a)}\int_{g( a)}^{g(b)}g\big{(}f(g^{-1}(u))\big{)}\mathrm{d}u\Bigg{)}.\] (3.21) 4. If in general case \(h\neq g\), then \(\varphi\colon\,=(h\circ f\circ g^{-1})\colon E\to V\). The isomorphic mean \(\overline{f_{D}}\big{|}_{g,h}(=\overline{f_{D}}\big{|}_{\forall g,\forall h})\) is called (dual-variable-)isomorphic mean(DVI mean) of \(f\) on \(D\) generated by mapping \(g,h\), or the isomorphic mean class IV of \(f\) on \(D\) generated by \(g,h\), which formula is as (3.1), or as (3.2) if taking \([a,b]\) as \(D\). 5. Let \(f\) be identity, then \(\varphi\colon=(h\circ g^{-1})\colon E\to V\). The isomorphic mean simplifies to the mean of one variable \(x\). In this paper it is called the (dual-variable-)isomorphic mean of identity on \(D\) generated by mapping \(g,h\), or the isomorphic mean class V on \(D\) generated by \(g,h\). It is denoted by \(\overline{x_{D}}\big{|}_{g,h}\), or \(M_{x}\big{|}_{g,h}\), \[\overline{x_{D}}\big{|}_{g,h}=h^{-1}\Bigg{(}\frac{\int_{{}_{E}}h\big{(}f(g^{-1} (u))\big{)}\mathrm{d}u}{\int_{{}_{E}}\mathrm{d}u}\Bigg{)}\ (=\overline{x_{D}}\big{|}_{ \forall g,\forall h}).\] (3.22) If taking \([a,b]\) as \(D\), \[\overline{x_{[a,b]}}\big{|}_{g,h}=h^{-1}\Bigg{(}\frac{1}{g(b)-g(a)}\int_{g(a)} ^{g(b)}h\big{(}g^{-1}(u)\big{)}\mathrm{d}u\Bigg{)}.\] (3.23) If \(g\) is differentiable, \[\overline{x_{[a,b]}}\big{|}_{g,h}=h^{-1}\Bigg{(}\frac{1}{g(b)-g(a)}\int_{a}^{ b}h(x)\mathrm{d}g(x)\Bigg{)}.\] (3.24) 6. Let \(g,h\) be identities, then \(\varphi\colon=f\). The isomorphic mean simplifies to the mean of the function. 
In this paper it is denoted by \(\overline{f_{D}}\), or \(M_{f}\), \[\overline{f_{D}}=\frac{\int_{{}_{D}}f(x)\mathrm{d}x}{\int_{{}_{D}}\mathrm{d}x}\ (=\overline{f_{D}}\big{|}_{\forall x,\forall y}).\] (3.25) In the case \(D\) is a close interval \([a,b]\), \[\overline{f_{[a,b]}}=\frac{1}{b-a}\int_{a}^{b}f(x)\mathrm{d}x.\] (3.26)
7. For monotone function \(f\), its inverse function \(f^{-1}\) is the dual-variable-isomorphic function of \(f\) generated by mapping \(f,f^{-1}\). Correspondingly the dual-variable-isomorphic mean of \(f\) generated by \(f,f^{-1}\) on a close interval is \[\overline{f_{[a,b]}}\big{|}_{f,f^{-1}}=f\bigg{(}\frac{1}{f(b)-f(a)}\int_{f(a)}^{f(b)}f^{-1}(u)\mathrm{d}u\bigg{)}.\] (3.27)

For convenience, where no confusion arises, we may use "class I" or "Isomorphic mean class I" as the short name of Isomorphic mean class I of a function, and similarly for the other classes later in this article, e.g. class II, class V.

For Theorem 3.9, especially when \(g(x)=x\), then \(M_{f}|_{m,h}=M_{f}|_{g,h}=M_{f}|_{h}\). For Theorem 3.10, especially when \(h(y)=y\), then \(M_{f}|_{g,l}=M_{f}|_{g,h}=M_{f}|_{g}^{II}\). For Corollary 3.11, especially when \(g(x)=x\), \(h(y)=y\), then \(M_{f}|_{m,l}=M_{f}|_{g,h}=M_{f}\).

### Isomorphic mean class I of a function

\[\overline{f_{[a,b]}}\big{|}_{h}=M_{f}|_{h}=h^{-1}\Bigg{(}\frac{1}{b-a}\int_{a}^{b}h\big{(}f(x)\big{)}\mathrm{d}x\Bigg{)}. \tag{3.28}\]

As a special case, isomorphic mean class I can also be obtained by an equivalent derivation from the isomorphic mean of the numbers \(f(\xi_{i})\), since in (3.4), letting \(g\) be the identity mapping gives \(\tau=f\). The class I can also be called the quasi-arithmetic mean of a function.

#### 3.6.1 Some properties of isomorphic mean class I

**Theorem 3.14**.: \(\overline{H_{ss}\big{(}f:k,C\big{)}_{[ka+C,kb+C]}}\big{|}_{h}=\overline{f_{[a,b]}}\big{|}_{h}\).

Proof.: Substituting \(u=kx+C\), \[\overline{H_{ss}\big{(}f:k,C\big{)}_{[ka+C,kb+C]}}\big{|}_{h}=h^{-1}\bigg{(}\frac{1}{(kb+C)-(ka+C)}\int_{ka+C}^{kb+C}h\big{(}f((u-C)/k)\big{)}\mathrm{d}u\bigg{)}=h^{-1}\bigg{(}\frac{1}{b-a}\int_{a}^{b}h\big{(}f(x)\big{)}\mathrm{d}x\bigg{)}=\overline{f_{[a,b]}}\big{|}_{h}.\] \(\blacksquare\)

This means isomorphic mean class I is invariant with the function's horizontal scale and shift.

Isomorphic mean class I is not a homogeneous mean in general cases, as with constant \(k\), generally \[h^{-1}\bigg{(}\frac{1}{b-a}\int_{a}^{b}h\big{(}kf(x)\big{)}\mathrm{d}x\bigg{)}\neq kh^{-1}\bigg{(}\frac{1}{b-a}\int_{a}^{b}h\big{(}f(x)\big{)}\mathrm{d}x\bigg{)}, \tag{3.29}\] i.e. \(M_{kf}|_{h}\neq kM_{f}|_{h}\).

There are some special cases or instances of it, as discussed below.

#### 3.6.2 Arithmetic mean of a function

When \(h\in\mathbb{V}y\), \[\overline{f_{[a,b]}}\big{|}_{h}=\overline{f(x)}=\frac{1}{b-a}\int_{a}^{b}f(x)\mathrm{d}x. \tag{3.30}\] It is the arithmetic mean of \(f\) on \([a,b]\).

#### 3.6.3 Geometric mean of a function

Let \(h(y)=\ln y\), \(g\) be identity and \(f(x)>0\) in (3.4); it turns into \[M_{f(\xi_{i})}=\exp\biggl{(}\frac{1}{b-a}\sum_{i=1}^{n}\ln f(\xi_{i})\Delta x_{i}\biggr{)}=\sqrt[b-a]{\prod_{i=1}^{n}f(\xi_{i})^{\Delta x_{i}}}, \tag{3.31}\] a weighted geometric mean of the values \(f(\xi_{i})\). Letting \(\|\Delta\|\to 0\) as in Section 3.3, \[\lim_{\|\Delta\|\to 0}M_{f(\xi_{i})}=\exp\biggl{(}\frac{1}{b-a}\int_{a}^{b}\ln f(x)\mathrm{d}x\biggr{)}. \tag{3.32}\]
And the following is called the geometric mean (value) of \(f(x)\) on \([a,b]\) \[\overline{f(x)_{[a,b]}}\big{|}_{\ln y}=e^{\frac{1}{b-a}\int_{a}^{b}\ln f(x){\rm d}x}. \tag{3.34}\]

It's obvious that the geometric mean of a positive function is a homogeneous mean. For a point \(c\in[a,b]\), it holds that: \[(\overline{f_{[a,c]}}\big{|}_{\ln y})^{(c-a)}\cdot(\overline{f_{[c,b]}}\big{|}_{\ln y})^{(b-c)}=(\overline{f_{[a,b]}}\big{|}_{\ln y})^{(b-a)}. \tag{3.35}\]

Following are some special instances of geometric means on open intervals, in which the PVDM \(\ln y\) is not defined at \(\inf M=0\) (\(M\) is the range of \(f\)). These only need an extra step of generalization by shrinking the domain \((a,b)\) of \(f\) a little bit to \((a^{\prime},b^{\prime})\) so that \(\inf M>0\), then taking the limit of the isomorphic mean with \(a^{\prime}\to a\) and/or \(b^{\prime}\to b\).

1. The geometric mean of \(f(x)=x\) on \((0,b)\): \(\overline{f(x)_{(0,b)}}\big{|}_{\ln y}=e^{\frac{1}{b-0}\int_{0^{+}}^{b-}\ln x{\rm d}x}=\frac{b}{e}\).
2. The geometric mean of \(f(x)=\sin x\) on \((0,\pi)\): \(\overline{\sin x_{(0,\pi)}}\big{|}_{\ln y}=e^{\frac{1}{\pi}\int_{0^{+}}^{\pi-}\ln\sin x{\rm d}x}\). Because the improper integral \(\int_{0^{+}}^{\pi-}\ln\sin x{\rm d}x=-\pi\ln 2\)[5], such \[\overline{\sin x_{(0,\pi)}}\big{|}_{\ln y}=e^{-\ln 2}=\frac{1}{2}.\] (3.36) It's easy to further conclude the geometric mean of \(\sin x\) on \((0,\frac{\pi}{2})\) is also \(\frac{1}{2}\).
3. Function \(\tan x\) is unbounded on \((0,\frac{\pi}{2})\); however we consider the following as the generalized geometric mean of the unbounded \(\tan x\) on \((0,\frac{\pi}{2})\), and we are able to work out that the geometric mean value is \(1\). \[\overline{\tan x_{(0,\pi/2)}}\big{|}_{\ln y}=e^{\frac{2}{\pi}\int_{0^{+}}^{\frac{\pi}{2}-}\ln\tan x{\rm d}x}=1.\] (3.37)
4. A circle has diameter \(d\) and radius \(r\). To compute the geometric mean \(\bar{c}\) of all parallel chords (e.g. in the vertical direction): the function of such chords can be written as \(c=2\sqrt{r^{2}-x^{2}},(x\in(-r,r))\), such \[\bar{c}=\exp\bigl{(}\frac{1}{r-(-r)}\int_{(-r)^{+}}^{r^{-}}\ln(2\sqrt{r^{2}-x^{2}}){\rm d}x\bigr{)}=\frac{4}{e}r=\frac{2}{e}d\approx 0.7358d.\] (3.38)

Summarizing instance (1) and (4), and considering the homogeneity of geometric mean and its property by (3.35) with instance (1), we have the following geometrical representation of some geometric means in Illustration 3.1.

#### 3.6.4 Harmonic mean of a function

Let \(h(y)=1/y\), \(f(x)>0\) or \(f(x)<0\), \[\overline{f(x)_{[a,b]}}\big{|}_{1/y}=\frac{b-a}{\int_{a}^{b}\frac{{\rm d}x}{f(x)}} \tag{3.39}\] is called the harmonic mean of \(f(x)\) on \([a,b]\).

#### 3.6.5 Mean of power integral of a function

Let \(h(y)=y^{p},\ (p\neq 0)\), and \(f(x)>0\) be continuous on \([a,b],\ b>a>0\), \[\overline{f(x)_{[a,b]}}\big{|}_{y^{p}}=\bigg{(}\frac{1}{b-a}\int_{a}^{b}f^{p}(x)\mathrm{d}x\bigg{)}^{\frac{1}{p}}. \tag{3.40}\] It is \(f\)'s \(p\)-order mean of power integral.
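The class I means above are convenient to check by direct numerical integration. The following is an illustrative sketch only (a plain midpoint rule and the helper name `class_I_mean` are assumptions of the sketch, not notation of the paper); it reproduces the value \(\frac{1}{2}\) of (3.36), the value \(b/e\) of instance (1), and an instance of the \(p\)-order mean of power integral (3.40).

```python
import math

# An illustrative numerical check of class I means, using a plain midpoint
# rule for the integral in (3.16).

def class_I_mean(f, h, h_inv, a, b, n=100_000):
    """Approximate h^{-1}( (1/(b-a)) * integral_a^b h(f(x)) dx )."""
    dx = (b - a) / n
    s = sum(h(f(a + (i + 0.5) * dx)) for i in range(n)) * dx
    return h_inv(s / (b - a))

if __name__ == "__main__":
    ln, exp = math.log, math.exp

    # Geometric mean of sin x on (0, pi): approximately 1/2, cf. (3.36).
    print(class_I_mean(math.sin, ln, exp, 0.0, math.pi))

    # Geometric mean of x on (0, b) with b = 3: approximately b/e = 1.1036...
    print(class_I_mean(lambda x: x, ln, exp, 0.0, 3.0))

    # 2nd-order mean of power integral of f(x) = x on [1, 2], cf. (3.40):
    p = 2.0
    print(class_I_mean(lambda x: x, lambda y: y ** p, lambda v: v ** (1 / p), 1.0, 2.0))
    # closed form ((b^(p+1) - a^(p+1)) / ((p+1)(b-a)))^(1/p) = sqrt(7/3) = 1.5275...
```

The last printed value can be compared with the closed form for \(f(x)=x\) given next.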
Especially when \(f(x)=x\ (p>-1,\ p\neq 0,\ b>a>0)\), \[\overline{f(x)_{[a,b]}}\big{|}_{y^{p}}=\bigg{(}\frac{b^{(p+1)}-a^{(p+1)}}{(p+1)(b-a)}\bigg{)}^{\frac{1}{p}}. \tag{3.41}\]

#### 3.6.6 Function value on the midpoint of an interval

When it happens that \(f=h^{-1}\), then we know \(f\) is monotone and \(h=f^{-1}\), then \[\overline{f(x)_{[a,b]}}\big{|}_{f^{-1}}=f\bigg{(}\frac{1}{b-a}\int_{a}^{b}x\mathrm{d}x\bigg{)}=f\big{(}\frac{a+b}{2}\big{)}. \tag{3.42}\] i.e. the monotone function's value on the midpoint of an interval is a special case of isomorphic mean class I of the function.

### Isomorphic mean class II of a function

Thanks to the introduction of isomorphic frame and DVI function, isomorphic mean class II of a function of case 2) of Definition 3.13 is the sibling of class I, though these two are quite different in their special forms. \[\overline{f_{[a,b]}}\big{|}_{g}^{II}=M_{f}\big{|}_{g}^{II}=\frac{1}{g(b)-g(a)}\int_{g(a)}^{g(b)}f\big{(}g^{-1}(u)\big{)}\mathrm{d}u. \tag{3.43}\] If \(g\) is differentiable, \[\overline{f_{[a,b]}}\big{|}_{g}^{II}=M_{f}\big{|}_{g}^{II}=\frac{1}{g(b)-g(a)}\int_{a}^{b}f(x)\mathrm{d}g(x). \tag{3.44}\]

Isomorphic mean class II is not invariant with H-scaleshift of the function in general cases, as opposed to that of class I, displayed in Theorem 3.14.

#### 3.7.1 Some properties of isomorphic mean class II

3.7.1.1 Homogeneity

**Theorem 3.15**.: \(\overline{V_{ss}\big{(}f:k,C\big{)}_{[a,b]}}|_{g}^{II}=k\overline{f_{[a,b]}}|_{g}^{II}+C\).

Proof omitted. This is again opposed to the property of class I.

3.7.1.2 Conjugation of 2 isomorphic means class II

**Theorem 3.16**.: Let \(g,\ f\) be 2 strictly monotone, differentiable functions on interval \([a,b]\), and \(A=f(a),B=f(b),C=g(a),D=g(b)\). There exist 4 isomorphic means class II: \(E=M_{f}|_{g}^{II},\ F=M_{g}|_{f}^{II}\) (\(E\neq A,\ E\neq B,\ F\neq C,\ F\neq D\)), \(G=M_{f}|_{f}^{II}=(A+B)/2\), \(H=M_{g}|_{g}^{II}=(C+D)/2\). Then the following hold: \[\begin{split} 1).&\ \big{(}\overrightarrow{AE}:\overrightarrow{EB}\big{)}\cdot\big{(}\overrightarrow{CF}:\overrightarrow{FD}\big{)}=1\\ 2).&\ \overrightarrow{GE}:\overrightarrow{AB}=\overrightarrow{FH}:\overrightarrow{CD}.\end{split} \tag{3.45}\]

**Illustration 3.2**.: _Conjugation of 2 isomorphic means class II_

Proof.: \[\begin{split} M_{f}|_{g}^{II}&=\frac{1}{g(b)-g(a)}\int_{a}^{b}f(x)\mathrm{d}g(x)=\frac{1}{g(b)-g(a)}\bigg{(}\big{(}f(x)g(x)\big{)}|_{a}^{b}-\int_{a}^{b}g(x)\mathrm{d}f(x)\bigg{)}\\ &=\frac{1}{D-C}\big{(}BD-AC-F(B-A)\big{)}=\frac{1}{D-C}\big{(}B(D-F)+A(F-C)\big{)}\\ \Longrightarrow E&=B\,\frac{\overrightarrow{FD}}{\overrightarrow{CD}}+A\,\frac{\overrightarrow{CF}}{\overrightarrow{CD}}\Longrightarrow E-A=B\,\frac{\overrightarrow{FD}}{\overrightarrow{CD}}+A\bigg{(}\frac{\overrightarrow{CF}}{\overrightarrow{CD}}-1\bigg{)}=(B-A)\,\frac{\overrightarrow{FD}}{\overrightarrow{CD}}.\end{split}\] It follows that \(\overrightarrow{AE}\colon\overrightarrow{AB}=\overrightarrow{FD}\colon\overrightarrow{CD}\); similarly \(\overrightarrow{EB}\colon\overrightarrow{AB}=\overrightarrow{CF}\colon\overrightarrow{CD}\). This concludes result 1). The result 2) is a natural geometrical corollary of 1). \(\blacksquare\)

Here the result 1) is said to be a kind of "Conjugation of these 2 related isomorphic means class II (\(E\) and \(F\))" with exchanged original function and generator function. Illustration 3.2 is the visual impression of the property when \(f,g\) are of the same monotonicity.

#### 3.7.2 Simple special cases of isomorphic mean class II

1.
Let \(g\in\mathbb{V}x\), \[\overline{f_{[a,b]}}|_{\mathbb{V}x}^{II}=\overline{f(x)}=\frac{1}{b-a}\int_{a} ^{b}f(x)\mathrm{d}x.\] (3.46) It is the arithmetic mean of \(f(x)\) on \([a,b]\). 2. When \(f(x)\) is monotone and continuous on \([a,b]\), and \(g(x)=f(x)\), \[\overline{f_{[a,b]}}|_{g}^{II}=\overline{g_{[a,b]}}|_{f}^{II}=\frac{1}{f(b)-f( a)}\int_{f(a)}^{f(b)}u\mathrm{d}u=\frac{1}{2}\big{(}f(a)+f(b)\big{)},\] (3.47) i.e. the arithmetic mean of the monotone function's values at the 2 endpoints of an interval is a special case of isomorphic mean class II. #### 3.7.3 A special case: With a classical mean value theorem **Theorem 3.17**.: If \(f\colon[a,\ b]\to\mathbb{R}\) is continuous and \(g\) is an integrable function that does not change sign on \([a,b]\), then there exists \(\xi\) in (a, b) such that \[f(\xi)=\overline{f_{[a,b]}}|_{G}^{II}\ \ \left(=\int_{a}^{b}f(x)g(x) \mathrm{d}x/\int_{a}^{b}g(x)\mathrm{d}x\right)\ \ \left(where\ \ G\in\int g\right). \tag{3.48}\] Proof.: With the conditions and the general case of well-known "First mean value theorem for definite integrals", there is a mean value \(f(\xi)\) such \[\int_{a}^{b}f(x)g(x)\mathrm{d}x=f(\xi)\int_{a}^{b}g(x)\mathrm{d}x\ \ (\xi\in(a,b)). \tag{3.49}\] Let \(G(x)=\int_{a}^{x}g(t)\mathrm{d}t+C\), then \(G(x)\) is monotone and continuous, \(G^{\prime}(x)=g(x)\), thus \[f(\xi)=\int_{a}^{b}f(x)g(x)\mathrm{d}x\div\int_{a}^{b}g(x) \mathrm{d}x=\frac{1}{G(b)-G(a)}\int_{a}^{b}f(x)\mathrm{d}G(x). \tag{3.50}\] Hence \(f(\xi)\) is the isomorphic mean class II generated by the antiderivative of \(g\). **Remark 3.3**.: While \(f\) has not to be monotone, such \(\xi\) has not to be unique, but the isomorphic mean \(f(\xi)\) is unique. Especially when \(g(x)=k,\ G(x)=kx+C\in\mathbb{V}x\), \[f(\xi)=\overline{f_{[a,b]}}|_{\mathbb{V}x}^{II}=\int_{a}^{b}kf(x )\mathrm{d}x\div\int_{a}^{b}k\ \mathrm{d}x=\frac{1}{b-a}\int_{a}^{b}f(x) \mathrm{d}x=\overline{f_{[a,b]}}. \tag{3.51}\] #### 3.7.4 A special case: Elastic mean of a function **Definition 3.18**.: The isomorphic mean class II of \(f\) on \([a,b]\) (\(b>a>0\)) generated by a logarithmic function e.g. \(g(x)=\ln x(x>0)\), \[\overline{f_{[a,b]}}|_{\mathbb{V}\ln x}^{II}=\frac{1}{\ln b-\ln a}\int_{a}^{b }\frac{f(x)}{x}\ \mathrm{d}x, \tag{3.52}\] is defined as the elastic mean of \(f\) on \([a,b]\) in this paper. **Remark 3.4**.: Any other logarithmic function is in \(\mathbb{V}\ln x\). With \(f(x)=x\), \[\overline{f_{[a,b]}}|_{\ln x}^{II}=\frac{b-a}{\ln b-\ln a}. \tag{3.53}\] In Economics term, if \(f\) is the elasticity [10] of another function \(F\) such \(f=xF^{\prime}/F=(\mathrm{d}F/F)/(\mathrm{d}x/x)\), then it reflects the local relative change of \(F\) against that of \(x\), e.g. percentage change of demand (as an "elastic" response) against percentage change of price. Above \(f(x)=x\) is the elasticity of \(F(x)=ce^{x}(c>0)\). Let \(K=F(b)\div F(a)\), and \(k=b\div a\), it is easy to get \[\overline{f_{[a,b]}}|_{\ln x}^{II}=\ln\left(F(b)\div F(a)\right)\div\ln(b\div a )=\log\,_{k}K. \tag{3.54}\] On the other hand, let \(M=F(x+\mathrm{d}x)\div F(x)\), \(m=(x+\mathrm{d}x)\div x\), then \[\begin{split} f(x)&=(\mathrm{d}F/F)/(\mathrm{d}x /x)=\mathrm{d}\ln F/\mathrm{d}\ln x\approx\Delta\ln F/\Delta\ln x\\ &=(\ln F(x+\mathrm{d}x)-\ln F(x))\div(\ln(x+\mathrm{d}x)-\ln x) =\log_{m}M,\end{split} \tag{3.55}\] i.e. \(f\) is a "micro-logarithm" of 2 "micro-multiplications" for every local \(x\), while \(\overline{f_{[a,b]}}|_{\ln x}^{II}\) is the logarithm of 2 overall multiplications over \([a,b]\). 
Therefore the elastic mean of \(f\) is actually the "average of the elasticity \(f\)". It is very similar to arithmetic mean of a function \(f\) computed via the Newton-Leibniz formula: \[M_{f}=\left(F(b)-F(a)\right)\div(b-a), \tag{3.56}\] where \(F\) is now the anti-derivative of \(f\). #### 3.7.5 Instances of isomorphic mean class II ##### 3.7.5.1 Elastic mean of \(\tan x\) A special instance of elastic mean is about unbounded \(tanx\) on \((0,\frac{\pi}{2})\), as following limiting form: \[\overline{\tan x_{(0,\pi/2)}}|_{\ln x}^{II}=\lim_{x\to 0,y\div\frac{\pi}{2}} \frac{1}{\ln\frac{\pi}{2}-\ln x}\int_{0}^{y}\frac{\tan(t)}{t}\mathrm{d}t. \tag{3.57}\] The numerator part is an improper definite integral, and the denominator part is approaching \(+\infty\). We transform it with \(x=\rho\cos\theta,\ y=\frac{\pi}{2}+\rho\sin\theta\)\((\rho>0,\ -\frac{\pi}{2}<\theta<0)\), and apply L 'Hopital's rule for twice in the following: \[\begin{split}\overline{\tan x_{(0,\pi/2)}}|_{\ln x}^{II}& =\lim_{\rho\to 0}\frac{1}{\ln\frac{\pi}{2}-\ln(\rho\cos\theta)}\int_{0}^{ \frac{\pi}{2}+\rho\sin\theta}\frac{\tan(t)}{t}\mathrm{d}t\\ &=\lim_{\rho\to 0}\frac{1}{-\cos\theta\frac{1}{\rho\cos \theta}}\cdot\frac{\tan(\frac{\pi}{2}+\rho\sin\theta)}{\frac{\pi}{2}+\rho\sin \theta}\cdot\sin\theta=\frac{2}{\pi}.\end{split} \tag{3.58}\] ##### 3.7.5.2 Elastic mean of power function Let \(f(x)=x^{p},p\neq 0,b>a>0,x\in[a,b],g(x)=\ln x(x>0)\), \[\overline{f_{[a,b]}}|_{\ln x}^{II}=\frac{1}{\ln b-\ln a}\int_{\ln a}^{\ln b}f( e^{u})\mathrm{d}u=\frac{f(b)-f(a)}{\ln f(b)-\ln f(a)}, \tag{3.59}\] which is the logarithmic mean of \(f(a)\) and \(f(b)\). #### 3.7.5.3 Power function's isomorphic mean class II generated by \(1/x(x>0)\) Let \(f(x)=x^{p},p\neq 1,b>a>0,x\in[a,b],g(x)=1/x(x>0)\), \[\overline{f_{[a,b]}}\big{|}_{1/x}^{II}=\frac{1}{(1/b)-(1/a)}\int_{1/a}^{1/b}(1 /u)^{p}\mathrm{d}u=\frac{ab(b^{p-1}-a^{p-1})}{(p-1)(b-a)}. \tag{3.60}\] When \(p=2,\ \overline{f_{[a,b]}}\big{|}_{1/x}^{II}=ab\); when \(p=3,\ \overline{f_{[a,b]}}\big{|}_{1/x}^{II}=\frac{1}{2}ab(a+b)\). While in the case \(p=1,\ \overline{f_{[a,b]}}\big{|}_{1/x}^{II}=\frac{1}{(1/b)-(1/a)}\int_{1/a}^{1/b} (1/u)\mathrm{d}u=\frac{ab(\ln b-\ln a)}{(b-a)}\), It is the product of \(a,b\) divided by the logarithmic mean of \(a,b\), it is also the limit of above \(\frac{ab(b^{p-1}-a^{p-1})}{(p-1)(b-a)}\) when \(p\to 1\). ### Isomorphic mean class III & IV of a function Isomorphic mean class III & class IV are just the general forms of the Definition, being the combined form of class I & class II. Their properties are mainly covered by previous sections. #### 3.8.1 A special case of class III **Theorem 3.19**.: An isomorphic mean class III of \(f(x)=x\) on [a,b] generated by \(g\) equals the isomorphic mean of \(a,\ b\) generated by \(g\) (the generalized \(g\)-mean). Proof.: \[\overline{x_{[a,b]}}\big{|}_{g}^{III}=g^{-1}\bigg{(}\frac{1}{g(b)-g(a)}\int_{g (a)}^{g(b)}g\big{(}g^{-1}(u)\big{)}\mathrm{d}u\bigg{)}=g^{-1}\big{(}\frac{g(a )+g(b)}{2}\big{)}.\] (3.61) ### Isomorphic mean class V Isomorphic mean class V in (3.23) can be deemed as a special mean of a single variable, e.g. in the form of \[\overline{x_{[a,b]}}\big{|}_{g,h}=h^{-1}\Bigg{(}\frac{1}{g(b)-g(a)}\int_{a}^{ b}h(x)\mathrm{d}g(x)\Bigg{)}.\] While traditionally without isomorphic frame in mind, it is not so worthwhile to discuss, since on \([a,b]\)\(y=x\) always has a mean value \(\frac{1}{2}(a+b)\). (Recalling Section 2.2.3, in article [7] the so-called extended convexity of \(y=x\) is somehow meaningful too.) 
The class V is generally not a mean value of an ordinary function. It is further sub-classifiable with \(g\in\mathbb{V}x\) or \(h\in\mathbb{V}y\), corresponding to class I or class II of \(f(x)=x\), i.e. of a single variable, which are the closest concepts to the so-called "class 0": the isomorphic mean of numbers. #### 3.9.1 Composite class V **Definition 3.20**.: With a strictly monotone and continuous \(f\), and applicable \(g,h\), \[M_{x}|_{g,H}=f^{-1}(M_{f}|_{g,h}) \tag{3.62}\] is called a composite isomorphic mean class V generated by \(g,h,f\), where \(H:=h\circ f\). The above \(H\) is strictly monotone and continuous. Composite class V is a special case of class V, but from the view of a normal class V of \(t=M_{x}|_{g,H}\), the PVDM \(H\) can be decomposed into \(h\circ f\), whereby class V can be related to the mean values of more monotone functions, i.e. \(f(t)=M_{f}|_{g,h}\). Both class V and composite class V are useful when relating to other types of means in mathematics. See Section 4. #### 3.9.2 Generation of bivariate means by class V For a closed interval \([a,b]\) and applicable \(g,h\) (and a monotone \(f\) as with composite class V), the \(M_{x}|_{g,h}\) (or \(M_{x}|_{g,H}\)) is clearly a sort of mean value of \(a,b\). Moreover, with the "property of symmetry with the endpoints of the interval", such a bivariate mean is symmetric. Below are 2 examples among possible others. ##### 3.9.2.1 Bivariate means regarding trigonometric functions By choosing \(g(x)=\sin x,\ h(y)=\cos y\), \([a,b]\subseteq[0,\pi/2]\), \[\begin{split}\overline{x_{[a,b]}}\big{|}_{\sin x,\cos y}&=\arccos\biggl{(}\frac{1}{\sin b-\sin a}\int_{a}^{b}\cos x(\sin x)^{\prime}\mathrm{d}x\biggr{)}\\ &=\arccos\biggl{(}\frac{b-a+\sin b\cos b-\sin a\cos a}{2(\sin b-\sin a)}\biggr{)}.\end{split} \tag{3.63}\] When \(g,h\) are exchanged, \[\begin{split}\overline{x_{[a,b]}}\big{|}_{\cos x,\sin y}&=\arcsin\biggl{(}\frac{1}{\cos b-\cos a}\int_{a}^{b}\sin x(\cos x)^{\prime}\mathrm{d}x\biggr{)}\\ &=\arcsin\biggl{(}\frac{a-b+\sin b\cos b-\sin a\cos a}{2(\cos b-\cos a)}\biggr{)}.\end{split} \tag{3.64}\] ##### 3.9.2.2 A class of quasi-Stolarsky means Another example of a bivariate mean class generated by class V arises in the case \(g(x)=x^{p}\ (x>0,\ p\neq 0),\ h(y)=y^{q}\ (y>0,q\neq 0),a>0,b>0,a\neq b\). It is denoted and formulated by: \[Q_{p,q}(a,b)=\biggl{(}\frac{p(b^{p+q}-a^{p+q})}{(p+q)(b^{p}-a^{p})}\biggr{)}^{1/q}. \tag{3.65}\] It is very similar to the derivation of the Stolarsky means from the "Cauchy's extended mean value theorem" ([6], pp207) by a pair of power functions. This class also has various special cases (including the limits of \(Q_{p,q}(a,b)\) as \(p\to 0\) and/or \(q\to 0\), which are actually the equivalent results obtained with \(g(x)=\ln x\) and/or \(h(y)=\ln y\) instead): \[Q_{p,q}(a,b)=\begin{cases}(\dfrac{a^{p}+b^{p}}{2})^{1/p}&(p=q,pq\neq 0),\\ \sqrt{ab}&(p=q=0),\\ \left(\dfrac{b^{p}-a^{p}}{p(\ln b-\ln a)}\right)^{1/p}&(p+q=0,pq\neq 0),\\ \left(\dfrac{b^{q}-a^{q}}{q(\ln b-\ln a)}\right)^{1/q}&(p=0,q\neq 0),\\ \exp\biggl{(}\dfrac{b^{p}\ln b-a^{p}\ln a}{b^{p}-a^{p}}-\dfrac{1}{p}\biggr{)}&(q=0,p\neq 0),\\ \dfrac{2(a^{2}+ab+b^{2})}{3(a+b)}&(p=2,q=1),\\ \sqrt[3]{a\cdot\dfrac{a+b}{2}\cdot b}&(p=-1,q=3).\end{cases} \tag{3.66}\] Details of the derivations and proofs are omitted. The class features the power means as its well-balanced special cases.
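The closed form (3.65) can be checked numerically against the class V definition it came from; the following sketch is illustrative only (function names and test values are arbitrary choices):

```python
# Numerical sketch (illustrative): the quasi-Stolarsky mean Q_{p,q}(a,b) of (3.65),
# checked against a direct evaluation of the class V definition with g(x)=x^p,
# h(y)=y^q, and against the power-mean special case p=q of (3.66).
import numpy as np
from scipy.integrate import quad

def Q(p, q, a, b):
    """Closed form (3.65)."""
    return (p * (b**(p + q) - a**(p + q)) / ((p + q) * (b**p - a**p)))**(1.0 / q)

def class_V(p, q, a, b):
    """h^{-1}( (1/(g(b)-g(a))) * int_a^b h(x) dg(x) ) with g = x^p, h = y^q."""
    integral, _ = quad(lambda x: x**q * p * x**(p - 1), a, b)
    return (integral / (b**p - a**p))**(1.0 / q)

a, b = 1.5, 4.0
print(Q(2, 1, a, b), class_V(2, 1, a, b), 2*(a**2 + a*b + b**2)/(3*(a + b)))
print(Q(3, 3, a, b), ((a**3 + b**3)/2)**(1/3))   # p = q: the power mean of order p
```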
Though conversion of this form to the Stolarsky mean is easy by substituting the power \(s=p+q\), from the perspective of isomorphic means it is the well-balanced form with respect to \(p\) and \(q\). ### Isomorphic mean class VI, VII The case 6) of Definition 3.13 is just the mean of \(f\), as "class VI" of isomorphic mean. It is not further discussed here. As an instance of case 7), "class VII": if \(f(x)=x^{a}\ (x\in[0,c],a>0)\) then \(\overline{f_{[0,c]}}\big{|}_{f,f^{-1}}\)\(=(\frac{a}{a+1})^{a}c^{a}\), whose coefficient \((\frac{a}{a+1})^{a}\) approaches \(\frac{1}{e}\) as \(a\) approaches \(+\infty\). ## 4 Isomorphic mean of a function vs Cauchy mean value ### About Cauchy mean value **Theorem 4.1**.: Cauchy's mean-value theorem states that _([8], pp12): If \(f\), \(g\) are continuous real functions on \([x_{1},x_{2}]\) which are differentiable in \((x_{1},x_{2})\), and \(g^{\prime}(u)\neq 0\) for \(u\in(x_{1},x_{2})\), then there is a point \(t\in(x_{1},x_{2})\) such that_ \[\frac{f^{\prime}(t)}{g^{\prime}(t)}=\frac{f(x_{2})-f(x_{1})}{g(x_{2})-g(x_{1})}. \tag{4.1}\] Note that such a \(t\) does not have to be unique in general (e.g. a definition in [2] only requires that at least one such \(t\in(x_{1},x_{2})\) exists). Hence none of \(f(t)\), \(g(t)\), \(f^{\prime}(t)\), \(g^{\prime}(t)\) has to be unique, i.e. there is no uniquely defined mean value of the function(s) either. For such a \(t\) to be unique, a further restriction is imposed in [8], from which the definition of the "Cauchy mean value of two numbers" arises: **Definition 4.2**.: Assuming now (with Theorem 4.1) that \(f^{\prime}/g^{\prime}\) is invertible we get \[t=\big{(}\frac{f^{\prime}}{g^{\prime}}\big{)}^{-1}\bigg{(}\frac{f(x_{2})-f(x_{1})}{g(x_{2})-g(x_{1})}\bigg{)}. \tag{4.2}\] This number is called the Cauchy mean value of the numbers \(x_{1},x_{2}\) and will be denoted by \(t=D_{fg}(x_{1},x_{2})\). In this paper, we further say the mean value of such 2 numbers "is generated by \(f,g\)". (In [8], a generalized form of the above, the Cauchy mean of \(n\) numbers \(D_{fg}(x_{1},x_{2},...,x_{n})\), is further covered.) But even with Definition 4.2, neither \(f(t)\) nor \(g(t)\) can be deemed a well-defined mean value of a function, as they depend on each other symmetrically and only temporarily. The Cauchy mean value class and certain classes of isomorphic means can be converted to each other, under some criteria. A reasonable observation is that, since the Cauchy mean value deals with 2 functions while the isomorphic mean deals with 3, the corresponding isomorphic mean is related to, or is of, class V, which has the simplest \(f(x)=x\). ### Conversion from Cauchy mean value to class V To get a class V from an ordinary Cauchy mean value, the following supporting theorem is needed: **Theorem 4.3**.: (Extended Darboux's Theorem:) If on \([a,b]\) \(f,g\) are both differentiable and \(g^{\prime}(x)\neq 0\), then on \((a,b)\) \(f^{\prime}(x)/g^{\prime}(x)\) can take any value between \(f^{\prime}(a)/g^{\prime}(a)\) and \(f^{\prime}(b)/g^{\prime}(b)\). Then we have the following theorem: **Theorem 4.4**.: Let \(f\), \(g\) be differentiable on \([a,b]\) with \(g^{\prime}\) Riemann integrable, \(g^{\prime}(x)\neq 0\) for all \(x\in(a,b)\), and \(g(b)\neq g(a)\). If \(h:=f^{\prime}/g^{\prime}\) is invertible on \([a,b]\), then the Cauchy mean value of \(a,b\) generated by \(f,g\) is the isomorphic mean class V on \([a,b]\) generated by \(g,h\), i.e.
The Cauchy mean value \(t\in(a,b)\) is such that \[t =\big{(}\frac{f^{\prime}}{g^{\prime}}\big{)}^{-1}\bigg{(}\frac{f(b)-f(a)}{g(b)-g(a)}\bigg{)}=D_{fg}(a,b) \tag{4.3}\] \[=\ h^{-1}\Bigg{(}\frac{1}{g(b)-g(a)}\int_{a}^{b}h(x)g^{\prime}(x)\mathrm{d}x\Bigg{)}\!\!=\overline{x_{[a,\ b]}}\big{|}_{g,h}.\] Proof.: Referring to Definition 3.1, we claim that both \(h,g\) are continuous and strictly monotone on \([a,b]\), and that \(y=x\) has the isomorphic mean. (i) According to the Extended Darboux's Theorem, \(h=f^{\prime}/g^{\prime}\) has the intermediate value property (IVP): \(\forall c,\forall d\in[a,b]\) (\(c<d\)) and every \(w\) between \(h(c)\) and \(h(d)\) (here \(h(c)\neq h(d)\) because \(h\) is invertible), \(\exists\xi\in(c,d)\) such that \(h(\xi)=w\). Moreover, because it is invertible, \(h\) is injective. With these 2 properties it is easy to prove that \(h\) is furthermore strictly monotone, and again by the IVP, \(h\) is continuous on \([a,b]\). (ii) \(g\) is continuous because \(g\) is differentiable. Since \(g^{\prime}\) is non-zero, according to Darboux's Theorem \(g^{\prime}\) is either all positive or all negative on \((a,b)\), and \(g^{\prime}_{+}(a)\) or \(g^{\prime}_{-}(b)\) also has the same sign, as they are non-zero for \(f^{\prime}_{+}(a)/g^{\prime}_{+}(a)\) or \(f^{\prime}_{-}(b)/g^{\prime}_{-}(b)\) to exist. Such a \(g\) is strictly monotone. (iii) As \(y=x\) is continuous, according to Theorem 3.3, \(\overline{x_{[a,b]}}\big{|}_{g,h}\) exists. (iv) The Cauchy mean value \(t\) exists (uniquely) as \(h\) is invertible, and the 2 mean values are easily seen to be equal when \(g^{\prime}\) is Riemann integrable (then \(hg^{\prime}\) is also Riemann integrable), since according to the fundamental theorem of calculus: \[\int_{a}^{b}h(x)g^{\prime}(x)\mathrm{d}x=\int_{a}^{b}f^{\prime}(x)\mathrm{d}x=f(b)-f(a). \tag{4.4}\] This ends the proof of the theorem. \(\blacksquare\) **Remark 4.1**.: With Theorem 4.4, practically, if \(h:=f^{\prime}/g^{\prime}\) is continuous and strictly monotone, then it is invertible and \(\overline{x_{[a,\ b]}}\big{|}_{g,h}\) exists. For example, with Theorem 4.4, let the 2 generator functions be \(f(x)=\ln x\), \(g(x)=x\), \(b>a>0\), \[D_{fg}(a,b)=\frac{1}{\frac{f(b)-f(a)}{g(b)-g(a)}}=\frac{b-a}{\ln b-\ln a}. \tag{4.5}\] In order to get class V, \(h(x)=f^{\prime}(x)/g^{\prime}(x)=1/x\). To verify: \[\overline{x_{[a,b]}}\big{|}_{g,h}\!\!=\frac{1}{\frac{1}{b-a}\int_{a}^{b}\frac{1}{x}\mathrm{d}x}=\frac{b-a}{\ln b-\ln a}=D_{fg}(a,b). \tag{4.6}\] ### Conversion from class V to Cauchy mean value **Theorem 4.5**.: Let \(f\) be a strictly monotone, continuous and bounded function on \([a,b]\) with range \(M\); let \(g\) be strictly monotone and differentiable on \([a,b]\), with \(g^{\prime}\) Riemann integrable and \(g^{\prime}(x)\neq 0\) for all \(x\in(a,b)\); let \(h\) be strictly monotone and continuous on \(M\), and \(\gamma:=h\circ f\). Then there are the following equivalences: 1). \[\begin{split} t&=\overline{x_{[a,b]}}\big{|}_{g,\gamma}\!\!=f^{-1}\!\left(h^{-1}\!\left(\frac{1}{g(b)-g(a)}\int_{a}^{b}h(f(x))g^{\prime}(x)\mathrm{d}x\right)\right)\\ &=\ \gamma^{-1}\!\left(\frac{l(b)-l(a)}{g(b)-g(a)}\right)\!\!=(\frac{l^{\prime}}{g^{\prime}})^{-1}\!\left(\frac{l(b)-l(a)}{g(b)-g(a)}\right)\!\!=D_{lg}(a,b).\\ &\big{(}\ \text{where}\ \ l\in\int(\gamma\ g^{\prime})=\int((h\circ f)g^{\prime}).\ \big{)}\end{split} \tag{4.7}\] 2). \[\overline{f(x)_{[a,\ b]}}\big{|}_{g,h}\!\!=f(t)=f(D_{lg}(a,b)).
\tag{4.8}\] Especially when \(f\) is the identity, \[\overline{x_{[a,b]}}\big{|}_{g,h}\!\!=t=D_{lg}(a,b),\ \ \text{where}\ \ l\in\int(hg^{\prime}). \tag{4.9}\] Proof.: With \(g,h,f\) so defined and \(l\) so derived, \(f^{-1}(\overline{f(x)_{[a,b]}}\big{|}_{g,h})\) and \(D_{lg}(a,b)\) exist and are equal, as proved by the inline formulae in (4.7); further, 2) is true. \(\blacksquare\) **Remark 4.2**.: Let \(l(x):=\int_{a}^{x}h(f(s))\,g^{\prime}(s)\mathrm{d}s+C,\ l_{2}(u):=\int_{g(a)}^{u}h(f(g^{-1}(v)))\mathrm{d}v+C\); then \(l_{2}(u)=l\circ(g^{-1})(u)\). According to Lagrange's mean value theorem, there is a \(\xi\in(\min\{g(a),g(b)\},\max\{g(a),g(b)\})\) such that: \[\begin{split} l_{2}^{\prime}(\xi)=\frac{l_{2}(g(b))-l_{2}(g(a))}{g(b)-g(a)}\\ \stackrel{{ t=g^{-1}(\xi)}}{{\Longrightarrow}}\frac{l^{\prime}(t)}{g^{\prime}(t)}=\frac{l(b)-l(a)}{g(b)-g(a)}.\end{split} \tag{4.10}\] This means that the composite isomorphic mean class V (by \(g,h,f\)), \(t\), is the image of the Lagrange mean value \(\xi\) of \(g(a),\ g(b)\) ("generated by \(\int\varphi(f:g,h)\)") under the inverse bijection of \(g\). Conversion examples: 1. Conversion of a geometric mean into a Cauchy mean value. The geometric mean of a positive monotone \(y=f(x)\) is a special case of class I, where \(g(x)=x\), \(h(y)=\ln y\). With Theorem 4.5, \(l\in\int h(f(x))g^{\prime}(x)\mathrm{d}x=\int\ln f(x)\mathrm{d}x\). Then \[D_{lg}(a,b)=\big{(}\frac{l^{\prime}}{g^{\prime}}\big{)}^{-1}\big{(}\frac{l(b)-l(a)}{g(b)-g(a)}\big{)}{=f^{-1}\Bigg{(}\exp\bigg{(}\frac{1}{b-a}\int_{a}^{b}\ln f(x)\mathrm{d}x\bigg{)}\Bigg{)}}.\] (4.11) 2. Conversion of an elastic mean into a Cauchy mean value. The elastic mean of a monotone \(y=f(x)\) is a special case of class II, where \(g(x)=\ln x\), \(h(y)=y\). With Theorem 4.5, \(l\in\int h(f(x))g^{\prime}(x)\mathrm{d}x=\int(f(x)/x)\mathrm{d}x\). \[D_{lg}(a,b)=\big{(}\frac{l^{\prime}}{g^{\prime}}\big{)}^{-1}\big{(}\frac{l(b)-l(a)}{g(b)-g(a)}\big{)}{=f^{-1}\Bigg{(}\frac{1}{\ln b-\ln a}\int_{a}^{b}\frac{f(x)}{x}\mathrm{d}x\Bigg{)}}.\] (4.12) ### Distinctions between the isomorphic mean and Cauchy's mean value theorem #### 4.4.1 About their coverage and categorical intersection Since the IVP of isomorphic means applies to ordinary functions, while the IVP of Cauchy's mean value theorem applies to differentiable functions, and in view of Theorem 4.4 and Theorem 4.5, we claim that (i) whenever there is a Cauchy mean value, there is a class V (only with differentiable DMs); (ii) whenever there is an isomorphic mean, it is not ensured that there is a convertible Cauchy mean value by our ready theorems. Following are some examples of isomorphic means that cannot be converted to Cauchy mean values. 1. In Section 3.2 case (1), the function \(f\), taking the value of either 1 or 3, has a jump discontinuity, though the isomorphic mean is 2. With reference to Theorem 4.5, we attempt an \(l\in\int((h\circ f)\cdot g^{\prime})=\int 2(f(x)+2)\mathrm{d}x\), \(a=0,b=2\). It follows that \((l(b)-l(a))/(g(b)-g(a))=4\), but \(l^{\prime}(x)/g^{\prime}(x)=h(f(x))=f(x)+2\), whose value is either 3 or 5 on \([a,b]\), never 4. This is simply because \(f\) not being continuous at \(x=1\) makes \(l\) not differentiable everywhere on \((a,b)\), therefore some criteria of Theorem 4.1 are not met. Thus here the Cauchy mean value of 0, 2 does not even exist. 2. \(f(x)=\sin x\ (x\in[0,\pi])\), being not monotone, has a mean value \(2/\pi\), which is a special case of the isomorphic mean (class VI).
\((l(b)-l(a))/(g(b)-g(a))=2/\pi\), but \(l^{\prime}(x)/g^{\prime}(x)=h(f(x))=f(x)\) is not invertible, therefore the Cauchy mean value does not exist (uniquely). However, in this case Theorem 4.1 holds. 3. \(f(x)\equiv A\): with any applicable \(g,h\) its isomorphic mean is \(A\). It is easy to check that Theorem 4.1 holds for every \(x\) with \(l,g\), but the Cauchy mean value does not exist either. The IVP of an isomorphic mean primarily yields a unique intermediate value in between the function values, while the IVP of Cauchy's mean value theorem primarily yields non-unique intermediate values within the function's domain. The latter obtains a unique value in the domain by the further restriction that \(f^{\prime}/g^{\prime}\) be invertible, whereas the former obtains a unique value within the domain by specially letting \(f(x)=x\); then there is an intersection of the two categories, i.e. convertible instances between the isomorphic means class V and the Cauchy mean values. **In summary, isomorphic means of a function cover a broader range of unique mean values of an ORDINARY function.** Only class V can be matched by the Cauchy mean value, as a special derivation of Cauchy's mean value theorem in terms of uniqueness, whereas more other classes of isomorphic means conform to the ubiquitous theorem. #### 4.4.2 About generator functions and identifications of unique means For an isomorphic mean, \(f\)'s bonding on \(\{g,h\}\) means the DMs \(g,h\) are not in equal or symmetric status with the focused \(f\); thus the isomorphic mean is prominently an extended unique mean value of \(f\), with good identification. On the other hand, for a Cauchy mean value there are only 2 generator functions \(f,g\). Though \(f(t)\) is somehow a kind of mean generated by \(g\), and vice versa for \(g(t)\) and \(f\), \(f,g\) are rather symmetrical in status, while the requirement that \(f^{\prime}/g^{\prime}\) be monotone is of a mixed nature. Therefore the Cauchy mean value \(t\) and the corresponding generator functions' values \(f(t)\) or \(g(t)\) are not well identifiable as mean values of a specific function. The Cauchy mean value lacks the information of 1 ordinary function needed to match the complexity of a class IV; at best it matches a special class V. **In summary, isomorphic means have better identification as well-defined unique extended means of a specific function.** It is with such good identification that diversified classifications of isomorphic means have also become possible. #### 4.4.3 Conclusion Generally speaking, the isomorphic mean seems a concept of better origin and perspective, better identification and classification, more coverage and more natural generalization. However, certain conversions between these 2 types are useful. For example: 1. With the quasi-Stolarsky formula (3.66), it is easy to see that the power mean class is a Cauchy mean value generated by \(f(x)=x^{2p},\ g(x)=x^{p}\). 2. The solutions to comparisons of certain isomorphic means can easily be based on those for the comparison of Cauchy mean values, whereby the applications are made richer. ## 5 The comparison problems of isomorphic means of a function We will discuss 2 types of comparison methods for isomorphic means of a function: methods derived from the comparison of Cauchy mean values, and methods aided by monotonicity and convexity conditions.
### Comparison methods of class V scenario derived from comparison of Cauchy mean values Theorem 1 and Theorem 2 of Losonczi [8] give, respectively, the necessary conditions and the sufficient conditions on the functions \(f,g,F,G\) such that the comparison inequality ([8], (2)) \[D_{fg}(x_{1},x_{2},...,x_{n})\leq D_{FG}(x_{1},x_{2},...,x_{n}) \tag{5.1}\] holds, where \(n\geq 2\) is fixed. The prerequisites for both theorems are: \(I\) is a real interval, and \(\varepsilon_{n}(I)\) is the set of all pairs \((f,g)\) of functions \(f,g:I\to\mathbb{R}\) satisfying the following conditions: (i) \(f,g\) are \(n\)-times differentiable on \(I\), (ii) \(g^{(n-1)}(u)\neq 0\) for \(u\in I\), (iii) the (first) derivative of \((f^{(n-1)}/g^{(n-1)})\) is not zero on \(I\). And there are the notations: \(\bar{f}=f^{(n-1)},\bar{g}=g^{(n-1)},\bar{F}=F^{(n-1)},\bar{G}=G^{(n-1)},\ h=\bar{f}/\bar{g},\ H=\bar{F}/\bar{G}\). Losonczi's necessary conditions for (5.1) are quoted: **Theorem 5.1**.: _([8], pp15, Theorem 1) Suppose that the functions \(f,g,F,G:I\to\mathbb{R}\) are \(n+1\) times continuously differentiable and \((f,g),(F,G)\in\varepsilon_{n}(I)\). Then the inequality_ \[\frac{h^{\prime\prime}(x)}{h^{\prime}(x)}+2\frac{\bar{g}\ ^{\prime}(x)}{\bar{g}(x)}\leq\frac{H^{\prime\prime}(x)}{H^{\prime}(x)}+2\frac{\bar{G}\ ^{\prime}(x)}{\bar{G}(x)}\ \ (x\in I) \tag{5.2}\] _is necessary for (5.1) to hold._ According to Remark 1 of [8], the proof of these necessary conditions assumes the \(n\) values are such that \(x_{2}=...=x_{n}\) are near \(x_{1}\). This is especially meaningful for the \(n=2\) case. Losonczi's sufficient conditions for (5.1) read as: **Theorem 5.2**.: _([8], pp18, Theorem 2) Suppose that \((f,g),(F,G)\in\varepsilon_{n}(I)\). Then the inequality_ \[\frac{h(u)-h(v)}{h^{\prime}(v)}\ \frac{\bar{g}(u)}{\bar{g}(v)}\leq\frac{H(u)-H(v)}{H^{\prime}(v)}\ \frac{\bar{G}(u)}{\bar{G}(v)}\ \ (u,v\in I) \tag{5.3}\] _is sufficient for (5.1) to hold._ To make them usable for the composite class V scenario, we consider the basic \(n=2\) cases of both theorems. Then, by coincidence or by internal relation, the important parametric function \(h:=\bar{f}/\bar{g}\) above is exactly our PVDM \(h:=f^{\prime}/g^{\prime}\) in Theorem 4.4, or \(\gamma:=l^{\prime}/g^{\prime}\) in Theorem 4.5. This makes these 2 theorems very much oriented towards, and ready for, isomorphic means. Our comparison problems are: to find necessary conditions and sufficient conditions on functions \(f,g,h,G,H\) such that the inequality \[\overline{f(x)_{[a,b]}}\big{|}_{g,h}\leq\overline{f(x)_{[a,b]}}\big{|}_{G,H} \tag{5.4}\] holds, where \(f\) is continuous and invertible on \([a,b]\) (\(a\neq b\)). For valid comparison problems, all isomorphic means of a function in the various contexts of this section are assumed to exist with an applicable isomorphic frame. We have the following derived theorems: #### 5.1.1 Necessary conditions **Theorem 5.3**.: Let (i) \(f:[a,b]\to M\) be a strictly monotone, differentiable and bounded function; (ii) \(g,G\) be 2 times continuously differentiable bijections on \([a,b]\), on which \(g^{\prime}\neq 0,\ G^{\prime}\neq 0\); (iii) \(h,H\) be 2 times continuously differentiable bijections on \(M\), on which \(h^{\prime}\neq 0,\ H^{\prime}\neq 0\).
Then \[\begin{split}&\frac{\gamma^{\prime\prime}(x)}{\gamma^{\prime}(x)}+2\frac{g^{\prime\prime}(x)}{g^{\prime}(x)}\leq(\geq)\frac{\Gamma^{\prime\prime}(x)}{\Gamma^{\prime}(x)}+2\frac{G^{\prime\prime}(x)}{G^{\prime}(x)}\\ &\big{(}x\in[a,b],\ \ f\ \text{is increasing (decreasing)}\big{)}\end{split} \tag{5.5}\] is necessary for \(\overline{f(x)_{[a,b]}}\big{|}_{g,h}\leq\overline{f(x)_{[a,b]}}\big{|}_{G,H}\) (5.4) to hold, where \(\gamma:=h\circ f\), \(\Gamma:=H\circ f\). Proof.: The required \(f,g,h,G,H\) make \(\gamma,\ \Gamma\) both invertible, and give the following equivalences according to Theorem 4.5: \[\begin{split}& t=D_{lg}(a,b)=\overline{x_{[a,\ b]}}\big{|}_{g,\gamma}\ \ (l\in\int\gamma(x)g^{\prime}(x)\mathrm{d}x),\\ & T=D_{LG}(a,b)=\overline{x_{[a,\ b]}}\big{|}_{G,\ \Gamma}\ \ (L\in\int\Gamma(x)G^{\prime}(x)\mathrm{d}x),\end{split}\] (\(l,L\) are 3 times differentiable), and \(f(D_{lg}(a,b))=\overline{f(x)_{[a,\ b]}}\big{|}_{g,h}\), \(f(D_{LG}(a,b))=\overline{f(x)_{[a,\ b]}}\big{|}_{G,\ H}\). Subsequently \(l,g,L,G\), at least on \([a,b]\), meet the prerequisites of Losonczi's necessity Theorem 5.1, according to which: (i) in the case \(f\) is decreasing and \(\overline{f(x)_{[a,b]}}\big{|}_{g,h}\leq\overline{f(x)_{[a,b]}}\big{|}_{G,H}\), which turns out to mean \(D_{lg}(a,b)\geq D_{LG}(a,b)\), the "\(\geq\)" case of (5.5) must hold, where \(\gamma=l^{\prime}/g^{\prime}\) and \(\Gamma=L^{\prime}/G^{\prime}\); (ii) in the case \(f\) is increasing, analogously the "\(\leq\)" case of (5.5) must hold for (5.4). **Remark 5.1**.: According to Losonczi's Remark 1, the above theorem needs to assume that \(a,b\) are near enough; and if (5.5) is strict (\(<\) or \(>\)) then it is sufficient for (5.4) to hold. Although Losonczi's necessity theorem requires its generator functions \(f,g\) to be \((n+1)\) times differentiable, the \(g^{(n+1)}\) is a prematurely cancellable item in its derivation and does not affect the final condition's inequality; in our case only \(l,L\) being 3 times differentiable is required. In Section 5.1.3 there is an example in which we work out how near \(a\) and \(b\) must be to each other, necessarily and sufficiently, for (5.4). #### 5.1.2 Sufficient conditions **Theorem 5.4**.: Let (i) \(f:[a,b]\to M\) be a strictly monotone, differentiable and bounded function; (ii) \(g,G\) be 2 times continuously differentiable bijections on \([a,b]\), on which \(g^{\prime}\neq 0,\ G^{\prime}\neq 0\); (iii) \(h,H\) be 2 times continuously differentiable bijections on \(M\), on which \(h^{\prime}\neq 0,\ H^{\prime}\neq 0\). Then \[\begin{split}&\frac{\gamma(u)-\gamma(v)}{\gamma^{\prime}(v)}\ \frac{g^{\prime}(u)}{g^{\prime}(v)}\leq(\geq)\frac{\Gamma(u)-\Gamma(v)}{\Gamma^{\prime}(v)}\ \frac{G^{\prime}(u)}{G^{\prime}(v)}\\ &\big{(}u,v\in[a,b],\ \ f\ \text{is increasing (decreasing)}\big{)}\end{split} \tag{5.6}\] is sufficient for \(\overline{f(x)_{[a,b]}}\big{|}_{g,h}\leq\overline{f(x)_{[a,b]}}\big{|}_{G,H}\) (5.4) to hold, where \(\gamma:=h\circ f\), \(\Gamma:=H\circ f\). Proof.: With the same equivalences as in the previous proof and \(\gamma=l^{\prime}/g^{\prime}\), \(\Gamma=L^{\prime}/G^{\prime}\), the holding of (5.6) (the "\(\leq\)" case) is sufficient for \(D_{lg}(a,b)\leq D_{LG}(a,b)\) according to Losonczi's sufficiency Theorem 5.2. Therefore, with increasing \(f\), \(\overline{f(x)_{[a,b]}}\big{|}_{g,h}\leq\overline{f(x)_{[a,b]}}\big{|}_{G,H}\). The other case is analogous.
#### 5.1.3 Comparison of geometric mean and elastic mean Let \(y=f(x)\), defined on \([a,b]\ (b>a>0)\), be positive, 2 times continuously differentiable and monotone. Here we denote the set of all such \(f\) by \(CPM([a,b])\). As an example, we shall find specific necessary conditions and sufficient conditions for \[\overline{f(x)_{[a,b]}}\big{|}_{x,\ln y}\leq\overline{f(x)_{[a,b]}}\big{|}_{\ln x,y}\ \ \big{(}f\in\ CPM([a,b])\big{)}. \tag{5.7}\] The left side is the geometric mean (denoted by \(G\)), and the right side is the elastic mean (denoted by \(E\)). (5.7) is thereby denoted by \(G\leq E\). #### 5.1.3.1 Necessary conditions Assume first that \(f\) is increasing. With (5.5) it is easy to check that: \[\frac{(\ln f)^{\prime\prime}}{(\ln f)^{\prime}}\leq\frac{f^{\prime\prime}}{f^{\prime}}-\frac{2}{x} \tag{5.8}\] is necessary. It follows that \(xf^{\prime}/f\geq 2\) is necessary (note: \(xf^{\prime}/f\) is called the elasticity of \(f\)). If \(f\) is decreasing then \(xf^{\prime}/f\leq 2\) is necessary. Recall from the remark that if the inequalities are strict, then they are also sufficient for \(\overline{f(x)_{[a,b]}}\big{|}_{g,h}\leq\overline{f(x)_{[a,b]}}\big{|}_{G,H}\) provided \(a,b\) are near enough. Also, in the case \(f\) is decreasing, \(xf^{\prime}/f\leq 0<2\). Thus we have the following propositions: **Proposition 5.1**.: Let \(f\in CPM([a,b])\) and let \(a,b\) be near enough. If \(f\) is increasing then its elasticity \(xf^{\prime}/f>2\) is necessary and sufficient for (5.7). If \(f\) is decreasing then (5.7) holds. **Proposition 5.2**.: For \(f(x)=x^{p}\) on \([a,b]\) with \(a,b\) near enough: if \(p\) is positive, then \(p>2\) is necessary and sufficient for (5.7); if \(p<0\) then (5.7) holds. #### 5.1.3.2 Sufficient conditions **Lemma 5.5**.: \(x^{y}<\exp{(x-1)}\) if \(x>0,\ y>0\) are separated by 1. Proof.: It is easy to prove that \(\forall x\ (x\neq 1),\ \exp(x-1)>x\). (i) When \(y>1>x>0\), \(\exp(x-1)>x>x^{y}\). (ii) When \(x>1>y>0\), also \(\exp(x-1)>x>x^{y}\). **Theorem 5.6**.: Let \(f\in CPM([a,b])\). Then (5.7) holds if \[\frac{f(u)}{f(v)}-\frac{u}{v}\ln\big{(}\frac{f(u)}{f(v)}\big{)}-1\geq 0,\ \ (u,v\in[a,b]). \tag{5.9}\] Proof.: Assume first that \(f\) is increasing. With (5.6) it is easy to check that: \[\frac{(\ln f(u)-\ln f(v))f(v)}{f^{\prime}(v)}\leq\frac{f(u)-f(v)}{f^{\prime}(v)}\ \frac{v}{u},\ \ \big{(}u,v\in[a,b]\big{)}\] is sufficient. It follows that \((\ln f(u)-\ln f(v))f(v)\leq(f(u)-f(v))v/u\) is sufficient, and obviously it is the same for decreasing \(f\). Finally we get that the unified (5.9) is sufficient regardless of whether \(f\) is increasing or decreasing. **Corollary 5.7**.: Let \(f\in CPM([a,b])\) be decreasing (e.g. \(f(x)=x^{p}\ (p<0)\)); then (5.7) holds. Proof.: With (5.9), let \(w=f(u)/f(v)\), \(r=u/v\); then \(w-r\ln w-1\geq 0\ \Rightarrow\ w^{r}\leq\exp(w-1)\) is sufficient. Since \(f\) is strictly decreasing, \(w,r\) are positive and separated by 1, or both are 1; thus \(w^{r}\leq\exp(w-1)\). The sufficient conditions are always met for decreasing \(f\), no matter whether \(a,b\) are near to or far from each other. #### 5.1.3.3 Comprehensive comparison of \(E\) and \(G\) of \(x^{p}\) **Lemma 5.8**.: For the function \(S(r)=r^{p}-p\,r\ln r-1\ (r>0,p\neq 0)\), a fixed (trivial) root of \(S(r)=0\) is \(r=1\), and 1). If \(p<0\), then \(S(r)\geq 0\) and 1 is the sole root; 2). If \(p>2\), there is a sole second root \(\alpha<1\) such that (a) \(S(\alpha)=0\), (b) \(S(r)<0\) (\(r\in(0,\alpha)\)), (c) \(S(r)\geq 0\) (\(r\in[\alpha,+\infty)\)); 3).
If \(1<p<2\), there is a sole second root \(\beta>1\) such that (a) \(S(\beta)=0\), (b) \(S(r)\leq 0\) (\(r\in(0,\beta]\)), (c) \(S(r)>0\) (\(r\in(\beta,+\infty)\)); 4). If \(0<p\leq 1\), then \(S(r)\leq 0\) and 1 is the sole root; 5). If \(p=2\), then (a) \(S(r)<0\) (\(r\in(0,1)\)), (b) \(S(r)>0\) (\(r\in(1,+\infty)\)). This is obtained by analysis of \(S\) via \(S^{\prime}\), the limit of \(r^{p}/(p\,r\ln r+1)\), etc. Then we have the following theorem. **Theorem 5.9**.: Let \(f(x)=x^{p}\ (x>0)\) be defined on \([a,b]\) (\(b>a>0\)), and let \(\alpha<1,\beta>1\) be the roots of \(S(r)\) corresponding to cases 2) and 3) respectively in Lemma 5.8. Then the following are sufficient conditions for \[\overline{x^{p}{}_{[a,b]}}\big{|}_{x,\ln y}\leq\overline{x^{p}{}_{[a,b]}}\big{|}_{\ln x,y}\ \ \ (G\leq E) \tag{5.10}\] or \[\overline{x^{p}{}_{[a,b]}}\big{|}_{\ln x,y}\leq\overline{x^{p}{}_{[a,b]}}\big{|}_{x,\ln y}\ \ \ (E\leq G)\ : \tag{5.11}\] _1)._ If \(p<0\), then (5.10) holds; _2)._ If \(p>2\) and \(a<b\leq a/\alpha\) then (5.10) holds; _3)._ If \(1<p<2\) and \(a<b\leq a\beta\) then (5.11) holds; _4)._ If \(0<p\leq 1\), then (5.11) holds. Proof.: Putting \(f\) into (5.9) and letting \(r=u/v\), we get that \(G\leq E\) if the inequality \[S(r):=r^{p}-pr\ln r-1\geq 0,\ \ (r\in[a/b,\ b/a]) \tag{5.12}\] holds. Another, symmetrical, theorem should be true, by which \(E\leq G\) if \[S(r):=r^{p}-pr\ln r-1\leq 0,\ \ (r\in[a/b,\ b/a]) \tag{5.13}\] holds. 1). In the case \(p<0\), \(f\) is decreasing, so \(G\leq E\) is true by Corollary 5.7. It is also verified by Lemma 5.8 case 1) that \(r^{p}-pr\ln r-1\geq 0\) if \(p<0\). 2). In the case \(p>2\), with \(\alpha<1\) being the root of \(S(r)=0\) from Lemma 5.8 case 2), if \(a<b\leq a/\alpha\) then \(r\in[\alpha,1/\alpha]\), on which \(S(r)\geq 0\). This is sufficient for \(G\leq E\). 3). In the case \(1<p<2\), with \(\beta>1\) being the root of \(S(r)=0\) from Lemma 5.8 case 3), if \(a<b\leq a\beta\) then \(r\in[1/\beta,\beta]\), on which \(S(r)\leq 0\). This is sufficient for \(E\leq G\). 4). In the case \(0<p\leq 1\), according to Lemma 5.8 case 4), \(S(r)\leq 0\). Thus also \(E\leq G\). **Remark 5.2**.: All of the above are sufficient conditions, but not necessary ones. The above theorem is not yet a total solution to the comparison of these 2 means. E.g. in case 2), \(p>2\), we do not have a sufficient condition for the reverse \(E\leq G\), since the interval \([0,\alpha]\) does not contain any \(r=u/v>1\), because \([0,\alpha]\) never spans over the point 1. Similar situations occur with case 3) and with the case \(p=2\). #### 5.1.3.4 Direct comparison of \(E\) and \(G\) of \(x^{p}\) by relative difference **Remark 5.3**.: By derivation of the formulae of the geometric mean and the elastic mean: \[G=\overline{x^{p}{}_{[a,b]}}\big{|}_{x,\ln y}=\exp\bigl{(}\bigl{(}\frac{b\ln b-a\ln a}{b-a}-1\bigr{)}p\bigr{)}=(I(a,b))^{p}, \tag{5.14}\] \[E=\overline{x^{p}{}_{[a,b]}}\big{|}_{\ln x,y}=\frac{b^{p}-a^{p}}{p(\ln b-\ln a)}=(L_{p}(a,b))^{p}, \tag{5.15}\] where \(I(a,b)\) is the identric mean of \(a,b\) and \(L_{p}(a,b)\) is the \(p\)-order logarithmic mean of \(a,b\) [11]. Especially for \(p=2\), \(G=(I(a,b))^{2}\) and \(E=A(a,b)L(a,b)\), where \(A(a,b)\) is the arithmetic mean and \(L(a,b)\) is the logarithmic mean. **Definition 5.10**.: Let \(y=x^{p}\) (\(p\neq 0\)) be defined on \([a,b]\) (\(b>a>0\)). The number \[\sigma_{GE}=(\overline{x^{p}{}_{[a,b]}}\big{|}_{x,\ln y}-\overline{x^{p}{}_{[a,b]}}\big{|}_{\ln x,y})/\overline{x^{p}{}_{[a,b]}}\big{|}_{\ln x,y}=G/E-1 \tag{5.16}\] is called the relative difference of the geometric mean to the elastic mean of \(x^{p}\).
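The relative difference (5.16) is easy to evaluate numerically from the closed forms (5.14)-(5.15); the sketch below is illustrative (names and test values are arbitrary choices, not from the text):

```python
# Numerical sketch (illustrative): the geometric mean G and elastic mean E of x^p
# via the closed forms (5.14)-(5.15), and the relative difference sigma_GE of (5.16).
import numpy as np

def G(p, a, b):
    """(5.14): G = (I(a,b))^p, with I the identric mean."""
    return np.exp(((b*np.log(b) - a*np.log(a))/(b - a) - 1.0) * p)

def E(p, a, b):
    """(5.15): E = (L_p(a,b))^p, with L_p the p-order logarithmic mean."""
    return (b**p - a**p) / (p * (np.log(b) - np.log(a)))

def sigma_GE(p, a, b):
    """(5.16): relative difference of G to E."""
    return G(p, a, b) / E(p, a, b) - 1.0

a, b = 1.0, 4.0
print(sigma_GE(-1, a, b) < 0)  # p < 0: G <= E, as in Theorem 5.9 case 1)
print(sigma_GE(2, a, b) > 0)   # p = 2: here G > E on this interval
```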
**Lemma 5.11**.: With the above definition, if the ratio \(r=b/a\) or \(t=a/b\) is fixed, then 1). \(\sigma_{GE}\) is a function of \((r,p)\) or \((t,p)\): \[\sigma_{GE}(r,p)=\frac{pr^{\frac{pr}{r-1}}\ln r}{e^{p}(r^{p}-1)}-1,\ \ \ \sigma_{GE}(t,p)=\frac{pt^{\frac{pt}{t-1}}\ln t}{e^{p}(t^{p}-1)}-1. \tag{5.17}\] 2). \(\sigma_{GE}(r,p)=\sigma_{GE}(t,p)\). 3). \(\lim_{r\to 1}\sigma_{GE}(r,p)=0\). All of the above are easy to check and the proof is omitted. **Theorem 5.12**.: Let \(y=x^{p}\) (\(p\neq 0\)) be defined on \([a,b]\) (\(b>a>0\)) and \(r=b/a\). The sufficient and necessary condition for \(G<E\) (\(E<G\)) is \(\sigma_{GE}(r,p)=\sigma_{GE}(1/r,p)<0\) (\(>0\)). Proof.: By the definition and the lemma, \(\sigma_{GE}(r,p)=\sigma_{GE}(1/r,p)=(G-E)/E\), therefore \(G<E\Leftrightarrow\sigma_{GE}(r,p)<0\) and \(E<G\Leftrightarrow\sigma_{GE}(r,p)>0\), since \(E>0\). **Corollary 5.13**.: \(\overline{x^{2}{}_{[a,b]}}\big{|}_{x,\ln y}>\overline{x^{2}{}_{[a,b]}}\big{|}_{\ln x,y}\) (\(G>E\)) on \([a,b]\) (\(b>a>0\)). Proof.: By checking up to 6 orders of derivatives of \[\sigma_{GE}(r,2)=\frac{2r^{\frac{2r}{r-1}}\ln r}{e^{2}(r^{2}-1)}-1 \tag{5.18}\] with respect to \(r\), or the derivatives of the factors of those derivatives, until the expression is no longer transcendental, we are able to claim that \(\sigma_{GE}(r,2)\) is decreasing when \(r<1\) and increasing when \(r>1\), while \(\lim_{r\to 1}\sigma_{GE}(r,2)=0\). Thus \(\sigma_{GE}(r,2)>0\), and hence \(G>E\). With the theorem it is manually verifiable that the conditions of Theorem 5.9 are sufficient but not sharp. For example, if \(p=3\) then \(\alpha=0.2142142...\), so if \(a<b\leq a/0.2142142...\) then sufficiently \(\overline{x^{3}{}_{[a,b]}}\big{|}_{x,\ln y}\leq\overline{x^{3}{}_{[a,b]}}\big{|}_{\ln x,y}\); however even \(a<b\leq 666a\) is still safe for the inequality, and with Theorem 5.12, the threshold near 666 at which the inequality reverses can be manually verified from the graph of \(\sigma_{GE}(r,3)\). This method is numerically efficient, as the bivariate inequality turns into a univariate one (\(p\) is fixed). The next subsections all concern comparison methods for various scenarios, derived with the help of monotonicity and convexity, and all are sufficient conditions. ### Comparison of 2 isomorphic means class I of a function The following theorem is derived from an integral extension form of Jensen's inequality. (Its proof is omitted.) **Theorem 5.14**.: Given \(f\colon D\to M\) defined on an interval \(D\), \(M\subseteq\) an interval \(I\), and 2 PVDMs \(g,h\), 1). If \(g\) is increasing on \(I\), and \(g(h^{-1})\) is convex on \(h(I)\), then \(M_{f}|_{g}\geq M_{f}|_{h}\); 2). If \(g\) is increasing on \(I\), and \(g(h^{-1})\) is concave on \(h(I)\), then \(M_{f}|_{g}\leq M_{f}|_{h}\); 3). If \(g\) is decreasing on \(I\), and \(g(h^{-1})\) is convex on \(h(I)\), then \(M_{f}|_{g}\leq M_{f}|_{h}\); 4). If \(g\) is decreasing on \(I\), and \(g(h^{-1})\) is concave on \(h(I)\), then \(M_{f}|_{g}\geq M_{f}|_{h}\). And with \(g\), \(h\) differentiable (\(h^{\prime}\neq 0\)), 5). If \(|g^{\prime}/h^{\prime}|\) is increasing on \(I\), then \(M_{f}|_{g}\geq M_{f}|_{h}\); 6). If \(|g^{\prime}/h^{\prime}|\) is decreasing on \(I\), then \(M_{f}|_{g}\leq M_{f}|_{h}\). Cases 1) through 4) resemble Theorem 2.6 (for numbers), and also Theorem 3 of [8], which concerns the case \(g=G\) there. Cases 5) and 6) resemble Theorem 2.7 (for numbers). However, all cases are applicable to an ordinary \(f\) on \(D\), which may be discontinuous or non-monotone.
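A numerical illustration of case 5) for a non-monotone \(f\) follows (an illustrative sketch; it assumes the class I mean \(M_{f}|_{g}\) is the generalized \(g\)-mean of \(f\) over the interval, as in the geometric-mean special case earlier in Section 4):

```python
# Numerical sketch (illustrative): Theorem 5.14 case 5) with a non-monotone f,
# assuming M_f|_g = g^{-1}( (1/(b-a)) * int_a^b g(f(x)) dx ).
import numpy as np
from scipy.integrate import quad

def class_I_mean(f, g, g_inv, a, b):
    integral, _ = quad(lambda x: g(f(x)), a, b)
    return g_inv(integral / (b - a))

a, b = 0.0, np.pi
f = lambda x: 2.0 + np.sin(x)          # positive, non-monotone

m_quad = class_I_mean(f, lambda y: y**2, np.sqrt, a, b)   # g(y) = y^2
m_arit = class_I_mean(f, lambda y: y, lambda y: y, a, b)  # h(y) = y
m_geom = class_I_mean(f, np.log, np.exp, a, b)            # h(y) = ln y

# |g'/h'| = 2y is increasing, so M_f|_{y^2} >= M_f|_y; likewise M_f|_y >= M_f|_{ln y}.
print(m_quad >= m_arit >= m_geom)   # expected: True
```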
All 6 cases are sufficient but not necessary conditions. E.g. if, with case 5), \(D\) is extended just a little, so that \(|g^{\prime}/h^{\prime}|\) happens not to be monotone on the extended \(M\) (and \(I\)), it is still very possible that \(M_{f}|_{g}\geq M_{f}|_{h}\) holds (on the new \(D\)), in which case the original \(D\) may carry the major weight. ### Comparison of 2 class II of a monotone function For the comparison of class II, the monotonicity of the function \(f\) is required in this section. **Theorem 5.15.** Given a monotone \(f\) defined on \([a,b]\) and 2 IVDMs \(g,h\), in the cases \(f\) increases, 1). If \(g\) increases on \([a,b]\), and \(g(h^{-1})\) is convex on \(h([a,b])\), then \(M_{f}|_{g}^{II}\geq M_{f}|_{h}^{II}\); 2). If \(g\) increases on \([a,b]\), and \(g(h^{-1})\) is concave on \(h([a,b])\), then \(M_{f}|_{g}^{II}\leq M_{f}|_{h}^{II}\); 3). If \(g\) decreases on \([a,b]\), and \(g(h^{-1})\) is convex on \(h([a,b])\), then \(M_{f}|_{g}^{II}\leq M_{f}|_{h}^{II}\); 4). If \(g\) decreases on \([a,b]\), and \(g(h^{-1})\) is concave on \(h([a,b])\), then \(M_{f}|_{g}^{II}\geq M_{f}|_{h}^{II}\). In the cases \(f\) decreases, all these 4 inequalities reverse. And with \(g,h\) differentiable (\(h^{\prime}\neq 0\)), 5). If \(|g^{\prime}/h^{\prime}|\) and \(f\) both increase or both decrease, then \(M_{f}|_{g}^{II}\geq M_{f}|_{h}^{II}\); 6). If one of \(|g^{\prime}/h^{\prime}|\) and \(f\) increases and the other decreases, then \(M_{f}|_{g}^{II}\leq M_{f}|_{h}^{II}\). If \(f\) is strictly monotone, all the inequalities in these 6 cases are strict (\(>\) or \(<\)). _Proof._ Let \[k=\frac{g(b)-g(a)}{h(b)-h(a)}\neq 0,\ v=h_{2}(x)=kh(x)-kh(a)+g(a),\ u=h(x)\ \ (x\in[a,b]).\] Such an \(h_{2}\in\mathbb{V}h\) is said to be vertically aligned to \(g(x)\) on \([a,b]\), since (i). \(h_{2}(a)=g(a)\), \(h_{2}(b)=g(b)\); (ii). the interval \(h_{2}([a,b])=g([a,b])\); (iii). \(h_{2},\ g\) are of the same monotonicity; (iv). Based on the above, \(h_{2}^{-1}\) and \(g^{-1}\) both end at \((g(a),a),(g(b),b)\) \(\Longrightarrow g(h_{2}^{-1}(v))\) ends at the 2 points \((g(a),g(a)),(g(b),g(b))\), which are on the line \(x=v\); (v). Besides, according to Lemma 1.21, \(h_{2}\in\mathbb{V}h\Longrightarrow h_{2}^{-1}\in\mathbb{H}(h^{-1})\), therefore \(g(h^{-1}(u))\) and \(g(h_{2}^{-1}(v))\) are also H-scaleshifts, and have the same convexity according to Lemma 1.20. Next we distinguish 4 cases to compare \(f\circ h_{2}^{-1}(v)\) and \(f\circ g^{-1}(v)\) (\(\forall v\in g([a,b])\)): 1). \(g(h^{-1}(u))\) is convex \(\Longrightarrow g(h_{2}^{-1}(v))\) is convex on \(g([a,b])\Longrightarrow g(h_{2}^{-1}(v))\leq v\); meanwhile \(g\) is increasing and \(f\) is increasing \(\Longrightarrow f(h_{2}^{-1}(v))\leq f(g^{-1}(v))\). According to Theorem 3.6 (monotone property) \[\begin{array}{c}\overline{f\circ h_{2}^{-1}\ _{[g(a),g(b)]}}|_{\mathbb{V}_{\mathrm{v}},\mathbb{V}_{g}}\leq\overline{f\circ g^{-1}\ _{[g(a),g(b)]}}|_{\mathbb{V}_{\mathrm{v}},\mathbb{V}_{g}}\\ \Longrightarrow M_{f}|_{h_{2}}^{II}\leq M_{f}|_{g}^{II}\ \stackrel{{ h_{2}\in\mathbb{V}h}}{{\Longrightarrow}}\ M_{f}|_{h}^{II}\leq M_{f}|_{g}^{II}.\end{array} \tag{5.19}\] 2). \(g(h^{-1}(u))\) is concave \(\Longrightarrow g(h_{2}^{-1}(v))\) is concave on \(g([a,b])\Longrightarrow g(h_{2}^{-1}(v))\geq v\); meanwhile \(g\) is increasing and \(f\) is increasing \(\Longrightarrow f(h_{2}^{-1}(v))\geq f(g^{-1}(v))\). Similarly this leads to \(M_{f}|_{h}^{II}\geq M_{f}|_{g}^{II}\). 3). 4). ... (analogous and omitted.)
So far we get 4 inequalities, as cases 1) through 4). In the cases where \(f\) decreases while the other conditions remain unchanged, it is obvious that all these 4 inequalities reverse. As for cases 5) and 6), \(h_{2}\) is also differentiable. We examine the monotonicity of the derivative: \[\frac{\mathrm{d}g(h_{2}^{-1}(v))}{\mathrm{d}v}=\frac{\mathrm{d}g(h_{2}^{-1}(v))}{\mathrm{d}h_{2}^{-1}(v)}\cdot\frac{\mathrm{d}h_{2}^{-1}(v)}{\mathrm{d}v}=g^{\prime}\frac{1}{h_{2}^{\prime}}=\frac{g^{\prime}}{kh^{\prime}}|_{x}=(\frac{g^{\prime}}{kh^{\prime}}\circ h_{2}^{-1})|_{v} \tag{5.20}\] in the following 4 cases, and check their equivalences when combined with the monotonicities of \(h_{2},g\) (below \(\nearrow\): increasing, \(\searrow\): decreasing): 1). When \((\frac{g^{\prime}}{kh^{\prime}}\circ h_{2}^{-1})|_{v}\nearrow\), \(g(h_{2}^{-1}(v))\leq v\); when \(\frac{g^{\prime}}{kh^{\prime}}\nearrow\), \(h_{2}^{-1}(v)\leq g^{-1}(v)\); 2). When \((\frac{g^{\prime}}{kh^{\prime}}\circ h_{2}^{-1})|_{v}\nearrow\), \(g(h_{2}^{-1}(v))\leq v\); when \(\frac{g^{\prime}}{kh^{\prime}}\searrow\), \(h_{2}^{-1}(v)\geq g^{-1}(v)\); 3). When \((\frac{g^{\prime}}{kh^{\prime}}\circ h_{2}^{-1})|_{v}\searrow\), \(g(h_{2}^{-1}(v))\geq v\); when \(\frac{g^{\prime}}{kh^{\prime}}\searrow\), \(h_{2}^{-1}(v)\geq g^{-1}(v)\); 4). When \((\frac{g^{\prime}}{kh^{\prime}}\circ h_{2}^{-1})|_{v}\searrow\), \(g(h_{2}^{-1}(v))\geq v\); when \(\frac{g^{\prime}}{kh^{\prime}}\nearrow\), \(h_{2}^{-1}(v)\leq g^{-1}(v)\). From there, the 4 cases can be merged into 2 cases without conflict. And knowing that the monotonicity of \(g^{\prime}/(kh^{\prime})\) is the same as that of \(|g^{\prime}/h^{\prime}|\), these summarize as: a). When \(|g^{\prime}/h^{\prime}|\) increases, \(h_{2}^{-1}(v)\leq g^{-1}(v)\); b). When \(|g^{\prime}/h^{\prime}|\) decreases, \(h_{2}^{-1}(v)\geq g^{-1}(v)\). By further combining \(f\) and applying Theorem 3.6, cases 5) and 6) are proved. Finally, if \(f\) is strictly monotone then \(f\circ h_{2}^{-1},f\circ g^{-1}\) are strictly monotone, and based on the convexity of \(g(h_{2}^{-1}(v))\), for every case there is at least 1 sub-interval of \(g([a,b])\) on which \(f(h_{2}^{-1}(v))>(\text{or}<)\ f(g^{-1}(v))\), while on the rest \(f(h_{2}^{-1}(v))=f(g^{-1}(v))\) (otherwise the contradiction is that there is no convexity of \(g(h_{2}^{-1}(v))\) at all). By applying the strict inequality of Theorem 3.6 or its corollary, all the inequalities are strict. This completes the proof of Theorem 5.15. **Remark 5.4**.: For easier practice, let \(m=|k|/k=sgn(k)\); then \(|g^{\prime}/h^{\prime}|=mg^{\prime}/h^{\prime}\). There is a trivial out-of-scope case when \(mg^{\prime}/h^{\prime}\) is a constant \(C>0\). Then by antiderivatives we have \(mg=Ch+C^{\prime}\), hence the two isomorphic means are equal. Also noticeable is that the monotonicity of \(|g^{\prime}/h^{\prime}|\) is invariant under V-scaleshifts of \(g\), \(h\). There are some examples of comparison of class II: 1. \(f(x)=\tan x,\ g(x)=\ln x,\ h(x)=x,\ x\in[0.1,1.5]\). \(|g^{\prime}/h^{\prime}|=\ln^{\prime}x/x^{\prime}=1/x\), which is decreasing. \(\tan x\) is increasing. Thus according to Theorem 5.15 case 6), \(\overline{\tan x_{[0.1,1.5]}}\big{|}_{\ln x}^{II}\leq\overline{\tan x_{[0.1,1.5]}}\big{|}_{x}^{II}=\overline{\tan x_{[0.1,1.5]}}\); recall that \(\overline{\tan x_{(0,\pi/2)}}\big{|}_{\ln x}^{II}=2/\pi\), while \(\overline{\tan x_{(0,\ \pi/2)}}\) does not exist. 2. \(f(x)=x,\ g(x)=x^{2},\ h(x)=x,\ x\in[a,b],\ b>a\geq 0\). \(|g^{\prime}/h^{\prime}|=(x^{2})^{\prime}/x^{\prime}=2x\), which is increasing. \(f\) is increasing.
Thus according to Theorem 5.15 case 5), \(\overline{x_{[a,b]}}\big{|}_{x^{2}}^{II}\geq\overline{x_{[a,b]}}\big{|}_{x}^{II}=\overline{x_{[a,b]}}\). A simple calculation leads to: for positive \(a,\ b\), \[\frac{2(a^{2}+ab+b^{2})}{3(a+b)}\geq\frac{a+b}{2}.\] (5.21) 3. \(f(x)=x,\ g(x)=\sin x,\ h(x)=\cos x,\ x\in[a,b],\ 0\leq a<b\leq\pi/2\). \(|g^{\prime}/h^{\prime}|=-\sin^{\prime}x/\cos^{\prime}x=\cot x\), which is decreasing. \(f\) is increasing. Thus according to Theorem 5.15 case 6), \(\overline{x_{[a,b]}}_{\sin x}^{II}\leq\overline{x_{[a,b]}}_{\cos x}^{II}\). After derivations, including comparisons in which \(\overline{x_{[a,b]}}\) also participates, we get: for \(0\leq a\leq\pi/2\), \(0\leq b\leq\pi/2\) (\(b\neq a\)), \[\frac{b\sin b-a\sin a+\cos b-\cos a}{\sin b-\sin a}<\frac{a+b}{2}<\ \frac{a\cos a-b\cos b+\sin b-\sin a}{\cos a-\cos b}.\] (5.22) ### Comparison of 2 class IV of a monotone \(\boldsymbol{f}\) - a partial solution **Theorem 5.16**.: _Let \(g,G\) be 2 differentiable IVDMs and \(h,H\) be 2 differentiable PVDMs of a monotone \(f\colon[a,b]\to M\). \(G^{\prime}\neq 0\) on \([a,b]\), \(H^{\prime}\neq 0\) on \(I=[\min\{M\},\max\{M\}]\)._ 1. _If both_ \(|g^{\prime}/G^{\prime}|\) _and_ \(f\) _are increasing or both are decreasing on_ \([a,b]\)_, and_ \(|h^{\prime}/H^{\prime}|\) _is increasing on_ \(I\)_, then_ \(M_{f}|_{g,h}\geq M_{f}|_{G,H}\)_._ 2. _If one of_ \(|g^{\prime}/G^{\prime}|\) _and_ \(f\) _is increasing and the other decreasing on_ \([a,b]\)_, and_ \(|h^{\prime}/H^{\prime}|\) _is decreasing on_ \(I\)_, then_ \(M_{f}|_{g,h}\leq M_{f}|_{G,H}\)_._ _If \(f\) is strictly monotone, these 2 inequalities are strict (\(>\) or \(<\))._ Proof.: With the aid of the vertical alignment of a V-scaleshift \(G_{2}\) of \(G\) to \(g\) on \([a,b]\), the same as in the proof of the previous theorem, the proof can be done mainly via the cooperation of Theorem 3.6 and Theorem 5.14, since \(M_{f}|_{g,h}\) and \(M_{f}|_{G_{2},h}=M_{f}|_{G,h}\) can be treated as 2 isomorphic means class I on the same \(g([a,b])\). For case 1), for example, it is therefore easy to prove \(M_{f}|_{g,h}\geq M_{f}|_{G,h}\geq M_{f}|_{G,H}\). The detailed proof would initially list 4 cases, of which only these 2 have decidable comparison results. Hence it is only a partial solution, as there are 2 other cases not able to be handled by the theorem. The methods of proof in the next 5 subsections are similar and most proofs are omitted. ### Comparison of 2 class V - a partial solution As a special case of the previous problem, this is to compare \(M_{x}|_{g,h},\ M_{x}|_{G,H}\): since \(f(x)=x\) is strictly increasing, the inequalities in the corollary below are strict. **Corollary 5.17**.: _Let \(g,G\) be 2 differentiable IVDMs and \(h,H\) be 2 differentiable PVDMs of \(f(x)=x\) (\(x\in[a,b]\)). \(G^{\prime}\neq 0\), \(H^{\prime}\neq 0\) on \([a,b]\)._ 1). _If both_ \(|g^{\prime}/G^{\prime}|\) _and_ \(|h^{\prime}/H^{\prime}|\) _are increasing, then_ \(M_{x}|_{g,h}>M_{x}|_{G,H}\)_._ 2). _If both_ \(|g^{\prime}/G^{\prime}|\) _and_ \(|h^{\prime}/H^{\prime}|\) _are decreasing, then_ \(M_{x}|_{g,h}<M_{x}|_{G,H}\)_._ ### Comparison of 2 class IV with only different IVDMs \(\boldsymbol{(}g\neq G,h=H)\) This is to compare \(M_{f}|_{g,h},\ M_{f}|_{G,h}\). As when comparing isomorphic means class II, the monotonicity of \(f\) is required in the theorem below. **Theorem 5.18**.: _Let \(g,G\) be 2 differentiable IVDMs and \(h\) be a PVDM of a monotone \(f\) defined on \([a,b]\). \(G^{\prime}\neq 0\) on \([a,b]\)._ 1).
_If both_ \(|g^{\prime}/G^{\prime}|\) _and_ \(f\) _increase or both decrease, then_ \(M_{f}|_{g,h}\geq M_{f}|_{G,h}\)_;_ 2). _If one of_ \(|g^{\prime}/G^{\prime}|\) _and_ \(f\) _increases and the other decreases, then_ \(M_{f}|_{g,h}\leq M_{f}|_{G,h}\)_._ _If \(f\) _is strictly monotone, these 2 inequalities are strict (\(>\) or \(<\))._ ### Comparison of 2 class IV with only different PVDMs \((g=G,h\neq H)\) This is to compare \(M_{f}|_{g,h},\ M_{f}|_{g,H}\), as if comparing 2 isomorphic means class I. The monotonicity of \(f\) is NOT required in the theorem below. **Theorem 5.19**.: Let \(f\colon D\to M\) be defined on an interval \(D\), \(M\subseteq\) an interval \(I\). Let \(g\) be an IVDM and \(h,H\) be 2 differentiable PVDMs of \(f\). \(H^{\prime}\neq 0\) on \(I\). 1). If \(|h^{\prime}/H^{\prime}|\) is increasing on \(I\), then \(M_{f}|_{g,h}\geq M_{f}|_{g,H}\). 2). If \(|h^{\prime}/H^{\prime}|\) is decreasing on \(I\), then \(M_{f}|_{g,h}\leq M_{f}|_{g,H}\). ### Comparison of 2 class III of an increasing \(f\) **Theorem 5.20**.: Let 2 differentiable mappings \(g,h\) be both IVDMs and PVDMs of an increasing \(f\colon[a,b]\to M\). \(h^{\prime}\neq 0\) on \([a,b]\) and \(I=[\min\{M\},\max\{M\}]\). 1). If \(|g^{\prime}/h^{\prime}|\) is increasing on \([a,b]\) and \(I\), then \(M_{f}|_{g,g}\geq M_{f}|_{h,h}\). 2). If \(|g^{\prime}/h^{\prime}|\) is decreasing on \([a,b]\) and \(I\), then \(M_{f}|_{g,g}\leq M_{f}|_{h,h}\). If \(f\) is strictly monotone, these 2 inequalities are strict (\(>\) or \(<\)). Proof.: Based on the proof of Theorem 5.16, 4 cases could be listed initially, with \(f\) possibly being either increasing or decreasing. The 2 cases with decreasing \(f\) have undecidable comparison results; the remaining 2 cases make the theorem true. The simplest application of the above is to compare 2 isomorphic means class III of \(f(x)=x\) on \([a,b]\) generated respectively by \(g(x)=x^{p}\ (p\neq 0)\) and by \(h(x)=x^{q}\ (q\neq 0)\). If \(p>q\) then \(|g^{\prime}/h^{\prime}|=|p/q|x^{(p-q)}\) is increasing, therefore \(M_{f}|_{g,g}>M_{f}|_{h,h}\Longrightarrow\left(\frac{a^{p}+b^{p}}{2}\right)^{(1/p)}>\left(\frac{a^{q}+b^{q}}{2}\right)^{(1/q)}\) for \(a\neq b,p>q\). ### Comparison of 2 class IV of a decreasing \(f\) with exchanged DMs **Theorem 5.21**.: Let 2 differentiable mappings \(g,h\) be both IVDMs and PVDMs of a decreasing \(f\colon[a,b]\to M\). \(h^{\prime}\neq 0\) on \([a,b]\) and \(I=[\min\{M\},\max\{M\}]\). 1). If \(|g^{\prime}/h^{\prime}|\) is increasing on \([a,b]\) and \(I\), then \(M_{f}|_{g,h}\leq M_{f}|_{h,g}\). 2). If \(|g^{\prime}/h^{\prime}|\) is decreasing on \([a,b]\) and \(I\), then \(M_{f}|_{g,h}\geq M_{f}|_{h,g}\). If \(f\) is strictly monotone, these 2 inequalities are strict (\(>\) or \(<\)). Proof.: Based on the proof of Theorem 5.16, 4 cases could be listed initially, with \(f\) possibly being either increasing or decreasing. The 2 cases with increasing \(f\) have undecidable comparison results; the remaining 2 cases make the theorem true. Below are 3 examples: 1. By choosing \(g(x)=\cos x\), \(h(x)=\sin x\), \(f(x)=\pi/2-x\), \([a,b]\subseteq[0,\pi/2]\), and applying this method, we finally arrive at a bivariate inequality, to both sides of which further applying the inverse function of \(f\) yields \[\arccos(\frac{\cos a+\cos b}{2})>\arcsin(\frac{\sin a+\sin b}{2}). \tag{5.23}\] 2.
By choosing \(g(x)=x^{p}\), \(p\neq 0\), \(h(x)=x^{q}\), \(q\neq 0\), \(f(x)=1/x\), \([a,b]\subseteq(0,+\infty)\) and applying the theorem, we get an inequality, to both sides of which further applying \(f^{-1}\) yields \[\left(\frac{(p-q)(b^{p}-a^{p})}{p(b^{p-q}-a^{p-q})}\right)^{1/q}\!\!<\left(\frac{(q-p)(b^{q}-a^{q})}{q(b^{q-p}-a^{q-p})}\right)^{1/p}\ \ \text{for}\ p<q.\] (5.24) The above reverses for \(p>q\). 3. By choosing \(g(x)=\ln x\), \(h(x)=x\), and a decreasing \(f(x)>0\) on \([a,b]\subseteq(0,+\infty)\), we get a result similar to Corollary 5.7, i.e. for a decreasing positive function (which need not be continuous), \(G\leq E\). ## 6 Conclusion & vision As a specific topic related to functions bonded on isomorphic frames, isomorphic means are uncovered as a huge family of mean values. Various types of means of numbers and of a function can be treated as special cases, instances or derivations of isomorphic means. As such, it intersects with some existing concepts of means, e.g. the geometric mean of a function, the Stolarsky means, and the Cauchy mean values and their derived forms. The isomorphic means can be used for the derivation of inequalities, especially as the comparison problems of their various sub-classes can be solved in a systematic way. The root concept, the isomorphic frame, and its basic derivations, the isomorphic number and the DVI function, can be deemed the fundamentals of a type of "mathematics related to isomorphic frames", which covers isomorphic means. The "dual-variable-isomorphic convex function" introduced in the reference article [7] (which is also based on the DVI function, though it never refined the isomorphic frames) can also be deemed an example of such mathematics, and is closely related to the isomorphic means. For extra information, the "geometrically convex function" is also a special case of it, as discussed in [7]. Therefore the vision of this article is to expand the scope of MA in this similar way, by tying more MA concepts to isomorphic frames and studying their extended properties and behaviors, e.g. the "dual-variable-isomorphic convexity". To facilitate this by precisely and graphically representing the isomorphic frames and their embedded and bonded objects, the so-called "\(n\)-dimensional isomorphic coordinate system" could have been introduced as another root concept, which generally represents a type of uneven space rendered by the isomorphic frame. The devising of such a coordinate system is supported by Lemma 1.1, which validates a bijection that maps the space onto a part of a Cartesian coordinate system. However, this is abridged from this article for the time being. On such coordinate systems, the geometrical meanings of isomorphic means and of the "dual-variable-isomorphic convex function" can be demonstrated and correlated. One can also see that the geometrical meaning of the Cauchy mean value in the coordinate system is different from that of isomorphic means. On these systems, more isomorphic-frame-related mathematics can be introduced: e.g. the so-called densities on the coordinate systems, the graphs of functions, the convexity of sets and functions, the slopes, lengths and curvatures, and even the so-called local densities of the graphs of functions on the systems can be discussed. The author is looking forward to presenting more theories of extension of this genre, which are all expected to be established on the basis of \(n\)-dimensional isomorphic frames.
Meaningfully for the current scope, let us end by giving a preliminary definition of the isomorphic mean of a function of \((n-1)\) variables \((n\geq 2)\): **Definition 6.1**.: Let intervals \(X_{1},...,X_{n},U_{1},...,U_{n}\subseteq\mathbb{R}\), and \(g_{i}\colon X_{i}\to U_{i}\ (i=1,...,n)\) be \(n\) continuous and monotone bijections. A function \(f:D\to M\) of \((n-1)\) variables is bounded, and \(f\wedge\mathscr{I}_{m}\{g_{1},...,g_{n}\}\). \(D\) and \(E=\mathscr{I}_{m}\{g_{1},...,g_{n-1}\}(D)\) are each connected sets, measurable with an \((n-1)\)-dimensional hypervolume. If there exists \(M_{\varphi}\in U_{n}\), being the mean value of \(\varphi\colon=g_{n}\circ f(g_{1}^{-1},...,g_{n-1}{}^{-1})\colon E\to U_{n}\) on \(E\), then \(g_{n}^{-1}(M_{\varphi})\in X_{n}\) is called the (all-variable-) isomorphic mean of \(f\) on \(D\) generated by \(\mathscr{I}_{m}\{g_{1},...,g_{n}\}\), denoted by \(\overline{f_{D}}\big{|}_{g_{1},...,g_{n}}\), \[\overline{f_{D}}\big{|}_{g_{1},...,g_{n}}=g_{n}^{-1}\bigg{(}\frac{\int_{{}_{E}}g_{n}\circ f(g_{1}^{-1},...,g_{n-1}{}^{-1})\mathrm{d}^{(n-1)}u}{\int_{{}_{E}}\mathrm{d}^{(n-1)}u}\bigg{)}. \tag{6.1}\]
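As an illustrative numerical reading of Definition 6.1 for \(n=3\) (the generators, the domain and the function below are arbitrary assumptions, not taken from the text):

```python
# Numerical sketch (illustrative) of (6.1) for n = 3, i.e. a function of 2 variables:
# f(x1, x2) = x1 * x2 on D = [1,2] x [1,3], with all three generators chosen as ln,
# so the result reduces to a geometric-type mean of f over D.
import numpy as np
from scipy.integrate import dblquad

g = np.log            # g_1 = g_2 = g_3 = ln
g_inv = np.exp

# E = g_1([1,2]) x g_2([1,3]); phi(u1,u2) = g_3( f(g_1^{-1}(u1), g_2^{-1}(u2)) )
u1_lo, u1_hi = g(1.0), g(2.0)
u2_lo, u2_hi = g(1.0), g(3.0)
phi = lambda u2, u1: g(g_inv(u1) * g_inv(u2))    # dblquad passes (inner, outer)

num, _ = dblquad(phi, u1_lo, u1_hi, lambda u1: u2_lo, lambda u1: u2_hi)
den = (u1_hi - u1_lo) * (u2_hi - u2_lo)
iso_mean = g_inv(num / den)                      # formula (6.1)

print(iso_mean)   # equals sqrt(2)*sqrt(3), the product of the geometric means
```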
2305.09817
A Method for Training-free Person Image Picture Generation
The current state-of-the-art Diffusion model has demonstrated excellent results in generating images. However, the images are monotonous and are mostly the result of the distribution of images of people in the training set, making it challenging to generate multiple images for a fixed number of individuals. This problem can often only be solved by fine-tuning the training of the model. This means that each individual/animated character image must be trained if it is to be drawn, and the hardware and cost of this training is often beyond the reach of the average user, who accounts for the largest number of people. To solve this problem, the Character Image Feature Encoder model proposed in this paper enables the user to use the process by simply providing a picture of the character to make the image of the character in the generated image match the expectation. In addition, various details can be adjusted during the process using prompts. Unlike traditional Image-to-Image models, the Character Image Feature Encoder extracts only the relevant image features, rather than information about the model's composition or movements. In addition, the Character Image Feature Encoder can be adapted to different models after training. The proposed model can be conveniently incorporated into the Stable Diffusion generation process without modifying the model's ontology or used in combination with Stable Diffusion as a joint model.
Tianyu Chen
2023-05-16T21:46:28Z
http://arxiv.org/abs/2305.09817v1
# A Method for Training-free Person Image Picture Generation ###### Abstract The current state-of-the-art Diffusion model has demonstrated excellent results in generating images. However, the images are monotonous and are mostly the result of the distribution of images of people in the training set, making it challenging to generate multiple images for a fixed number of individuals. This problem can often only be solved by fine-tuning the training of the model. This means that each individual/animated character image must be trained if it is to be drawn, and the hardware and cost of this training is often beyond the reach of the average user, who accounts for the largest number of people. To solve this problem, the Character Image Feature Encoder model proposed in this paper enables the user to use the process by simply providing a picture of the character to make the image of the character in the generated image match the expectation. In addition, various details can be adjusted during the process using prompts. Unlike traditional Image-to-Image models, the Character Image Feature Encoder extracts only the relevant image features, rather than information about the model's composition or movements. In addition, the Character Image Feature Encoder can be adapted to different models after training. The proposed model can be conveniently incorporated into the Stable Diffusion generation process without modifying the model's ontology or used in combination with Stable Diffusion as a joint model. Deep Learning, Diffusion model, Image Generation ## 1 Introduction Image generation models have been gaining a lot of attention in the Artificial Intelligence (AI) field in recent years [1]. In the latter half of 2022, discussions and information regarding these models became prevalent. In the past, image generation models often required large amounts of video memory, which made it impossible for people who only had gaming graphics cards to experiment with them themselves. The model that broke this deadlock in 2022 is Stable Diffusion, a diffusion generation model that has recently shown excellent performance in the field of AI painting. One of the dominant AI painting code frameworks is Stable Diffusion, which is based on an implementation of the Latent Diffusion Model [2]. At this stage, the Latent Diffusion Model is a Diffusion Model that uses various conditions to guide the model generation process (Fig. 1 is a schematic representation of the structure). In the Stable Diffusion implementation, the text is first tokenized and subsequently encoded into an encoder hidden states vector by the Contrastive Language-Image Pre-training (CLIP) model [3]. Then, the condition is fed into a part of Stable Diffusion, a UNet [4], along with a generated random seed and a noisy image; within the UNet, various ResNet and Attention blocks are used [5]. The condition guides the generation of the final graphics in this process. The image from the UNet is still in latent space and needs to be decoded by a Variational Autoencoder (VAE) to become the final image observed in AI painting [6]. Due to the VAE, the video memory used by the UNet while processing the data is reduced. This reduces the memory usage of Stable Diffusion to a size that can be run on a normal gaming graphics card. Some open-source developers have further optimized Stable Diffusion's attention code to run on even smaller graphics cards. This is the process as reflected in the Stable Diffusion source code.
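The stages described above (tokenization, CLIP text encoding, condition-guided UNet denoising, VAE decoding) correspond to the components exposed by the Hugging Face diffusers implementation of Stable Diffusion. A minimal usage sketch follows; the checkpoint name and sampling parameters are illustrative assumptions, not values given in this paper.

```python
# Minimal sketch of the Stable Diffusion pipeline described above, using the
# Hugging Face diffusers library. Checkpoint and parameters are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Internally: pipe.tokenizer + pipe.text_encoder (CLIP) build the condition,
# pipe.unet denoises a latent under that condition, pipe.vae decodes the latent.
image = pipe(
    "a portrait of a person, detailed, soft lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("portrait.png")
```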
Such a model shows excellent performance in practical image generation. The fact that Stable Diffusion can run on normal gaming graphics cards has also led to an exponential increase in the number of users and more positive feedback for Stable Diffusion; for example, open-source tools have been developed to make it easier to use. However, even though the graphics memory cost of running Stable Diffusion has been reduced to a very low level, there are still requirements that Stable Diffusion currently cannot meet. For example, drawing multiple pictures of the same person requires that the appearance of the person be highly consistent across the resulting images. In practice, this can only be achieved by specialized fine-tuning, which requires far more equipment and skill than running Stable Diffusion directly through a convenient open-source tool. The aim of this paper is to address the problem that drawing a specific character with an AI image generator requires specialized training. The proposed approach enables users to generate varied images of the same person without any training, while preserving the character's original appearance. The training process for Stable Diffusion is similar to the one shown in Fig. 1 (Figure 1: The structure of the Latent Diffusion Model). The VAE-encoded latent image, after Gaussian noise has been added, is fed into the UNet together with the other inputs. The loss is computed between the noise predicted by the UNet and the noise that was added to the latent image, and back-propagation is then performed to optimize the model weights. This training process fine-tunes both the UNet and CLIP, so in theory a Stable Diffusion model with a high degree of tag freedom can be trained, provided that sufficient data is available and the descriptions are accurate enough. Unfortunately, the Latent Diffusion Model operates globally on the image rather than in chunks. Although different drawing distributions can be achieved under the guidance of the condition, it is still difficult to obtain exactly the generative results a human intends. This means that it is not possible to use prompts alone to approximate a real individual or a character from an anime/game arbitrarily well. The new model proposed in this article is therefore designed in part to remove the need to train specifically on a character image in order to draw that character well. This model is called the **Character Image Feature Encoder** (abbreviated as CIFE or Character Encoder). Its purpose is to adapt Stable Diffusion so that the average user can draw a given character without training, using only a portrait of the character and some prompts describing the desired result. Very often a character does not have much training material, and the time and equipment costs of training a model are considerable, so this model offers a practical alternative. ## 2 Method ### Technical overview To achieve training-free image generation of a given character from only a picture of that character, additional information must be supplied to the model so that it draws the prompt under these hidden conditions. The most effective approach is to create an input with the same form as the CLIP encoding result.
The Stable Diffusion implementation does not limit the length of the CLIP encoding result. The typical shape at this stage is (77, 768), but it can be longer. This means that the condition vector fed to the UNet can contain more than just the CLIP result, or equivalently that CLIP can have a parallel encoder. CLIP encodes the tokenized text as a hidden vector; the encoder that works in parallel with CLIP encodes the visual information of the character as a hidden vector. ### Model Structure Following the strategy set out above, in order to encode images as hidden states to be fed into the UNet together with the CLIP encoding results, a network structure for extracting image features is required first, followed by encoding neural network layers that further extract character image features from those features; the resulting encoding is then passed to the UNet together with the CLIP output to generate the images. The model is therefore divided into two main parts: the image feature extraction structure and the deep feature encoding structure. A schematic of the model structure can be seen in Fig. 3 below. ### Image Feature extraction Image feature extraction is well established in the industry [7-9]; it is typically built from stacks of CNN layers, as in VGG or ResNet networks [5, 10]. These are excellent classification models, and the Character Encoder can reuse them: their feature layers can be used directly as the feature extraction layer. This has the advantage that existing pre-trained weights can be exploited for transfer learning, in addition to being a tried and tested structure for image feature extraction. Although the pre-trained weights of a classification network's feature extractor differ from the weights required for character encoding, there are enough similarities that transfer learning reduces the training time. ### Deep Character Image Encoder The encoding of the deep character image is the most important part of this model. Deep character encoding simply feeds the output of the feature layer into a network of linear layers; an appropriate number of neurons and network depth suffice to encode the image features into the character vector. ### Training of Character Image Feature Encoder The structures described above, image feature extraction plus character encoding, form a complete Character Encoder model. The best way to train the model is to adapt it directly to an existing Stable Diffusion model. Moreover, because it plays the same role as the CLIP model, the Character Encoder can be adapted directly to Stable Diffusion models with different weights. This freedom of substitution does not affect the original weights of the base model, which is also fundamentally different from fine-tuning. To adapt the training method to existing Stable Diffusion models, it is only necessary to plug the Character Encoder into the Stable Diffusion training process described in the previous section. The CLIP output is then stitched together with the Character Encoder output and fed into the UNet (as shown in Fig. 3).
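A minimal sketch of this two-part structure, and of how its output is stitched together with the CLIP hidden states, is given below. The VGG backbone follows the feature-extraction choice described above, while the number of output tokens and the sizes of the linear layers are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn
import torchvision

class CharacterImageFeatureEncoder(nn.Module):
    """Sketch of the Character Encoder: VGG feature layers + linear encoding layers."""
    def __init__(self, n_tokens=8, hidden_dim=768):
        super().__init__()
        # Image feature extraction layer; in practice one would load
        # ImageNet-pretrained weights here for transfer learning.
        self.features = torchvision.models.vgg16().features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # Deep character image encoder: linear layers mapping image features
        # to n_tokens hidden-state vectors of CLIP-compatible width.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 2048), nn.ReLU(),
            nn.Linear(2048, n_tokens * hidden_dim),
        )
        self.n_tokens, self.hidden_dim = n_tokens, hidden_dim

    def forward(self, character_image):            # (B, 3, H, W)
        feats = self.pool(self.features(character_image))
        return self.encoder(feats).view(-1, self.n_tokens, self.hidden_dim)

def build_condition(clip_hidden_states, char_encoder, character_image):
    """Stitch CLIP hidden states (B, 77, 768) together with the character
    encoding (B, n_tokens, 768) along the sequence dimension."""
    char_tokens = char_encoder(character_image)
    return torch.cat([clip_hidden_states, char_tokens], dim=1)
```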
## 3 Result and Discussion The experimental models constructed according to the above design are divided into three main categories, namely the Mix Encoder type, the Same-place type, and the Autoencoder type (Figure 2: The structure of the mix encoding type. Figure 3: The structure of the same-place type and autoencoder type.). All of them share the same image feature extraction part and directly use the feature extraction layers of the VGG model. However, they differ in the character encoding part. All three models use the feature extraction layer to extract features from the image when combined with the Stable Diffusion model; the difference lies in the encoding part. As shown in Fig. 2, the Mix Encoder type encodes the CLIP results and the Character Encoder results after concatenation, while the Same-place type is shown in Fig. 3. The Autoencoder type is special: it has the same model structure as the Same-place type when combined with Stable Diffusion (Fig. 3), but its pre-training process (Fig. 4) can be performed separately from the Stable Diffusion model. ### Training environment #### 3.1.1 Dataset The training dataset includes 18 characters (all from the game "Arknights"). For each character there are 2-4 character appearance images \(C\) and 50-70 fanart images \(F\). The character appearance pictures are input into the Character Encoder to extract character image features, while the fanart images are used for the Stable Diffusion training process. The total dataset therefore consists of \(\sum_{i}^{18}C_{i}*F_{i}\) pairs. #### 3.1.2 Other environments The base model for training is Anything-V3.0, the training hardware platform is an NVIDIA RTX 3090, and the PyTorch version is 1.12.1. ### Mix Encoder type #### 3.2.1 Result The training results of the Mix Encoder type model were not satisfactory: the trained model output incomprehensible images, similar to the failures caused by misuse of Textual Inversion training. In addition, the outputs of the Stable Diffusion model whose generation process was guided by the Character Encoder result were completely random, which means the results were not influenced by the character encoding information. #### 3.2.2 Discussion Although the Mix Encoder type model completely failed in the above experiments, the likely reason is that the training did not have enough data and training steps. The CLIP dataset contains nearly 400 million images, whereas the dataset of this experiment contains only thousands of samples after the character image data is combined with the fanart images. Therefore, the failure of the Mix Encoder type may not indicate that the approach itself is technically flawed; the dataset is simply insufficient. ### Same-place type #### 3.3.1 Training details The training process freezes the weights of the UNet and CLIP models, and the optimizer only adjusts the weights of the Character Encoder model. #### 3.3.2 Result This is the best performing of the three types of models. The training results of the Same-place model show good quality of the generated images, and the character's appearance is also well represented in them. To test the character image encoding ability, a picture that was not in the training set was chosen, whose character was also not in the training set. The result was that the generated picture clearly carries the character image encoded from that picture.
In addition, after changing the main model, the generated picture still carried the encoded character image. In Fig. 4, Image 1 is the picture of the character used as input to the Character Encoder. Image 2 is the character drawn by the Stable Diffusion model Anything-V3.0 guided by the encoding results of the Character Encoder. Image 3 is the character drawn by the AbyssOrangeMix2 model guided by the encoding results. Image 4 is the character drawn by the basil mix model (a real-life style model) guided by the encoding results. The last image is used as a control group: it was generated with exactly the same parameters on Anything-V3.0 **without the Character Encoder enabled**. These images are drawn with the following parameters: Seed: 7313187166, sampler: DDIM, sample steps: 30, where the only variable for pictures 2-4 is the master model. #### 3.3.3 Discussion The results of the experiments show that the appearance of the person is still reflected in the generated image after replacing the master model as described above (Fig. 4, Images 2-4). In addition, the last image is the result of Anything-V3.0 generation without the Character Encoder, which proves that the Character Encoder influences the generation process. The training results of the Same-place model demonstrate the viability of this route for the Character Encoder and prove that the technique can be implemented: the output of the Character Encoder carries the information of the character's image. Therefore, the Character Encoder achieves its intended purpose. Furthermore, CLIP and UNet were frozen during the training process. This prevents the character image from being absorbed by CLIP or UNet through the combination of prompts, which would bypass the optimization of the Character Encoder. UNet is well suited to fine-tuning on small datasets, whereas the Character Encoder starts from completely untrained weights except for transfer learning from the VGG pre-trained model; if allowed, the optimizer would prefer to fine-tune UNet and CLIP rather than train the Character Encoder, because that path converges faster. Freezing UNet and CLIP has the added benefit that a Character Encoder trained in this way can be applied to a different master model, rather than being restricted to the one it was trained with. This is also well reflected in the training results. ### Autoencoder type #### 3.4.1 Special model structures The Autoencoder model is used in the same way as the Same-place model, but its training process has two stages. The first stage is the training of the Autoencoder, where the encoder part is the same as in the Same-place model, but the decoder part differs from the decoders usually used in industry: it introduces the CLIP encoder hidden states to differentiate between different contents depicting the same character, in order to guide the encoder to better extract the character features. In the second stage, the trained encoder and Stable Diffusion are trained together to fine-tune the model. Figure 4: The sample images used in this study. #### Result The Autoencoder type training yields results that, while superior to the Mix Encoder type, fall short of the quality of the Same-place type results.
Nonetheless, the Autoencoder type still generates outputs that distinctly showcase the character's features, albeit of inferior quality compared to Same-place generated images. #### 3.4.3 Discussion The reason why the Autoencoder type model, although not as effective as the Same-place model, is mentioned anyway is that the training log of the second training phase of the Autoencoder differs from that of the Same-place model: the Autoencoder training process begins with distinct character features already appearing in the generated samples. This indicates that the first stage of Autoencoder training was effective and succeeded in making the encoder output contain the extracted character features. ## 4 Conclusion Based on the experimental results, it is evident that the technical approach of the Character Encoder is feasible. The Same-place configuration exhibits exceptional character extraction and encoding capabilities and can be applied to different master models. However, the success of the Same-place configuration does not mean that the other two configurations fail. As mentioned in the previous discussion, the analysis suggests that their failure is due to an insufficient amount of data and training steps. It is believed that if the amount of data is increased, the other two configurations should be able to perform as well as the Same-place configuration. The following directions are therefore possible for the future development of this technique. The first priority is to increase the amount of data and the training time. Secondly, there are two derivative directions. The first is to train the Character Encoder alone, as mentioned above, adjusting only the weights of the Character Encoder when training alongside the Stable Diffusion model. The other direction is to fine-tune the overall Stable Diffusion model after the Character Encoder has been trained: for the Mix Encoding configuration, the Character Encoder, UNet and CLIP can be fine-tuned simultaneously, while for the Autoencoder configuration only the Character Encoder and UNet can be trained. (Figure 4: The autoencoder type structure in training.) Furthermore, this technology has a larger future application scenario, which is training on real-life datasets. In contrast to anime drawings, real-life datasets should be better suited, because real-life photographs are taken by cameras with minimal errors, whereas drawings differ noticeably between artists. Theoretically, this would lead to a better implementation than for animated characters. Perhaps a Character Encoder trained on a live-action dataset could then be transferred to the application scenario of animated character drawing, and would in turn outperform a model trained on a purely animated character dataset.
2303.04065
Diffusion of light in turbid media and Kubelka-Munk theory
We show that the Kubelka-Munk equations for the description of the intensity transfer of light in turbid media are equivalent to a one-dimensional diffusion equation, which is obtained by averaging the three-dimensional diffusion equation over the lateral directions. This enables us to identify uniquely the Kubelka-Munk parameters and derive expressions for diffuse reflection and transmission coefficients including the effect of internal reflections. Without internal reflections we recover the Kubelka-Munk formulas for these coefficients. We show that the Kubelka-Munk equations are the proper radiative-transfer equations for the one-dimensional diffusion problem and comment on previous attempts to derive the Kubelka-Munk equations.
Walter Schirmacher, Giancarlo Ruocco
2023-03-07T17:24:12Z
http://arxiv.org/abs/2303.04065v2
# Diffusion of light in turbid media and Kubelka-Munk theory ###### Abstract We show that the Kubelka-Munk equations for the description of the intensity transfer of light in turbid media are equivalent to a one-dimensional diffusion equation, which is obtained by averaging the three-dimensional diffusion equation over the lateral directions. This enables us to identify uniquely the Kubelka-Munk parameters and derive expressions for diffuse reflection and transmission coefficients including the effect of internal reflections. Without internal reflections we recover the Kubelka-Munk formulas for these coefficients. We show that the Kubelka-Munk equations are the proper radiative-transfer equations for the one-dimensional diffusion problem and comment on previous attempts to derive the Kubelka-Munk equations. ## I Introduction Investigating the reflectance and transmission of turbid media is a widely-used tool for materials characterization with applications ranging from soil science, over medicine, the production of paper and paint, to the design of laser car headlights [1; 2; 3; 4; 5]. In the analysis of the observed spectra the theory of diffuse reflectance and transmittance of Kubelka and Munk [6; 7; 8] has been widely used. The microscopical significance of the phenomenological parameters \(S\) and \(K\) appearing in this theory was discussed in many treatments [1; 9; 10; 11; 12; 13; 14; 15; 16; 17], but with differing results for these coefficients. Here we show that for a geometry of perpendicular incidence onto a slab made of turbid material, in which the scattering is strong enough to lead to diffusive motion of the light intensity, the Kubelka-Munk equations are equivalent to the one-dimensional projection of the 3-dimensional diffusion equation of the light intensity in the medium. This is done in the second section. In the third section we derive expressions for the diffuse reflectance and transmission coefficients, including the effect of internal reflection. The standard Kubelka-Munk results without internal reflection [6; 7] are recovered. In the fourth section we show that the Kubelka-Munk equations are, in fact, the proper radiative-transfer equations for the quasi-one-dimensional scattering problem. In the fifth section, we discuss why other authors might have obtained results for the Kubelka-Munk coefficients different from ours. In the sixth section some conclusions are drawn. ## II Diffusion and Kubelka-Munk equations In the diffusion approximation [9; 18] the light intensity \(U(\mathbf{r})\) and the current density \(\mathbf{j}(\mathbf{r})\) obey the steady-state energy-balance and Fick equations \[\nabla\cdot\mathbf{j}(\mathbf{r}) =-\lambda_{a}U(\mathbf{r})+\mathcal{J}(\mathbf{r})\] \[\nabla U(\mathbf{r}) =-\frac{1}{\widetilde{D}}\mathbf{j}(\mathbf{r}) \tag{1}\] which are equivalent to the diffusion equation \[\lambda_{a}U(\mathbf{r})=\widetilde{D}\nabla^{2}U(\mathbf{r})+ \mathcal{J}(\mathbf{r}) \tag{2}\] Here \(\mathcal{J}(\mathbf{r})\) is a source term. The quantity \(\widetilde{D}\), which is the diffusivity divided by the light velocity in the material, \(v=c/n\) (where \(c\) is the vacuum light velocity and \(n\) is the index of refraction), is given by [20] \[\widetilde{D}=D/v=\frac{1}{\lambda_{a}+3\lambda_{t}} \tag{3}\] \(\lambda_{a},\lambda_{s}\) and \(\lambda_{t}\) are the inverse mean free paths due to absorption, scattering and transport.
The latter two are related as \[\lambda_{t}=\lambda_{s}(1-\langle\cos\gamma\rangle) \tag{4}\] where \(\gamma\) is the scattering angle and \(\langle\cos\gamma\rangle\) is the anisotropy parameter. The relation of the diffusivity to the absorption parameter \(\lambda_{a}\), Eq. (3) had been subject to a dispute in the literature. It was argued [21; 22; 23] that the time-dependent diffusion equation \[\bigg{(}v\frac{\partial}{\partial t}+\lambda_{a}\bigg{)}U(\mathbf{r},t)= \widetilde{D}\nabla^{2}U(\mathbf{r},t)+\mathcal{J}(\mathbf{r})\,, \tag{5}\] with a diffusivity that depends on \(\lambda_{a}\), violates the property, obeyed by the radiative transfer equation, that the absorptivity \(\lambda_{a}\) should always occur together with the time derivative in combination \(v\frac{\partial}{\partial t}+\lambda_{a}\). Therefore it was argued in Refs. [21; 22; 23] that the diffusivity should not depend on the absorptivity \(\lambda_{a}\). The counter argument is, that the proper generalization of the steady-state diffusion equation Eq. (2) is _not_ Eq. (5), but a damped telegrapher's equation [20], which obeys the proper scaling. However, for this property to be obeyed, the absorptivity dependence of the diffusivity is given by (3) and not by \(\widetilde{D}=[3(\lambda_{a}+\lambda_{t})]^{-1}\) according to the conventional literature (e.g. [9]). Let us now consider the geometry of a diffusive-reflection (or -transmission) setup with uniform illumination, i.e. an incoming plane wave in the \(z\) direction onto a sample with surface at the \(z=0\) plane, thickness \(t\) in \(z\) direction and a large incidence area \(A\to\infty\) in \((x,y)\) direction (see Fig. 1). Instead of considering a three-dimensional diffusion problem, in which the the material parameters are assumed to depend only on the \(z\) direction, as usually done [9; 24], we consider the photon density \(\widetilde{U}(z)\), photon current \(\tilde{j}(z)\), and source function \(\tilde{\mathcal{J}}(z)\), averaged over the lateral \((x,y)\) directions: \[\widetilde{U}(z) =\frac{1}{A}\int_{A}dx\,dy\,U(\mathbf{r})\qquad\tilde{j}(z)= \frac{1}{A}\int_{A}dx\,dy\,j_{z}(\mathbf{r})\] \[\tilde{\mathcal{J}}(z) =\frac{1}{A}\int_{A}dx\,dy\mathcal{J}(\mathbf{r}) \tag{6}\] It is evident that these quantities obey the following (quasi-) one-dimensional equations \[\frac{\partial}{\partial z}\tilde{j}(z) =-\lambda_{a}\bar{U}(z)+\tilde{\mathcal{J}}(z)\] \[\frac{\partial}{\partial z}\widetilde{U}(z) =-\frac{1}{\widetilde{D}}\tilde{j}(z) \tag{7}\] which lead to the one-dimensional diffusion equation \[\lambda_{a}\bar{U}(z)=\widetilde{D}\frac{\partial^{2}}{\partial z^{2}}\bar{U} (z)+\tilde{\mathcal{J}}(z) \tag{8}\] Defining now the incoming and outgoing currents as \[I_{\pm}(z)=\frac{1}{2}[\bar{U}(z)\pm\bar{j}(z)] \tag{9}\] we obtain from the diffusion equations (7) the Kubelka-Munk equations \[\bigg{(}\frac{\partial}{\partial z}+K\bigg{)}I_{+}(z) =-S\bigg{(}I_{+}(z)-I_{-}(z)\bigg{)}+\tilde{\mathcal{J}}(z)\] \[\bigg{(}-\frac{\partial}{\partial z}+K\bigg{)}I_{-}(z) =-S\bigg{(}I_{-}(z)-I_{+}(z)\bigg{)}+\tilde{\mathcal{J}}(z) \tag{10}\] with \[K =\lambda_{a}\] \[S =\frac{1}{2}\bigg{(}\frac{1}{\widetilde{D}}-\lambda_{a}\bigg{)}= \frac{3}{2}\lambda_{t} \tag{11}\] Eq. (11) can also be written as \[\frac{1}{\widetilde{D}}=K+2S \tag{12}\] ## III Derivation of reflectance and transmission coefficients Instead of solving Eqs. (10) we solve the diffusion equation (8). 
Figure 1: Geometry for the discussion of diffuse reflectance and transmission with uniform illumination (plane-wave incidence). We consider a slab of thickness \(t\), which is infinitely extended in the \(x\) and \(y\) directions. The general solution of the homogeneous diffusion equation (setting \(\tilde{\mathcal{J}}=0\) in Eq. (8)) is \[\bar{U}(z)=Ae^{\alpha z}+Be^{-\alpha z} \tag{13}\] with the inverse diffusion length \[\alpha=\sqrt{K/\widetilde{D}}=\sqrt{K(K+2S)} \tag{14}\] From the solution (13) we get the in- and outgoing currents [25; 26] \[I_{\pm}(z)=\frac{1}{2}\bigg{(}A(1\mp\beta)e^{\alpha z}+B(1\pm\beta)e^{-\alpha z}\bigg{)} \tag{15}\] with \[\beta=\widetilde{D}\alpha=\sqrt{K\widetilde{D}}=\sqrt{K/(K+2S)} \tag{16}\] ### Optically thick samples #### III.1.1 No reflection at \(z=0\) The appropriate boundary conditions corresponding to optically thick samples without reflection at \(z=0\) are \[I_{+}(0)=\bar{U}_{0}\qquad\qquad I_{+}(\infty)=0 \tag{17}\] The second boundary condition implies \(A=0\). The in- and outgoing currents are therefore \[I_{\pm}(z)=\frac{1}{2}B(1\pm\beta)e^{-\alpha z} \tag{18}\] From the first boundary condition we obtain \[B=\bar{U}_{0}\frac{2}{1+\beta} \tag{19}\] from which we obtain the ingoing current at \(z=0\) \[I_{-}(0)=\bar{U}_{0}\frac{1-\beta}{1+\beta} \tag{20}\] and hence the reflectivity \[R_{\infty}=\frac{I_{-}(0)}{I_{+}(0)}=\frac{1-\beta}{1+\beta} \tag{21}\] For the Kubelka-Munk function we obtain, using Eq. (11) \[\frac{S}{K} = \frac{1}{2}\left[\left(\frac{1+R_{\infty}}{1-R_{\infty}}\right)^{2}-1\right]=\frac{2R_{\infty}}{(1-R_{\infty})^{2}} \tag{22}\] \[= \frac{3}{2}\frac{\lambda_{t}}{\lambda_{a}}\] #### III.1.2 Reflection at \(z=0\) The first boundary condition is now \[I_{+}(0)=U_{0}+R_{0}I_{-}(0) \tag{23}\] where \(R_{0}\) is the reflectivity at the \(z=0\) boundary. Inserting the expressions (18) for \(I_{\pm}(0)\) we get \[\frac{2}{1+\beta}I_{+}(0)=B=\frac{2}{1+\beta}U_{0}+R_{0}R_{\infty}B \tag{24}\] from which follows \[B=\frac{2}{1+\beta}U_{0}\frac{1}{1-R_{0}R_{\infty}} \tag{25}\] and hence \[R=\frac{1}{U_{0}}I_{-}(0)=\frac{R_{\infty}}{1-R_{0}R_{\infty}} \tag{26}\] ### Optically thin samples For optically thin samples with reflectivity \(R_{1}\) at the back (\(z=t\)) of the sample and reflectivity \(R_{0}\) at the front (\(z=0\)) of the sample we have the boundary conditions \[I_{+}(0)=\bar{U}_{0}+R_{0}I_{-}(0)\qquad\qquad I_{-}(t)=R_{1}I_{+}(t) \tag{27}\] Using the definition of \(R_{\infty}\), Eq.
(21), we get from the boundary conditions a linear set of equations for the coefficients \(A\) and \(B\), which can be put into the form \[\left(\begin{array}{cc}R_{\infty}-R_{0}&1-R_{\infty}R_{0}\\ (1-R_{\infty}R_{1})e^{\alpha t}&(R_{\infty}-R_{1})e^{-\alpha t}\end{array}\right)\left(\begin{array}{c}A\\ B\end{array}\right)=\left(\begin{array}{c}\frac{2}{1+\beta}\bar{U}_{0}\\ 0\end{array}\right) \tag{28}\] The determinant of the coefficient matrix is \[D=(R_{\infty}-R_{0})(R_{\infty}-R_{1})e^{-\alpha t}-(1-R_{\infty}R_{0})(1-R_{\infty}R_{1})e^{\alpha t} \tag{29}\] So we get from Cramer's rule \[A=\frac{\bar{U}_{0}}{D}\frac{2}{1+\beta}e^{-\alpha t}(R_{\infty}-R_{1}) \tag{30}\] \[B=-\frac{\bar{U}_{0}}{D}\frac{2}{1+\beta}e^{\alpha t}(1-R_{1}R_{\infty}) \tag{31}\] We obtain for the currents at \(z=0\) and at \(z=t\): \[I_{-}(0) = \frac{1+\beta}{2}[A+R_{\infty}B] \tag{32}\] \[= \frac{\bar{U}_{0}}{D}\bigg{[}e^{-\alpha t}(R_{\infty}-R_{1})-R_{\infty}e^{\alpha t}(1-R_{1}R_{\infty})\bigg{]}\] \[I_{+}(t) = \frac{1+\beta}{2}[R_{\infty}Ae^{\alpha t}+Be^{-\alpha t}] \tag{33}\] \[= \frac{\bar{U}_{0}}{D}\bigg{[}R_{\infty}^{2}-1\bigg{]}\,,\] from which we get the reflectivity \(R\) \[R=\frac{I_{-}(0)}{\bar{U}_{0}}=R_{\infty}\frac{e^{\alpha t}(1-R_{\infty}R_{1})-e^{-\alpha t}(1-\frac{R_{1}}{R_{\infty}})}{(1-R_{\infty}R_{0})(1-R_{\infty}R_{1})e^{\alpha t}-(R_{\infty}-R_{0})(R_{\infty}-R_{1})e^{-\alpha t}} \tag{34}\] and the transmittance \(T\) \[T=\frac{I_{+}(t)}{\bar{U}_{0}}=\frac{1-R_{\infty}^{2}}{(1-R_{\infty}R_{0})(1-R_{\infty}R_{1})e^{\alpha t}-(R_{\infty}-R_{0})(R_{\infty}-R_{1})e^{-\alpha t}} \tag{35}\] Introducing the Kubelka-Munk parameters \[a=\frac{1}{2}\bigg{(}\frac{1}{R_{\infty}}+R_{\infty}\bigg{)}\qquad b=\alpha/S=\frac{1}{2}\bigg{(}\frac{1}{R_{\infty}}-R_{\infty}\bigg{)} \tag{36}\] we get \[R =\frac{R_{1}b\cosh(\alpha t)+(1-R_{1}a)\sinh(\alpha t)}{b(1-R_{0}R_{1})\cosh(\alpha t)+[a(1-R_{0}R_{1})-R_{0}-R_{1}]\sinh(\alpha t)} \tag{37}\] \[T =\frac{b}{b(1-R_{0}R_{1})\cosh(\alpha t)+[a(1-R_{0}R_{1})-R_{0}-R_{1}]\sinh(\alpha t)} \tag{38}\] If we set \(R_{0}=0\), we get the formulas of Kubelka (1948) [7] \[R=\frac{1-R_{1}a+R_{1}b\coth(\alpha t)}{a-R_{1}+b\coth(\alpha t)} \tag{39}\] and \[T=\frac{b}{b\cosh(\alpha t)+(a-R_{1})\sinh(\alpha t)} \tag{40}\] For \(R_{0}=R_{1}=0\) we get the standard Kubelka-Munk formulas [7; 25; 26], which do not contain the effect of internal reflections. \[R=\frac{e^{\alpha t}-e^{-\alpha t}}{e^{\alpha t}\frac{1}{R_{\infty}}-e^{-\alpha t}R_{\infty}}=\frac{\sinh\alpha t}{a\sinh\alpha t+b\cosh\alpha t} \tag{41}\] \[T=\frac{\frac{1}{R_{\infty}}-R_{\infty}}{e^{\alpha t}\frac{1}{R_{\infty}}-e^{-\alpha t}R_{\infty}}=\frac{b}{a\sinh\alpha t+b\cosh\alpha t} \tag{42}\] Another interesting limit is that of very small \(R_{\infty}\), i.e. \(R_{\infty}\to 0\): \[R =\frac{R_{1}e^{-\alpha t}}{e^{\alpha t}-R_{0}R_{1}e^{-\alpha t}}\\ =\frac{R_{1}e^{-2\alpha t}}{1-R_{0}R_{1}e^{-2\alpha t}} \tag{43}\] \[T =\frac{1}{e^{\alpha t}-R_{0}R_{1}e^{-\alpha t}}\\ =\frac{e^{-\alpha t}}{1-R_{0}R_{1}e^{-2\alpha t}} \tag{44}\] ## IV Kubelka-Munk equations as one-dimensional radiative-transfer equations We now want to demonstrate that the Kubelka-Munk equations (10) are the proper radiative-transfer equations for the diffuse-reflection geometry depicted in Fig. 1.
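Before developing this argument, the closed-form results of the previous section admit a quick numerical sanity check. The sketch below uses illustrative values of \(K\), \(S\) and the slab thickness (assumptions of this example, not values from the text) to evaluate Eqs. (34)-(35) and to verify that, for \(R_{0}=R_{1}=0\), they coincide with the Kubelka-Munk forms (41)-(42) and approach \(R_{\infty}\) for an optically thick slab.

```python
import numpy as np

# Illustrative parameters (not taken from the text)
K, S = 0.05, 1.5                          # Kubelka-Munk parameters, Eq. (11)
alpha = np.sqrt(K * (K + 2 * S))          # inverse diffusion length, Eq. (14)
beta = np.sqrt(K / (K + 2 * S))           # Eq. (16)
R_inf = (1 - beta) / (1 + beta)           # reflectivity of a thick slab, Eq. (21)
a = 0.5 * (1 / R_inf + R_inf)             # Eq. (36)
b = 0.5 * (1 / R_inf - R_inf)

def reflectance_transmittance(t, R0=0.0, R1=0.0):
    """Eqs. (34)-(35): diffuse reflectance and transmittance of a slab of thickness t."""
    ep, em = np.exp(alpha * t), np.exp(-alpha * t)
    denom = (1 - R_inf * R0) * (1 - R_inf * R1) * ep \
            - (R_inf - R0) * (R_inf - R1) * em
    R = R_inf * (ep * (1 - R_inf * R1) - em * (1 - R1 / R_inf)) / denom
    T = (1 - R_inf ** 2) / denom
    return R, T

t = 2.0
R, T = reflectance_transmittance(t)                  # no internal reflection
R_km = np.sinh(alpha * t) / (a * np.sinh(alpha * t) + b * np.cosh(alpha * t))
T_km = b / (a * np.sinh(alpha * t) + b * np.cosh(alpha * t))
assert np.isclose(R, R_km) and np.isclose(T, T_km)   # agrees with Eqs. (41)-(42)
print(reflectance_transmittance(50.0)[0], R_inf)     # thick slab: R approaches R_inf
```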
We recall the three-dimensional radiative transfer equations of the light intensity in a turbid medium \[[\lambda_{a}+\mathbf{s}\cdot\nabla]I(\mathbf{r},\mathbf{s}) =-\sum_{\mathbf{s}^{\prime}}q_{\mathbf{ss}^{\prime}}\Big{(}I(\mathbf{r},\mathbf{s})-I(\mathbf{r},\mathbf{s}^{\prime})\Big{)}\] \[=-\lambda_{s}I(\mathbf{r},\mathbf{s})+\sum_{\mathbf{s}^{\prime}}q_{\mathbf{ss}^{\prime}}I(\mathbf{r},\mathbf{s}^{\prime}) \tag{45}\] \(I(\mathbf{r},\mathbf{s})\) is the distribution density of light rays passing through \(\mathbf{r}\) with the direction \(\mathbf{s}=\mathbf{k}/k\), where \(\mathbf{k}\) is the wave vector. \(q_{\mathbf{ss}^{\prime}}=|f(\mathbf{s},\mathbf{s}^{\prime})|^{2}\) is the phase function, i.e. the scattering cross-section from \(\mathbf{s}\) to \(\mathbf{s}^{\prime}\) with \(f(\mathbf{s},\mathbf{s}^{\prime})\) being the corresponding amplitude. \(\sum_{\mathbf{s}^{\prime}}\) is an integral over the entire solid angle, with the original direction \(\mathbf{s}\) being excluded. The second line of Eq. (45) is obtained from the sum rule \[\sum_{\mathbf{s}^{\prime}}q_{\mathbf{ss}^{\prime}}=\sum_{\mathbf{s}^{\prime}}q_{\mathbf{s}^{\prime}\mathbf{s}}=\lambda_{s} \tag{46}\] The three-dimensional diffusion equations (1) and (2) are obtained from Eq. (45) by \((i)\) expanding the angle dependence of \(I(\mathbf{r},\mathbf{s})\) and \(q(\mathbf{s},\mathbf{s}^{\prime})\approx q(\mathbf{s}\cdot\mathbf{s}^{\prime})=q(\cos\gamma)\) in terms of Legendre polynomials and stopping after the first term (P1 approximation), and \((ii)\) integrating \(\mathbf{s}\) over the total solid angle [9; 24]. The two terms of the three-dimensional \(I(\mathbf{r},\mathbf{s})\) in P1 approximation are [9; 18] \[I(\mathbf{r},\mathbf{s})=A_{3d}U(\mathbf{r})+B_{3d}\mathbf{s}\cdot\mathbf{j}(\mathbf{r}) \tag{47}\] with \(U(\mathbf{r})=\sum\limits_{\mathbf{s}}I(\mathbf{r},\mathbf{s})\), \(\mathbf{j}(\mathbf{r})=\sum\limits_{\mathbf{s}}\mathbf{s}I(\mathbf{r},\mathbf{s})\), and \(A_{3d}=1/\sum\limits_{\mathbf{s}}1=1/4\pi\) and \(B_{3d}=1/\sum\limits_{\mathbf{s}}(\mathbf{s}\cdot\hat{\mathbf{e}})^{2}=3/4\pi\) (for any unit vector \(\hat{\mathbf{e}}\)). The corresponding expression in one dimension is \[I(x,\mathbf{s})=A_{1d}U(x)+B_{1d}\mathbf{s}\cdot\mathbf{j}(x) \tag{48}\] with \(A_{1d}=B_{1d}=1/\sum\limits_{\mathbf{s}}1=1/2\), which is just Eq. (9). Because we have shown in the beginning that the diffusion equations (8) are equivalent to the Kubelka-Munk equations (10), we conclude that the P1 approximation, and hence the diffusion approximation, in one dimension is exact. This has already been pointed out in Refs. [18; 27]. So we can state that the Kubelka-Munk equations (10) are \((i)\) identical to the three-dimensional diffusion equation, averaged over the lateral dimensions, and \((ii)\) the proper radiative-transfer equations for the one-dimensional diffuse-reflection problem. ## V Discussion We now turn to the previously published derivations of the Kubelka-Munk equations from the radiative-transfer equations (45) [9; 10; 11; 17].
All these authors start from the three-dimensional radiative transfer equation, in which the material parameters are assumed only to depend on the coordinate \(z\) but the light rays still retain their three-dimensional orientations, parametrized by \(\mu=\mathbf{s}\cdot\mathbf{e}_{x}=\cos\theta\): \[[\lambda_{a}+\mu\frac{\partial}{\partial x}]I(x,\mu)=-\lambda_{s}I(x,\mu)+ \frac{1}{2}\int_{-1}^{1}d\mu q(\mu,\mu^{\prime})I(x,\mu^{\prime}) \tag{49}\] They then identify the contributions to \(I(x,\mu)\) in positive and negative directions as \[\widetilde{I}_{\pm}(x,\mu)=\theta(\pm x)I(x,\mu) \tag{50}\] where \(\theta(x)\) is the Heaviside step function. The in- and outgoing currents are defined as \[\widetilde{I}_{\pm}(x)=\frac{1}{2}\int_{-1}^{1}d\mu I_{\pm}(x,\mu) \tag{51}\] Then the diffusion approximation is done, which, from Eq. (47) gives [9; 28] \[\widetilde{I}_{\pm}(x) =\frac{1}{4}U(x)\pm\frac{1}{2}j_{x}(x)\] \[=\frac{1}{4}U(x)\mp\frac{\widetilde{D}}{2}\frac{\partial}{ \partial x}U(x) \tag{52}\] However, this equation violates the requirement \(I_{+}(x)+I_{-}(x)=\bar{U}(x)\). Obviously this is the reason, why in Refs. [9; 10; 11; 17]\(K\) is identified with \(2\lambda_{a}\), instead of the correct result \(K=\lambda_{a}\). Diffusive reflection with collimated, instead of uniform, illumination may be readily treated in the three-dimensional diffusion (P1) approximation, which, however, is not the subject of the present treatment. ## VI Conclusion We have shown that the Kubelka-Munk equations are identical to the one-dimensional diffusion equation, which is obtained by averaging the three-dimensional diffusion equation with respect to the lateral directions. We obtain as Kubelka-Munk parameters \(K=\lambda_{a}\) (absorptive inverse scattering length) and \(S=\frac{3}{2}\lambda_{t}=\frac{3}{2}\lambda_{s}(1-\langle\cos\gamma\rangle)\), where \(\lambda_{t}\) and \(\lambda_{s}\) are the transport and scattering inverse scattering lengths, and \(\langle\cos\gamma\rangle\) is the anisotropy parameter. Using the 1d diffusion equation we have derived formulas for the diffuse reflection and transmission, which includes possible internal reflections. In the absence of internal reflections these expressions reduce to those given by Kubelka and Munk. We have demonstrated that the Kubelka-Munk equations are the appropriate radiative transfer equations for the reflection problem with plane-wave incidence (uniform illumination).
2307.16196
Shuffled Differentially Private Federated Learning for Time Series Data Analytics
Trustworthy federated learning aims to achieve optimal performance while ensuring clients' privacy. Existing privacy-preserving federated learning approaches are mostly tailored for image data, lacking applications for time series data, which have many important applications, like machine health monitoring, human activity recognition, etc. Furthermore, protective noising on a time series data analytics model can significantly interfere with temporal-dependent learning, leading to a greater decline in accuracy. To address these issues, we develop a privacy-preserving federated learning algorithm for time series data. Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients. We also incorporate shuffle techniques to achieve a privacy amplification, mitigating the accuracy decline caused by leveraging local differential privacy. Extensive experiments were conducted on five time series datasets. The evaluation results reveal that our algorithm experienced minimal accuracy loss compared to non-private federated learning in both small and large client scenarios. Under the same level of privacy protection, our algorithm demonstrated improved accuracy compared to the centralized differentially private federated learning in both scenarios.
Chenxi Huang, Chaoyang Jiang, Zhenghua Chen
2023-07-30T10:30:38Z
http://arxiv.org/abs/2307.16196v1
# Shuffled Differentially Private Federated Learning for Time Series Data Analytics ###### Abstract Trustworthy federated learning aims to achieve optimal performance while ensuring clients' privacy. Existing privacy-preserving federated learning approaches are mostly tailored for image data, lacking applications for time series data, which have many important applications, like machine health monitoring, human activity recognition, etc. Furthermore, protective noising on a time series data analytics model can significantly interfere with temporal-dependent learning, leading to a greater decline in accuracy. To address these issues, we develop a privacy-preserving federated learning algorithm for time series data. Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients. We also incorporate shuffle techniques to achieve a privacy amplification, mitigating the accuracy decline caused by leveraging local differential privacy. Extensive experiments were conducted on five time series datasets. The evaluation results reveal that our algorithm experienced minimal accuracy loss compared to non-private federated learning in both small and large client scenarios. Under the same level of privacy protection, our algorithm demonstrated improved accuracy compared to the centralized differentially private federated learning in both scenarios. ## I Introduction Federated Learning (FL) ensures privacy by guaranteeing local data storage, yet the risk of privacy breach remains, as attackers can extract sensitive information through reverse analysis during parameter sharing [1][2]. Various privacy protection techniques exist, such as cryptographic methods like Secure Multi-party Computation (SMC) and Homomorphic Encryption (HE), and data perturbation techniques like Differential Privacy (DP). HE offers lossless encryption, but it carries considerable computational overhead [3][4]. Conversely, DP is a lightweight solution that offers quantifiable and context-free privacy protection for machine learning, thus gaining widespread research interest [2][5]. However, the privacy protection offered by DP comes at the cost of utility loss due to noise injection. Thus, it is imperative to develop DP mechanisms that ensure enhanced privacy protection while minimizing noise injection. Currently, most privacy-preserving FL frameworks cater to text and image data, with limited applicability to time series (TS) data. TS data models must accommodate the inherent temporal dependencies in the data. Noise injection into the gradients of these models can disrupt these dependencies, causing a more significant interference in the learning process and resulting in a higher accuracy drop compared to conventional learning models. Among the available techniques, those employing HE technology have high computational and communication costs, while those using DP technology fare better [6]. However, existing DP technology offers inadequate protection for TS data. Some methods involve decomposing TS data and applying DP technology to selected components, which may result in loss of valid information [7]. Moreover, these techniques do not consider FL characteristics and cannot handle the requirements for local model updates and attacks originating from multiple sources [8]. In response to these challenges, we present a novel privacy-preserving FL algorithm for TS data, based on Local Differential Privacy (LDP).
In comparison to DP, LDP provides enhanced protection against semi-trusted servers, preventing privacy leakage from both servers and malicious clients. We also introduce shuffle mechanisms that satisfy LDP while enhancing utility, resulting in privacy amplification [9][10]. Additionally, we incorporate the model privacy coefficient, which can be independently configured by each client locally. This adjustment allows for the distribution of the privacy budget between the feature extractor and classifier during local training, thus catering to the unique nature of different clients' data and their privacy protection requirements. The experimental results reveal that our algorithm achieved minimal accuracy loss, i.e., 0.9% for 100 clients and 2.8% for 1000 clients, compared to non-private federated learning. It also improved accuracy by 7.2% for 100 clients and 5.9% for 1000 clients under the same privacy level, compared to centralized DP-based federated learning. Our main contributions include: * We propose a novel privacy-preserving FL framework DP-TimeFL for time series based on LDP, ensuring robust privacy safeguards. * By implementing model shuffling, we achieve privacy protection amplification while enhancing the utility of the proposed FL framework. * We perform comprehensive experiments on five TS datasets to demonstrate our security and accuracy. ## II Related works ### _Differentially Private Federated Learning_ DP, a mathematical framework defining privacy properties, is widely used for addressing privacy concerns. Geyer et al. [12] proposed a DP-SGD FL framework, offering varying privacy protection levels, while Wei et al. [13] introduced NbAFL, meeting global DP requirements by adjusting Gaussian noise variance. CDP methods assume trusted servers for model aggregation, which may not be reliable in practice [14]. LDP addresses FL scenarios with untrusted servers, requiring no trusted third party and realizing privacy protection before data transmission. In LDP, clients store local data and interact individually with untrusted servers [11], keeping local model parameters secret. However, DP mechanisms inherently degrade model performance and utility, with LDP potentially decreasing data availability even more than CDP. Recently, privacy amplification in LDP-FL has attracted attention [15][16]. The shuffling technique weakens the attacker's model, applying LDP to achieve accuracy close to CDP [17][18]. By assuming user anonymity, shuffling reduces noise in local models and amplifies privacy. ### _Differentially Private Time Series_ Most private FL frameworks focus on text and image data, with limited works on TS data. A TS feature extraction system was proposed for federated learning using computationally expensive HE [6]. On the other hand, STL-DP [7] and PASTE [8] employ DP for TS privacy. PASTE, designed for data mining, perturbs discrete Fourier transforms of query answers due to TS's temporal correlation. However, it does not suit federated learning's local model updates and cannot defend against untrusted central servers. STL-DP decomposes data using seasonal and trend decomposition with Loess, applying the Fourier perturbation algorithm only to core TS components. This may lead to information loss in the original TS data, and the approach does not consider federated scenarios. ## III Proposed Framework ### _Overview_ The high-level architecture of our DP-TimeFL is shown in Fig. 1. 
We assume that the shuffler and the cloud server are honest but curious, i.e., the shuffler and server perform the shuffling operation and aggregation operation honestly, respectively. However, both may attempt to infer sensitive information from the data uploaded by the client. We consider model inversion attack and membership inference attack in FL scenarios. The \(N\) clients independently execute local training processes on their local private data, while the cloud server aggregates local gradients. The goal of the training is to obtain a model with higher model utility than training only on one's local data, without revealing the privacy information of one's local private data. As federated learning, the server initializes the global model parameter. Clients perform local training, perturb gradients with Laplace noise, and upload them to the shuffler. The shuffler processes and sends perturbed gradients to the server, which aggregates them to obtain global private gradients. Finally, the server broadcasts the global gradients, allowing clients to update their local weights. Fig. 1: DP-TimeFL Framework ### _Local Perturbation_ The local training model consists of a feature extractor and a classifier. The feature extractor uses a 1D-CNN as the backbone, which is used for TS analytics [23][24]. First, the gradient \(g_{i}^{t}\) is calculated and clipped. Second, Laplace noise is added to the gradient for perturbation, as shown in (1)(2)(3). \[g_{dp}(c_{i}^{t})=\hat{g}(c_{i}^{t})+Lap\left(\frac{(1-k)\Delta f}{\epsilon_{t} }\right), \tag{1}\] \[g_{dp}(f_{i}^{t})=\hat{g}(f_{i}^{t})+Lap\left(\frac{k\Delta f}{\epsilon_{t}}\right) \tag{2}\] where \(k\in(0,1)\) is the model privacy coefficient, independently set by each client locally to adjust the privacy budget allocation between the feature extractor and classifier. This adapts to the uniqueness of different clients' data and their privacy protection goals. The larger the \(k\), the more privacy budget is allocated to the classifier, the less noise is added to the classifier, and the lower the privacy protection level for the classifier. A larger \(k\) value tends to protect the feature extractor rather than the classifier. If \(G_{i}^{t}\) is a tuple containing the gradients of the feature extractor and classifier, then \[G_{i}^{t}(dp)=\hat{G}_{i}^{t}+Lap\left(\frac{\Delta f}{\epsilon_{t}}\right) \tag{3}\] where \(\Delta f\) is the sensitivity and \(\epsilon_{t}\) is the DP parameter of round \(t\). The value range of each gradient \(G_{i}^{t}\) is limited to \((0,1)\) by min-max normalization, and the sensitivity \(\Delta f\) is set to 1 in our method. The DP parameter \(\epsilon_{t}\) is jointly determined by the initial DP parameter \(\epsilon_{0}\) and dynamic adjustment. ### _Privacy Amplification with Shuffling_ In DP-TimeFL, a shuffler is designed according to the existing security shuffle protocol, which treats the shuffler as a black box [17][18]. Regarding the specific implementation, the shuffler can be built based on trusted hardware, SMC, or HE with the help of existing secure shuffling protocols according to the model deployment conditions. We randomly replace the encrypted input gradients via the shuffler to achieve LDP, reducing the added noise, realizing privacy amplification, and preventing side-channel linkage attacks in FL. 
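Before detailing the shuffler, the local perturbation step of Eqs. (1)-(3) can be sketched as follows; the min-max normalization details and the use of NumPy's Laplace sampler are implementation assumptions of this illustration rather than specifications from the paper.

```python
import numpy as np

def perturb_gradients(g_feature, g_classifier, eps_t, k=0.5, sens=1.0):
    """Local perturbation following Eqs. (1)-(3): normalize, then add Laplace noise.

    k in (0, 1) is the model privacy coefficient; a larger k adds less noise to
    the classifier gradients and more to the feature-extractor gradients.
    """
    def normalize(g):
        # Min-max normalization limits each gradient to (0, 1), so sens = 1.
        g = np.asarray(g, dtype=float)
        return (g - g.min()) / (g.max() - g.min() + 1e-12)

    g_f, g_c = normalize(g_feature), normalize(g_classifier)
    g_c_dp = g_c + np.random.laplace(scale=(1 - k) * sens / eps_t, size=g_c.shape)  # Eq. (1)
    g_f_dp = g_f + np.random.laplace(scale=k * sens / eps_t, size=g_f.shape)        # Eq. (2)
    return g_f_dp, g_c_dp
```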
After the client sends the private gradient to the shuffler, for each gradient \(G_{i}^{t}(dp)\) of the client \(C_{i}\), the shuffler randomly samples a delay \(t_{i}\) from the uniform distribution \(U(0,T)\), in which \(T\) is a shuffling parameter. When FL is initialized, all clients propose their suggested values \(T_{i}\) to the server. The server takes the median value of all \(T_{i}\) as \(T\). For each gradient, the private gradient \(G_{i}^{t}(dp)\) is uploaded to the server at time \(t_{i}\). ## IV Secure Analysis In contrast to CDP, the shuffled DP approach does not depend on a trusted server and has improved security. Moreover, it fills the \(O(\sqrt{n})\) gap in utility between CDP and LDP; that is, shuffled DP can tolerate \(O(\sqrt{n})\) times fewer data errors than LDP. In the traditional CDP model, the Laplace mechanism satisfies \((\epsilon,0)\)-DP, and the error is \(\mathrm{O}(\frac{1}{\epsilon})\). In contrast, general LDP meets the error of \(\Omega(\frac{1}{\epsilon}\sqrt{n})\). The privacy amplification achieved by shuffling can transform the local data that meet \(\epsilon_{l}\)-LDP before shuffling into data that meet \(\epsilon_{c}\)-DP after shuffling. \(\epsilon_{l}\) corresponds to a larger value, indicating lower privacy; \(\epsilon_{c}\) corresponds to a smaller value, indicating higher privacy. Based on the interactive and non-interactive mechanisms in shuffle DP [10][19], the privacy amplification theorem on the non-general and general interactive mechanisms was proposed in [17][18]. It should be noted that the interactive and non-interactive mechanisms here refer to the interaction between users rather than the interaction between iteration rounds. That is, whether the results of a user are affected by the input of other users. In our setting, DP protection performs in a non-interactive manner between users. Therefore, we adopt the non-interactive shuffle privacy protection mechanism. The general privacy amplification theorem applying a non-general interactive mechanism in [17] is shown as follows. **Theorem 1**: _Given \(n\) clients, each with dataset \(D_{i}\), for any \(n\in N_{+}\), \(\delta\in[0,\ 1]\), \(\epsilon_{l}\in\left(0,\frac{1}{2}\ln\frac{n}{\ln\frac{1}{\delta}}\right)\); if the gradient obtained by the local training of the client satisfies \(\epsilon_{l}\)-LDP, then the n outputs after shuffling satisfy \((\epsilon_{c},\delta)\)-DP, where_ \[\epsilon_{c}=\mathrm{O}\left(\left(e^{\epsilon_{l}}-1\right)\sqrt{\frac{ln \frac{1}{\delta}}{n}}\right) \tag{4}\] According to Theorem 1, given \(\delta=10^{-9}\), \(n\) trusted clients, and the corresponding value of \(\epsilon_{c}\), we obtain the results of \(\epsilon_{l}\) before amplification according to Equation 4, as shown in Table I. Obviously, our strategy satisfies LDP achieving a larger \(\epsilon_{l}\)-LDP on the client side and a smaller \(\epsilon_{c}\)-DP on the server side through privacy amplification, thereby obtaining a stronger privacy guarantee than CDP. The results also show that privacy amplification increases as the number of participating clients increases. ## V Experiments ### _Datasets_ We selected five datasets from three real-world applications, namely human activity recognition, sleep stage detection, and machine fault diagnosis, for experimental evaluation. Table III summarizes the details of each dataset, including the total number of samples (\(T\)), number of classes (\(K\)), sequence length (\(L\)), and number of channels (\(C\)). 
The description of each dataset is as follows: 1. **UCIHAR**[25] The UCIHAR dataset contains data from three sensors, namely an accelerometer, a gyroscope, and body sensors. Each sensor has three channels. The sensors record data for six activities: walking, walking upstairs, walking downstairs, standing, sitting, and lying down. 2. **WISDM**[26] The WISDM dataset records the same activities as UCIHAR using only one three-channel accelerometer, and exhibits class imbalance. 3. **HHAR**[27] The Heterogeneity Human Activity Recognition (HHAR) dataset records sensor readings of activities from heterogeneous smartphones. 4. **Sleep-EDF**[28] The Sleep-EDF dataset is used for sleep stage classification, which aims to distinguish electroencephalography (EEG) signals into five stages, i.e., Wake (W), Non-Rapid Eye Movement stages (N1, N2, N3), and Rapid Eye Movement (REM). Following [20], we chose the Fpz-Cz channel data for evaluation. 5. **MFD**[21] The Machine Fault Diagnosis (MFD) dataset [20] has been collected to identify various types of incipient faults using vibration signals. Each sample consists of a single univariate channel with 5120 data points. ### _Experimental Settings_ We adopt the i.i.d. FL setting in experiments. Table II summarizes the experimental settings for each dataset, including the number of total clients (\(N\)), clients participating per round (\(n\)), the number of training samples (\(ts\)), data points per client (\(P\)), batch size (\(B\)), privacy budget \(\epsilon\), and privacy upper limit \(\delta\). Each client trains locally for 40 epochs. Different total client numbers \(N\) correspond to different values of \(n\) and \(\delta\). Due to the larger size of the SLEEP-EDF and MFD datasets, these two datasets use 100 total clients. We perform data slicing, train/test splitting, and normalization in the data preprocessing stage. The ratio of the training set to the test set for each dataset is approximately 0.9:0.1. We use a sliding window of 128 for the human activity recognition datasets. ### _Accuracy Evaluation_ We conducted experiments comparing DP-TimeFL to FedAvg [1] and CDP-FL [12]. We initially fixed the \(\epsilon\) values to 8 and 10 as in [22]. However, these values led to early privacy budget depletion, stopping the iteration at low accuracy, especially for fewer clients, as shown in Fig. 3, Fig. 4, and Fig. 5. To address this, we increased the privacy budget to 100. With a reasonable privacy budget, DP-TimeFL's accuracy on the five TS datasets was close to FL without privacy protection, averaging 0.9% and 2.8% accuracy loss for 100 and 1000 clients, respectively (Fig. 2: Global Accuracy on Different Total Clients). As shown in Fig. 2, DP-TimeFL outperformed CDP-based FL by 7.2% and 5.9% for 100 and 1000 clients due to privacy amplification through shuffling perturbed gradients. However, the advantage did not become more pronounced with more clients, as the increased privacy amplification was offset by a thinner privacy budget distribution, leading to increased noise and decreased accuracy. Thus, as the number of clients increased from 100 to 1000, overall accuracy decreased on most datasets. ### _Privacy Evaluation_ After each round, we calculate the current iteration's \(\delta\) using the Gaussian moments accountant and the remaining privacy budget. When \(\delta\) exceeds the set limit, the iteration stops.
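A schematic of this stopping rule is sketched below; `clients`, `shuffler`, `server`, and `accountant` are hypothetical placeholders (the accountant stands in for the Gaussian moments-accountant computation), not code from the paper.

```python
def train_until_budget_exhausted(clients, shuffler, server, accountant,
                                 delta_limit, max_rounds):
    """Schematic round loop that stops once the accumulated delta exceeds the limit."""
    for t in range(max_rounds):
        noisy_grads = [c.local_train_and_perturb() for c in clients]  # Eqs. (1)-(3)
        shuffled = shuffler.shuffle(noisy_grads)        # random delays / ordering
        server.aggregate(shuffled)                      # global private gradients
        if accountant.current_delta(t) > delta_limit:   # budget exhausted: stop iterating
            break
        server.broadcast_global_gradients()             # clients update local weights
```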
As the model converges, gradients become smaller, making the noise more impactful, hindering further performance improvement and causing slower \(\delta\) convergence in later training stages, as shown in Fig. 6. Moreover, we clip gradients to ensure that the L2 norm of each gradient does not exceed a predefined threshold, which helps bound the gradient sensitivity. However, gradient clipping may limit the update magnitude in later stages of training, slowing down the \(\delta\) convergence. With 1000 clients compared to 100 clients, the lower \(\delta\) results from each client's smaller contribution to the updates, leading to tighter privacy bounds but potentially requiring more communication rounds for the desired performance. From another view, a larger number of clients mitigates the slowing convergence of \(\delta\), benefiting global accuracy by allowing more communication rounds. Fig. 3: Global Accuracy on UCIHAR, WISDM, HHAR (\(N\)=100) Fig. 4: Global Accuracy on UCIHAR, WISDM, HHAR (\(N\)=1000) Fig. 5: Global Accuracy on SLEEP-EDF, MFD (\(N\)=100) ## VI Conclusion In this work, we proposed a novel privacy-preserving federated learning algorithm for time series data, which employs LDP. We extended the privacy boundary from the server side to the client side to defend against attacks from semi-honest servers. Meanwhile, we introduced a shuffle mechanism to LDP, achieving privacy protection amplification and improving utility. We conducted experiments on five TS datasets, and the evaluations reveal that our algorithm experienced minimal accuracy loss, with 0.9% for 100 clients and 2.8% for 1000 clients, compared to non-private FL. It also improved accuracy by 7.2% for 100 clients and 5.9% for 1000 clients under the same privacy level, compared to CDP-based FL. ## Acknowledgment This work is supported by the National Natural Science Foundation of China (No. 52002026).
2310.03632
The exact evaluation of hexagonal spin-networks and topological quantum neural networks
The physical scalar product between spin-networks has been shown to be a fundamental tool in the theory of topological quantum neural networks (TQNN), which are quantum neural networks previously introduced by the authors in the context of quantum machine learning. However, the effective evaluation of the scalar product remains a bottleneck for the applicability of the theory. We introduce an algorithm for the evaluation of the physical scalar product defined by Noui and Perez between spin-network with hexagonal shape. By means of recoupling theory and the properties of the Haar integration we obtain an efficient algorithm, and provide several proofs regarding the main steps. We investigate the behavior of the TQNN evaluations on certain classes of spin-networks with the classical and quantum recoupling. All results can be independently reproduced through the "idea.deploy" framework~\href{https://github.com/lullimat/idea.deploy}{\nolinkurl{https://github.com/lullimat/idea.deploy}}
Matteo Lulli, Antonino Marciano, Emanuele Zappala
2023-10-05T16:06:21Z
http://arxiv.org/abs/2310.03632v2
# Exact Evaluation of Hexagonal Spin-networks and Topological Quantum Neural Networks ###### Abstract The physical scalar product between spin-networks has been shown to be a fundamental tool in the theory of topological quantum neural networks (TQNN), which are quantum neural networks previously introduced by the authors in the context of quantum machine learning. However, the effective evaluation of the scalar product remains a bottleneck for the applicability of the theory. We introduce an algorithm for the evaluation of the physical scalar product defined by Noui and Perez between spin-network with hexagonal shape. By means of recoupling theory and the properties of the Haar integration we obtain an efficient algorithm, and provide several proofs regarding the main steps. We investigate the behavior of the TQNN evaluations on certain classes of spin-networks with the classical and quantum recoupling. All results can be independently reproduced through the "idea.deploy" framework [https://github.com/lulliat/idea.deploy](https://github.com/lulliat/idea.deploy) ## I Introduction The high computational demand from every sector of contemporary science, including particle physics and condensed matter, has propelled the investment in new approaches. These have arguably become the holy grail of scientific computation, e.g. quantum computing. In turn, quantum computational approaches leave the unanswered question of how to process the data in quantum machines such as quantum computers. Important recent developments in deriving novel and efficient algorithms in quantum machine learning have been rooted in the theoretical foundation of either quantum mechanics [1; 2; 3] or its extension to continuous systems, quantum field theory [4; 5; 6; 7; 8]. These attempts constitute the answer to the need of quantum algorithms for quantum computing, and the reason to propose quantum neural networks (QNN) -- see e.g. [3] -- and their extensions in the continuum [4; 7; 8]. A prototype for Universal Quantum Computation is provided by the Reshetikhin-Turaev model [9], as proved by Freedman-Kitaev-Wang [10; 11]. More recently, topological quantum neural networks (TQNN), based on the TQFTs such as the Turaev-Viro model [12] and its physically motivated generalizations, have been proposed as a candidate to provide quantum algorithms in quantum computing. The advantage of TQNNs lies in the fact that they share a common ground with material science, and in particular with the string-net models of Levin-Wen [13; 14]. This thread of thoughts motivates us in believing that a successful translation of the approach by Freedman-Kitaev-Wang in our TQFT methods, which is known to be possible at the mathematical level ([15; 16; 17; 18; 19]) and are at the base of the TQNNs introduced in [4; 7; 8], will result in a Universal Quantum Computing that is implementable in practice in material science. This is achieved through the equivalent language of string-nets [13; 14], providing an alternative to topological quantum computing with anyons. The tight mathematical connection relating Reshetikhin-Turaev model and Turaev-Viro model [15; 16; 17; 18; 19] (one is known to be the "square root" of the other) allows to use our methods based on the latter to recast the former in terms of string-nets, for a material-science concrete implementation through equivalence between spin-nets and Turaev-Viro model [20; 21; 22; 23], rather than their traditional anyonic-based language. 
TQNNs are represented as spin-network states supported on graphs [4; 7; 8]. These are one-complexes defined as the dual simplicial complexes to the boundaries of a manifold. Spin-networks then represent boundary states (input/output data). The intrinsic quantumness of TQNNs lies in the fact that the dynamical evolution of these boundary states is attained through the sum over an infinite number of intermediate virtual states (filters/hidden layers). This is the key element in the derivation of novel (quantum) algorithms. The latter are in principle characterized by higher accuracy and lower computational time than those of traditional deep neural networks (DNNs), and are thus better suited to machine implementations. Within this framework, it then becomes urgent to obtain the exact evaluation of spin-networks. This is a problem that requires, in principle, exponential time. In fact, the recoupling theory defined by Kauffman and Lins [24] defines a partition function from spin-networks by summing over all possible combinations of admissible colorings, and is based on the (factorial) unraveling of the Jones-Wenzl projector [24; 25]. Recoupling theory was originally introduced to define topological invariants of 3-manifolds. In fact, one can show that the aforementioned partition function defined on spin-networks dual to the cells of a (regular enough) simplicial decomposition of a 3-manifold is invariant under Matveev-Piergallini moves [26; 27], ensuring that the numerical value of the partition function is unchanged when considering homeomorphic topological spaces. The theory has become widely applied in quantum gravity, where it has played a central role in the formulation by Perez and Noui [28] of the physical inner product for Euclidean quantum gravity in 3 dimensions, achieved via the regularization of the projector that imposes the curvature constraint of \(SU(2)\) symmetric \(BF\) theory at the quantum level. More recently, the implementation of a projector similar to the one studied by Perez and Noui, applied to an extended, still topological, \(BF\) theory endowed with a cosmological constant, has been derived in [29]. There, it has been shown that the imposition of the curvature constraint with cosmological constant naturally makes the recoupling theory of a quantum group emerge from the initial \(SU(2)\) symmetry structure. This has finally allowed the introduction of the recoupling theory of quantum groups in 3-dimensional quantum gravity in a constructive way, explaining the emergence of the recoupling theory of \(SU_{q}(2)\) from that of \(SU(2)\). The recoupling theories of \(SU(2)\) and \(SU_{q}(2)\) are crucial for the applications to quantum machine learning that were explored in [4; 7; 8]. As we anticipated, the notion of TQNNs is formulated by means of a TQFT, and is in practice evaluated via recoupling. Although in [4; 7; 8] concrete examples were provided only for the recoupling theory of \(SU(2)\), a natural extension to quantum groups, and in particular to the recoupling theory of \(SU_{q}(2)\), can be envisaged following the constructive arguments deployed in [29]. Nonetheless, the main bottleneck for the concrete applicability of the results in [4; 7; 8] remains the ability to evaluate the Perez-Noui projector efficiently.
As a subcase, this also includes the problem of evaluating spin-networks in general form, which is a notoriously complicated problem and it has previously been considered in the seminal articles [30; 31], where theoretical and computational results regarding certain specific cases have been considered in detail. We focus in this article on the evaluation of spin-networks of hexagonal shape and arbitrary size, and relate these objects to the pixel space of images to apply TQNNs. We use these results to obtain an algorithm for the evaluation of the Perez-Noui projector on \(SU(2)\)[28], and its generalization to \(SU_{q}(2)\)[29]. The plan of the paper is the following. In Sec. II we delve into the correspondence between the pixel space of images and the hexagonal spin-networks. In Sec. III consider spin-networks that are obtained by juxtaposition of hexagonal cells. In Sec. IV we provide the algorithm for the evaluation of the spin-network. In Sec. V we compute the transition amplitudes between two different hexagonal spin-networks. In Sec. VI we show some numerical results for the transition probability between two different hexagonal spin-networks. In Sec. VII we comment on the relation with the Ising model. Finally, in Sec. VIII we provide outlooks for future investigations and preliminary conclusions. ## II From pixel space to hexagonal spin-networks Our starting point is a correspondence between the pixel space of images, and hexagonal spin-networks. This also motivates our interest in evaluating hexagonal spin-networks, as they are seen to correspond to images, therefore constituting our key to translate data sets into the input of TQNNs. We start our discussion by considering first a very natural approach that rapidly incurs into an unwanted computational overhead. We consider an \(n\times n\) grid where each square indicates a pixel. Each pixel is endowed with a label between 0 and \(m\) indicating the intensity of the black color. It is clear that in this way we can represent a black and white image of \(n\times n\) resolution. To such an image, we can associate a spin-network proceeding as follows. Let \(P_{k}\) denote the \(k^{\text{th}}\) pixel of the grid in the lexicographical order. We introduce the barycenter coordinate of each pixel (square in the grid), and consider the von Neumann neighborhood \(\mathcal{N}_{k}\) of \(P_{k}\), which is given by \(\mathcal{N}_{k}=\{P_{k-1},P_{k+1},P_{k-n},P_{k+n}\}\) with the assumption that one or two of the pixels in \(\mathcal{N}_{k}\) is omitted for pixels \(P_{k}\) along the edges or the corners, respectively. We observe that we do not use periodic boundaries here, so that our resulting spin-networks do not lie in the torus, but in the plane. The centers of \(P_{k}\), which we denote by \(C_{k}\), will be the vertices of the spin-networks, and each \(C_{k}\) is connected to all the vertices corresponding to pixels belonging to its von Neumann neighborhood. The colors of the spin-networks are attributed by labeling the edges between the vertices based on the difference of the pixel values at the vertices \(C_{k}\) and \(C_{l}\) that they connect. This approach was followed for instance in [4]. However, while working in the semi-classical limit does not incur in any problems (see e.g. [4]), when we try to evaluate the spin-networks obtained through this procedure we find that each vertex needs to be desingularized as shown in Figure 1, in order to obtain two trivalent vertices from each 4-valent vertex. 
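The grid construction just described can be summarized in a short sketch: pixel centers \(C_k\) serve as vertices, each pixel is joined to its von Neumann neighbours, and the edge label is derived from the difference of the two pixel values (here taken as the absolute difference for definiteness). This is only an illustration of the bookkeeping, with hypothetical function and variable names, not the code of [4].

```python
import numpy as np

def grid_spin_network(pixels):
    """Build colored edges from an n x n array of pixel intensities.

    Vertices are pixel centers (r, c); each pixel is joined to its von Neumann
    neighbours, and the edge color is taken here as the absolute difference of
    the two pixel values, following the construction described above.
    """
    n = pixels.shape[0]
    edges = {}
    for r in range(n):
        for c in range(n):
            for dr, dc in ((0, 1), (1, 0)):        # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < n and cc < n:
                    color = abs(int(pixels[r, c]) - int(pixels[rr, cc]))
                    edges[((r, c), (rr, cc))] = color
    return edges

# 2 x 2 toy image with intensities in [0, m].
print(grid_spin_network(np.array([[0, 3], [2, 2]])))
```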
Each desingularization will introduce a summation over the admissible colors, and this negatively affects the computational cost of a TQNN algorithm based on spin-networks with such grid supports. Instead, we proceed by considering a honeycomb lattice structure as in Figure 2. It is clear that one can find a one-to-one correspondence between hexagons in the lattice in the figure and a \(2\times 2\) (pixel) image. For the \(n\times n\) pixel space one proceeds analogously. This process allows us to associate to a figure with \(n\times n\) pixel resolution a hexagonal lattice which we will call \(n\times n\) as well. Using a scheme similar to the one described above, we can associate to each pixel in black and white or RGB colors a numerical value between \(0\) and some upper bound \(N\) depending on the coloring scale. Each perimeter of the hexagon is then given the "color" \(r\in[0,N]\) determined by the pixel color. On edges that are shared among hexagons, the colors will be summed. So, if the edge \(e\) is shared between hexagon \(h_{i}\) and \(h_{j}\) with respective colors \(r_{i}\) and \(r_{j}\), we have that \(e\) takes the color \(r_{i}+r_{j}\). At each edge we now associate two projectors (which is the same one as by definition of projector) with the implicit assumption that each edge is labeled by a number of strands that derived by summing pixel colors. Using the definition of spin-network as in [24], we can rewrite the whole hexagon lattice as a spin-network as in Figure 3, where the \(2\times 2\) case is depicted. ## III Honeycomb spin-networks and their evaluation We consider spin-networks that are obtained by juxtaposition of hexagonal cells, where each vertex is trivalent, as depicted in Figure 2, where a four cell honeycomb is shown. In other words, we consider a honeycomb lattice whose vertices are intertwiners, and whose edges are bundles (i.e. tensor products) of \(\mathfrak{su}_{2}(\mathbb{C})\) fundamental representations symmetrized by the _Jones-Wenzl idempotent_, which we will also call _symmetrizer_. We denote by the symbol \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\) the square honeycomb lattice whose side is of size \(n\) and whose edges are labelled by spin-colors \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\), following a precise scheme that will be described later in the article. Here \(\bar{a}\) etc., indicate vectors of spin colors associated to the edges of the spin-networks. When the spin colors do not play a role in the discussion, or if there is no risk of confusion, we will omit to write the labels and will content ourselves with simply writing \(\mathcal{H}_{n}\). In Figure 2, for example, a square honeycomb lattice with size two side corresponding to a \(2\times 2\) pixel figure: the values \(a,b,c,d\) at the center of the hexagons represent the colors corresponding to the pixel color, and the projectors are labeled by the number of strands obtained by summing the pixel colors Figure 1: Example of spin-network with grid support where desingularization is performed at each 4-valent vertex. Here, the spin colors \(j\) shown in the zoomed in part of the figure, run over all the compatible colors with respect to the incoming edges. 
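The coloring rule for the honeycomb lattice described above (each hexagon carries the color \(r\) of its pixel, and an edge shared by two hexagons \(h_i\), \(h_j\) takes the color \(r_i + r_j\)) can be sketched as follows. The dictionary-based bookkeeping, the function name, and the toy adjacency in the example are illustrative assumptions and do not reproduce the exact layout of the figures.

```python
from collections import defaultdict

def honeycomb_edge_colors(hex_colors, hex_edges):
    """Assign strand numbers to honeycomb edges from per-hexagon pixel colors.

    hex_colors: {hexagon_id: pixel color r}
    hex_edges:  {hexagon_id: iterable of edge ids on that hexagon's perimeter}
    An edge bounding a single hexagon gets color r; an edge shared by two
    hexagons gets the sum of the two colors, as described above.
    """
    colors = defaultdict(int)
    for h, edges in hex_edges.items():
        for e in edges:
            colors[e] += hex_colors[h]
    return dict(colors)

# Toy 2x2 example: four hexagons a, b, c, d; "ab" denotes an edge shared by a and b.
hex_colors = {"a": 1, "b": 2, "c": 0, "d": 3}
hex_edges = {"a": ["a1", "a2", "ab", "ac"], "b": ["b1", "b2", "ab", "bd"],
             "c": ["c1", "c2", "ac", "cd"], "d": ["d1", "d2", "bd", "cd"]}
print(honeycomb_edge_colors(hex_colors, hex_edges))
```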
Figure 3: Honeycomb lattice with size two side corresponding to a \(2\times 2\) pixel figure: the values \(a,b,c,d\) at the center of the hexagons represent the colors corresponding to the pixel color, and the projectors are labeled by the number of strands obtained by summing the pixel colors Figure 2: Honeycomb lattice with size two side, where corners and vertices are intertwiners. comb lattice of side \(n=2\) is represented. The labels are not assumed to constitute admissible triples a priori, and we set to zero the evaluation of a honeycomb spin-network whose labels contain a non-admissible triple at some vertex. In this article we allow, albeit rather improperly, spin-networks with open ends, i.e. supported on graphs that have edges with one endpoint not connected to any vertex. Considering these types of spin-networks simplifies certain inductive procedures in the constructions, as we shall see in the next results. They will be referred to as _open-end_ or _open-edge_ spin-networks, in the rest of this article. Along with the spin-networks \(\mathcal{H}_{n}\), we also define the open-end spin-networks \(\mathcal{O}_{n}\) as follows. For each \(n\), \(\mathcal{O}_{n}\) is defined as a single hexagonal cell, where we attach three open spin-network edges, symmetric with respect to the hexagonal cell. The central edge is a single edge, while the two lateral edges are assumed to consist of \(2n-1\) connected edges according to the geometry depicted in Figure 4, where there are \(n-1\) vertical edges and \(n\) horizontal ones. The open-end spin-network \(\mathcal{O}_{n}\) is depicted in Figure 5. Let \(\mathcal{N}\) denote a spin-network, and let \(\mathcal{L}\) denote an open-end spin-network, with legs labeled \(a_{1},\cdots,a_{r}\), for some \(r\in\mathbb{N}\). Let \(\bar{v}=(v_{1},\ldots,v_{r})\) denote a list of vertices of \(\mathcal{N}\). Then, we can define the composition, written \(\mathcal{N}\circ_{\bar{v}}\mathcal{L}\), where each edge \(a_{i}\) of \(\mathcal{L}\) is joined with the vertex \(v_{i}\) of \(\mathcal{N}\). If the edges are colored by spin colors, then we set to zero the composition of networks where the colors are not admissible, while we denote the admissible composition by the same symbol as above. Then we have the following result. It holds that \[\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}, \tag{1}\] for every \(n\in\mathbb{N}\), and for some choice of vertices \(\bar{v}\) in \(\mathcal{H}_{n}\) (see Lemma A.1). For a spin-network composition as above (and in the statement of Lemma A.1), we say that the spin-network components \(\mathcal{H}_{n}\) and \(\mathcal{O}_{n}\)_inherit_ the labels from the larger spin-network \(\mathcal{H}_{n+1}\), if the spin colors of the components coincide with the respective ones in \(\mathcal{H}_{n+1}\). When the vertices that are used for the composition are clearly understood, and there is no need to remark how the composition is being performed, we simply write the symbol \(\circ\) without indicated the vector \(\bar{v}\) of vertex indices. We now define the following type of spin-networks, denoted by \(\mathcal{BO}_{n}\), and obtained from the graph supporting \(\mathcal{O}_{n}\) by replacing each lateral vertex by a _bubble graph_ depicted in Figure 6, as well as deleting the lower half of the hexagonal edge, and connecting the first two lateral vertical edges. The graph \(\mathcal{BO}_{n}\) is represented in Figure 7. 
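As noted above, the evaluation is set to zero whenever a vertex carries a non-admissible triple. In the conventions of [24], a triple of integer spin colors \((a,b,c)\) is admissible when \(a+b+c\) is even and each color is at most the sum of the other two; at a root of unity there is an additional upper bound on \(a+b+c\), which the minimal check below omits.

```python
def admissible(a, b, c):
    """Kauffman-Lins admissibility for a trivalent vertex (integer colors)."""
    return (a + b + c) % 2 == 0 and abs(a - b) <= c <= a + b

assert admissible(1, 1, 2) and admissible(2, 2, 2)
assert not admissible(1, 1, 1)   # odd total number of strands
assert not admissible(0, 1, 3)   # violates the triangle inequality
```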
Lastly, let \(\mathcal{HH}_{n}\) denote the spin-network obtained from \(\mathcal{H}_{n}\) by deleting the hexagons along the upper perimeter. For \(\mathcal{H}_{2}\), for example, this means that one deletes the top hexagon, while for \(\mathcal{H}_{3}\) one deletes the top 3 hexagons and so on. For \(n=1\) we set \(\mathcal{HH}_{1}\) to consist of a single edge corresponding to the lower perimeter of the hexagon \(\mathcal{H}_{1}\). We now set a useful convention on the spin colors labeling edges of the spin-networks \(\mathcal{H}_{n}\), proceeding inductively on \(n\). We start by setting the labels of the hexagon \(\mathcal{H}_{1}\) as in Figure 8. Then, in the decomposition \(\mathcal{H}_{n}=\mathcal{H}_{n-1}\circ_{\bar{v}}\mathcal{O}_{n-1}\), where \(n\geq 2\), we number the edges of \(\mathcal{H}_{n}\) identified with the vertical open edges of \(\mathcal{O}_{n}\) as follows. The central edge is numbered \(0\), then the left branch of \(\mathcal{O}_{n}\) is numbered in increasing order from center to left with odd numbers, while the right branch is numbered in the same way, but with even numbers. At each configuration as in Figure 9, we indicate the five spin colors involved as \(a^{\bullet}_{k},b^{\bullet}_{k},c^{\bullet}_{k},d^{\bullet}_{k},e^{\bullet}_{k}\), and denote the corresponding spin-network by \(S^{\bullet}_{k}\), where \(\bullet\) is a placeholder for an arbitrary index. Here, the subscript indicates the level in which the spin-network portion appears. Level \(k\), indicates that it is part of the \(k+1\) spin-network \(\mathcal{H}_{k+1}\), but it does not lie in the copy of \(\mathcal{H}_{k}\) inside \(\mathcal{H}_{k+1}\) according to Lemma A.1. We will also use another index, which will appear as a superscript, to indicate the position of the spin-network portion within a level. The convention is the following. For levels where an odd number of \(e_{k}\)'s appear, we denote the central \(e_{k}\) as \(e^{0}_{k}\), while those \(e_{k}\)'s that lie on the left will be labeled \(e^{-i}_{k}\), and those on the right \(e^{i}_{k}\), in a symmetric fashion, and with increasing value of \(i\) as the \(e_{k}\)'s are farther from the center. For levels with even number of \(e_{k}\)'s, we omit the central \(e^{0}_{k}\) and follow the same scheme. Observe that for each \(k\) we have that some of the edges of spin-networks \(S^{\bullet}_{k}\) of different levels are connected, and therefore the corresponding labels are identified. In this case, we follow the convention that if \(S^{\bullet}_{k}\) and \(S^{\bullet}_{k-1}\) meet, the connecting edge will take the label of \(S^{\bullet}_{k-1}\), while if \(S^{\bullet}_{k}\) meets another \(S^{\bullet}_{k}\), then the labels reported are those with lower order with respect to the natural lexicographical order \(a<b<c<d<e\). We observe that following the previous conventions, the labels \(a\) and \(b\) will not appear in the spin-network \(\mathcal{H}_{n}\) except in the bottom arc, where they are labeled with subscript \(-1\). Along the edges of \(\mathcal{H}_{n}\), there appear arcs connecting at binary vertices. These edges merge, according to the rules of spin-networks at binary vertices. The labels that we report in these cases are dictated by the following ordering. For positive superscripts (i.e. on the right side of the perimeter), we have the order \(d<c<e\), while for negative superscripts we have \(c<d<e\). Then, on the meeting edges, we relabel the merged edges according to the smallest element. 
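The numbering conventions just described can be made concrete with a small sketch: the central open edge of \(\mathcal{O}_n\) is numbered 0, the left branch takes increasing odd numbers, the right branch increasing even numbers, and the superscripts of the \(e_k\)'s run symmetrically about the center, with the central \(e^0_k\) omitted at levels with an even number of edges. The function names, and the choice of listing the left branch before the right one, are illustrative assumptions.

```python
def open_edge_numbers(n_per_side):
    """Edge numbers for O_n: 0 is central, odd numbers = left branch, even = right."""
    left = [2 * i + 1 for i in range(n_per_side)]    # 1, 3, 5, ...
    right = [2 * i + 2 for i in range(n_per_side)]   # 2, 4, 6, ...
    return list(reversed(left)) + [0] + right

def superscripts(num_e, odd_level=True):
    """Superscript labels for the e_k's of one level, symmetric about the center."""
    half = num_e // 2
    labels = list(range(-half, half + 1))
    if not odd_level:
        labels.remove(0)   # levels with an even number of e_k's omit e_k^0
    return labels

print(open_edge_numbers(3))              # [5, 3, 1, 0, 2, 4, 6]
print(superscripts(5, odd_level=True))   # [-2, -1, 0, 1, 2]
print(superscripts(4, odd_level=False))  # [-2, -1, 1, 2]
```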
On central cells on top and bottom of the spin-networks, we follow the convention that the largest spin-color label is preserved. The orderings in these cases are the natural ones. Note that the Figure 4: Lateral open-end spin-networks of \(\mathcal{O}_{n}\) only spin-colors that appear on the (lateral) perimeter are given by the letters \(c,d,e\), while the central perimeter cells are just two (bottom and top), so that the rules given above exhaust all the cases. Now, let us define the following quantities. For spin colors \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) following the convention above, we define \[\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i}) = \begin{cases}d_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2} \rfloor+1}&e_{\lfloor\frac{n}{2}\rfloor+1}^{-\lfloor\frac{n}{2}\rfloor}&i_{ \lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor}\\ c_{\lfloor\frac{n}{2}\rfloor+1}^{-\lfloor\frac{n}{2}\rfloor-1}&d_{\lfloor \frac{n}{2}\rfloor-1}^{-\lfloor\frac{n}{2}\rfloor+1}&c_{\lfloor\frac{n}{2} \rfloor}^{-\lfloor\frac{n}{2}\rfloor}\end{cases}\] \[\times\prod_{\lfloor\frac{n+2}{2}\rfloor-1<k\leq 2\lfloor\frac{n}{2 }\rfloor}\begin{cases}c_{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2 }\rfloor}&e_{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}&i _{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}\\ c_{k+1}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}&d_{k}^{ -\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}\end{cases}.\] Moreover, we define the "formal involution" \(\iota\) which is applied to a symbol as given above, and acts as follows: \(\iota\) exchanges the colors \(d\) to \(c\), it reverses the signs of the superscripts, and it leaves the subscripts the same. Applying the recoupling ([24]) we obtain that the following equality holds for all choices of compatible spin colors \(a,b,c,d,e,f\): This move will be referred to as "bubble move", for simplicity, and its proof is given in Lemma A.2 below. Now, we want to show how to decompose the \(\mathcal{H}_{n+1}\) spin-network in terms of lower degrees spin-networks of type \(\mathcal{HH}_{n}\) and \(\mathcal{BO}_{n}\). For this purpose, we decompose \(\mathcal{H}_{n+1}\) in a linear combination of \(\mathcal{HH}_{n}\) and \(\mathcal{BO}_{n}\) as \[\mathcal{H}_{n+1}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\Delta _{c_{n+1}^{0}}\theta(d_{n+1}^{0},c_{n+1}^{0},c_{n+1}^{0})\frac{\theta(i_{n+1 }^{0},c_{n+1}^{-1},c_{n}^{-1})}{\Delta_{c_{n+1}^{0}}}\frac{\theta(i_{n+1}^{0},d_{n+1}^{1},d_{n}^{1})}{\Delta_{i_{n+1}^{0}}}\] \[\times\frac{\theta(i_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{ n}{2}\rfloor},c_{\lfloor\frac{n}{2}\rfloor+1}^{-\lfloor\frac{n}{2}\rfloor+1}^{- \lfloor\frac{n}{2}\rfloor+1}}{\Delta_{i_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor \frac{n}{2}\rfloor}}}\frac{\theta(i_{\lfloor\frac{n}{2}\rfloor}^{\lfloor\frac{ n}{2}\rfloor},d_{\lfloor\frac{n}{2}\rfloor+1}^{\lfloor\frac{n}{2}\rfloor+1})}{ \Delta_{i_{\lfloor\frac{n}{2}\rfloor}^{\lfloor\frac{n}{2}\rfloor}}}\begin{cases} d_{n+1}^{0}&c_{n+1}^{-1}&e_{n+1}^{0}\\ d_{n+1}^{0}&c_{n+1}^{0}&e_{n+2}^{0}\end{cases} \tag{2}\] \[\times\mathcal{HH}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}) \circ_{\bar{b}}\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\sum_ {\bar{i}}\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})_{t}(\Psi( \bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})),\] where \(\Psi\) and \(\iota\) have been defined above. This result is stated and proved in Lemma A.3. 
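The formal involution \(\iota\) defined above acts purely on the labels: it exchanges the letters \(c\) and \(d\), reverses the sign of the superscript, and leaves the subscript unchanged. A minimal sketch, with labels represented as (letter, subscript, superscript) tuples as an illustrative choice:

```python
def iota(label):
    """Formal involution: swap c <-> d, negate the superscript, keep the subscript."""
    letter, sub, sup = label
    swap = {"c": "d", "d": "c"}
    return (swap.get(letter, letter), sub, -sup)

# e.g. c_3^{-2} -> d_3^{2}, and applying iota twice gives back the original label.
lab = ("c", 3, -2)
assert iota(iota(lab)) == lab
print(iota(lab))   # ('d', 3, 2)
```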
The coefficients \(\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})\) will also be written as \(\Psi_{\bar{i}}\) for simplicity, when it is clear what spin colors are being considered. Let \(\mathcal{BO}_{n}\) denote an \(n\)-bubble spin-network as in Figure 7. Here we assume that the spin colors of Figure 6: Bubble graph \(\mathcal{BO}_{n}\) are those inherited by Equation 2. We can now apply Lemma A.2 on each of the bubbles of \(\mathcal{BO}_{n}\). This will gives us \(\mathcal{BO}_{n}\) as a sum on admissible colors of the spin-networks \(\mathcal{O}_{n}\). The evaluation of \(\mathcal{BO}_{n}\) is obtained through the formula \[\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f}) \tag{33}\] \[= \prod_{n-1\leq k\leq 2n-5}\begin{cases}c_{k}^{\lfloor\frac{k+1}{2} \rfloor-n-1}&p_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1}&d_{k}^{\lfloor\frac{k+1} {2}\rfloor-n-2}\\ p_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2} \rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n}\end{cases}\] \[\times\frac{\theta(c_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1},c_{k +1}^{\lfloor\frac{k+2}{2}\rfloor-n-1},d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2} }{\Delta_{d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}}}\] \[\times\begin{cases}d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&p_{k }^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&c_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+2} \\ p_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1}&e_{k+1}^{-\lfloor\frac{k+2}{2} \rfloor+n+1}&d_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n}\end{cases}\] \[\times\frac{\theta(d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1},e_{k +1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1},d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n +2}}{\Delta_{d_{k}^{\lfloor\frac{k+1}{2}\rfloor+n+2}}}\] \[\times\mathcal{O}_{n-1}.\] The proof of this fact can be found in Lemma A.4. Observe that the formula holds for \(n\geq 4\), since this step does not appear in the cases \(n=2,3\), as a direct inspection reveals. Observe that properly speaking, the coefficients \(p_{k+1}\) corresponding to \(k=2n-5\) in the product above are identified with other \(p\) coefficients through the Schur's Lemma (i.e. a Kroencker's delta) applied when obtaining Equation (2). For simplicity of notation, we set \(\Phi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f})\) to be the coefficient appearing in the RHS of Lemma A.4. If summation is to be taken over some of the indices, let us denote them as \(\bar{i}\), then we indicate these indices explicitly as \(\Phi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f}\mid\bar{i})\). For short, in this situation, we also write \(\Phi_{\bar{i}}\), when the labels are understood. We have \[\mathcal{H}\mathcal{H}_{n+1}\circ_{\bar{v}}\mathcal{O}_{n}=\mathcal{H}_{n+1}, \tag{4}\] where \(\bar{v}\) is the set of vertices as in Lemma A.3. To obtain the general evaluation of the spin network \(\mathcal{H}_{n}\) for arbitrary \(n\), we now proceed inductively by decomposing \(\mathcal{H}_{n}\) into the composition of \(\mathcal{H}_{n-1}\) and a term \(\mathcal{O}_{n}\) whose evaluation can be obtained applying recoupling theory. Throughout, the labels \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) indicating the colorings assigned to the spin-network will follow the scheme described above. We are now in the position to relate the evaluation of the honeycomb spin-network \(\mathcal{H}_{n+1}\) to the evaluation of \(\mathcal{H}_{n}\), for any given configuration of the spin colors. 
First, we absorb all the coefficients \(\Psi_{\bar{i}}\) and the extra factors coming from Lemma A.3 and Lemma A.4 to get the new coefficients \(\Psi_{\bar{i}}\) and \(\iota\Psi_{\bar{i}}\). Observe, in fact, that apart from some pre-factors appearing in Lemma A.3, all the coefficients are symmetric with respect to the involution \(\iota\). We therefore use the symmetry to define the terms \(\hat{\Psi}_{\bar{i}}\), and give a square root factor of the terms that are fixed by \(\iota\). This preserves the symmetry between \(\hat{\Psi}_{\bar{i}}\) and \(\iota\hat{\Psi}_{\bar{i}}\). We have \[\mathcal{H}_{n+1}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})= \sum_{\bar{i}}\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\ |\ \bar{i})\iota\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\ |\bar{i})\] \[\times\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}), \tag{5}\] where \(\mathcal{H}_{n}\) inherits the spin colors of \(\mathcal{H}_{n+1}\). This important result is stated and proved in Theorem A.6. A fundamental computational/algorithmic issue that arises in the evaluation of \(\mathcal{H}_{n}\) following Theorem A.6 regards the inductive determination of the new labels \(\bar{a}^{\prime},\bar{b}^{\prime},\bar{c}^{\prime},\bar{d}^{\prime},\bar{e}^ {\prime}\). In fact, observe that while the labels in Figure 8: Labeling of the edges of \(\mathcal{H}_{1}\) the bulk of the spin-network \(\mathcal{H}_{n-1}\) obtained from the "higher degree" \(\mathcal{H}\) remain the same in the inductive process outlined in the proof of Theorem A.6, the same does not hold true for all the labels in the upper perimeter. In fact, as a consequence of the proof, there are \(2n-3\) labels that we are going to sum over after applying recoupling an appropriate number of times. For instance, in the evaluation of \(\mathcal{H}_{2}\), we sum on a single \(i\), while in \(\mathcal{H}_{3}\) we sum over 3 and so on. These colorings we sum upon are then taken into account in the colorings of \(\mathcal{H}_{n-1}\), and to concretely evaluate \(\mathcal{H}_{n}\) (see appendix) one needs to iteratively take these colorings into account, and device a scheme for the substitution. As the edges where we sum the spin colors all lie in the upper semi-perimeter of \(\mathcal{H}_{n-1}\) (along \(\mathcal{O}_{n}\) in the decomposition of \(\mathcal{H}_{n}\)) following the proof of Theorem A.6, this is not difficult to perform iteratively. We find that the number of summation operations needed to evaluate \(\mathcal{H}_{n}\) grows quadratically with \(n\). More specifically, if \(a_{n}\) denotes the number of summations at \(n\), we have \(a_{n}=a_{n-1}+2n-5\). This is a consequence of Equation 5 (i.e. Theorem A.6) and it is proved in Corollary A.7 below. Another consequence of Equation 5 is that the evaluation of \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\), with \(n\geq 2\), is given by the formula \[\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\sum_{k=2}^{n}\Psi_{ \bar{i}_{0}}\Phi_{\bar{i}^{\prime}_{0}}\theta(c_{1}^{2},e_{0}^{2},b_{0}^{2}),\] where \(\Psi_{\bar{i}_{k}}\) and \(\Phi_{\bar{i}^{\prime}_{k}}\) have been provided above and the index \(k\) refers to the superscript of the indices of the spin colors \(a,b,c,d,e\). This is shown in Corollary A.8. The simplicity of the formula given in Corollary A.8 is not fully representative of the intrinsic complexity of it. 
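The recursion of Corollary A.7, \(a_n = a_{n-1} + 2n - 5\), can be iterated numerically to make the quadratic growth explicit: the increments \(2k-5\) for \(k=3,\dots,n\) sum to \((n-2)^2\). The base value used below is an assumption for illustration only; the exact count at small \(n\) should be read off Corollary A.7 and the discussion above.

```python
def summation_count(n, a2=1):
    """Iterate a_n = a_{n-1} + 2n - 5 from an assumed base value a_2 (illustrative)."""
    a = a2
    for k in range(3, n + 1):
        a += 2 * k - 5
    return a

# Quadratic growth: a_n = a_2 + (n - 2)**2, since sum_{k=3}^{n} (2k - 5) = (n - 2)^2.
print([summation_count(n) for n in range(2, 9)])
```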
In fact, the main issue in computing the evaluation of the Honeycomb network for an arbitrary \(n\) is that the indices appearing in the summation symbol, which refer to the quantum \(6j\) symbols in the \(\Psi\) and \(\Phi\) coefficients, are not explicitly given, and need to be considered carefully. In fact, at each step, the spin-colors for the level \(n-1\) contain indices of summation from the previous step. ## IV Evaluation of \(\mathcal{H}_{n}\) In this section we give the algorithm for the evaluation of the spin-network \(\mathcal{H}_{n}\) using the steps described in the previous sections. Before giving the general procedure, we will consider an example in detail. We will compute the evaluation of \(\mathcal{H}_{n}\) for arbitrary colors \(a_{k}^{i},b_{k}^{i},c^{i}d_{k}^{i},e_{k}^{i}\). The honeycomb \(\mathcal{H}_{3}\) is the first of the \(\mathcal{H}_{n}\) where the various steps of the algorithm are nontrivial, and it therefore shows the procedure, but with a complexity relatively small and still simple to perform by hand. The spin-network \(\mathcal{H}_{3}\) with the labeling described above is shown in Figure 10. Observe that some of the labels are merged into a single spin color. This is due to the fact that at a binary vertex, different colors would imply that the spin-network is trivial, and therefore it is meaningful to consider only the case when all the perimeter labels are grouped in a way that at binary vertices the incoming edges have the same spin color. Also, the composition of projectors at incident binary vertices squares to the identity, and several concatenated projectors result in a single projector. In other words we can consider these edges as a single "smoothed" edge. We specify also that the procedure given to pass from pixel space to spin-networks automatically implies that the spin colors are the same at these edges, and the spin-network is not trivial due to mismatches at the binary vertices. First, we apply Lemma A.2 to the top of the spin-network to obtain a factor of \(\begin{cases}d_{3}^{0}&c_{3}^{-1}&e_{3}^{0}\\ d_{3}^{1}&c_{3}^{0}&e_{4}^{1}\end{cases}\cdot\Delta_{e_{3}^{0}}^{-1}\theta(d_ {3}^{0},c_{3}^{0},e_{3}^{0})\) multiplying the spin-network of Figure 11 Next, we apply the recoupling Theorem centered on the edges that have a perpendicular red marker. These recouplings can be applied in parallel, in the sense that they do not depend on each other, and the procedure can be performed simultaneously. Each recoupling now implies that a summation on compatible colors appears, along with a \(6j\)-symbol. The indices used for summation Figure 10: The spin-network \(\mathcal{H}_{3}\) with the labeling scheme adopted in this article. Some of the edges’ labels are merged due to the fact that each edge is symmetrized through the Jones-Wenzl projector which, being a projector, is the identity when squared. 
will be denoted by \(p\), and we obtain a global coefficient \[\sum_{p_{3}^{0}}\left\{\begin{matrix}c_{2}^{-1}&c_{3}^{-1}&p_{3}^{0} \\ d_{3}^{1}&d_{2}^{1}&e_{3}^{0}\end{matrix}\right\}\sum_{p_{2}^{-1}}\left\{ \begin{matrix}e_{2}^{-1}&c_{2}^{-2}&p_{2}^{-1}\\ c_{3}^{-1}&c_{2}^{-1}&d_{2}^{-1}\end{matrix}\right\}\sum_{p_{3}^{0}}\left\{ \begin{matrix}d_{3}^{1}&d_{3}^{1}&p_{2}^{1}\\ d_{2}^{1}&e_{2}^{1}&c_{2}^{1}\end{matrix}\right\}\] \[\times\sum_{p_{3}^{0}}\left\{\begin{matrix}d_{0}^{-1}&c_{2}^{-2}&p_ {1}^{-1}\\ e_{2}^{-1}&d_{1}^{0}&c_{1}^{-1}\end{matrix}\right\}\sum_{p_{3}^{0}}\left\{ \begin{matrix}c_{0}^{0}&e_{2}^{1}&p_{1}^{1}\\ d_{2}^{1}&d_{2}^{1}&e_{3}^{0}\end{matrix}\right\} \tag{6}\] with the resulting spin-network given in Figure 12. Now we can apply the diagrammatic Schur's Lemma (Lemma 7 in [24]) to all the bubbles appearing in Figure 12 and burst them all. This procedure introduces some \(\theta\)'s and quantum dimensions in the coefficients, but more importantly introduces Kronecker's deltas among the indices \(p\)'s. The coefficient multiplying every summand now becomes \[\sum_{p_{3}^{0}}\left\{\begin{matrix}c_{2}^{-1}&c_{3}^{-1}&p_{3}^ {0}\\ d_{3}^{1}&d_{2}^{1}&e_{3}^{0}\end{matrix}\right\}\left\{\begin{matrix}e_{2}^{-1 }&c_{2}^{-2}&p_{3}^{0}\\ c_{3}^{-1}&c_{2}^{-1}&d_{2}^{-1}\end{matrix}\right\}\left\{\begin{matrix}d_{2} ^{1}&d_{3}^{1}&p_{3}^{0}\\ d_{2}^{2}&e_{2}^{1}&e_{2}^{1}\end{matrix}\right\}\] \[\times\frac{\theta(c_{2}^{-2},e_{2}^{-1},p_{3}^{0})}{\Delta_{p_{3} ^{0}}}\frac{\theta(c_{3}^{-1},c_{2}^{-1},p_{3}^{0})}{\Delta_{p_{3}^{0}}}\frac{ \theta(d_{2}^{1},d_{3}^{1},p_{3}^{0})}{\Delta_{p_{3}^{0}}}\frac{\theta(d_{2}^ {2},e_{2}^{1},p_{3}^{0})}{\Delta_{p_{3}^{0}}} \tag{7}\] and the spin-network we obtain (for each given configuration of spin-colors) is given by Figure 13. One extra application of Lemma A.2 now allows us to obtain a sum (over compatible spin-colors) of terms that are proportional to tetrahedra, where the previous coefficients now get an extra factor of \(\left\{\begin{matrix}d_{1}^{0}&d_{0}^{-1}&e_{1}^{0}\\ c_{1}^{0}&c_{1}^{0}&p_{3}^{0}\end{matrix}\right\}\cdot\Delta_{e_{1}^{0}}^{-1} \theta(d_{1}^{0},c_{1}^{0},e_{1}^{0}).\) Since the evaluation of the tetrahedron is known (see Section 8.5 in [24]), the algorithm stops, and we can evaluate the original \(\mathcal{H}_{3}\) through a sum over the compatible spin-color, evaluations of tetrahedra, and evaluations of \(6j\)-symbols and \(\theta\)-nets. The procedure just described for \(\mathcal{H}_{3}\), exemplifies the Figure 11: First step of the algorithm applied to \(\mathcal{H}_{3}\). Figure 12: Second step of the algorithm applied to \(\mathcal{H}_{3}\), where we have applied recoupling to all red marked edges of Figure 11. Figure 13: Spin-network obtained from Figure 12 after bursting the bubbles through the diagrammatic Schur’s Lemma. whole theory in Section III for the evaluation of \(\mathcal{H}_{n}\), and gives a concrete realization of the results of Theorem 6 to pass from \(\mathcal{H}_{3}\) to \(\mathcal{H}_{2}\) (which is a tetrahedron). 
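The coefficients appearing above repeatedly involve quantum dimensions \(\Delta_n\) (and \(\theta\)-nets). As a point of reference, \(\Delta_n\) obeys the Chebyshev-type recursion \(\Delta_{n+1} = d\,\Delta_n - \Delta_{n-1}\) with \(\Delta_0 = 1\) and \(\Delta_1 = d\), where \(d\) is the loop value (\(d = -A^2 - A^{-2}\) in the conventions of [24]); at the classical point \(d=-2\) this reproduces \((-1)^n(n+1)\). The sketch below only implements this recursion; the precise sign conventions and the relation between \(q\) and \(A\) should be checked against [24].

```python
import cmath

def quantum_dim(n, d):
    """Delta_n via the recursion Delta_{k+1} = d * Delta_k - Delta_{k-1}, Delta_0 = 1."""
    if n == 0:
        return 1
    prev, cur = 1, d          # Delta_0, Delta_1
    for _ in range(n - 1):
        prev, cur = cur, d * cur - prev
    return cur

# Classical check: at loop value d = -2 one gets (-1)^n (n + 1).
print([quantum_dim(n, -2) for n in range(6)])   # [1, -2, 3, -4, 5, -6]

# Generic root of unity (illustrative choice of q; conventions as in [24], up to signs).
q = cmath.exp(2j * cmath.pi / 10)
d = -(q + 1 / q)
print([quantum_dim(n, d).real for n in range(4)])
```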
``` 0:\(\mathcal{H}_{n}\) with given spin-colors \(\triangleright\) Initialization 0:\((\mathcal{H}_{n})\)\(\triangleright\) Evaluation of \(\mathcal{H}_{n}\) 1:while While \(\mathcal{H}_{n}\) with \(n\geq 3\)do 2: Apply Lemma 2 to top of \(\mathcal{H}_{n}\) 3: Apply Lemma 2 to use recoupling on all edges that connect crown to bulk 4: Remove bubbles through Schur's Lemma 5: Apply Lemma 2 to edges connecting \(\mathcal{H}\mathcal{H}_{n-1}\) to \(\mathcal{B}_{n}\) 6: Apply Lemma 4 write \(\mathcal{B}\mathcal{C}_{n}\) in terms of \(\mathcal{C}_{n}\) 7: Apply Lemma 5 to obtain \(\mathcal{H}_{n}\) 8:endwhile 9: Perform sum over all compatible colors from the while 10: Evaluate the tetrahedra ``` **Algorithm 1** General algorithm for the evaluation of \(\mathcal{H}_{n}\). ## V Computation of transition amplitudes To compute the transition amplitudes between two different hexagonal spin-networks \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), we compute the physical inner product defined by Noui and Perez by means of a projector \(P\)[28]. The definition of [28] was extended in [29] to the case of a projector where the quantum recoupling theory at non-classical \(q\) is used. Physically, this corresponds to the case where the cosmological constant is nontrivial. We will refer to projector and physical product in the classical and quantum case interchangeably. A direct verification using the definitions found in [29] shows that the Haar integration (depicted as black boxes in [28]) satisfies the gauge fixing and summation identities found in the appendix of [28] in the quantum case as well. Using these two properties of the integration and the definition of the projector, we can reduce the computation of the transition amplitudes to evaluations as in Section III. Let \(\mathcal{P}\) denote the projector of [28] as well as the modified version of the quantum case ([29]). Then, the transition amplitude between two spin-networks \(\mathcal{H}_{n}\) and \(\mathcal{H}^{\prime}_{n}\) of the same size, i.e. the physical inner product, is defined by the formula \[\langle\mathcal{H}_{n}|\mathcal{H}^{\prime}_{n}\rangle_{\text{Phys}}:=\langle \mathcal{H}_{n}|\mathcal{P}|\mathcal{H}^{\prime}_{n}\rangle,\] where \(\langle\bullet|\bullet\rangle\) indicates the inner product defined via the Ashtekar-Lewandowski measure. It can be shown that Equation 5 suffices to evaluate transition amplitudes as follows. Then, the physical inner product between \(\mathcal{H}_{n}\) and \(\mathcal{H}^{\prime}_{n}\) is given by \[\langle\mathcal{H}_{n}|\mathcal{H}^{\prime}_{n}\rangle_{\text{Phys}}=\overline {\langle\mathcal{H}_{n}\rangle}\langle\mathcal{H}^{\prime}_{n}\rangle, \tag{8}\] where \(\langle\mathcal{H}_{j}\rangle\) indicates the evaluation computed in Section III and the overbar denotes complex conjugation. This is proved in Lemma 9, and the main step is to use Figure 14 to decouple the evaluation of the two spin-networks (see proof of Lemma 9 below). ## VI Phase-space properties In this Section we explore the _phase space_ for the value of the Perez-Noui projector, relatively to two different sizes of the hexagonal grid, i.e. with \(N=2,3\). Furthermore, we set the value of \(q\) to be the _classical_ one, with \(q=-1\). In order to deal with a finite number of coloring configurations for the hexagonal lattices, we need to set bounds on the possible compatible choices for each edge, i.e. 
we need to impose a minimum \(c_{m}\) and a maximum \(c_{M}\) color value, and enumerate all the possible coloring configurations in that range, with some constraint coming from the coloring procedure. This is a rather complex combinatorial problem: a first straightforward approach would be to randomly draw colors for each edge and imposing the compatibility conditions at the vertices, with the drawback of searching among \((c_{M}-c_{m})^{N_{e}}\) combinations, with \(N_{e}\) the total number of edges, among which only a very small fraction actually yields compatible colorings. As it appears, the main problem is to find a procedure that automatically yields compatible color configurations. The solution we put forward is to color the graph using its cycles as the fundamental units: assuming one finds all possible graph cycles \(\{\gamma_{i}^{(N)}\}\) (i.e. sequences of Figure 14: Elimination of Haar integration from the bulk. The blue dashed line shows the elimination of diagonal Haar boxes, while the dotted red line shows the elimination of the horizontal Haar box. edges that form close loops) for the \(N\times N\) hexagonal lattice, then one can build compatible colorings configurations in the range \([c_{m},c_{M}]\) by increasing by one the color of each edge belonging to a given cycle \(\gamma_{i}^{(N)}\), with the possibility of increasing multiple times the value of the colors of the edges belonging to the any given cycle. This is a non-local construction of the colorings that automatically assures the compatibility of each configuration. Hence, after enumerating all the cycles, one can build all the possible configurations of maximum cycle color \(c_{M}=1\) simply by coloring one cycle per configuration; then one can build all configurations of maximum cycle color \(c_{M}=2\) by coloring all possible combinations of pairs of cycles, including choosing the same cycle twice, and so on. In this way, we introduce a possible parametrization of the phase space of all the (infinite) compatible colorings that is based on coloring cycles in order to assure the compatibility of each configuration. As a final remark, it is important to consider that finding all possible cycles of the hexagonal graph is again a non-trivial combinatorial problem for which we have developed our own strategy, which will be described in a separate work. Let us now discuss the results for the projector values among any couple of configurations, in relation to a given range of cycles-colorings, for \(N=2,3\). As it turns out, it is not possible to store in memory the results for \(N=4\): the number of cycles in this case is \(N_{c}^{(4)}=18370\), which would all yield the same evaluation, while the number of configurations for all pairs of cycles is \(N_{c_{M}=2}^{(4)}=168737635\), which does not allow to compute all the possible transition values and store them on RAM considering 32-bits precision. Hence, we choose to consider \(N=2\) with \(c_{m}=0\) and \(c_{M}=6\), and \(N=3\) with \(c_{m}=0\) and \(c_{M}=2\), yielding a total number of transition values of \(N_{c_{m}=0,c_{M}=6}^{(3)}=1502260081\) and \(N_{c_{m}=0,c_{M}=2}^{(3)}=1569744400\). 
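The cycle-based enumeration described above admits a compact sketch: each compatible coloring is generated by choosing a multiset of cycles (repetition allowed) and incrementing by one the color of every edge on each chosen cycle, so enumerating multisets of size up to \(c_M\) spans the configurations in the given range. The data structures and names are illustrative, and finding the cycles of the hexagonal graph is the separate combinatorial problem mentioned in the text.

```python
from itertools import combinations_with_replacement

def colorings_from_cycles(cycles, num_edges, c_max):
    """Enumerate edge colorings generated by multisets of at most c_max cycles.

    cycles: list of cycles, each an iterable of edge indices.
    Every chosen cycle raises the color of each of its edges by one, so the
    resulting configurations are compatible by construction.
    """
    seen = set()
    for k in range(c_max + 1):
        for multiset in combinations_with_replacement(range(len(cycles)), k):
            coloring = [0] * num_edges
            for ci in multiset:
                for e in cycles[ci]:
                    coloring[e] += 1
            seen.add(tuple(coloring))
    return sorted(seen)

# Toy example: three edges, two overlapping "cycles".
print(colorings_from_cycles([{0, 1}, {1, 2}], num_edges=3, c_max=2))
```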
We start by associating an integer index \(i\) to each coloring configuration \(|\mathcal{H}_{n}^{(i)}\rangle\), so that the transition matrix \(\mathcal{A}\) reads \[\mathcal{A}_{ij}=|\langle\mathcal{H}_{n}^{(i)}|\mathcal{H}_{n}^{(j)}\rangle|_ {\text{Norm}}^{2}=\frac{|\overline{\langle\mathcal{H}_{n}^{(i)}\rangle} \langle\mathcal{H}_{n}^{(j)}\rangle|^{2}}{\max\left\{|\langle\mathcal{H}_{n}^ {(i)}\rangle|^{4},|\langle\mathcal{H}_{n}^{(j)}\rangle|^{4}\right\}}\,, \tag{9}\] which is such that the diagonal part is normalized to unity. Any random labelling of the coloring states \(|\mathcal{H}_{n}\rangle\) would not yield any apparent structure in the density matrix, hence we decided to rank each state by means of the sum of all the transition probability values between the given state and all the others, i.e. \[\mathcal{S}_{i}=\sum_{j}\mathcal{A}_{ij}. \tag{10}\] As shown in Figs. 15 and 16, if one reorders the labeling according to increasing values of \(S_{i}\), one can use the new ranked indices \(\{i_{\text{R}}\}\) and represent the ranked transition matrix, denoted as \(\mathcal{A}_{i_{\text{R}}j_{\text{R}}}\). One can then see that the values are automatically structured in a block diagonal form, where different states cluster in what we refer to as _classes_: within one class each state is equivalent to the others, in the sense that the transition probability is unity. Another remarkable property is that all the elements of a class also share the same value of the total sum \(\mathcal{S}_{i}\). This feature provides an additional property of these classes: each element belonging to a class has the same global scalar product with all the other elements within the configuration space. This structure is showing how the Perez-Noui projector can be used to distinguish one class of elements from the other, without any prior information, i.e. training, with a structure that spontaneously emerges when considering a simple ranking of the states. In other words, this results is providing direct evidence for the ideas discussed in [8], for which one expects DNNs to emerge as a semi-classical limit of TQNNs. In other words, the block-diagonal part of the transition matrix \(\mathcal{A}\) is a way of representing the saddle point that would be found by training a classifier DNNs on the portion of the configuration space we study here. These results seem very promising for using TQNNs as an image classifier. Figure 15: Transition matrix for \(N=2\) honeycomb lattice with \(q=-1\), maximum cycle-coloring value \(c_{M}=6\). ## VII Relation with the Ising Model DNNs present many affinities with statistical models. Specifically, DNNs' architectures can be addressed from the perspective of statistical physics and Gibbs distributions. An area of research that was very active in the 80's was the one hinging on the implementation of spin-glass models to unveil the way neural networks operate. A flourishing statistical approach is also represented by the so-called Boltzmann machines, networks of neuron-like units symmetrically connected, with neurons can be switched on or off according to a stochastic dynamics. Their learning algorithm [32] allows to achieve complex pattern recognition tasks by adopting a supervised approach. Boltzmann machines emerge as stochastic recurrent neural networks, which have been cast in statistical physics as the disordered versions of the Ising model [33], i.e. the Sherrington-Kirkpatrick model [34]. 
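As a concrete numerical counterpart of Eqs. (9) and (10) above, the normalization and the ranking of the transition matrix can be sketched in a few lines of numpy; the function name, the toy evaluations, and the choice of sorting order are illustrative.

```python
import numpy as np

def ranked_transition_matrix(evals):
    """Normalized transition matrix A_ij of Eq. (9), reordered by the row sums S_i of Eq. (10).

    evals: complex array of spin-network evaluations <H_n^{(i)}>.
    Returns (A_ranked, order), with rows/columns sorted by increasing S_i.
    """
    e = np.asarray(evals, dtype=complex)
    amp = np.abs(np.conj(e)[:, None] * e[None, :]) ** 2            # |conj(<i>) <j>|^2
    norm = np.maximum(np.abs(e)[:, None] ** 4, np.abs(e)[None, :] ** 4)
    A = amp / norm                                                  # diagonal equals 1
    order = np.argsort(A.sum(axis=1))                               # rank states by S_i
    return A[np.ix_(order, order)], order

# Toy example: evaluations with equal modulus fall into the same unit block,
# mirroring the classes visible in the ranked matrices of Figs. 15 and 16.
A, order = ranked_transition_matrix([1.0, 2.0, 1j, 3.0, 2j])
print(np.round(A, 3))
```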
In particular, the generalisation ability was one of the battle field of these investigations inspired by the statistical analysis of phase transitions. Quantum fluctuations can be rephrased as statistical fluctuations, by means of a standard Wick rotation. This latter transforms the partition function of any quantum theory in the equivalent partition function in statistical mechanics provided with the Gibbs-ensemble measure, namely the negative exponential of the Hamiltonian of the system. On the other hand, the connectivity does naturally enter inside the definition of the semi-classical limit of the QNN/TQNN states through the concept of coarse-graining. Borrowing an intuition proper of statistical mechanics, we may think that blocking and coarse-graining procedures, directly applied at the quantum level on the TQNN states, individuate a class of effective TQNN states that are supported on graphs characterised by a lower topological connectivity, and thus by a lower capacity -- we call these states statistical TQNN (STQNN). More concretely, from an operative point of view, the blocking and the coarse-graining procedures are defined in terms of the ability to carry out measurements. ## VIII Conclusions and Outlooks The enhancement of computational methods is the omnipresent driving factor of today's scientific panorama. The advancement of technological instrumentation has allowed researchers in any field to gather increasingly more data about virtually any aspect of natural science. Nonetheless, advancements in computational ability, with eventual breakthrough, are still required, and probably even more needed that in the past. Quantum computing may represent a milestone along this trajectory. It may pave the way to a shift of perspective in computational methods, with outputs that are qualitatively different and not comparable with classical computing. Quantum computing may furthermore enable to process data in quantum machines, including quantum computers, exploiting the quantum structures of matter. In this article we have delved into the evaluation of spin-networks of hexagonal shape and arbitrary size. We have hence related these objects to the pixel space of images, in order to apply the new tools provided by topological quantum neural networks (TQNNs). We have then constructed an algorithm for the evaluation of the Perez-Noui projector on \(SU(2)\)[28], and extended this result to \(SU_{q}(2)\)[29]. Some aspects of our construction will deserve more detailed investigations in the future. The link between "local" features and "global" ones is among these, and appears of particular interest. The squared norm of the normalized physical scalar product between two different states \(\mathcal{A}_{nn^{\prime}}=|\langle\hat{\mathcal{H}}_{n}|\hat{\mathcal{H}}^{ \prime}{}_{n}\rangle|^{2}\) can be used to rank the states as follows: fix the state \(|\hat{\mathcal{H}}^{\prime}{}_{n}\rangle\) and compute the partial sum \(\mathcal{S}_{n^{\prime}}=\sum_{n}\mathcal{A}_{nn^{\prime}}\); as it happens the value of \(\mathcal{S}_{n^{\prime}}\) can be used to rank each state \(|\hat{\mathcal{H}}^{\prime}{}_{n}\rangle\). At the end of the ranking procedure one finds that the ranked matrix \(\bar{\mathcal{A}}_{nn^{\prime}}\) has a block diagonal Figure 16: Transition matrix for \(N=3\) honeycomb lattice with \(q=-1\), maximum cycle-coloring value \(c_{M}=2\). 
structure where the blocks are all related to transitions \(|\langle\hat{\mathcal{H}}_{n}|\hat{\mathcal{H}}^{\prime}{}_{n^{\prime}}\rangle|^{2}=1\). It also happens that each block is associated to a unique value of the partial sum \(\mathcal{S}_{n^{\prime}}\). Hence, the states belonging to the blocks display two fundamental property: a "local" property, i.e. the fact that each state has a scalar product equal to one over any other state belonging to the same block; a "global" property, i.e. that all the states belonging to a block yield the same value for the partial sum \(\mathcal{S}_{n^{\prime}}\). This is a remarkable property that links a local feature to a global one. It is possible to associate each of the diagonal blocks to a "class" that, upon visual inspection, seems to yield reasonably distinguishable spin networks in terms of the coloring. The origin of the classification mechanism also deserves more detailed analyses. If one assumes that the overall set of all possible transitions, computed using the Perez-Noui projector, allows to compute the Turaev-Viro invariant, then it might be possible that the partial sum \(\mathcal{S}_{n^{\prime}}\) is related to the Reshetikhin-Turaev invariant. If this is the case, then each diagonal block might be related to a different value of the Reshetikhin-Turaev invariant thus providing a mathematical foundation for the mechanism that is yielding the classification we observe in the ranked transition matrix \(\bar{\mathcal{A}}_{nn^{\prime}}\). Assuming that the Turaev-Viro invariant can be computed from the transition matrix \(\mathcal{A}_{nn^{\prime}}\), then the diagonal blocks in \(\bar{\mathcal{A}}_{nn^{\prime}}\) might represent the saddle point of the Turaev-Viro evaluation, if considering the TV as composed by a sum of the exponential of the values of \(\bar{\mathcal{A}}_{nn^{\prime}}\). In conclusion, the intrinsic quantumness of the TQNN framework [4; 7], in which the dynamical evolution of the boundary states (input/output data) is attained through the sum over an infinite amount of intermediate virtual states (filters/hidden layers), has been realised here by applying the physical projectors to the spin-network states. The quantumness that is intrinsic in this proposed new framework allows us to consider a sum over infinite (virtual) hidden layers, being conjectured at the same time to avoid the issues of redundancy and overfitting [8]. This instantiates novel (quantum) algorithms, the effectiveness and accuracy of which we will have to continue testing, investigating the amount of computational time TQNNs spent in comparison with classical counterparts, such as deep neural networks (DNNs), and delving into the material implementations that exploit topological condensed matter structures described in terms of string-nets [13; 14]. All results can be independently reproduced through the "idea.deploy" framework [https://github.com/lullimat/idea.deploy](https://github.com/lullimat/idea.deploy) ## Appendix A Proofs of the results In this appendix we collect the main results (and their proofs) used in the article to obtain the algorithm. **Lemma A.1**.: _It holds that \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}\) for every \(n\in\mathbb{N}\), and for some choice of vertices \(\bar{v}\) in \(\mathcal{H}_{n}\)._ Proof.: The proof is by induction on \(n\), and it does not depend on the colorings of the spin-networks, so that we can omit keeping track of the spin colors, but we can just consider the underlying graphs. 
The base of induction holds true, since for \(n=1\) the graph \(\mathcal{H}_{n}\) is just a single hexagon cell, and \(\mathcal{H}_{2}\) is obtained by attaching \(\mathcal{O}_{1}\) on the three top vertices of the hexagon cell. Suppose now that the result has been proved for some \(k>1\), and let us consider \(\mathcal{H}_{k+1}\). in the graph of \(\mathcal{H}_{k+1}\) we can isolate a top layer of the graph, where we imagine of cutting the edges that connect the outer perimeter to the inner vertices of \(\mathcal{H}_{k+1}\). This leaves a graph \(\mathcal{H}_{k}\) and detaches an open-edge graph that is readily identified with a copy of the graph \(\mathcal{O}_{k}\). We observe that in this step it might be necessary to eliminate extra vertices inside the edges of the detached graph. This is indeed possible since a binary vertex can be eliminated, and the symmetrizers that label the two edges are compacted into one, using idempotency of the Jones-Wenzl symmetrizer. **Lemma A.2**.: _The following equality holds for all choices of compatible spin colors \(a,b,c,d,e,f\):_ Proof.: Applying recoupling to the edge \(e\) we obtain the equality Now, applying Lemma 7 of [24] (i.e. the diagrammatic Schur's Lemma) we find that the only term in the sum that is not trivial is the one corresponding to \(i=f\), and moreover the previous equation becomes where \(\theta(a,d,f)\) denotes the value of the \(\theta\)-net The evaluation of the latter \(\theta\)-net cancels out with the of the renormalizations (see Appendix A of [35]) \(\sqrt{\theta(a,d,f)}\) of the two 3-vertices \((a,d,f)\) and \((a,d,i)\), completing the proof. **Lemma A.3**.: _Let \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\) be a honeycomb spin-network with the labeling scheme described above. Then we have_ \[\begin{split}\mathcal{H}_{n+1}&(\bar{a},\bar{b}, \bar{c},\bar{d},\bar{e})=\Delta_{c^{0}_{n+1}}\theta(d^{0}_{n+1},c^{0}_{n+1},e^ {0}_{n+1})\\ &\quad\times\frac{\theta(i^{0}_{n+1},c^{-1}_{n+1},c^{-1}_{n})}{ \Delta_{i^{0}_{n+1}}}\frac{\theta(i^{0}_{n+1},d^{1}_{n+1},d^{1}_{n})}{\Delta_ {i^{0}_{n+1}}}\\ &\frac{\theta(i^{-\lfloor\frac{n}{2}\rfloor}_{\lfloor\frac{n}{ 2}\rfloor},c^{-\lfloor\frac{n}{2}\rfloor+1}_{\lfloor\frac{n}{2}\rfloor+1},c^ {-\lfloor\frac{n}{2}\rfloor+1}_{\lfloor\frac{n}{2}\rfloor+1})}{\Delta_{i^{ \lfloor\frac{n}{2}\rfloor}_{\lfloor\frac{n}{2}\rfloor}}}\frac{\theta(i^{ \lfloor\frac{n}{2}\rfloor}_{\lfloor\frac{n}{2}\rfloor+1},c^{\lfloor\frac{n}{2 }\rfloor-1}_{\lfloor\frac{n}{2}\rfloor+1})}{\Delta_{i^{\lfloor\frac{n}{2} \rfloor}_{\lfloor\frac{n}{2}\rfloor}}}\\ &\quad\times\begin{cases}d^{0}_{n+1}\ \ c^{-1}_{n+1}\ \ e^{0}_{n+1}\end{cases}\\ d^{1}_{n+1}\ \ c^{0}_{n+1}\ \ e^{0}_{n+2}\end{cases}\\ &\quad\times\mathcal{HH}_{n}(\bar{a},\bar{b},\bar{c},\bar{d}, \bar{e})\circ_{\bar{v}}\mathcal{SO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{ e})\\ &\quad\times\sum_{\bar{i}}\Psi(\bar{a},\bar{b},\bar{c},\bar{d}, \bar{e}\mid\bar{i})_{t}(\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{ i})),\end{split} \tag{10}\] _where \(\Psi\) and \(\iota\) were defined above, and the fractions appear only when \(n>2\)._ Proof.: We proceed by using Lemma 1, and recoupling theory. First, let us consider the simpler case \(n=2\), which is verified as follows. We write \(\mathcal{H}_{2}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\mathcal{H}_{1}(\bar{ a},\bar{b},\bar{c},\bar{d},\bar{e})\circ_{\bar{v}}\mathcal{O}_{2}(\bar{a},\bar{b}, \bar{c},\bar{d},\bar{e})\). 
Let us omit the labels \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) for simplicity. Then, we apply Lemma 1 on the central edge right above the hexagonal cell of \(\mathcal{H}_{1}\), as given in the decomposition of \(\mathcal{H}_{2}\) above. The resulting spin-network is, with a complex \(6j\) factor multiplying it, \(\mathcal{HH}_{1}\circ_{\bar{v}}\mathcal{BO}_{1}\), where \(\bar{v}\) consists of the two vertices on the sides of the hexagonal cell \(\mathcal{H}_{1}\). The complex factor appearing in the sum is the \(6j\)-symbol determined by Lemma 1. In this case there is a single \(6j\), which is seen directly to coincide with the first factor in the formula in the statement of the lemma. The terms containing \(\Psi\) and the fractions containing \(\theta\) and \(\Delta\) are not present in this case. The case for arbitrary \(n\) is similar, and it only requires more applications of the recoupling theorem. More specifically, we apply Lemma 1 to the top of the spin network. This produces the factor \[\Delta_{c^{0}_{n+1}}\theta(d^{0}_{n+1},c^{0}_{n+1},e^{0}_{n+1})\begin{cases}d^{0}_{n+1}\ \ c^{-1}_{n+1}\ \ e^{0}_{n+1}\\ d^{1}_{n+1}\ \ c^{0}_{n+1}\ \ e^{0}_{n+2}\end{cases}\] which is the prefactor appearing in the statement. Then, we apply recoupling to the edges that are used to connect \(\mathcal{O}_{n}\) to \(\mathcal{H}_{n}\) in the decomposition \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}\), along with the bottom edges of the most lateral hexagons. For each coloring, we now have to consider the coefficients appearing at each application of the recoupling theorem. Now, proceeding along the left side of the graph supporting \(\mathcal{O}_{n}\), we encounter the recoupling of edges \(d^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{n+2}{2}\rfloor}_{\lfloor\frac{n}{2}\rfloor}\), while going in the opposite direction gives the recoupling on \(c^{\lfloor\frac{n+2}{2}\rfloor-\lfloor\frac{k+1}{2}\rfloor}_{\lfloor\frac{k+1}{2}\rfloor}\). This gives rise to the \(6j\)-symbols that constitute the terms indexed by \(k\) appearing in the product that defines \(\Psi\) and \(\iota\Psi\), where one needs to sum over all the compatible \(i\), with respect to the other entries of the \(6j\)-symbol. Finally, on the bottom edges of the equatorial belt of hexagons in the copy of \(\mathcal{H}_{n}\) found inside \(\mathcal{H}_{n+1}\) we get recoupling on \(c^{-\lfloor\frac{n}{2}\rfloor-1}_{\lfloor\frac{n}{2}\rfloor+1}\) and \(d^{\lfloor\frac{n}{2}\rfloor+1}_{\lfloor\frac{n}{2}\rfloor+1}\), which gives rise to the last two factors in the definition of \(\Psi\) and \(\iota\Psi\). At this point we have a decomposition of the geometric support of the spin-network as \(\mathcal{H}_{n}\circ\mathcal{BO}_{n}\) with four extra bubbles. Using Lemma 7 in [24] to burst the bubbles we obtain \(\mathcal{H}_{n}\circ\mathcal{BO}_{n}\), and the remaining factors that consist of the fractions in the statement of the lemma. This completes the proof.
**Lemma A.4**.: _We have, for any \(n\geq 4\), the equality_ \[\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f})\] \[= \prod_{n-1\leq k\leq 2n-5}\begin{Bmatrix}c_{k}^{\lfloor\frac{k+ 2}{2}\rfloor-n-1}&p_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1}&d_{k}^{\lfloor\frac{ k+1}{2}\rfloor-n-2}\\ p_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2} \rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n}\end{Bmatrix}\] \[\times\frac{\theta(c_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1},e_{k} ^{\lfloor\frac{k+2}{2}\rfloor-n-1},d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}}{ \Delta_{d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}}}\] \[\times\begin{Bmatrix}d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&p_ {k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&c_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n +2}\\ p_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1}&e_{k+1}^{-\lfloor\frac{k+2}{2} \rfloor+n+1}&d_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n}\end{Bmatrix}\] \[\times\frac{\theta(d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1},e_{k+ 1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1},d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n +2}}{\Delta_{d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+2}}}\] \[\times\mathcal{O}_{n-1}.\] Proof.: This is an application of the bubble move of Lemma A.2 to each bubble of \(\mathcal{BO}_{n}\). **Lemma A.5**.: _We have_ \[\mathcal{HH}_{n+1}\circ_{\bar{v}}\mathcal{O}_{n}=\mathcal{H}_{n+1},\] _where \(\bar{v}\) is the set of vertices as in Lemma A.3._ Proof.: This result follows from a direct inspection of the graph support of the spin-networks \(\mathcal{HH}_{n+1}\) and \(\mathcal{O}_{n}\). In fact, \(\mathcal{HH}_{n+1}\) is obtained from \(\mathcal{H}_{n+1}\) by discarding the upper hexagonal cells. But then, attaching \(\mathcal{O}_{n}\) re-constructs the missing hexagonal cells. **Theorem A.6**.: _Let \(\mathcal{H}_{n+1}\) denote a honeycomb of size \(n+1\), and let \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) denote compatible spin colors according to the scheme described above. Then_ \[\mathcal{H}_{n+1}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})= \sum_{\bar{i}}\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e} \mid\bar{i})_{\ell}\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid \bar{i})\] \[\times\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}),\] _where \(\mathcal{H}_{n}\) inherits the spin colors of \(\mathcal{H}_{n+1}\)._ Proof.: We apply the lemmas previously proved to obtain the result. To simplify notation we omit writing the labels of the spin-networks, but we will assume throughout to follow the conventions outlined above. Observe that using Lemma A.1 we can write \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}\). Then, following the convention for the spin colors established in the paragraph preceding Lemma A.3, the edges connecting \(\mathcal{O}_{n}\) to \(\mathcal{H}_{n}\) are labeled by \(e_{k}\), with \(k=0,\dots,2n-2\). So, we apply Lemma A.3 to these edges to obtain \(\mathcal{H}_{n+1}=\sum_{\bar{i}}\Psi_{\bar{i}}i\bar{\Psi}_{\bar{i}}\mathcal{H} \mathcal{H}_{n}\circ_{\bar{v}}\mathcal{BO}_{n}\), where spin colors are intended as in the lemma. From Lemma A.4 we have \(\mathcal{BO}_{n}=\hat{\Psi}_{\bar{i}}i\bar{\Psi}_{\bar{i}}\mathcal{O}_{n-1}\). Therefore, we have found that \(\mathcal{H}_{n+1}=\sum_{\bar{i}}\hat{\Psi}_{\bar{i}}i\mathcal{H}\mathcal{H}_{n} \circ_{\bar{v}}\mathcal{O}_{n-1}\). Lastly, we apply Lemma A.5 to rewrite \(\mathcal{HH}_{n}\circ_{\bar{v}}\mathcal{O}_{n-1}=\mathcal{H}_{n}\). This completes the proof. 
**Corollary A.7**.: _The number of summation operations needed to evaluate \(\mathcal{H}_{n}\) grows quadratically with \(n\). More specifically, if \(a_{n}\) denotes the number of summations at \(n\), we have \(a_{n}=a_{n-1}+2n-5\)._ Proof.: This is an immediate consequence of Theorem A.6 using induction. In fact, at each step, i.e. for a fixed \(n\), we have a sum on \(2n-5\) indices. To see this, observe that from the proof of Theorem A.6 we have to apply recoupling \(2n-1\) times, twice. The second round of re-couplings does not introduce new labels in the summations, since in Lemma A.4 there is no sum. In order to apply Lemma A.5, we need to apply Lemma 7 from [24] on the top of the spin-network, where three of the indices upon which we sum are present. This allows us to reduce the sum to one single index, and factor a summation of quantum dimensions coming from \(i_{0}\) in the final result. Moreover, we notice that the base of \(\mathcal{O}_{n}\) has a merging of \(4\) labels and therefore two more sums are suppressed. This gives the total number of \(2n-5\) summation indices. Now, we have reduced our evaluation to \(\mathcal{H}_{n-1}\), which carries a summation over \(a_{n-1}\) indices by the induction hypothesis. Solving the recurrence gives \(a_{n}=a_{2}+\sum_{k=3}^{n}(2k-5)=a_{2}+(n-2)^{2}\), which makes the quadratic growth explicit. This completes the proof. **Corollary A.8**.: _The evaluation of \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\), with \(n\geq 2\), is given by the formula_ \[\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\sum_{k=2}^{n}\hat{\Psi}_{\bar{i}_{k}}\ell\hat{\Psi}_{\bar{i}_{k}}\theta(c_{1}^{2},e_{0}^{2},b_{0}^{2}),\] _where the coefficients \(\hat{\Psi}_{\bar{i}_{k}}\) were given above and the index \(k\) refers to the iteration of the application of Theorem A.6._ Proof.: We proceed by induction over \(n\). For \(n=2\) we evaluate the spin-network \(\mathcal{H}_{2}\) directly. Apply Lemma A.2 to the top of the spin-network, where we indicate the top spin color by \(t\). The other colors that take part in the application of the lemma are, following the previously described conventions, \(c_{1}^{2},b_{0}^{2},c_{2}^{2},b_{2}^{2}\) and \(e_{0}^{2}\), which take the places of \(a,d,c,b\) and \(f\), respectively, in the lemma. Then we obtain an intermediate expression. With a second application of Lemma A.2, this time with \(g\) playing the role of \(e\) in the diagram of the lemma, we find that \[\mathcal{H}_{2}=\Delta_{e_{0}^{2}}\begin{Bmatrix}c_{1}^{2}&b_{1}^{2}&e_{0}^{2}\\ c_{0}^{2}&b_{0}^{2}&t\end{Bmatrix}\begin{Bmatrix}b_{1}^{2}&c_{1}^{2}&e_{0}^{2}\\ d_{0}^{2}&a_{0}^{2}&g\end{Bmatrix}\theta(c_{1}^{2},e_{0}^{2},b_{0}^{2}),\] which concludes the proof of the base of induction. To derive the general formula, now we apply Theorem A.6 to reduce the case of dimension \(n+1\) to \(n\), where \(n=2\) reduces to a \(\theta\)-net as just shown above. With the stratified labelings introduced above, to pass from \(\mathcal{H}_{n+1}\) to \(\mathcal{H}_{n}\) we need to sum over all the \(\bar{i}\). Once we have reduced the size of \(\mathcal{H}_{n}\) by one degree, we apply Theorem A.6 again until we reach the \(n=2\) case. Each time, we relabel all the spin-colors by \(\bar{a}_{k},\bar{b}_{k},\bar{c}_{k},\bar{d}_{k},\bar{e}_{k}\) to reapply all the formulas. This completes the proof. **Lemma A.9**.: _Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be honeycomb spin-networks, and let \(\mathcal{P}\) be as above. 
Then, the physical inner product is given by_ \[\langle\mathcal{H}_{2}|\mathcal{H}_{1}\rangle_{\mathrm{Phys}}=\langle\mathcal{H}_{1}\rangle\langle\mathcal{H}_{2}\rangle,\] _where \(\langle\mathcal{H}_{j}\rangle\) indicates the evaluation computed in Section III._ Proof.: We apply the gauge fixing identity and the summation identity (see [36; 37; 28]) repeatedly to eliminate all the Haar integration boxes in the bulk. The case of \(2\times 2\) honeycomb \(\mathcal{H}_{2}\) spin-networks is shown in Figure 14. In this case one proceeds as follows. First, making use of the integration boxes in the perimeter, it is possible to eliminate the diagonal Haar integration boxes, as is explicitly done for the top-left diagonal box via the blue dashed line in Figure 14. Then, we can draw a circle that intersects the spin-networks only horizontally through the central integration boxes, shown in Figure 14 as a dotted red line. This allows us to eliminate the central box. Now, only the perimeter boxes are left, and they have a summation over the projector lines where no other integration box appears. We can therefore apply the summation identity to eliminate them, thereby decoupling the spin-networks and completing the \(2\times 2\) case. The figure shows a transition between hexagonal spin-networks where the initial and final states are superposed. The effect of the projector is that of adding lines for colors \(k\) compatible with the spin colors of the states, and Haar integration (black) boxes on the edges. We observe that from the case \(n\times n\) with \(n=3\) onward, one complication is easily seen to arise. In fact, it is not possible to directly eliminate all the boxes in the bulk by only utilizing the gauge fixing identity. It is in fact possible to eliminate only one box per horizontal row (which in the \(2\times 2\) case happens to eliminate the only horizontal box). However, since the diagonal rows are eliminated via gauge fixing, the horizontal rows can be cleared by an application of the summation identity. The perimeter is likewise cleared of any Haar integration boxes. Although the previous discussion already provides a relatively detailed argument, we present here the general proof by induction, using the decomposition \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ\mathcal{O}_{n}\) from Lemma A.1, for the sake of completeness. In addition, this approach is practically useful for the implementation of the algorithm, which takes advantage of the hierarchical structure of the honeycomb spin-networks. In practice, we use the inductive step to remove the integration boxes from the bulk of \(\mathcal{H}_{n}\), and then use the gauge fixing identity between the integration boxes of \(\mathcal{O}_{n}\) and those boxes in the perimeter of \(\mathcal{H}_{n}\) that are in the bulk of \(\mathcal{H}_{n+1}\). Observe that when decomposing \(\mathcal{H}_{n+1}\), the top edge of \(\mathcal{H}_{n}\) is split in two by a vertex connected with \(\mathcal{O}_{n}\), so the induction is not immediately applicable. However, this is not a problem, as the two integration boxes that arise on the two sides of the top vertex abut in an external cell, so that any line that is drawn through them can go out of \(\mathcal{O}_{n}\) without intersecting the spin-network at a point other than a Haar integration box. So, the inductive procedure can be applied with the slight modification of using the gauge fixing identity to delete the diagonal integration boxes with two top integration boxes rather than a single one.
The reader can convince themselves directly of the veracity of this assertion by drawing the connecting part of \(\mathcal{H}_{n}\) and \(\mathcal{O}_{n}\). Now, we observe that the only horizontal integration box that is left (on the central vertical leg of \(\mathcal{O}_{n}\)) can be removed by another application of the gauge fixing identity. The two aforementioned diagonal boxes on top of \(\mathcal{H}_{n}\) cannot both be eliminated directly; only one of them can be removed via gauge fixing. However, the remaining one, which is now the only non-perimeter box left, is eliminated via the summation identity, since no other box appears in the top cell of \(\mathcal{H}_{n}\). The remaining perimeter boxes are eliminated once again via the summation identity, completing the inductive step. Now, using the definition of the Perez-Noui projector \(\mathcal{P}\) by means of the Ashtekar-Lewandowski measure, we evaluate the spin networks in the identity element of \(SU(2)\) in the classical case, while we apply them on an element \(H^{-1}\) in the quantum case, where \(H^{-1}\) reproduces the quantum recoupling theory [29]. This gives us the evaluation of \(\mathcal{H}_{j}\), \(j=1,2\), from Section III as stated.
2309.01146
A Spin-dependent Machine Learning Framework for Transition Metal Oxide Battery Cathode Materials
Owing to the trade-off between the accuracy and efficiency, machine-learning-potentials (MLPs) have been widely applied in the battery materials science, enabling atomic-level dynamics description for various critical processes. However, the challenge arises when dealing with complex transition metal (TM) oxide cathode materials, as multiple possibilities of d-orbital electrons localization often lead to convergence to different spin states (or equivalently local minimums with respect to the spin configurations) after ab initio self-consistent-field calculations, which causes a significant obstacle for training MLPs of cathode materials. In this work, we introduce a solution by incorporating an additional feature - atomic spins - into the descriptor, based on the pristine deep potential (DP) model, to address the above issue by distinguishing different spin states of TM ions. We demonstrate that our proposed scheme provides accurate descriptions for the potential energies of a variety of representative cathode materials, including the traditional Li$_x$TMO$_2$ (TM=Ni, Co, Mn, $x$=0.5 and 1.0), Li-Ni anti-sites in Li$_x$NiO$_2$ ($x$=0.5 and 1.0), cobalt-free high-nickel Li$_x$Ni$_{1.5}$Mn$_{0.5}$O$_4$ ($x$=1.5 and 0.5), and even a ternary cathode material Li$_x$Ni$_{1/3}$Co$_{1/3}$Mn$_{1/3}$O$_2$ ($x$=1.0 and 0.67). We highlight that our approach allows the utilization of all ab initio results as a training dataset, regardless of the system being in a spin ground state or not. Overall, our proposed approach paves the way for efficiently training MLPs for complex TM oxide cathode materials.
Taiping Hu, Teng Yang, Jianchuan Liu, Bin Deng, Zhengtao Huang, Xiaoxu Wang, Fuzhi Dai, Guobing Zhou, Fangjia Fu, Ping Tuo, Ben Xu, Shenzhen Xu
2023-09-03T11:45:45Z
http://arxiv.org/abs/2309.01146v1
# A Spin-dependent Machine Learning Framework for Transition Metal Oxide Battery Cathode Materials

###### Abstract

Owing to the trade-off between the accuracy and efficiency, machine-learning-potentials (MLPs) have been widely applied in the battery materials science, enabling atomic-level dynamics description for various critical processes. However, the challenge arises when dealing with complex transition metal (TM) oxide cathode materials, as multiple possibilities of _d_-orbital electrons localization often lead to convergence to different spin states (or equivalently local minimums with respect to the spin configurations) after _ab initio_ self-consistent-field calculations, which causes a significant obstacle for training MLPs of cathode materials. 
In this work, we introduce a solution by incorporating an additional feature - atomic spins - into the descriptor, based on the pristine deep potential (DP) model, to address the above issue by distinguishing different spin states of TM ions. We demonstrate that our proposed scheme provides accurate descriptions for the potential energies of a variety of representative cathode materials, including the traditional Li\({}_{x}\)TMO\({}_{2}\) (TM=Ni, Co, Mn, _x_=0.5 and 1.0), Li-Ni anti-sites in Li\({}_{x}\)NiO\({}_{2}\) (_x_=0.5 and 1.0), cobalt-free high-nickel Li\({}_{x}\)Ni\({}_{1.5}\)Mn\({}_{0.5}\)O\({}_{4}\) (_x_=1.5 and 0.5), and even a ternary cathode material Li\({}_{x}\)Ni\({}_{1/3}\)Co\({}_{1/3}\)Mn\({}_{1/3}\)O\({}_{2}\) (_x_=1.0 and 0.67). We highlight that our approach allows the utilization of all _ab initio_ results as a training dataset, regardless of the system being in a spin ground state or not. Overall, our proposed approach paves the way for efficiently training MLPs for complex TM oxide cathode materials.

## Introduction

Lithium-ion batteries (LIBs) have revolutionized portable electronic devices and electric vehicles due to their high energy density, light weight, and long cycle life [1, 2, 3, 4]. The cathode, as one of the fundamental components of LIBs, plays a crucial role in determining a battery's performance and cost [5, 6]. However, improving the structural stability under high operating voltage for almost all cathode materials remains challenging [7, 8, 9, 10]. Experimentally, many strategies, often relying on trial and error, have been devoted to enhancing the structural stability [8, 11, 12, 13, 10]. Benefiting from improvements in computational capacity, atomic-level simulations have become increasingly valuable in helping interpret experimental observations and guide materials design [14, 15, 16]. Molecular dynamics (MD) simulations, in particular, provide comprehensive dynamic evolutions at the atomic scale, making them widely used in LIB research [17, 18, 19]. Owing to their favorable trade-off between accuracy and efficiency, machine learning potentials (MLPs) [20, 21, 22] have found extensive applications recently in describing the complete dynamics of critical processes in LIBs. For example, MLPs have been successfully applied to the solid electrolyte interface [23], the Si and Li metal anodes [24, 25], and organic/solid-state electrolytes [26, 27]. However, to the best of our knowledge, only a few studies have reported MLPs for Li battery cathodes [28, 29], primarily because _ab initio_ calculations face numerous challenges when dealing with complicated transition metal (TM) oxide cathodes. On the one hand, TM ions undergo changes in their oxidation states during the lithium insertion/extraction process, and there are multiple possible magnetic configurations (e.g., high-spin (HS), intermediate-spin (IS), and low-spin (LS) states) even for a TM ion at a specific valence state. On the other hand, density functional theory with the Hubbard U [30] (DFT+U) method is usually applied to these strongly correlated systems to correct the so-called self-interaction errors. A relatively random initial wavefunction or small structural perturbations could result in different localization of \(d\)-orbital electrons on the TM ion sites/orbitals, and convergence to different potential energy surfaces (with respect to the degrees of freedom of the TM ions' spin states). The above issues pose significant challenges for constructing cathodes' MLPs. 
Therefore, it is highly desirable to seek a systematic approach to effectively address those challenges. In our previous work, we developed a workflow to identify the magnetic ground state for the LiCoO\({}_{2}\) cathode.[28] Subsequently, we successfully constructed the deep potential (DP) model [31], one of the popular MLPs, for the Li\({}_{x}\)CoO\({}_{2}\) material with different phases and a wide range of concentrations. Specifically, we first confirmed the magnetic ground state and then excluded the non-ground-state (with respect to spin configurations) DFT\(+\)U single-point results. However, we realized that the above workflow has two drawbacks: (1) it relies on manual interventions, which limits its application to more complex TM oxide cathode materials; (2) it leads to substantial data wastage due to the exclusion of systems at non-ground spin states, which may make the construction of ground-state MLPs expensive. Here we emphasize that DFT\(+\)U results converging to a non-ground state do not mean that the corresponding electronic-structure calculations are incorrect. As long as the self-consistent-field calculations converge successfully, the obtained electronic-structure results and the associated spin configurations are valid; they are just likely to be local minimums rather than global minimums with respect to the degrees of freedom of the TM ions' spin states. A question then naturally arises: how can we automatically determine the magnetic ground state while fully utilizing the data produced by DFT\(+\)U single-point calculations in the MLP training process? Recently, Xu et al. [32] developed the DeePSPIN model to simulate the simultaneous evolution of both the lattice and the spins in magnetic materials. Although the DeePSPIN model's primary goal is to handle lattice-spin interactions in complex magnetic systems, its key idea of incorporating the spin into the descriptor inspires us to resolve the cathode-material MLP training challenges mentioned above. In fact, the spin configuration of TM ions is an additional feature beyond the atomic coordinates for TM oxide cathodes. In other words, for a specific structure, multiple spin states may exist, corresponding to the spin ground state and "excited" states (illustrated in **Figure 1a**). Therefore, in this work, by employing a collinear DeePSPIN framework, we also integrate the spin into the descriptor (as depicted in **Figure 1b**), aiming to distinguish different spin states of TM ions for a specific atomic structure of the studied system. We demonstrate that our proposed scheme can provide accurate descriptions for the potential energies of various representative cathode materials, from simple LiTMO\({}_{2}\) (TM=Ni, Co, Mn) to complex ternary NCM cathodes. More importantly, the current workflow does not require any manual intervention and can fully utilize all data generated from DFT+U single-point calculations, which avoids the identification of ground states and data wastage, thus enabling a more efficient and automated workflow.

## Methodology

### The Collinear DeePSPIN Model

Here we briefly introduce the implementation of the DeePSPIN model [32]. In this framework, a virtual atom \(\mathbf{R}_{i}^{\prime}\) is introduced near a magnetic atom \(\mathbf{R}_{i}\). 
The position of the virtual atom is given by the following relationship,

\[\mathbf{R}_{i}^{\prime}=\mathbf{R}_{i}+\eta\mathbf{S}_{i},\quad i=1,...,N \tag{1}\]

where \(\mathbf{S}_{i}\) is a three-dimensional vector in the _non-collinear_ framework and \(N\) denotes the number of magnetic atoms. \(\eta\) is a hyperparameter, called "virtual length", which is used to control the Euclidean distance between the virtual atom \(\mathbf{R}_{i}^{\prime}\) and the real atom \(\mathbf{R}_{i}\).

Figure 1: (a) Schematic plot of the potential energy surface containing both the degrees of freedom of geometric configurations and spin states. (b) General framework of the DeePSPIN model adopted in this work. (c) Atomic structures of the test systems in this study.

Different from the magnetic-moment data format required by the original (non-collinear) DeePSPIN scheme, all DFT\(+\)U calculations in the field of battery cathode simulations are typically performed within the _collinear_ framework. Therefore, the atomic spin, denoted as S\({}_{i}\), is actually a scalar; positive and negative values indicate the spin-up and spin-down states, respectively. Formula (1) can then be rewritten as

\[z_{i}^{\prime}=z_{i}+\eta\mathrm{S}_{i},\quad i=1,...,N \tag{2}\]

where \(z_{i}^{\prime}\) (\(z_{i}\)) represents the z-component of the Cartesian coordinate of the virtual (real) atoms, and \(\mathrm{S}_{i}\) is the magnetic moment of the \(i\)-th magnetic atom projected on the \(z\) direction. In addition, we introduce a constant \(d\) in formula (2) to avoid the overlap of the real and virtual atoms when spins are close to zero:

\[z_{i}^{\prime}=z_{i}+\eta\mathrm{S}_{i}+d,\quad i=1,...,N \tag{3}\]

By introducing virtual atoms, we effectively represent information of both the atomic geometries and spin states within the extended atomic coordinates, allowing for a more complete description of the system.

### Density Functional Theory Calculations

Spin-polarized density functional theory (DFT) calculations in this work were performed within the Vienna Ab initio Simulation Package (VASP, version 5.4.4) [33, 34]. We employed the projector augmented wave (PAW) [35] potentials for modeling the nuclei and the frozen-core electrons of all atoms. The valence electron configurations were 2s\({}^{2}\)2p\({}^{4}\) for O, 3d\({}^{8}\)4s\({}^{1}\) for Co, 1s\({}^{2}\)2s\({}^{1}\) for Li, 3p\({}^{6}\)3d\({}^{6}\)4s\({}^{1}\) for Mn, and 3d\({}^{9}\)4s\({}^{1}\) for Ni. We applied the Perdew-Burke-Ernzerhof (PBE) functional [36] with the Hubbard U correction [30] (PBE\(+\)U) to the transition metals Co, Ni and Mn. The U values for Co, Ni and Mn were set as 5.14 eV, 6.30 eV and 3.9 eV, obtained from previous works [37, 38]. Unless specifically explained, a dense reciprocal-space mesh with a spacing of 0.25 Å\({}^{-1}\) and a 520 eV kinetic energy cutoff for the plane-wave basis were employed. The self-consistent field electronic-structure calculations were converged within 10\({}^{-5}\) eV for the total energies.

### Training of MLPs

The construction of all datasets used in this study is described in the Supporting Information (SI). We trained the collinear DeePSPIN model for 4\(\times\)10\({}^{6}\) steps using the deepmd-kit software (version 2.2.2) [39, 40]. The embedding network has three layers with 25, 50 and 100 nodes, and the fitting network is composed of three layers, each of which has 240 nodes. We used the Adam method [41] to minimize the loss function with an exponentially decaying learning rate from 1.00\(\times\)10\({}^{-3}\) to 3.51\(\times\)10\({}^{-8}\). 
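For orientation, the hyperparameters quoted above can be collected into a deepmd-kit-style training input. The sketch below, written as a Python dict that is dumped to JSON, uses standard DeePMD-kit key names; the cutoff radius, neighbor selection, loss pre-factors, decay steps and data path are illustrative placeholders not stated in the text, and the DeePSPIN variant used here may require additional or different keys, in particular for the spin-related term mentioned next.

```python
import json

# Sketch of a deepmd-kit-style input reflecting the training settings quoted
# above; values marked "assumed" are placeholders, not taken from the paper.
config = {
    "model": {
        "type_map": ["Li", "Ni", "Co", "Mn", "O"],   # virtual-atom types would be added in DeePSPIN
        "descriptor": {
            "type": "se_e2_a",        # assumed descriptor flavor
            "rcut": 6.0,              # assumed cutoff radius (Angstrom)
            "sel": "auto",            # assumed neighbor selection
            "neuron": [25, 50, 100],  # embedding net: three layers of 25/50/100 nodes
        },
        "fitting_net": {"neuron": [240, 240, 240]},  # fitting net: three layers of 240 nodes
    },
    "learning_rate": {
        "type": "exp",                # exponentially decaying learning rate
        "start_lr": 1.0e-3,
        "stop_lr": 3.51e-8,
        "decay_steps": 5000,          # assumed
    },
    "loss": {
        "type": "ener",
        "start_pref_e": 0.02, "limit_pref_e": 1,     # assumed pre-factor schedule
        "start_pref_f": 1000, "limit_pref_f": 1,     # assumed pre-factor schedule
        # The DeePSPIN spin-force pre-factor is omitted here; as noted in the
        # text it is switched off for collinear DFT+U training data.
    },
    "training": {
        "numb_steps": 4000000,        # 4 x 10^6 training steps
        "training_data": {
            "systems": ["path/to/training/systems"],  # placeholder path
            "batch_size": "auto",
        },
    },
}

with open("input.json", "w") as fh:
    json.dump(config, fh, indent=2)
```

With standard DeePMD-kit such an input would be consumed by `dp train input.json`; the DeePSPIN fork presumably follows the same pattern.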
Due to the lack of spin-force labels in collinear DFT+U calculations, we turn off the spin-force pre-factor in the loss function during the training process.

## Results and Discussion

We first investigate the performance of the pristine DP model (the regular DP model without spin features in the descriptor [31]) for simple LiTMO\({}_{2}\) (TM = Ni, Co, Mn) cathodes (atomic structures are displayed in **Figure 1c**) by using the datasets generated by our previously developed workflow [28], where the ground-state results are identified and imported into the neural network training process. We construct the DP models for Li\({}_{x}\)CoO\({}_{2}\), Li\({}_{x}\)MnO\({}_{2}\) and Li\({}_{x}\)NiO\({}_{2}\) (\(x\) = 0.5 and 1.0). We note that the Co\({}^{3+}\)/Ni\({}^{3+}\) and Co\({}^{4+}\)/Ni\({}^{4+}\) ions are in the low-spin states (the magnetic moments are 0/1 \(\mu_{\text{B}}\) and 1/0 \(\mu_{\text{B}}\), respectively), while both the Mn\({}^{3+}\) and Mn\({}^{4+}\) ions are in the high-spin states (the magnetic moments are 4 \(\mu_{\text{B}}\) and 3 \(\mu_{\text{B}}\), respectively), based on well-established previous knowledge [28, 37, 42, 43]. We evaluate the performance of the traditional DP models by comparing energies and forces predicted by the DFT+U calculations and by the DP models. The root-mean-square errors (RMSEs) of energies and forces are 3.2 meV/atom and 115 meV/Å for Li\({}_{x}\)CoO\({}_{2}\), 3.3 meV/atom and 101 meV/Å for Li\({}_{x}\)MnO\({}_{2}\), and 2.9 meV/atom and 114 meV/Å for Li\({}_{x}\)NiO\({}_{2}\), respectively (\(x\) = 0.5 and 1.0, see **Figure 2**). Such small errors show that the pristine DP model can accurately describe the potential energy surfaces of magnetic TM oxide materials as long as the electronic structures of all the configurations in the training datasets are in the spin ground states.

Figure 2: Comparisons of the (a) energies and (b) forces predicted by the DFT+U calculations vs. the traditional DP model for Li\({}_{x}\)CoO\({}_{2}\), Li\({}_{x}\)MnO\({}_{2}\) and Li\({}_{x}\)NiO\({}_{2}\) cathodes (\(x=0.5\) and 1.0), where only the spin ground states' data are taken into consideration.

We obtain the MLPs by fitting a dataset generated by DFT+U single-point calculations and constructing a mapping between atomic coordinates and [energies + forces]. However, training an accurate MLP becomes challenging when data from multiple potential energy surfaces (labeled by different spin configurations) are mixed together. This situation resembles a one-to-many mapping, where a single atomic configuration corresponds to multiple energetic states which are actually associated with different spin states of the TM ions. To verify the above claim, we add the magnetic excited state data to the existing ground state dataset of LiTMO\({}_{2}\). We consider the high-spin states of the Co\({}^{3+}\)/Co\({}^{4+}\) (4/5 \(\mu_{\text{B}}\)) and Ni\({}^{3+}\)/Ni\({}^{4+}\) (3/4 \(\mu_{\text{B}}\)) ions, and the intermediate-spin state of the Mn\({}^{3+}\) (2 \(\mu_{\text{B}}\)) and the low-spin state of the Mn\({}^{4+}\) ions (1 \(\mu_{\text{B}}\)). We maintain the same training parameters as before. As expected, the pristine DP model now yields rather poor predictions for both energies and forces (see **Figure 3**). In particular, the RMSEs of the energy comparison are \(\sim\) 100 meV/atom for the test systems, which is unacceptable. The MLP's accuracy thus can be significantly affected by the presence of multiple potential energy surfaces associated with different spin states in the training dataset. 
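To make this concrete, the following minimal sketch (illustrative function and array names and illustrative \(\eta\), \(d\) values, not the authors' implementation) applies the collinear virtual-atom construction of formula (3): the same nuclear geometry combined with two different TM spin states produces two distinct sets of extended coordinates, so the one-to-many ambiguity described above disappears at the level of the model input.

```python
import numpy as np

def virtual_atom_positions(coords, spins, eta=0.3, d=0.3):
    """Collinear virtual-atom construction of formula (3).

    coords : (N, 3) Cartesian coordinates of the magnetic (TM) atoms.
    spins  : (N,) collinear magnetic moments S_i (sign encodes spin up/down).
    eta, d : "virtual length" and offset; the values here are illustrative.

    Returns the (N, 3) positions of the virtual atoms: x and y are copied
    from the real atoms, while z is shifted by eta * S_i + d.
    """
    virtual = np.asarray(coords, dtype=float).copy()
    virtual[:, 2] += eta * np.asarray(spins, dtype=float) + d
    return virtual

# The same Co site with a low-spin (0 mu_B) and a high-spin (4 mu_B) moment
# is mapped to two different virtual-atom positions, i.e. two different
# extended inputs for the descriptor.
co_site = np.array([[0.0, 0.0, 0.0]])
print(virtual_atom_positions(co_site, np.array([0.0])))  # virtual atom at z = 0.3
print(virtual_atom_positions(co_site, np.array([4.0])))  # virtual atom at z = 1.5
```

In the full model these virtual atoms are appended to the real atoms before the descriptor is evaluated, which is why spin states that share the same geometry are no longer indistinguishable to the network.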
By contrast, in the DeePSPIN model, different spin states of TM ions are represented by virtual atoms with distinct positions. Therefore, even if multiple spin states corresponding to the same geometric structure are mixed in the training dataset, they can be identified as different input data points. We then retrain on the above datasets using our collinear DeePSPIN model. We can see a remarkable improvement in the predictions for both energies and forces (see **Figure 4**). The RMSEs of energies are \(\sim\) 2-3 orders of magnitude lower than those predicted by the pristine DP model. Surprisingly, the energies' RMSEs for Li\({}_{x}\)CoO\({}_{2}\) and Li\({}_{x}\)NiO\({}_{2}\), as well as the forces' RMSE for Li\({}_{x}\)NiO\({}_{2}\), are even smaller than those given by the DP models that are trained solely on the spin ground states' dataset (as shown in **Figure 2**). Such encouraging performance of the collinear DeePSPIN model can be attributed to its ability to distinguish different spin states.

Figure 3: Comparisons of the (a) energies and (b) forces predicted by the DFT+U calculations vs. the pristine DP model for Li\({}_{x}\)CoO\({}_{2}\), Li\({}_{x}\)MnO\({}_{2}\) and Li\({}_{x}\)NiO\({}_{2}\) (\(x=0.5\) and 1.0) cathodes. In addition to the spin ground states' data, the spin excited states' data are also included.

We also realize that the positions of the virtual atoms (representing the information of the TM ions' spin states) may affect the accuracy of the collinear DeePSPIN model. We therefore further investigate the dependence of the model's accuracy on two key parameters, \(\eta\) and \(d\) in formula (3), involved in constructing the virtual atoms. We found that the model's performance remains stable when those two parameters change within a reasonable range (see **Figure S1**). For example, for the Li\({}_{x}\)CoO\({}_{2}\) case, the RMSEs of energies and forces fluctuate in the range of 2.2 - 3.2 meV/atom and 0.15 - 0.19 eV/Å, respectively, when the \(\eta\) and \(d\) values continuously increase from 0.1 to 0.5. These results demonstrate that our proposed collinear DeePSPIN model is robust for training cathode materials' MLPs. Another crucial aspect is whether the training strategy needs to be modified after introducing an additional degree of freedom in the descriptor. We thus test the impact of the pre-factor of the atomic force in the loss function on the model's accuracy. We find that the force pre-factor has a negligible effect on the model's performance (see **Table S3**). In other words, we can still employ a training strategy similar to that of the pristine DP model, which involves progressively increasing the energy pre-factor and decreasing the force pre-factor in the loss function, so that the force term dominates initially while the energy becomes more important at the end.[31] We have already performed systematic tests to evaluate the robustness of the collinear DeePSPIN model for training MLPs of complex TM oxide cathode materials. Because of the introduction of spin features into the descriptor, the collinear DeePSPIN model is able to distinguish potential energies associated with different spin states. Before applying the collinear DeePSPIN model to more complicated application systems, we need to address one more question: can the collinear DeePSPIN model provide accurate predictions for DFT+U data obtained from completely random initial-guess wavefunctions? 
Here, we note that a completely random initial wavefunction refers to the situation in which neither the initial magnetic moments are set nor the total magnetic moment is controlled in the DFT+U single-point calculations. Taking LiCoO\({}_{2}\) and LiNiO\({}_{2}\) as examples, we compare the performance of the pristine DP and the collinear DeePSPIN models (see **Figure S2**). We can see that the pristine DP model provides rather poor predictions, especially for LiCoO\({}_{2}\): the RMSEs for energies and forces are \(\sim\) 11 meV/atom and 229 meV/Å, respectively. In contrast, the collinear DeePSPIN model provides a much better description for both energies and forces (the RMSEs of energies and forces are \(\sim\) 2.8 meV/atom and 130 meV/Å for Li\({}_{x}\)CoO\({}_{2}\), and \(\sim\) 2.0 meV/atom and 87 meV/Å for Li\({}_{x}\)NiO\({}_{2}\), respectively). More importantly, these tests also demonstrate that our proposed collinear DeePSPIN scheme can fully utilize all data generated from DFT+U single-point calculations, regardless of the magnetic states being ground states or not.

Figure 4: Comparisons of the (a) energies and (b) forces predicted by the DFT+U calculations vs. the collinear DeePSPIN model for Li\({}_{x}\)CoO\({}_{2}\), Li\({}_{x}\)MnO\({}_{2}\) and Li\({}_{x}\)NiO\({}_{2}\) (\(x=0.5\) and \(1.0\)) cathodes. In addition to the spin ground states' data, the spin excited states' data are also included.

As a final example, we apply the collinear DeePSPIN model to more complicated cathode materials to further verify the model's robustness. Here, we consider the following three cases: Li-Ni anti-site defects in LiNiO\({}_{2}\), the Co-free high-Ni binary Li\({}_{x}\)Ni\({}_{1.5}\)Mn\({}_{0.5}\)O\({}_{4}\) (\(x\)=1.5 and 0.5), and the ternary Li\({}_{x}\)Ni\({}_{1/3}\)Co\({}_{1/3}\)Mn\({}_{1/3}\)O\({}_{2}\) (\(x\)=1.0 and 0.67) cathode materials (atomic structures are displayed in **Figure 1c**; please refer to the SI for details of the training dataset construction). We emphasize that completely random initial wavefunctions were used in all DFT+U calculations. We can see that even for such complex cases, our collinear DeePSPIN model still reproduces well the energies and forces given by the DFT+U calculations (see **Figure 5**), while the pristine DP model exhibits lower accuracy, especially for the energies (see **Figure S3**). We note that the collinear DeePSPIN model's accuracy can be further improved by using the DPGEN [44] concurrent learning process to actively collect the dataset. Therefore, the collinear DeePSPIN model serves as an effective and accurate tool to construct MLPs for complex cathode materials.

## Conclusion and Outlook

In this work, we develop a new deep neural network framework based on the pristine DP model by incorporating the atomic spin feature into the descriptor to distinguish different spin states of TM ions in complex cathode materials. We employ a series of test systems, from simple LiTMO\({}_{2}\) (TM=Ni, Co, Mn) to complex ternary NCM cathode materials, to verify the accuracy of our proposed collinear DeePSPIN model, which is demonstrated to reproduce well the energies and forces obtained by DFT+U calculations. More importantly, all results generated by DFT+U single-point calculations can be utilized and included in the dataset for training MLPs, regardless of the spin configurations being in a ground state or not. 
Overall, our proposed scheme provides a promising tool to efficiently train MLPs for complex TM oxide cathode materials. Upon obtaining a robust MLP, we then need to perform MD simulations based on this MLP to derive a dynamic trajectory.

Figure 5: Comparisons of the (a) energies and (b) forces predicted by the DFT+U calculations vs. the collinear DeePSPIN model for Li-Ni anti-site defects in LiNiO\({}_{2}\), Li\({}_{x}\)Ni\({}_{1.5}\)Mn\({}_{0.5}\)O\({}_{4}\) (\(x=1.5\) and \(0.5\)) and Li\({}_{x}\)Ni\({}_{1/3}\)Co\({}_{1/3}\)Mn\({}_{1/3}\)O\({}_{2}\) (\(x=1\) and \(0.67\)).

In the current force-field model, the potential energies of the spin ground and excited states are all included, and the spin degree of freedom serves as an independent variable. We therefore need to conduct an energy minimization within the spin subspace at each specific atomic structure to obtain the ground-state energies and forces along an MD simulation path. Since the spin value obtained from collinear DFT+U calculations is a discrete scalar, traditional optimization algorithms for continuous variables may not be suitable. To address this issue, we plan to try two possible strategies. The first is to use the automatic differentiation machinery of the neural network to calculate the derivative of the energy with respect to the spin degree of freedom, yielding "forces" on the spin states of TM ions. Owing to the lack of spin-force labels in the training dataset, the spin forces obtained in this way may not be sufficiently accurate; however, they can still provide some guidance on the direction in which to change the spin states. The second strategy is to use global optimization algorithms, such as the genetic algorithm, the particle swarm algorithm, etc., to minimize the energy with respect to the spin degree of freedom, given its discrete character. Once the energy minimization with respect to the spin degree of freedom is completed, we can perform conventional MD simulations and interface the collinear DeePSPIN model to the DPGEN concurrent learning framework to actively collect the dataset. We are working on the development of these approaches.

## Acknowledgements

The authors gratefully acknowledge funding support from the DP Technology Corporation (Grant No. 2021110016001141), the Chinese Ministry of Science and Technology (Grant No. 2021YFB3800303), the National Natural Science Foundation of China (Grant No. 52273223), the School of Materials Science and Engineering at Peking University, and the AI for Science Institute, Beijing (AISI). The computing resource of this work was provided by the Bohrium Cloud Platform ([https://bohrium.dp.tech](https://bohrium.dp.tech)), which is supported by DP Technology.

## References

(1) Whittingham, M. S. Lithium Batteries and Cathode Materials. _Chem. Rev._**2004**, _104_, 4271-4302. (2) Ritchie, A.; Howard, W. Recent Developments and Likely Advances in Lithium-Ion Batteries. _J. Power Sources_**2006**, _162_, 809-812. (3) Whittingham, M. S. Ultimate Limits to Intercalation Reactions for Lithium Batteries. _Chem. Rev._**2014**, _114_, 11414-11443. (4) Armand, M.; Tarascon, J. M. Building Better Batteries. _Nature_**2008**, _451_, 652-657. (5) Islam, M. S.; Fisher, C. A. J. Lithium and Sodium Battery Cathode Materials: Computational Insights into Voltage, Diffusion and Nanostructural Properties. _Chem. Soc. Rev._**2014**, _43_, 185-204. (6) Chakraborty, A.; Kunnikuruvan, S.; Kumar, S.; Markovsky, B.; Aurbach, D.; Dixit, M.; Major, D. T. 
(7) Canepa, P.; Sai Gautam, G.; Hannah, D. C.; Malik, R.; Liu, M.; Gallagher, K. G.; Persson, K. A.; Ceder, G. Odyssey of Multivalent Cathode Materials: Open Questions and Future Challenges. _Chem. Rev._ **2017**, _117_, 4287-4341.
(8) Sharifi-Asl, S.; Lu, J.; Amine, K.; Shahbazian-Yassar, R. Oxygen Release Degradation in Li-Ion Battery Cathode Materials: Mechanisms and Mitigating Approaches. _Adv. Energy Mater._ **2019**, _9_, 1900551.
(9) Dong, Y.; Li, J. Oxide Cathodes: Functions, Instabilities, Self Healing, and Degradation Mitigations. _Chem. Rev._ **2023**, _123_, 811-833.
(10) Zhang, H.; Liu, H.; Piper, L. F. J.; Whittingham, M. S.; Zhou, G. Oxygen Loss in Layered Oxide Cathodes for Li-Ion Batteries: Mechanisms, Effects, and Mitigation. _Chem. Rev._ **2022**, _122_, 5641-5681.
(11) Ning, F.; Li, B.; Song, J.; Zuo, Y.; Shang, H.; Zhao, Z.; Yu, Z.; Chu, W.; Zhang, K.; Feng, G.; et al. Inhibition of Oxygen Dimerization by Local Symmetry Tuning in Li Rich Layered Oxides for Improved Stability. _Nat. Commun._ **2020**, _11_, 4973.
(12) Zhou, T.; Wang, H.; Wang, Y.; Jiao, P.; Hao, Z.; Zhang, K.; Xu, J.; Liu, J.-B.; He, Y.-S.; Zhang, Y.-X.; et al. Stabilizing Lattice Oxygen in Slightly Li-Enriched Nickel Oxide Cathodes toward High-Energy Batteries. _Chem_ **2022**, _8_, 2817-2830.
(13) Wang, L.; Chen, B.; Ma, J.; Cui, G.; Chen, L. Reviving Lithium Cobalt Oxide-Based Lithium Secondary Batteries-toward a Higher Energy Density. _Chem. Soc. Rev._ **2018**, _47_, 6505-6602.
(14) Eng, A. Y. S.; Soni, C. B.; Lum, Y.; Khoo, E.; Yao, Z.; Vineeth, S. K.; Kumar, V.; Lu, J.; Johnson, C. S.; Wolverton, C.; et al. Theory-Guided Experimental Design in Battery Materials Research. _Sci. Adv._ **8**, eabm2422.
(15) Radin, M. D.; Hy, S.; Sina, M.; Fang, C.; Liu, H.; Vinckeviciute, J.; Zhang, M.; Whittingham, M. S.; Meng, Y. S.; Van der Ven, A. Narrowing the Gap between Theoretical and Practical Capacities in Li-Ion Layered Oxide Cathode Materials. _Adv. Energy Mater._ **2017**, _7_, 1602888.
(16) Takenaka, N.; Bouibes, A.; Yamada, Y.; Nagaoka, M.; Yamada, A. Frontiers in Theoretical Analysis of Solid Electrolyte Interphase Formation Mechanism. _Adv. Mater._ **2021**, 2100574.
(17) Yao, N.; Chen, X.; Fu, Z.-H.; Zhang, Q. Applying Classical, Ab Initio, and Machine-Learning Molecular Dynamics Simulations to the Liquid Electrolyte for Rechargeable Batteries. _Chem. Rev._ **2022**, _122_, 10970-11021.
(18) Van der Ven, A.; Deng, Z.; Banerjee, S.; Ong, S. P. Rechargeable Alkali-Ion Battery Materials: Theory and Computation. _Chem. Rev._ **2020**, _120_, 6977-7019.
(19) Bedrov, D.; Piquemal, J.-P.; Borodin, O.; MacKerell, A. D.; Roux, B.; Schroder, C. Molecular Dynamics Simulations of Ionic Liquids and Electrolytes Using Polarizable Force Fields. _Chem. Rev._ **2019**, _119_, 7940-7995.
(20) Unke, O. T.; Chmiela, S.; Sauceda, H. E.; Gastegger, M.; Poltavsky, I.; Schutt, K. T.; Tkatchenko, A.; Muller, K.-R. Machine Learning Force Fields. _Chem. Rev._ **2021**, _121_, 10142-10186.
(21) Behler, J.
Four Generations of High-Dimensional Neural Network Potentials. Chem. Rev._**2021**, _121_, 10037-10072. * (22) Manzhos, S.; Carrington, T. Neural Network Potential Energy Surfaces for Small Molecules and Reactions. _Chem. Rev._**2021**, _121_, 10187-10217. * (23) Hu, T.; Tian, J.; Dai, F.; Wang, X.; Wen, R.; Xu, S. Impact of the Local Environment on Li Ion Transport in Inorganic Components of Solid Electrolyte Interphases. _J. Am. Chem. Soc._**2023**, _145_, 1327-1333. * (24) Fu, F.; Wang, X.; Zhang, L.; Yang, Y.; Chen, J.; Xu, B.; Ouyang, C.; Xu, S.; Dai, F.-Z.; E, W. Unraveling the Atomic-Scale Mechanism of Phase Transformations and Structural Evolutions During (De)Lithiation in Si Anodes. _Adv. Funct. Mater._**2023**, 2303936. * (25) Jiao, J.; Lai, G.; Zhao, L.; Lu, J.; Li, Q.; Xu, X.; Jiang, Y.; He, Y.-B.; Ouyang, C.; Pan, F.; et al. Self-Healing Mechanism of Lithium in Lithium Metal. _Adv. Sci._**2022**, 2105574. * (26) Dajnowicz, S.; Agarwal, G.; Stevenson, J. M.; Jacobson, L. D.; Ramezanghorbani, F.; Leswing, K.; Friesner, R. A.; Halls, M. D.; Abel, R. High-Dimensional Neural Network Potential for Liquid Electrolyte Simulations. _J. Phys. Chem. B_**2022**, _126_, 6271-6280. * (27) Huang, J.; Zhang, L.; Wang, H.; Zhao, J.; Cheng, J.; E, W. Deep Potential Generation Scheme and Simulation Protocol for the Li\({}_{10}\)GeP\({}_{2}\)S\({}_{12}\)-Type Superionic Conductors. _J. Chem. Phys._**2021**, _154_, 094703. * (28) Hu, T.; Dai, F.-z.; Zhou, G.; Wang, X.; Xu, S. Unraveling the Dynamic Correlations between Transition Metal Migration and the Oxygen Dimer Formation in the Highly Delithiated Li\({}_{x}\)CoO\({}_{2}\) Cathode. _J. Phys. Chem. Lett._**2023**, _14_, 3677-3684. * (29) Zhang, P.; Shang, C.; Liu, Z.; Yang, J.-H.; Gong, X.-G. Origin of Performance Degradation in High-Delithiation Li\({}_{x}\)CoO\({}_{2}\): Insights from Direct Atomic Simulations Using Global Neural Network Potentials. _J. Mater. Chem. A_**2023**, _11_, 5370-5379. * (30) Anisimov, V. I.; Zaanen, J.; Andersen, O. K. Band Theory and Mott Insulators: Hubbard U Instead of Stoner I. _Phys. Rev. B_**1991**, _44_, 943-954. * (31) Zhang, L.; Han, J.; Wang, H.; Car, R.; E, W. Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics. _Phys. Rev. Lett._**2018**, _120_, 143001. * (32) Yang, T.; Cai, Z.; Huang, Z.; Tang, W.; Shi, R.; Godfrey, A.; Liu, H.; Lin, Y.; Nan, C.-W.; Zhang, L.; et al. Deep Learning Illuminates Spin and Lattice Interaction in Magnetic Materials. **2023**, arXiv:2304.09606. * (33) Kresse, G.; Furthmuller, J. Efficient Iterative Schemes for Ab Initio Total-Energy Calculations Using a Plane-Wave Basis Set. _Phys. Rev. B_**1996**, _54_, 11169-11186. * (34) Kresse, G.; Furthmuller, J. Efficiency of Ab-Initio Total Energy Calculations for Metals and Semiconductors Using a Plane-Wave Basis Set. _Comput. Mater. Sci._**1996**, \(6\), 15-50. * (35) Blochl, P. E. Projector Augmented-Wave Method. _Phys. Rev. B_**1994**, _50_, 17953-17979. * (36) Grimme, S. Semiempirical GGA-Type Density Functional Constructed with a Long-Range Dispersion Correction. _J. Comput. Chem._**2006**, _27_, 1787-1799. * (37) Zhou, F.; Cococcioni, M.; Marianetti, C. A.; Morgan, D.; Ceder, G. First-Principles Prediction of Redox Potentials in Transition-Metal Compounds with $Mathrm{Lda}+US. _Phys. Rev. B_**2004**, _70_, 235121. * (38) Xu, S.; Luo, G.; Jacobs, R.; Fang, S.; Mahanthappa, M. K.; Hamers, R. J.; Morgan, D. 
Ab Initio Modeling of Electrolyte Molecule Ethylene Carbonate Decomposition Reaction on Li(Ni,Mn,Co)O\({}_{2}\) Cathode Surface. _ACS Appl. Mater. Interfaces_ **2017**, _9_, 20545-20553.
(39) Wang, H.; Zhang, L.; Han, J.; E, W. Deepmd-Kit: A Deep Learning Package for Many-Body Potential Energy Representation and Molecular Dynamics. _Comput. Phys. Commun._ **2018**, _228_, 178-184.
(40) Zeng, J.; Zhang, D.; Lu, D.; Mo, P.; Li, Z.; Chen, Y.; Rynik, M.; Huang, L.; Li, Z.; Shi, S.; et al. Deepmd-Kit V2: A Software Package for Deep Potential Models. _J. Chem. Phys._ **2023**, _159_, 054801.
(41) Kingma, D. P.; Ba, J. Adam: A Method for Stochastic Optimization. **2014**, arXiv:1412.6980.
(42) Reed, J.; Ceder, G. Role of Electronic Structure in the Susceptibility of Metastable Transition-Metal Oxide Structures to Transformation. _Chem. Rev._ **2004**, _104_, 4513-4534.
(43) Chevrier, V. L.; Ong, S. P.; Armiento, R.; Chan, M. K. Y.; Ceder, G. Hybrid Density Functional Calculations of Redox Potentials and Formation Energies of Transition Metal Compounds. _Phys. Rev. B_ **2010**, _82_, 075122.
(44) Zhang, Y.; Wang, H.; Chen, W.; Zeng, J.; Zhang, L.; Wang, H.; E, W. DP-Gen: A Concurrent Learning Platform for the Generation of Reliable Deep Learning Based Potential Energy Models. _Comput. Phys. Commun._ **2020**, _253_, 107206.

**Supporting Information**

**A Spin-dependent Machine Learning Framework for Transition Metal Oxide Battery Cathode Materials**

Taiping Hu\({}^{1,2}\), Teng Yang\({}^{3}\), Jianchuan Liu\({}^{4}\), Bin Deng\({}^{5}\), Zhengtao Huang\({}^{3}\), Xiaoxu Wang\({}^{5}\), Fuzhi Dai\({}^{2}\), Guobing Zhou\({}^{6}\), Fangjia Fu\({}^{2,7}\), Ping Tuo\({}^{2}\), Ben Xu\({}^{*,3}\), and Shenzhen Xu\({}^{*,1,2}\)

\({}^{1}\)Beijing Key Laboratory of Theory and Technology for Advanced Battery Materials, School of Materials Science and Engineering, Peking University, Beijing 100871, People's Republic of China
\({}^{2}\)AI for Science Institute, Beijing 100084, People's Republic of China
\({}^{3}\)Graduate School of China Academy of Engineering Physics, Beijing 100088, People's Republic of China
\({}^{4}\)HEDPS, CAPT, College of Engineering and School of Physics, Peking University, Beijing 100871, People's Republic of China
\({}^{5}\)DP Technology, Beijing 100080, People's Republic of China
\({}^{6}\)Institute of Advanced Materials, Jiangxi Normal University, Nanchang 330022, People's Republic of China
\({}^{7}\)School of Mathematical Sciences, Peking University, Beijing 100871, People's Republic of China

Corresponding authors: [email protected], [email protected]

## Computational Details

### Generation of the Dataset

The number of frames in the training dataset for each system is listed in Tables S1 and S2.

#### Li\({}_{x}\)CoO\({}_{2}\)/Li\({}_{x}\)MnO\({}_{2}\)/Li\({}_{x}\)NiO\({}_{2}\) (\(x=0.5\) and \(1.0\))

The datasets for the LiCoO\({}_{2}\) and Li\({}_{0.5}\)CoO\({}_{2}\) were extracted from our previous work [1]. For the Li\({}_{x}\)MnO\({}_{2}\) and Li\({}_{x}\)NiO\({}_{2}\), we followed a workflow similar to that of our previous work to obtain the datasets. The low spin states of the Co\({}^{3+}\), Co\({}^{4+}\), Ni\({}^{3+}\) and Ni\({}^{4+}\) are energetically favorable, while the high spin states of the Mn\({}^{3+}\) and Mn\({}^{4+}\) are more stable [2, 3, 4]. We used a 2\(\times\)2\(\times\)1 supercell for all systems (the chemical formula is Li\({}_{x}\)TM\({}_{12}\)O\({}_{24}\)).
We employed a method similar to that of our previous work to obtain the dataset for the spin ground state and excited states. Specifically, we set the initial magnetic moment guesses and controlled the total magnetic moment by using the MAGMOM and NUPDOWN keywords in the VASP input file.

#### Li-Ni anti-site defects in LiNiO\({}_{2}\)

Based on the optimized structure of the LiNiO\({}_{2}\), we carried out random exchanges of Li-Ni pairs. Subsequently, the atomic positions and the unit cell were fully relaxed. The forces along each direction for every atom in the modeling supercells were converged to within 0.01 eV/Å. We then added random perturbations to the atomic positions and the unit cell to generate about 200 configurations. Finally, self-consistent field calculations were performed for those structures to obtain the energies, forces and projected magnetic moments. We note that random initial wavefunctions were used in all DFT+U single-point calculations.

#### Li\({}_{x}\)Ni\({}_{1.5}\)Mn\({}_{0.5}\)O\({}_{4}\) (\(x=0.5\) and \(1.5\)) and Li\({}_{x}\)Ni\({}_{1/3}\)Co\({}_{1/3}\)Mn\({}_{1/3}\)O\({}_{2}\) (\(x=1\) and \(0.67\))

We obtained the initial structures of the LiNi\({}_{1.5}\)Mn\({}_{0.5}\)O\({}_{4}\) from the Materials Project [5] and the LiNi\({}_{1/3}\)Co\({}_{1/3}\)Mn\({}_{1/3}\)O\({}_{2}\) from previous work [6, 7]. For the Li\({}_{x}\)Ni\({}_{1/3}\)Co\({}_{1/3}\)Mn\({}_{1/3}\)O\({}_{2}\) case, we used a supercell with the chemical formula Li\({}_{27}\)Ni\({}_{9}\)Co\({}_{9}\)Mn\({}_{9}\)O\({}_{54}\); therefore, a 2\(\times\)2\(\times\)1 Monkhorst-Pack k-point mesh was used to sample the Brillouin zone. Both the lattice parameters and the ionic positions were fully relaxed during the optimizations. We then created several Li-vacancy structures by randomly removing a specific number of Li ions. All those Li-vacancy structures were further fully relaxed. The forces along each direction for every atom in the modeling supercells were converged to within 0.01 eV/Å in the structural relaxation calculations. We then added perturbations to the ionic positions and the unit cell to generate configurations for each system. Finally, self-consistent field calculations were performed for those structures to obtain the energies, forces and projected magnetic moments. We note that random initial wavefunctions were used in all DFT+U calculations.
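The perturbation step described above (randomly displacing the atomic positions and straining the unit cell before running single-point DFT+U calculations) reduces to a few lines of array manipulation. The following is a minimal, hypothetical sketch in plain NumPy; the perturbation magnitudes, the array layout, and the placeholder structure are our own assumptions for illustration, not the actual workflow settings (in practice, tools such as ASE or dpdata would typically handle the structure I/O).

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(cell, frac_coords, pos_sigma=0.01, cell_sigma=0.01):
    """Return a randomly perturbed copy of a structure.

    cell        : (3, 3) lattice vectors in Å (rows)
    frac_coords : (n_atoms, 3) fractional coordinates
    pos_sigma   : std. dev. of Cartesian displacements in Å (assumed value)
    cell_sigma  : relative strain amplitude applied to the cell (assumed value)
    """
    # Small random strain of the lattice vectors.
    strain = np.eye(3) + cell_sigma * rng.uniform(-1.0, 1.0, size=(3, 3))
    new_cell = cell @ strain
    # Small random Cartesian displacements, mapped back to fractional coordinates.
    cart = frac_coords @ new_cell
    cart += rng.normal(0.0, pos_sigma, size=cart.shape)
    new_frac = (cart @ np.linalg.inv(new_cell)) % 1.0
    return new_cell, new_frac

# Example: generate ~200 perturbed configurations from one relaxed structure;
# each configuration would then be sent to a DFT+U single-point calculation.
cell0 = np.diag([8.2, 8.2, 14.1])              # placeholder lattice, not the real supercell
coords0 = rng.uniform(0.0, 1.0, size=(48, 3))  # placeholder fractional coordinates
configs = [perturb(cell0, coords0) for _ in range(200)]
print(len(configs), "perturbed configurations generated")
```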
2302.03423
Extreme multistability in symmetrically coupled clocks
Extreme multistability (EM) is characterized by the emergence of infinitely many coexisting attractors or continuous families of stable states in dynamical systems. EM implies complex and hardly predictable asymptotic dynamical behavior. We analyse a model for pendulum clocks coupled by springs and suspended on an oscillating base, and show how EM can be induced in this system by a specifically designed coupling. First, we uncover that symmetric coupling can increase the dynamical complexity. In particular, the coexistence of multiple isolated attractors and continuous families of stable periodic states is generated in a symmetric cross-coupling scheme of four pendulums. These coexisting infinitely many states are characterized by different levels of phase synchronization between the pendulums, including anti-phase and in-phase states. Some of the states are characterized by splitting of the pendulums into groups with silent sub-threshold and oscillating behavior, respectively. The analysis of the basins of attraction further reveals the complex dependence of EM on initial conditions.
Zhen Su, Jürgen Kurths, Yaru Liu, Serhiy Yanchuk
2023-02-07T12:16:13Z
http://arxiv.org/abs/2302.03423v1
# Extreme multistability in symmetrically coupled clocks ###### Abstract Extreme multistability (EM) is characterized by the emergence of infinitely many coexisting attractors or continuous families of stable states in dynamical systems. EM implies complex and hardly predictable asymptotic dynamical behavior. We analyse a model for pendulum clocks coupled by springs and suspended on an oscillating base, and show how EM can be induced in this system by a specifically designed coupling. First, we uncover that symmetric coupling can increase the dynamical complexity. In particular, the coexistence of multiple isolated attractors and continuous families of stable periodic states is generated in a symmetric cross-coupling scheme of four pendulums. These coexisting infinitely many states are characterized by different levels of phase synchronization between the pendulums, including anti-phase and in-phase states. Some of the states are characterized by splitting of the pendulums into groups with silent sub-threshold and oscillating behavior, respectively. The analysis of the basins of attraction further reveals the complex dependence of EM on initial conditions. **The coexistence of several asymptotic stable states for a dynamical system with fixed parameter values is called multistability. This phenomenon has been identified in diverse fields of science both experimentally and theoretically. Which asymptotic state the system will converge to is determined solely by its initial state. When the number of stable states is infinite, extreme multistability (EM) becomes a dominant feature. Understanding EM and its control is an important issue, because systems with EM offer even greater flexibility than those with finite multistability when switching from one stable state to another. We give an example of EM in a coupled pendulums model that takes into account an escapement mechanism as well as local and global couplings. We have paid a particular attention to the coupling structure that leads to the emergence of EM.** ## I Introduction Complex networks have largely enriched our understanding of a variety of complex dynamical systems in many fields, such as biology, ecology, climatology, sociology, and others [1; 2]. By modeling real-world systems as networks in which collections of dynamic nodes are connected by static or adaptive edges, one can study collective behaviors both analytically and numerically [3; 4; 5; 6; 7]. Synchronization is a ubiquitous dynamical phenomenon that has been observed in many natural and engineering systems [8; 9; 10; 11; 12]. Different types of synchronous patterns have been identified involving complete synchronization [13] (oscillators' states become asymptotically the same with time), cluster synchronization (a network splits into groups of synchronous elements) [14; 15], special types of spatial coexistence of coherent and incoherent states [16; 17; 18], and many others. Various patterns have been found in experimental contexts, such as optoelectronic networks [19], chemical networks [20], neural networks [21], ecological [22], and climate systems [23]. Apart from synchronization, multistability - the coexistence of several asymptotic stable states (attractors) for a given set of parameters - is another intriguing phenomenon which has been studied for decades in modern nonlinear science [24; 25; 26]. The final state of a system with multistability depends crucially on initial conditions. 
Multistability has also been observed in many areas of science, such as nonlinear optics [27], neuroscience [28], climate dynamics [29], laser physics [30], electronic oscillators [31], and in different classes of systems, such as weakly dissipative systems [32], systems with time delays [33; 34], and coupled systems [35]. Understanding the emergence of coexisting attractors is an important issue, and controlling multistability is an even more difficult task. When the number of coexisting attractors increases infinitely, EM emerges. In coupled systems, the presence of EM has been found to be closely related to partial synchrony [36]. By designing a specific coupling scheme to achieve partial synchrony, one can obtain infinitely many coexisting stable states [36; 37; 38; 39; 40]. Apart from the conservative cases, a common reason for the occurrence of EM in networks is time-reversibility, a special type of spatio-temporal symmetry [41; 42; 43; 44; 45]. Despite the extensive literature on multistable dynamical systems, the emergence of multistability or EM in networked dynamical systems remains a challenging problem due to a large number of possible routes to EM, some of which have yet to be discovered. Analytical and numerical challenges arise from the diversity of coupling topologies and the complexity of individual models. In this work, we address the multistability problem in a mathematical model of coupled clocks suspended on a rotating disc and additionally coupled with springs. The interaction of the clocks with the disc provides the global coupling among all clocks and therefore influences their behavior, similar to the interaction of the pedestrians with the bridge in the famous effect of crowd synchrony on the Millennium Bridge [46]. Such a global scheme has also proved useful in uncovering complex transient states [47]. The oscillating clocks are also locally coupled via springs. In Ref. [48], a similar system of three coupled clocks was studied. The following main results are obtained in this work:

* We generalize the system of three coupled clocks [48] into a network-coupled scenario allowing arbitrary coupling configurations.
* We investigate how different coupling topologies affect the multistability in systems of three and four coupled clocks. We observe that more symmetric coupling topologies can lead to more complex dynamics with higher multistability. A particularly rich case is the "cross-coupling" structure with "diagonal" spring couplings in the system of four coupled clocks. In such a case, we observe EM that combines a continuous family of stable attractors with different phase relations between the clocks. We provide an analytical and numerical description of this new phenomenon.
* Furthermore, we discuss how the discontinuity of the escapement mechanism affects the multistability in the system. The clocks within certain coupled groups (clusters) remain either silent or oscillating and in-phase synchronized due to the switching of the escapement mechanism. This leads to three qualitatively different discontinuity-induced types of attractors.

## II Model and measures

### General model

We first present a mathematical model of the \(N\) coupled pendulum clocks suspended on a rotating disc, see Fig. 1(A). The rotating disc provides a global coupling, while the springs allow for an arbitrary local coupling structure. Our model is a generalization of the system of three pendulums from Ref. [48].
The supporting base is placed at the origin \(O\) of the \(xy\) plane and it can oscillate freely around the axis perpendicular to the plane of Fig. 1; the angular deviation of the base is \(\theta\). The properties of the base are described by the moment of inertia \(B_{0}\) [kgm\({}^{2}\)], the stiffness of the spring connecting the base and the static support \(k_{\theta}\) [Nm], and the damping \(c_{\theta}\) [Nms]. Identical pendulums (marked colored filled circles) are suspended at evenly distributed black points \(S_{i}\), \(i=1,2,3,...,N\), i.e., the angles between \(OS_{i}\) and \(OS_{i+1}\) (index \(i\) is considered mod \(N\)) are \(2\pi/N\). The angles \(\alpha_{i}=\sphericalangle(Ox,\ \overline{OS}_{i})\) characterize the angular position of the suspension points \(S_{i}\), where \(Ox\) is the positive \(x\) half-axis. The parameter \(d=|\overline{OS}_{i}|\) (\(i=1,2,...,n\)) is the distance between the origin \(O\) and each suspension point \(S_{i}\). Each pendulum is described by the angle displacement \(\varphi_{i}\), the mass \(m\) [kg], the length \(l\) [m], and the damping coefficient \(c_{\varphi}\) [Nms]. The stiffness coefficients of the springs are \(k_{\varphi}\) [N/m]. The description and the values for all parameters are summarized in Table 1. The equations of motion of the \(N\) coupled pendulums is given by the following system: \[\begin{split}&(B_{0}+mm^{2})\vartheta+k_{\theta}\theta+c_{ \theta}\vartheta+\sum_{i=1}^{n}mr\{[\bar{\varphi}_{i}\sin(\varphi_{i}-\theta -\alpha_{i})+\hat{\varphi}_{i}^{2}\cos(\varphi_{i}-\theta-\alpha_{i})]+g\cos (\alpha_{i}+\theta)\}+\Delta V_{\theta}=0,\\ & m^{2}\dot{\varphi}_{i}+mgl\sin\varphi_{i}+c_{\varphi}\dot{ \varphi}_{i}+mr[\bar{\theta}\sin(\varphi_{i}-\theta-\alpha_{i})-\dot{\theta}^ {2}\cos(\varphi_{i}-\theta-\alpha_{i})]+\Delta V_{\varphi_{i}}=M_{E_{i}},\end{split} \tag{1}\] where \(i=1,2,...,N\). The build-in escapement mechanism produces the moment of force, which is modeled by the discontinuous functions \(M_{E_{i}}\), \(i=1,2,3,...,N\)[49; 50]. These functions depend not only on the displacement \(\varphi_{i}(t)\), but also on the position of the \(i\)-th mechanism's cogwheel versus the mechanism's pallet \(\sigma_{i}(t)\): \[M_{E_{i}}=\begin{cases}M&:\sigma_{i}=1\wedge 0<\varphi_{i}<\varepsilon_{0}, \\ -M&:\sigma_{i}=2\wedge-\varepsilon_{0}<\varphi_{i}<0,\\ 0&:\text{otherwise}.\end{cases} \tag{2}\] Here \(M=0.075\) [Nm] represents the value of the external momentum, while \(\varepsilon_{0}=5.0^{\circ}\) denotes the escapement's threshold (the mechanism turns off as the pendulum exceeds this threshold). In fact, \(\sigma_{i}(t)\) become additional discrete-valued variables in the system that are influencing the system's dynamics via the terms \(M_{E_{i}}\) and which are changing discontinuously according to the following rules: 1. When a pendulum \(\varphi_{i}\) crosses the escapement threshold at some time moment \(t^{*}\): \(\varphi_{i}(t_{*})=\varepsilon_{0}\) with increasing \(\varphi_{i}\), i.e., \(\dot{\varphi}_{i}(t_{*})>0\), the variable \(\sigma_{i}(t)\) is set to \(2\) for all \(t\in[t^{*},t_{\text{e}})\), where \(t_{\text{e}}\) is the time of a next event. 2. When the pendulum \(\varphi_{i}\) crosses the escapement threshold \(\varphi_{i}(t_{*})=-\varepsilon_{0}\) with decreasing \(\varphi_{i}\), i.e., \(\dot{\varphi}_{i}(t_{*})<0\), the variable \(\sigma_{i}(t)\) is set to \(1\) for all \(t\in[t^{*},t_{\text{e}})\), where \(t_{\text{e}}\) is the time moment of a next crossing event. 
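These two switching rules, together with Eq. (2), define a simple hybrid state machine for each clock. The sketch below is a minimal Python illustration of this logic (the function and variable names are ours, not taken from the authors' code); it uses only the values \(M=0.075\) Nm and \(\varepsilon_{0}=5.0^{\circ}\) quoted above.

```python
import numpy as np

M_ESC = 0.075            # escapement momentum M [Nm]
EPS0 = np.deg2rad(5.0)   # escapement threshold epsilon_0 [rad]

def escapement_moment(phi, sigma):
    """Moment M_E(phi, sigma) of Eq. (2) for a single pendulum."""
    if sigma == 1 and 0.0 < phi < EPS0:
        return M_ESC
    if sigma == 2 and -EPS0 < phi < 0.0:
        return -M_ESC
    return 0.0

def update_sigma(phi_prev, phi, dphi, sigma):
    """Switching rules (I) and (II) for the discrete escapement state sigma.

    (I)  phi crosses +eps0 with dphi > 0  ->  sigma := 2
    (II) phi crosses -eps0 with dphi < 0  ->  sigma := 1
    """
    if phi_prev < EPS0 <= phi and dphi > 0.0:
        return 2
    if phi_prev > -EPS0 >= phi and dphi < 0.0:
        return 1
    return sigma
```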
In this way, the variables \(\sigma_{i}(t)\) are piece-wise constant with the possible discrete values \(1\) or \(2\), which change discontinuously when either event (I) or (II) occurs. The terms \(\Delta V_{\varphi_{i}}\) and \(\Delta V_{\theta}\) in model (1) describe the moments of forces from the coupling springs. These two terms can be written explicitly using the following terms: \(s_{ij}\), the constant distance between the \(i\)-th and \(j\)-th clocks when the system stays still, and \(\hat{s}_{ij}(t)\), the time-dependent distance between the \(i\)-th and \(j\)-th clocks for the moving system: \[\begin{array}{l}s_{ij}=r\sqrt{2(1-\cos(\alpha_{i}-\alpha_{j}))},\\ \hat{s}_{ij}=\sqrt{s_{ij}^{2}+2l^{2}(1-\cos(\varphi_{i}-\varphi_{j}))+8lr\sin \left(\frac{\varphi_{i}-\varphi_{j}}{2}\right)\sin\left(\frac{\alpha_{i}- \alpha_{j}}{2}\right)\sin\left(\frac{\varphi_{i}+\varphi_{j}-\alpha_{i}- \alpha_{j}}{2}-\theta\right)},\\ \Delta V_{\theta}=2lrk_{\varphi}\sum\limits_{i=1}^{n}a_{ij}\left(1-\frac{s_{ ij}}{\hat{s}_{ij}}\right)\sin\left(\frac{\varphi_{i}-\varphi_{j}}{2}\right)\sin \left(\frac{\varphi_{i}-\varphi_{j}}{2}\right)\cos\left(\frac{\varphi_{i}+ \varphi_{j}-\alpha_{i}-\alpha_{j}}{2}-\theta\right),\\ \Delta V_{\varphi_{i}}=\sum\limits_{i=1}^{n}a_{ij}k_{\varphi}l\left(1-\frac{s _{ij}}{\hat{s}_{ij}}\right)\left[l\sin(\varphi_{i}-\varphi_{j})+2r\sin\left( \frac{a_{i}-\alpha_{j}}{2}\right)\sin\left(\varphi_{i}-\frac{a_{i}+\alpha_{j}} {2}-\theta\right)\right],\end{array} \tag{3}\] where (\(a_{ij}\)) is the coupling matrix via the springs, i.e., \(a_{ij}=1\) if the pendulum \(i\) is connected with the pendulum \(j\) via a spring and \(a_{ij}=0\) otherwise. \(a_{ii}=0\) since there are no self-loops. The influence of different coupling structures on collective dynamics has not been systematically reported for this model. In the remaining part of this paper, we consider the cases \(N=3\) (Figs. 1(B)-(D)) and \(N=4\) (Figs. 1(E)-(G)). In particular, we focus on the following questions: Figure 1: (A) The scheme of \(N\) coupled identical pendulum clocks (shown as circles with different colors) suspended at evenly distributed black points \(S_{i}\), \(i=1,2,...,N\), on an oscillating supporting base (the \(xy\) plane). The local coupling is realized using springs between the clocks. (B)–(D): For \(N=3\), three types of coupling structures of springs include all-to-all, asymmetric, and symmetric topologies. (E)–(G): For \(N=4\), three types of coupling structures of springs include all-to-all, asymmetric, and symmetric topologies. 1. How do different coupling topologies alter synchronization states and their basins of attraction? 2. Is there extreme multistability in the coupled pendulum model? If so, what is its origin? 3. What are the effects of the discontinuity of the escapement mechanism on the dynamics and multistability? We fix the parameters as in Table 1. For \(N=3\), the three considered types of coupling structures of springs include all-to-all (Fig. 1(B)), asymmetric (Fig. 1(C)), and symmetric (Fig. 1(D); mirror symmetry with respect to the vertical axis) topologies. Identical pendulums are suspended at evenly distributed points \(S_{i}\), \(i=1,2,3\) with \(\sphericalangle(\overline{OS}_{i}\), \(\overline{OS}_{i+1})=120^{\circ}\). For \(N=4\), we also consider three types of coupling structures of springs: all-to-all (Fig. 1(D)), asymmetric (Fig. 1(E)), and symmetric (Fig. 1(F)) topologies. 
Here also the identical pendulums are suspended at evenly distributed points \(S_{i}\), \(i=1,2,3,4\) with \(\sphericalangle(\overline{OS}_{i}\), \(\overline{OS}_{i+1})=90^{\circ}\). We use Monte Carlo sampling and two classical measures (order parameter and mean frequencies) for the numerical quantification of synchronization states and the analysis of their basins of attraction. ### Measures _Order parameter._ We visualize the dynamics of the synchronization transitions with the Kuramoto order parameter: \[R(t)=\frac{1}{N}\sum_{i=1}^{N}e^{i\varphi_{i}(t)}, \tag{4}\] where \(N\) is the number of oscillators. When \(|R(t)|=1\) (\(|R(t)|\approx 0\)), oscillators are in the complete synchronization (disordered) state. The degree of synchronization in numerical simulations is quantified using the averaged value of the order parameter: \[r=\frac{1}{T_{\text{av}}}\int_{T_{\text{av}}}^{T_{\text{av}}+T_{\text{av}}}|R( t)|dt. \tag{5}\] over the time interval \(T_{\text{av}}=50\) after a sufficiently long transient time \(T_{\text{tr}}\). _Mean frequency._ The mean oscillation frequency of a pendulum is given as: \[\left\langle\omega_{i}\right\rangle=\frac{2\pi n_{i}}{T_{\text{av}}}, \tag{6}\] where the same time interval of \(T_{\text{av}}=50\) is applied and \(n_{i}\) represents the number of complete oscillations of \(i\)th clock within this interval. The number of complete oscillations can be computed using the number of crossings of the Poincare Figure 2: Distributions of the order parameter \(r\) for different coupling topologies. (A)-(C) show for 3-coupled pendulums (\(N=3\)), the identified multistabilities for all-to-all, asymmetric, and symmetric coupling structures (see right-upper corners), respectively. For each coupling topology, 5000 order parameters are obtained from 5000 simulations using random initial conditions. Parameters for \(N=3\) are fixed as in Table 1, in particular, \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\frac{7\pi}{2}\), and \(\alpha_{3}=\frac{11\pi}{2}\). Each vertical bar corresponds to a potential attractor. Similarly, (D)-(F) are multistabilities for 4-coupled pendulums (\(N=4\)), based on 1000 order parameters obtained from 1000 simulations. Parameters for \(N=4\) are also fixed as in Table 1, in particular, \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{3\pi}{2}\) and \(\alpha_{4}=2\pi\). For \(N=3\), compared with (A) and (C), the asymmetric coupling structure in (B) decreases the dynamical complexity; while for \(N=4\), compared with (E), (F) shows that the symmetric coupling can increase the dynamical complexity due to the emergence of EM. map \(\varphi_{i}=0\) or \(\varphi_{i}=\varepsilon_{0}\). The mean frequency is calculated using the last \(T_{\text{av}}=50\) time units after a sufficiently long transient time \(T_{\text{tr}}\). ## III Collective dynamics for different coupling topologies We first conduct various simulations of the system for random initial conditions. More specifically, we choose the following initial conditions [\(\theta^{0}=0.01,\phi_{1}^{0},\dots,\phi_{n}^{0},\dot{\theta}^{0}=0,\phi_{1}^{0}= 0,\dots,\phi_{n}^{0}=0\)], where \(\phi_{1}^{0},\dots,\phi_{n}^{0}\), are chosen randomly from the interval [\(-\pi,\pi\)]. For the case \(N=3\), simulations with 5,000 different initial conditions are performed with the integration time 15,000, and the last 50 time units are used for the calculation of the order parameter and the mean frequency. 
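Both measures can be evaluated directly from the stored phase trajectories. The following is a minimal post-processing sketch of Eqs. (4)-(6); the array names, the sampling step, and the choice of Poincaré threshold are our own assumptions, not the authors' analysis code.

```python
import numpy as np

def order_parameter(phis):
    """Kuramoto order parameter |R(t)| of Eq. (4) for phases phis of shape (n_steps, N)."""
    return np.abs(np.mean(np.exp(1j * phis), axis=1))

def averaged_order_parameter(phis, dt, t_av=50.0):
    """Time average r of |R(t)| over the last t_av time units, Eq. (5)."""
    n_av = int(round(t_av / dt))
    return float(np.mean(order_parameter(phis[-n_av:])))

def mean_frequencies(phis, dt, t_av=50.0, threshold=0.0):
    """Mean frequencies of Eq. (6) from upward crossings of the section phi = threshold."""
    n_av = int(round(t_av / dt))
    seg = phis[-n_av:]
    up = (seg[:-1] < threshold) & (seg[1:] >= threshold)  # upward crossings per time step
    n_cross = up.sum(axis=0)                              # complete oscillations per pendulum
    return 2.0 * np.pi * n_cross / t_av

# Usage on a hypothetical trajectory array phi_traj of shape (n_steps, N):
# r = averaged_order_parameter(phi_traj, dt=0.01)
# omega = mean_frequencies(phi_traj, dt=0.01)
```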
For \(N=4\), we perform 1,000 runs with the integration interval 10,000, and the transient 9,950. The \(r\) and \(\langle\omega_{i}\rangle\) from Eqs. (5) and (6), respectively, are used to estimate the synchronization state (attractor) in each simulation. We found that further increase of the number of runs and the integration interval does not affect the obtained results qualitatively. Figure 2 shows the distribution of the order parameter \(r\) for different initial conditions. This distribution reveals the possible number of different attractors. Figures 2(A)-(C) correspond to the coupling structures of three clocks in the Figs. 1(B)-(D), respectively. Here we see finitely many isolated lines indicating a relatively small number of possible synchronization states. Interestingly, the case of asymmetric coupling in Fig. 1(C) exhibits lower dynamical complexity as that shown in Fig. 2(B), since only one line of the distribution of \(r\) is achieved for all initial conditions. Figures 2(A) and 2(C) imply finite multistability with different possible asymptotic values of \(r\). For four coupled clocks (Figs. 2(D)-(F)), we also uncover that different coupling topologies lead to diverse dynamical complexities. Specifically, three different lines of \(r\) are observed in Figs. 2(D) and 2(E). More importantly, if the structure of the coupled clocks maintains the symmetry as Fig. 1(G), the distribution of the asymptotic order parameters in Fig. 2(F) is no longer discrete, but contains continuous parts. Such a distribution indicates higher complexity and even EM. In order to characterize deeper the emergence of EM, we focus on analytical and numerical explanations of this phenomenon in the following sections. ## IV Extreme multistability We recall that EM is potentially observed for the cross-coupling structure of four coupled clocks (Fig. 1(G)), where the distribution of asymptotic order parameters seems to be continuous (Fig. 2(F)). In this scheme, the opposite pendulums are connected by springs. The corresponding coupling matrix has four nonzero entries \(a_{13}=a_{31}=a_{24}=a_{42}=1\), and the angle position parameters are \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{3\pi}{2}\) and \(\alpha_{4}=2\pi\). 
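Before writing out the resulting equations of motion, note that this coupling configuration is fully specified by a symmetric adjacency matrix together with the four suspension angles. A small sketch (NumPy, with our own variable names) is:

```python
import numpy as np

N = 4
# Cross-coupling: only the diagonally opposite pendulums (1,3) and (2,4) share a spring.
a = np.zeros((N, N), dtype=int)
a[0, 2] = a[2, 0] = 1   # a_13 = a_31 = 1 (0-based indices)
a[1, 3] = a[3, 1] = 1   # a_24 = a_42 = 1
# Evenly distributed suspension points: alpha_i = pi/2, pi, 3*pi/2, 2*pi.
alpha = np.pi / 2 * np.arange(1, N + 1)
```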
The system (1) becomes: \[\begin{array}{l}(B_{0}+4mr^{2})\ddot{\theta}+k_{0}\theta+c_{\varphi}\dot{ \theta}+mr[-\ddot{\varphi}_{1}\cos(\theta-\varphi_{1})-\dot{\varphi}_{1}^{2} \sin(\theta-\varphi_{1})]+mr[\ddot{\varphi}_{2}\sin(\theta-\varphi_{2})-\dot{ \varphi}_{2}^{2}\cos(\theta-\varphi_{2})]\\ +mr[\ddot{\varphi}_{3}\cos(\theta-\varphi_{3})+\dot{\varphi}_{3}^{2}\sin( \theta-\varphi_{3})]+mr[-\ddot{\varphi}_{4}\sin(\theta-\varphi_{4})+\dot{ \varphi}_{4}^{2}\cos(\theta-\varphi_{4})]+\Delta V_{\theta}=0,\\ ml^{2}\ddot{\varphi}_{1}+mgl\sin\varphi_{1}+c_{\varphi}\dot{\varphi}_{1}+mr[ -\ddot{\theta}\cos(\theta-\varphi_{1})+\dot{\theta}^{2}\sin(\theta-\varphi_{ 1})]+\Delta V_{\varphi_{1}}=M_{E_{1}},\\ ml^{2}\dot{\varphi}_{2}+mgl\sin\varphi_{2}+c_{\varphi}\dot{\varphi}_{2}+mr[ \ddot{\theta}\sin(\theta-\varphi_{2})+\dot{\theta}^{2}\cos(\theta-\varphi_{2} )]+\Delta V_{\varphi_{2}}=M_{E_{2}},\\ ml^{2}\dot{\varphi}_{3}+mgl\sin\varphi_{3}+c_{\varphi}\dot{\varphi}_{3}+mr[ \ddot{\theta}\cos(\theta-\varphi_{3})-\dot{\theta}^{2}\sin(\theta-\varphi_{3} )]+\Delta V_{\varphi_{3}}=M_{E_{3}},\\ ml^{2}\dot{\varphi}_{4}+mgl\sin\varphi_{4}+c_{\varphi}\dot{\varphi}_{4}+mr[ -\ddot{\theta}\sin(\theta-\varphi_{4})-\dot{\theta}^{2}\cos(\theta-\varphi_{4 })]+\Delta V_{\varphi_{4}}=M_{E_{4}},\end{array} \tag{7}\] where \[\begin{array}{l}\Delta V_{\theta}=4lrk_{\varphi}\left[\left(1-\frac{s_{24}} {s_{24}}\right)\sin\left(\frac{\varphi_{2}-\varphi_{4}}{2}\right)\sin\left( \theta-\frac{\varphi_{2}+\varphi_{4}}{2}\right)-\left(1-\frac{s_{13}}{\dot{ s}_{13}}\right)\sin\left(\frac{\varphi_{1}-\varphi_{3}}{2}\right)\cos\left( \theta-\frac{\varphi_{1}+\varphi_{3}}{2}\right)\right],\\ \Delta V_{\varphi_{1}}=k_{\varphi}l\left(1-\frac{2r}{s_{13}}\right)[l\sin( \varphi_{1}-\varphi_{3})-2r\sin(\theta-\varphi_{1})],\\ \Delta V_{\varphi_{2}}=k_{\varphi}l\left(1-\frac{2r}{s_{24}}\right)[l\sin( \varphi_{2}-\varphi_{4})-2r\cos(\theta-\varphi_{2})],\\ \Delta V_{\varphi_{3}}=k_{\varphi}l\left(1-\frac{2r}{s_{13}}\right)[l\sin( \varphi_{3}-\varphi_{1})+2r\sin(\theta-\varphi_{3})],\\ \Delta V_{\varphi_{4}}=k_{\varphi}l\left(1-\frac{2r}{s_{24}}\right)[l\sin( \varphi_{4}-\varphi_{2})+2r\cos(\theta-\varphi_{4})].\end{array} \tag{8}\] The distances \(s_{ij}\) and \(\hat{s}_{ij}\) are: \[\begin{array}{l}s_{13}=s_{31}=s_{24}=s_{42}=2r,\\ \hat{s}_{13}=\hat{s}_{31}=\sqrt{4r^{2}+2r^{2}(1-\cos(\varphi_{1}-\varphi_{3}) )-8lr\sin\left(\frac{\varphi_{1}-\varphi_{2}}{2}\right)\sin\left(\theta-\frac{ \varphi_{1}+\varphi_{3}}{2}\right)},\\ \hat{s}_{24}=\hat{s}_{42}=\sqrt{4r^{2}+2r^{2}(1-\cos(\varphi_{2}-\varphi_{4}) )-8lr\sin\left(\frac{\varphi_{2}-\varphi_{4}}{2}\right)\cos\left(\theta-\frac{ \varphi_{2}+\varphi_{4}}{2}\right)}.\end{array} \tag{9}\] As we will see later, the regime of EM is characterized by the emergence of two frequency synchronized clusters each containing two clocks. The following phase relations are observed for the synchronized clusters: "in-phase-in-phase" (II), "in-phase-anti-phase" (IA), "anti-phase-in-phase" (AI) and "anti-phase-anti-phase" (AA). The exact meaning of these relations are given in Table 2. For example, IA means that the clocks in the first cluster are in-phase and anti-phase in the second cluster. Additionally, due to the discontinuity induced by the escapement mechanism, the mixed states are observed, when one or both of the clusters are not oscillating. This is possible due to the fact that the clocks do not cross periodically the escapement threshold and, hence, do not gain energy. 
The following clusters are observed: "silent-in-phase" (SI), "in-phase-silent" (IS), and "silent-silent" (SS). In Figure 3, we split the probability distribution of the order parameter \(r\) accordingly to the cluster states observed. Specifically, Fig. 3(A) gives the whole distributions, same as in Fig. 2(F)). Figure 3(B) exists from Fig. 3(A) only the order parameters that correspond to II phase clusters, Fig. 3(C) to IA, Fig. 3(D) to AI, and Fig. 3(E) to AA. Only II clusters exhibit a continuous distribution of \(r\), thus suggesting that EM appears due to such type of clusters. In contrast, Figs. 3(C)-(E) shows only a finite number of lines of \(r\). In the following sections, we provide additional analytical and numerical evidences that confirm our observation and explain the phenomenon of EM. ### Family of stable cluster states #### iii.1.1 Theoretical analysis of EM As Fig. 3(B) indicates, coexistence of infinitely many stable states can be related to the emergence of II, the in-phase-in-phase clusters. To study the existence of such clusters, we show that the following subspace of II solutions: \[\varphi_{1}(t)=\varphi_{3}(t)=\psi_{1}(t),\quad\varphi_{2}(t)=\varphi_{4}(t)= \psi_{2}(t),\quad\theta(t)=0. \tag{10}\] is invariant with respect to the solutions of system (7). Indeed, substituting \(\varphi_{1}(t)=\varphi_{3}(t)=\psi_{1}(t)\) and \(\varphi_{2}(t)=\varphi_{4}(t)=\psi_{2}(t)\) into (7), we obtain \(\Delta V_{\theta}=0\) and \(\Delta V_{\varphi_{1}}=0\), and the equations for the new variables \(\theta\), \(\psi_{1}\), and \(\psi_{2}\) read: \[(B_{0}+4mr^{2})\ddot{\theta}+k_{\theta}\theta+c_{\theta}\dot{ \theta}=0, \tag{11a}\] \[m^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}+ mrl[-\ddot{\theta}\cos(\theta-\psi_{1})+\dot{\theta}^{2}\sin(\theta-\psi_{1})]=M_{E_{1}},\] (11b) \[m^{2}\ddot{\psi}_{2}+mgl\sin\psi_{2}+c_{\varphi}\dot{\psi}_{2}+ mrl[\ddot{\theta}\sin(\theta-\psi_{2})+\dot{\theta}^{2}\cos(\theta-\psi_{2})]=M_{E_{2}},\] (11c) \[m^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}+ mrl[\ddot{\theta}\cos(\theta-\psi_{1})-\dot{\theta}^{2}\sin(\theta-\psi_{1})]=M_{E_{1}},\] (11d) \[m^{2}\ddot{\psi}_{2}+mgl\sin\psi_{2}+c_{\varphi}\dot{\psi}_{2}+ mrl[-\ddot{\theta}\sin(\theta-\psi_{2})-\dot{\theta}^{2}\cos(\theta-\psi_{2})]=M_{E_{2}}, \tag{11e}\] with \[M_{E_{i}}=\begin{cases}M&:\sigma_{i}=1\wedge 0<\psi_{i}<\varepsilon_{0}\\ -M&:\sigma_{i}=2\wedge-\varepsilon_{0}<\psi_{i}<0\\ 0&:\text{otherwise}\end{cases} \tag{12}\] where \(i=1,2\), \(M=0.075\) [Nm], and \(\varepsilon=5.0^{\circ}\). Now, by setting \(\theta=0\), we observe that Eq. (11a) is satisfied, and the dynamical equations for \(\psi_{1}\) (Eqs. (11b) and (11d)) and for \(\psi_{2}\) (Eqs. (11c) and (11e)) become the same: \[m^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}=M _{E_{1}}, \tag{13a}\] \[m^{2}\ddot{\psi}_{2}+mgl\sin\psi_{2}+c_{\varphi}\dot{\psi}_{2}=M _{E_{2}}. \tag{13b}\] Thus, we have proven the following: **Proposition 1** (II-cluster subspace): _The subspace (10) of the cluster II solutions is invariant with respect to the solutions (flow) of system (7). This subspace is 4-dimensional (\(S^{2}\times\mathbb{R}^{2}\)), and the flow on this subspace is given by Eqs. 
(13a) and (13b) which describe the relative motion of the clusters._ Another important observation is that the dynamical equations (13a) for \(\psi_{1}\) and (13b) \(\psi_{2}\) in the II-cluster subspace are (i) the same, and (ii) _uncoupled from each other_, i.e., the equation for \(\psi_{1}\) is independent on \(\psi_{2}\) and vise-versa. The latter property leads to the coexistence of infinitely many asymptotic states (and to EM eventually). We formulate the corresponding result as a proposition. **Proposition 2** (EM of II-cluster states): _Assume that \(c_{\varphi},m,l,g,k_{0},c_{\theta}\) and \(B_{0}\) are positive parameters. Assume also that system (13a) (or, equivalently, (13b)) possesses a stable nontrivial asymptotic state (attractor), and \(\psi^{*}(t)\) is a solution on this attractor (i.e., a single clock has a stable oscillatory state). Then the system on the II-cluster subspace \begin{table} \begin{tabular}{l l} \hline Symbol & Description \\ \hline II & In-phase-in-phase (\(\varphi_{1}(t)=\varphi_{3}(t)\), \(\varphi_{2}(t)=\varphi_{4}(t)\)) \\ IA & In-phase-anti-phase (\(\varphi_{1}(t)=\varphi_{3}(t)\), \(\varphi_{2}(t)=-\varphi_{4}(t)\)) \\ AI & Anti-phase-in-phase (\(\varphi_{1}(t)=-\varphi_{3}(t)\), \(\varphi_{2}(t)=\varphi_{4}(t)\)) \\ AA & Anti-phase-anti-phase (\(\varphi_{1}(t)=-\varphi_{3}(t)\), \(\varphi_{2}(t)=-\varphi_{4}(t)\)) \\ SI & Silent-in-phase (\(\varphi_{1}(t)=\varphi_{3}(t)\) = 0, \(\varphi_{2}(t)=\varphi_{4}(t)\)) \\ IS & In-phase-silent (\(\varphi_{1}(t)=\varphi_{3}(t)\), \(\varphi_{2}(t)=\varphi_{4}(t)=0\)) \\ SS & Silent-silent (\(\varphi_{1}(t)=\varphi_{3}(t)=0\), \(\varphi_{2}(t)=\varphi_{4}(t)=0\)) \\ \hline \end{tabular} \end{table} Table 2: Different phase-clusters possibilities for 4-coupled clocks (7) with cross-coupling structure (see Fig. 1(G)). (13a)-(13b) possesses the following asymptotic states: - II clusters:_ \[\varphi_{1}=\varphi_{3}=\psi_{1}=\psi^{*}(t),\quad\varphi_{2}=\varphi_{4}=\psi_ {2}=\psi^{*}(t+\gamma). \tag{14}\] - SI clusters:_ \[\varphi_{1}=\varphi_{3}=\psi_{1}=0,\quad\varphi_{2}=\varphi_{4}=\psi_{2}=\psi^{* }(t). \tag{15}\] - SS clusters:_ \[\varphi_{1}=\varphi_{3}=\psi_{1}=0,\quad\varphi_{2}=\varphi_{4}=\psi_{2}=0. \tag{16}\] - IS clusters:_ \[\varphi_{1}=\varphi_{3}=\psi_{1}=\psi^{*}(t),\quad\varphi_{2}=\varphi_{4}=\psi _{2}=0. \tag{17}\] _where \(\gamma\) is an arbitrary real constant describing a phase shift between the clusters. Moreover, if \(\gamma^{*}(t)\) is an orbitally asymptotically stable limit cycle (stable periodic oscillations of the clock), then the states (14) build a stable invariant torus foliated by (infinitely many) periodic solutions of the form (14)._ The main message of Proposition 2 is that under "normal conditions" when the single clock oscillates periodically, the coupled system can have stable cluster II oscillations with an arbitrary phase-shift between the clusters. If the phase-shift is zero, the order parameter \(r\) is highest and equal 1, while it can achieve a continuous range of smaller values depending on the phase shift. The coexistence of such states leads to EM. Note that the states SI, SS, and IS do not lead to EM, but correspond to isolated attractors in the coupled system. **Proof of Proposition 2.** Let us first mention that the equilibrium \(\psi_{1}=\psi_{2}=0\) is asymptotically stable, and it corresponds to the stability of a silent state of a pendulum with damping and without external energy inflow. 
Therefore, the two independent systems (13a) and (13b) can reach both attractors: zero equilibrium and non-trivial attractor corresponding to \(\psi^{*}(t)\), depending on initial conditions. Moreover, an arbitrary phase shift \(\psi^{*}(t+\gamma)\) is clearly also possible and belong to the same nontrivial attractor. This provides the existence of the states (14)-(17). The invariant torus from the proposition corresponds to the direct product of the limit cycles \(\mathcal{C}\times\mathcal{C}\), where \(\mathcal{C}=\{(\psi,\ \dot{\psi})\in(S^{1}\times\mathbb{R}):\psi=\gamma^{*}(t),\ t\in \mathbb{R}\}\). The stability of this torus follows from the orbital stability of the limit cycle in each of the subsystem and the properties of the cross-product of the uncoupled system (13a)-(13b). **End of proof.** #### iii.2.2 Numerical study of EM Figure 4 shows three examples of different stable II synchronization patterns from the continuous family of solutions by Eq. (14). Figure 4 provides (A) almost in-phase, (B) a phase-shifted, and (C) anti-phase relations between the clusters. The dynamics within the clusters is completely synchronized: \(\varphi_{1}=\varphi_{3}\) and \(\varphi_{2}=\varphi_{4}\). These three states are exemplary, and different phase shifts are obtained from different initial conditions. In spite of the phase shift, all pendulums are (mean) frequency synchronized (see the third column of Fig. 4), since they follow the same motion according to Eq. (14), only phase-shifted. The fourth column of Fig. 4 illus Figure 3: Distribution of the order parameter \(r\) for 4 coupled clocks (7) with the coupling topology as in Fig. 1(G). (A) is same as Fig. 2(F), obtained by Monte Carlo sampling with 1,000 random trials. (B)-(E) represent the parts of the distribution (extracted from (A)) that correspond to specific cluster states: (B) counts only the order parameters for the trials ending in II (in-phase-in-phase) configuration, (C) stands for IA, (D) for AI, and (E) for AA, see Table 2 explaining the cluster states. The main observation is that only case (B) is related to the emergence of EM. Parameters are fixed as in Table 1 with \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{3\pi}{2}\) and \(\alpha_{4}=2\pi\) for the 4-coupled clocks (\(N=4\)). trates the phase-shift between the clusters and their periodic motion (orange curve). The black point shows the Poincare map defined by \(\varphi_{1}=0\) and \(\dot{\varphi}_{1}>0\). As for the order parameters \(r\), Fig. 4(A) presents the trial leading to a relatively high order parameter close to 1 (almost complete synchronization) in Fig. 3(B); while Fig. 4(C) represents the trial which falls into the left side (the inter-group anti-phase synchronization) of the order parameter distribution in Fig. 3(B). The more trials one draws from random initial conditions, the more likely one can fill the gap regarding the order parameter between the inter-group anti-phase synchronization and complete synchronization to generate EM. ### Isolated attractors In addition to infinitely many stable II states from EM described above, system (7) possesses coexisting isolated attractors corresponding to other synchronization patterns. These states can also be treated analytically and numerically in more detail. However, since the main focus of this work is the EM phenomenon, we consider here exemplary only isolated attractors corresponding to IA patterns. 
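Before turning to these isolated attractors, we note that the mechanism behind the EM family itself, namely the decoupling of the cluster equations (13a) and (13b) on the II subspace, can be probed with a few lines of code: integrating the same single-pendulum equation twice from different initial conditions and assembling the II configuration of Eq. (10) yields an order parameter that depends on the resulting relative phase between the two clusters. The sketch below is a minimal illustration with placeholder parameter values (not the Table 1 values) and a simple explicit integrator; it is not the simulation code used for the figures.

```python
import numpy as np

m, l, g, c_phi = 1.0, 0.25, 9.81, 0.005   # placeholder parameters (not Table 1)
M_esc, eps0 = 0.075, np.deg2rad(5.0)

def step(state, dt):
    """One explicit-Euler step of m*l^2*psi'' + m*g*l*sin(psi) + c_phi*psi' = M_E, Eq. (13)."""
    psi, dpsi, sigma = state
    if sigma == 1 and 0.0 < psi < eps0:
        M = M_esc
    elif sigma == 2 and -eps0 < psi < 0.0:
        M = -M_esc
    else:
        M = 0.0
    ddpsi = (M - m * g * l * np.sin(psi) - c_phi * dpsi) / (m * l * l)
    psi_new = psi + dt * dpsi
    dpsi_new = dpsi + dt * ddpsi
    # switching rules (I)/(II) for the escapement state
    if psi < eps0 <= psi_new and dpsi_new > 0.0:
        sigma = 2
    elif psi > -eps0 >= psi_new and dpsi_new < 0.0:
        sigma = 1
    return psi_new, dpsi_new, sigma

def simulate(psi0, t_end=200.0, dt=1e-3):
    state, traj = (psi0, 0.0, 1), []
    for _ in range(int(t_end / dt)):
        state = step(state, dt)
        traj.append(state[0])
    return np.array(traj)

# Two decoupled "clusters" started from different initial conditions.
psi1 = simulate(0.3)
psi2 = simulate(-0.2)
# II configuration: phi_1 = phi_3 = psi_1, phi_2 = phi_4 = psi_2 (Eq. (10)).
phis = np.column_stack([psi1, psi2, psi1, psi2])
r = np.abs(np.mean(np.exp(1j * phis[-50000:]), axis=1)).mean()  # average over last 50 time units
print(f"time-averaged order parameter r = {r:.3f}")
```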
#### iii.2.1 Three types of isolated attractors with large-amplitude oscillations Theoretical analysis of IA patterns.The IA solutions are characterized by the phase relations \(\varphi_{1}(t)=\varphi_{3}(t)=\psi_{1}\) and \(\varphi_{2}(t)=-\varphi_{4}(t)=\psi_{2}\). Substituting this into system (7), we obtain: \[\begin{array}{l}(B_{0}+4mv^{2})\ddot{\theta}+k_{\theta}\theta+c_{\theta}\dot {\theta}+mrl[\bar{\psi}_{2}\sin(\theta-\psi_{2})-\dot{\psi}_{2}^{2}\cos( \theta-\psi_{2})]+mrl[\bar{\psi}_{2}\sin(\theta+\psi_{2})-\dot{\psi}_{2}^{2} \cos(\theta+\psi_{2})]+\Delta V_{\theta}=0,\\ ml^{2}\dot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}+mrl[-\ddot{ \theta}\cos(\theta-\psi_{1})+\dot{\theta}^{2}\sin(\theta-\psi_{1})]+\Delta V _{\varphi_{1}}=H_{E_{1}},\\ ml^{2}\dot{\psi}_{2}+mgl\sin\psi_{2}+c_{\varphi}\dot{\psi}_{2}+mrl[\ddot{ \theta}\sin(\theta-\psi_{2})+\dot{\theta}^{2}\cos(\theta-\psi_{2})]+\Delta V _{\varphi_{2}}=M_{E_{2}},\\ ml^{2}\dot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}+mrl[\ddot{ \theta}\cos(\theta-\psi_{1})-\dot{\theta}^{2}\sin(\theta-\psi_{1})]+\Delta V _{\varphi_{3}}=M_{E_{3}},\\ -m\dot{\theta}^{2}\ddot{\psi}_{2}-mgl\sin\psi_{2}-c_{\varphi}\dot{\psi}_{2}+ mrl[-\ddot{\theta}\sin(\theta+\psi_{2})-\dot{\theta}^{2}\cos(\theta+\psi_{2})]+ \Delta V_{\varphi_{4}}=M_{E_{4}}.\end{array} \tag{18}\] Figure 4: Three exemplary dynamical patterns for the regime of EM (see Fig. 3(B)) for 4-coupled pendulums with cross-coupling structure (7). (A) Almost complete synchronization with a small phase shift between the clusters of clocks. (B) Phase synchronization with an intermediate phase shift between the clusters. (C) Anti-phase synchronization between the clusters. Details of column figures are as follows: **a**: the time series for variables \(\theta\), \(\varphi_{1}\), \(\varphi_{2}\), \(\varphi_{3}\), \(\varphi_{4}\), respectively, **b**: the phase-time plots of the pendula, **c**: the mean frequencies of the clocks, **d**: Projections on two phase variables form different clusters (orange lines) and Poincaré maps (black dots). Parameters are fixed as in Table 1 with \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{3\pi}{2}\) and \(\alpha_{4}=2\pi\) for the 4-coupled clocks (\(N=4\)). Figure 5: Dynamical patterns of three types of isolated attractors with large-amplitude oscillations for 4 coupled pendulums with cross-coupling structure (7). (A)-(C): IA, AI, and AA synchronization patterns (see Table 2 for the explanation of the abbreviations). IA, AI, and AA correspond to the order parameters from Figs. 3(C), (D), and (E), respectively. Information on the columns: **a**: the time series for variables \(\theta\), \(\varphi_{1}\), \(\varphi_{2}\), \(\varphi_{3}\), \(\varphi_{4}\), respectively; **b**: phase-time plots of the pendulum angles; **c** mean frequencies; **d**: projection of the solution on the phase variables from different clusters (orange lines) and Poincaré maps (black dots). Parameters are fixed as in Table 1 with \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{\pi}{2}\) and \(\alpha_{4}=2\pi\) for the 4-coupled clocks (\(N=4\)). Figure 6: Dynamical patterns of three types of isolated attractors with small-amplitude oscillations for 4 coupled pendulums with cross-coupling structure (7). (A)-(C): SI, ISS, and SS synchronization patterns respectively, (see Table 2 for the pattern explanations). 
Information along the columns: **a**: time series for variables \(\theta\), \(\varphi_{1}\), \(\varphi_{2}\), \(\varphi_{3}\), and \(\varphi_{4}\), **b**: phase-time plots, **c**: mean frequencies (not available if a pendulum converges to 0), **d**: projections on the (\(\varphi_{2}\),\(\varphi_{3}\))-plane (orange line) and Poincaré maps (black dots, defined by \(\varphi_{2}=0\) and \(\varphi_{3}>0\), \(\varphi_{1}=0\) and \(\dot{\varphi}_{1}>0\), and \(\varphi_{1}=0\) and \(\dot{\varphi}_{1}>0\), for (A), (B), and (C), respectively. Parameters are fixed as in Table 1 with \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{\pi}{2}\) and \(\alpha_{4}=2\pi\) for the 4-coupled clocks (\(N=4\)). In the following, we introduce the new dimensionless parameter \(\varepsilon=\frac{m\bar{I}}{B_{0}+4mr^{2}}\). For the chosen setup as in Table 1, we have \(\varepsilon=0.02726\), i.e., \(\varepsilon\) is small, and we will employ it in our analysis. In fact, the smallness of this parameter is one of the reasons for the emergence of IA patterns. Defining further \(\bar{k}_{\theta}=\frac{k_{\theta}}{B_{0}+4mr^{2}}=3.7301\), \(\bar{c}_{\theta}=\frac{c_{\theta}}{B_{0}+4mr^{2}}=0.02407\), \(F(\theta,\ \ \psi_{2})=-[\bar{\psi}_{2}\sin(\theta-\psi_{2})-\dot{\psi}_{2}^{2}\cos( \theta-\psi_{2})]+[\dot{\psi}_{2}\sin(\theta+\psi_{2})-\dot{\psi}_{2}^{2}\cos( \theta+\psi_{2})]-\frac{\Delta V_{\theta}}{mr\bar{I}}\) and \(\theta=\varepsilon\psi\), the system (18) can be rewritten in the following form: \[\begin{array}{l}\ddot{\psi}+\bar{c}_{\theta}\psi+\bar{k}_{\theta}\psi=F( \varepsilon\psi,\ \psi_{2}),\\ ml^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\psi_{1}+\varepsilon mr[- \ddot{\psi}\cos(\varepsilon\psi-\psi_{1})+\varepsilon\dot{\psi}^{2}\sin( \varepsilon\psi-\psi_{1})]+\Delta V_{\varphi_{1}}=M_{E_{1}},\\ ml^{2}\ddot{\psi}_{2}+mgl\sin\psi_{2}+c_{\varphi}\psi_{2}+\varepsilon mr[ \ddot{\psi}\sin(\varepsilon\psi-\psi_{2})+\varepsilon\dot{\psi}^{2}\cos( \varepsilon\psi-\psi_{2})]+\Delta V_{\varphi_{2}}=M_{E_{2}},\\ ml^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+\varepsilon_{\varphi}\psi_{1}+ \varepsilon mr[\ddot{\psi}\cos(\varepsilon\psi-\psi_{1})-\varepsilon\dot{ \psi}^{2}\sin(\varepsilon\psi-\psi_{1})]+\Delta V_{\varphi_{3}}=M_{E_{3}},\\ -ml^{2}\ddot{\psi}_{2}-mgl\sin\psi_{2}-c_{\varphi}\dot{\psi}_{2}+\varepsilon mr [-\ddot{\psi}\sin(\varepsilon\psi+\psi_{2})-\varepsilon\dot{\psi}^{2}\cos( \varepsilon\psi+\psi_{2})]+\Delta V_{\varphi_{4}}=M_{E_{4}},\end{array} \tag{19}\] where \[\begin{array}{l}\Delta V_{\theta}=4lr\psi\left(1-\frac{2r}{\sqrt{4r^{2}+2r^{ 2}(1-\cos(2\psi_{2}))-8lr\sin\psi_{2}\cos(\varepsilon\psi)}}\right)\sin\psi_{ 2}\sin(\varepsilon\psi),\\ \Delta V_{\varphi_{1}}=\Delta V_{\varphi_{3}}=0,\\ \Delta V_{\varphi_{2}}=k_{\varphi}l\left(1-\frac{2r}{3\dot{\varphi}_{4}}\right) [l\sin(2\psi_{2})-2r\cos(\varepsilon\psi-\psi_{2})],\\ \Delta V_{\varphi_{4}}=k_{\varphi}l\left(1-\frac{2r}{3\dot{\varphi}_{4}}\right) [-l\sin(2\psi_{2})+2r\cos(\varepsilon\psi+\psi_{2})],\end{array} \tag{20}\] and the values for the distances satisfy: \[\begin{array}{l}s_{13}=s_{31}=s_{24}=s_{42}=2r,\\ \dot{s}_{13}=\dot{s}_{31}=2r,\\ \dot{s}_{24}=\dot{s}_{24}=\sqrt{4r^{2}+2r^{2}(1-\cos(2\psi_{2}))-8lr\sin\psi_{ 2}\cos(\varepsilon\psi)}.\end{array} \tag{21}\] In the zeroth-order in \(\varepsilon\), system (19) is reduced to \[\ddot{\psi}+\bar{c}_{\theta}\dot{\psi}+\bar{k}_{\theta}\psi=F(0, \ \psi_{2}), \tag{22a}\] \[m^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}=M _{E_{1}},\] (22b) 
\[m^{2}\ddot{\psi}_{2}+mgl\sin\psi_{2}+c_{\varphi}\dot{\psi}_{2}+ \Delta V_{\varphi_{2}}=M_{E_{2}},\] (22c) \[m^{2}\ddot{\psi}_{1}+mgl\sin\psi_{1}+c_{\varphi}\dot{\psi}_{1}=M _{E_{1}},\] (22d) \[-ml^{2}\ddot{\psi}_{2}-mgl\sin\psi_{2}-c_{\varphi}\dot{\psi}_{2}- \Delta V_{\varphi_{2}}=-M_{E_{2}}, \tag{22e}\] from which one can see that Eq. (22b) is equivalent to Eq. (22d) and Eq. (22c) to Eq. (22e). Hence, in this approximation, the subspace of the IA solutions: \[\varphi_{1}(t)=\varphi_{3}(t)=\psi_{1},\quad\varphi_{2}(t)=-\varphi_{4}(t)= \psi_{2},\quad\theta(t)=0. \tag{23}\] is invariant. For nonzero but small \(\varepsilon\), we observe the perturbed solutions: \[\varphi_{1}(t)\approx\varphi_{3}(t)=\psi_{1},\quad\varphi_{2}(t)\approx- \varphi_{4}(t)=\psi_{2}. \tag{24}\] Summarizing, the existence of IA patterns can be exactly proven for the limit \(\varepsilon=0\). Since small \(\varepsilon\) is a regular perturbation of system (19), all asymptotically stable periodic attractors in this system will be only slightly perturbed by small \(\varepsilon\)-order terms, and one can observe patterns close to IA. Numerical study of IA, AI, and AA patterns.The patterns IA, AI, and AA correspond to single lines of the order parameter distribution in Fig. 3(C)-(E) and, hence, to isolated attractors in the phase space. Figure 5 reports one example for each of these three patterns. All of them exhibit partial synchronization with quasiperiodic dynamics. The phase-time plots in the column **b** of Fig. 5(A) show that the 1st and 3rd pendulums are in-phase and the 2nd and 4th are anti-phase with \(\varphi_{1}=\varphi_{3}\) and \(\varphi_{2}=-\varphi_{4}\); this is also confirmed by the analytical solutions (23) and (24). Inside the coupled groups, the 1st and 3rd or 2nd and 4th pendulums share the same mean frequency, leading to the multifrequency-clusters [17; 51]. Furthermore, the phase space projection on the plane (\(\varphi_{1}\), \(\varphi_{2}\)) (orange curve) and the corresponding Poincare map (black points, when \(\varphi_{4}=0\) and \(\dot{\varphi}_{4}>0\)) indicates that the motion is quasiperiodic. In summary, Fig. 5 reports different partially synchronous behaviors corresponding to isolated attractors with large-amplitude oscillations: (A) IA, (B) AI, and (C) AA patterns. All these patterns coexist with EM contributing to a complex multistability scenario for the cross coupling structure (Fig. 1(G)). nization between \(\varphi_{1}\) and \(\varphi_{3}\), while \(\varphi_{2}\) and \(\varphi_{4}\) converge to \(0\) and stop oscillating. The periodic motions of \(\varphi_{1}\) and \(\varphi_{3}\) are illustrated by the phase trajectory (the orange line) and corresponding Poincare map (black points, defined by \(\varphi_{1}=0\) and \(\varphi_{1}>0\)) in the column \(\mathbf{d}\) of Fig. 6(B). Summarizing Fig. 6, it shows the coexistence of three types of isolated attractors with small-amplitude oscillations, induced by the discontinuity of the escapement mechanism. ## V Basins of attractions Having clarified the different EM-related asymptotic phase patterns in 4 coupled clocks given by system (7) in Sec. IV, we discuss here their basins of attractions. Instead of randomly choosing initial conditions for each pendulum, we fix some of them while the rest are initialized with discretized values distributed evenly in the given intervals. The corresponding synchronization states are estimated using order parameter \(r\) from Eq. 
## V Basins of attraction Having clarified the different EM-related asymptotic phase patterns in 4 coupled clocks given by system (7) in Sec. IV, we discuss here their basins of attraction. Instead of randomly choosing initial conditions for each pendulum, we fix some of them while the rest are initialized with discretized values distributed evenly in the given intervals. The corresponding synchronization states are estimated using the order parameter \(r\) from Eq. (5), as it effectively identifies different attractors, including those belonging to EM. Numerical results are summarized in Figs. 7, 8, and 9. Figure 7 shows the dependence of the order parameter \(r\) on the initial conditions. For this we fix [\(\theta^{0}=0.01\), \(\varphi_{1}^{0}=\frac{\pi}{4}\), \(\varphi_{3}^{0}=\varphi_{1}^{0}+0.001\), \(\varphi_{4}^{0}=\varphi_{2}^{0}+0.001\), \(\dot{\theta}^{0}=\dot{\varphi}_{1}^{0}=\dot{\varphi}_{2}^{0}=\dot{\varphi}_{3}^{0}=\dot{\varphi}_{4}^{0}=0\)], and initialize \(\varphi_{2}^{0}\) with \(100\) discretized values evenly distributed in the interval \([-\frac{\pi}{4},\frac{\pi}{4}]\). We can clearly observe both the isolated attractors and a part of the EM regime. Isolated attractors correspond to the flat segments. For example, when \(\varphi_{2}^{0}\) is around \(0\) in Fig. 7, in spite of different initial values of \(\varphi_{2}^{0}\), trials in this segment have almost the same \(r\). The continuously changing parts of \(r\) in Fig. 7 are in line with the EM family of II states. The abrupt "jumps" therefore represent boundaries either between the basins of the isolated attractors or between the isolated attractors and the EM family. Figure 8 shows a two-dimensional basin of attraction, where we fix [\(\theta^{0}=0.01\), \(\varphi_{1}^{0}=\varphi_{2}^{0}=\frac{\pi}{4}\), \(\varphi_{3}^{0}=\varphi_{1}^{0}-A\), \(\varphi_{4}^{0}=\varphi_{2}^{0}-B\), \(\dot{\theta}^{0}=\dot{\varphi}_{1}^{0}=\dot{\varphi}_{2}^{0}=\dot{\varphi}_{3}^{0}=\dot{\varphi}_{4}^{0}=0\)], and vary \(A\) and \(B\) in the interval \([0,\frac{\pi}{2}]\). The uniform discretization on the \(100\times 100\) grid is used. Four regions in the bottom-left, top-left, bottom-right, and top-right correspond to the II, IA, AI, and AA phase patterns, respectively. The bottom-left part with the visible color gradient corresponds to EM. Parameters are fixed as in Table 1 with \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{3\pi}{2}\) and \(\alpha_{4}=2\pi\) for the 4-coupled clocks (\(N=4\)). Figure 9: Two-dimensional basins of the attractors corresponding to the mixed-mode dynamics with one cluster staying silent and another oscillating. The model for 4 coupled pendulums with cross-coupling structure (7) is considered. Initial conditions are chosen as follows: [\(\theta^{0}=0.01\), \(\varphi_{1}^{0}=\varphi_{2}^{0}=0\), \(\varphi_{3}^{0}=A\), \(\varphi_{4}^{0}=B\), \(\dot{\theta}^{0}=\dot{\varphi}_{1}^{0}=\dot{\varphi}_{2}^{0}=\dot{\varphi}_{3}^{0}=\dot{\varphi}_{4}^{0}=0\)], where \(A\) and \(B\) are taken from the interval \([-\frac{\pi}{4},\frac{\pi}{2}]\) discretized by \(100\times 100\) evenly sampled points. The white region corresponds to the trivial SS pattern (equilibrium at the origin); light green to SI and IS patterns. Parameters are fixed as in Table 1 with \(\alpha_{1}=\frac{\pi}{2}\), \(\alpha_{2}=\pi\), \(\alpha_{3}=\frac{3\pi}{2}\) and \(\alpha_{4}=2\pi\) for the 4-coupled clocks (\(N=4\)). The basin in Fig. 8 splits into four parts corresponding to the II, IA, AI, and AA phase patterns. In particular, the bottom-left part with the non-constant dependence of the order parameter \(r\) in Fig. 8 corresponds to a subset of an infinite number of stable II states from EM. The other three parts with constant colors are related to isolated attractors of the AI, IA, and AA patterns. With changing \(A\) and \(B\), the system moves from complete synchronization to intergroup anti-phase synchronization (see Fig. 4), and further to the multifrequency-cluster state (see Fig. 5).
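The two-dimensional basin computations described above follow a simple recipe: discretize the initial offsets, integrate the system from each grid point, discard the transient, and color the grid point by the asymptotic order parameter. The sketch below outlines this recipe in Python under two stated assumptions: the order parameter is taken to be a standard Kuramoto-type quantity \(r=|\langle e^{i\varphi_{j}}\rangle|\) (Eq. (5) itself is not reproduced in this excerpt), and the right-hand side is a generic ring of four coupled phase oscillators serving only as a placeholder for the clock model (7).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in dynamics: four phase oscillators with nearest-neighbour coupling on a
# ring. This is NOT system (7); it only illustrates the basin-scanning recipe.
K, OMEGA = 0.8, 1.0

def rhs(t, phi):
    dphi = np.empty(4)
    for j in range(4):
        dphi[j] = OMEGA + K * (np.sin(phi[(j + 1) % 4] - phi[j])
                               + np.sin(phi[(j - 1) % 4] - phi[j]))
    return dphi

def order_parameter(phi_traj):
    # Kuramoto-type order parameter r = |<exp(i*phi)>|, averaged over time.
    r_t = np.abs(np.exp(1j * phi_traj).mean(axis=0))
    return r_t.mean()

# Scan a grid of initial offsets (A, B) applied to the 3rd and 4th oscillators,
# mimicking the initialisation used for Figs. 8 and 9.
n_grid = 20
A_vals = np.linspace(0.0, np.pi / 2, n_grid)
B_vals = np.linspace(0.0, np.pi / 2, n_grid)
basin = np.zeros((n_grid, n_grid))

for ia, A in enumerate(A_vals):
    for ib, B in enumerate(B_vals):
        phi0 = np.array([np.pi / 4, np.pi / 4, np.pi / 4 - A, np.pi / 4 - B])
        sol = solve_ivp(rhs, (0.0, 100.0), phi0, t_eval=np.linspace(60, 100, 400))
        basin[ia, ib] = order_parameter(sol.y)  # transient (t < 60) is discarded

print(basin.min(), basin.max())  # colour-coding this array gives a basin plot
```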
To visualize the basin of the attractors with small-amplitude oscillations, we analyse a set of initial conditions close to the origin. Figure 9 shows the corresponding basins of attraction, with the initial conditions [\(\theta^{0}=0.01\), \(\varphi_{1}^{0}=\varphi_{2}^{0}=0\), \(\varphi_{3}^{0}=A\), \(\varphi_{4}^{0}=B\), \(\dot{\theta}^{0}=\dot{\varphi}_{1}^{0}=\dot{\varphi}_{2}^{0}=\dot{\varphi}_{3}^{0}=\dot{\varphi}_{4}^{0}=0\)], where \(A\) and \(B\) vary in the interval \([-\frac{\pi}{4},\frac{\pi}{4}]\). In addition to the II patterns from the EM set, we obtain here three new regions corresponding to the patterns SI, IS, and SS, for which some of the clocks are not oscillating. The basin of the trivial solution SS with all clocks silent is observed in the central part (white); see Fig. 6(C) for the illustration of the pattern. The SI and IS patterns are mixed-mode oscillations, with one cluster silent and the other cluster oscillating, corresponding to the light green basin of attraction. ## VI Conclusions In summary, we investigate how different coupling topologies affect the collective dynamics of coupled clocks. The considered model includes global as well as local couplings, represented by the rotating support disc and the springs, respectively. Also, the model contains a discontinuity due to the escapement clock mechanism. We focus on the model of 4 coupled clocks where, surprisingly, an EM is observed between patterns with different synchronization levels. The EM phenomenon reveals the coexistence of infinitely many stable asymptotic states. The dependence on initial conditions is clarified using the analysis of the basins of attraction. The main conclusions based on our work are as follows: * For both three coupled and four coupled clocks, we use Monte Carlo sampling and find that the symmetric coupling structure can increase the dynamical complexity. This can lead to diverse synchronization patterns and attractors. * We observe the emergence of EM for the case of four coupled clocks (see Fig. 1(G) and model (7)). This phenomenon is solely induced by the cross-coupling topological structure, and it is stable against variations of the system parameters, as long as the clocks remain identical. * We show both analytically and numerically that the emergence of EM is closely related to the II synchronization pattern (see Table 2), where the system splits into two antipodal clusters such that the clocks within the clusters are fully synchronized, but the inter-cluster dynamics can be shifted by an arbitrary phase. Moreover, three other types of isolated attractors with large-amplitude oscillations coexist with the infinite family of states from the EM family. They correspond to the IA, AI, and AA synchronization patterns. * We further uncover the effect of the discontinuity of the system induced by the escapement mechanism. It induces the emergence and coexistence of three further types of isolated attractors with small-amplitude oscillations. These states correspond to the SI, IS, and SS patterns (see Table 2). In particular, the IS and SI patterns are mixed states [52] where two clocks are oscillating and the other two stay silent. As observed in our work, the emergence of EM is caused by a particularly designed symmetric coupling structure, rather than by introducing additional quantities into the coupling design. The inclusion of more coupled clocks with different topological coupling structures is an open way to clarify the emergence of EM or chimera states in larger coupled systems. 
Another possible generalization is to include adaptivity in the coupling scheme. ###### Acknowledgements. Z.S. and Y.-R.L. were funded by the China Scholarship Council (CSC) scholarship. J.K. was supported by the Federal Ministry of Education and Research (BMBF) grant No. 01LP1902J (ClimXtreme). S.Y. was supported by the German Research Foundation DFG, Project No. 411803875. ## Author contributions Z.S. and Y.S. contributed equally to this work. ## Data availability The code used for this work is available online here.
2305.11080
Inspecting the Geographical Representativeness of Images from Text-to-Image Models
Recent progress in generative models has resulted in models that produce both realistic as well as relevant images for most textual inputs. These models are being used to generate millions of images everyday, and hold the potential to drastically impact areas such as generative art, digital marketing and data augmentation. Given their outsized impact, it is important to ensure that the generated content reflects the artifacts and surroundings across the globe, rather than over-representing certain parts of the world. In this paper, we measure the geographical representativeness of common nouns (e.g., a house) generated through DALL.E 2 and Stable Diffusion models using a crowdsourced study comprising 540 participants across 27 countries. For deliberately underspecified inputs without country names, the generated images most reflect the surroundings of the United States followed by India, and the top generations rarely reflect surroundings from all other countries (average score less than 3 out of 5). Specifying the country names in the input increases the representativeness by 1.44 points on average for DALL.E 2 and 0.75 for Stable Diffusion, however, the overall scores for many countries still remain low, highlighting the need for future models to be more geographically inclusive. Lastly, we examine the feasibility of quantifying the geographical representativeness of generated images without conducting user studies.
Abhipsa Basu, R. Venkatesh Babu, Danish Pruthi
2023-05-18T16:08:11Z
http://arxiv.org/abs/2305.11080v1
# Inspecting the Geographical Representativeness ###### Abstract Recent progress in generative models has resulted in models that produce both realistic as well as relevant images for most textual inputs. These models are being used to generate millions of images everyday, and hold the potential to drastically impact areas such as generative art, digital marketing and data augmentation. Given their outsized impact, it is important to ensure that the generated content reflects the artifacts and surroundings across the globe, rather than over-representing certain parts of the world. In this paper, we measure the geographical representativeness of common nouns (e.g., a house) generated through DALL-\(E\)\(2\) and Stable Diffusion models using a crowdsourced study comprising \(540\) participants across \(27\) countries. For deliberately underspecified inputs without country names, the generated images most reflect the surroundings of the United States followed by India, and the top generations rarely reflect surroundings from all other countries (average score less than \(3\) out of \(5\)). Specifying the country names in the input increases the representativeness by \(1.44\) points on average for DALL-\(E\)\(2\) and \(0.75\) for Stable Diffusion, however, the overall scores for many countries still remain low, highlighting the need for future models to be more geographically inclusive. Lastly, we examine the feasibility of quantifying the geographical representativeness of generated images without conducting user studies. ## 1 Introduction Over the last year, the quality of text-to-image generation systems has remarkably improved [24, 44, 27, 29]. The generated images are more realistic and relevant to the textual input. This progress in text-to-image synthesis is partly fueled by the sheer scale of models and datasets used to train them, and partly by the architectural advancements including Transformers [42] and Diffusion models [14]. Given the impressive generation capabilities that these models display, such models have captured the interest of researchers and general public alike. For instance, DALL-\(E\)\(2\) is being used by over \(1.5\) million users to generate more than \(2\) million images per day for applications including art creation, image editing, digital marketing and data augmentation [1]. Despite the broad appeal of text-to-image models, there are looming concerns about how these models may exhibit and amplify existing societal biases. These concerns stem from the fact that image generation models are trained on large swaths of image-caption pairs mined from the internet, which is known to be rife with toxic, stereotyping, and biased content. Further, internet access itself is unequally distributed, leading to underrepresentation and exclusion of voices from developing and poor nations [6, 2] Figure 1: An illustrative question from our study, where a participant (in this case, from the United States) is presented with an image of a common noun (a house), generated from the Stable Diffusion model. The participant is asked to rate the generated image on how well it reflects the houses in their surroundings. There exists a wide body of work demonstrating biases in large language and vision models [13, 43, 28, 37], and some recent work investigates text-to-image models for biases related to representation of race, gender and occupation [8, 4]. Another important--and often overlooked--aspect of inclusive representation is _geographical representation_. 
For such systems to be geographically representative, they should generate images that represent the objects and surroundings of different nations in the world, and refrain from overrepresenting certain nations and contributing to their hegemony. For instance, a typical house in the United States looks different from one in Japan. Often the input descriptions to text-to-image models are underspecified, leaving the models to fill in the missing details. In such underspecified descriptions, there is an increasing risk that models overrepresent certain demographics [15]. In addition to representational harms, biased image generation systems can also cause allocational harms as such systems are used to augment datasets, which run the risk of further propagating existing biases. Further, the experience of using systems that underrepresent certain areas would likely be unpleasant for the residents of those regions. In this paper, we measure the degree to which text-to-image generation systems produce images that reflect the artifacts and surroundings of participants from different parts of the world (§2). To answer this question, we conduct a user study involving \(540\) participants from \(27\) different countries. We present each user \(80\) images of common nouns generated from the DALL-E \(2\) [24] and Stable Diffusion [27] models. Half of the presented images are generated by specifying the country of the participant in the input, and the remaining images are deliberately underspecified to examine the default generations. The users evaluate the presented images based on a \(5\)-point Likert scale indicating how well the generated images reflect the given entity in their physical surroundings (see Figure 1). We also ask respondents to score generated images on (i) how realistic they look, and (ii) how the realism impacted their scores about geographical representativeness. Overall, we find that the geographical representativeness of images for many countries is considerably low (§3). In the unspecified case, i.e., without any country name in the input, we find that the generated images most reflect artifacts from the United States (average geographical representativeness score of 3.35 out of 5), followed by India (score of 3.23) and Canada (score of 2.82), and least reflect the nouns from Greece, Japan and New Zealand (with scores less than or around 2.0). Out of 27 countries, 25 countries have a score of less than 3 for both the DALL-E \(2\) and Stable Diffusion models. When we specify the country name in the input prompt, the average score over all the studied countries increases to \(3.49\) (from \(2.39\) in the unspecified case). However, these scores suggest that there is still room for future text-to-image models to produce more geographically representative content. Between DALL-E \(2\) and Stable Diffusion, we find DALL-E \(2\) to be better at generating geographically representative content when we specify country names, but we observe no statistically significant difference in the underspecified case.1 We also find that the participants' ratings about the realism of the images are correlated with their scores about the geographical representativeness. Footnote 1: Note that the scope and focus of our study are solely on measuring the extent of geographical representativeness for both country-specified and unspecified prompts, rather than on finding better ways to prompt the model or on improving the model to produce more geographically inclusive content. 
Finally, we examine the feasibility of automating the process of quantifying the geographical representativeness of text-to-image generation models in two different ways (§4). First, we consider the similarity of a country-specific textual prompt and the test image using CLIP, a pre-trained text-image alignment model [21]. Second, we evaluate the viability of using user annotations for DALL-E \(2\) as a means for estimating the geographical representativeness of images generated through Stable Diffusion. We find both these approaches to be inadequate for accurately evaluating the geographical representativeness of the images, emphasizing the need for a user study. We conclude with a discussion on the limitations of our work, and suggestions for future research in this area (§5). ## 2 Approach Geographical Representativeness. We present crowdworkers from different countries with several model-generated images of common nouns, and for each image, we ask them to rate on a scale of \(1\)-\(5\) how well the generated images reflect their surroundings. Geographical representativeness (**GR**) of the model \(m\) for country \(c\) is then defined as the average rating participants from that country provide to the model-generated images of common nouns (\(\mathcal{N}\)), using a corresponding set of input prompts (\(\mathcal{P}\)). Similarly, we define the realism, \(\text{R}(c,m,p)\), as the average of realism ratings given by participants from country \(c\) to images generated by model \(m\) using a prompt \(p\). Research Questions. Using the above notions of geographical representativeness and realism, we ask: * **RQ1**: Are the images generated using DALL-E \(2\) and Stable Diffusion geographically representative? Do they over-represent rich or populous nations? * **RQ2**: To what extent does specifying the country name in the input improve the representativeness? * **RQ3**: Does the realism of images impact participants' ratings about the geographical representativeness? * **RQ4**: How feasible is it to automatically assess the geographical representativeness of generated images? Selected Countries. We reach out to residents of \(88\) countries using the Amazon Mechanical Turk (AMT)2 and Prolific3 crowdsourcing platforms. However, a large majority of crowdworkers belong to only a few countries, and we eventually end up with sufficient responses only from \(27\) countries. We sample the \(88\) countries using weighted random sampling where each nation is weighted by its population. The final set of \(27\) countries (denoted by \(\mathcal{C}\)) includes: the United States of America, Canada, Mexico, Brazil, Chile, the United Kingdom, Italy, Spain, Greece, Japan, Korea, India, Israel, Australia, South Africa, Belgium, Poland, Portugal, Germany, France, Latvia, Hungary, the Czech Republic, Estonia, New Zealand, Finland, and Slovenia. Footnote 2: [https://www.mturk.com](https://www.mturk.com) Footnote 3: [https://www.prolific.co](https://www.prolific.co) Chosen Artifacts. To curate a list of diverse but common artifacts, we extract the most common nouns from the popular Conceptual Captions dataset [32], which contains image-caption pairs used for training various vision+language systems [17, 39, 16, 25]. We use a POS-tagger from the NLTK library to extract the nouns, and sort them in decreasing order of their frequency. We choose the \(10\) most common nouns after manually excluding nouns that are universal in nature (e.g., sky, sun). 
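The noun-extraction step described above can be sketched with standard NLTK calls: tokenize each caption, keep tokens whose part-of-speech tag starts with NN, and count frequencies. The captions and the exclusion set in the sketch below are illustrative stand-ins; the study itself runs this over the Conceptual Captions data and curates the final list manually.

```python
from collections import Counter
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Stand-in captions; the paper uses the Conceptual Captions dataset instead.
captions = [
    "a small house by the road near the beach",
    "people at a festival waving a flag in the park",
    "a bride in a white dress at a wedding in the city",
]
universal = {"sky", "sun"}  # nouns excluded manually as being universal

noun_counts = Counter()
for caption in captions:
    for word, tag in nltk.pos_tag(nltk.word_tokenize(caption)):
        if tag.startswith("NN") and word.lower() not in universal:
            noun_counts[word.lower()] += 1

print(noun_counts.most_common(10))  # most frequent nouns -> candidate artifacts
```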
The final list of \(10\) common nouns, denoted by \(\mathcal{N}\), includes city, beach, house, festival, road, dress, flag, park, wedding, and kitchen. Input Prompts. As mentioned earlier, we use two types of queries for image synthesis. For half of the queries, we include the country name, and for the remaining half, we do not specify any country name (to assess the default generations). When specifying the country name, we modify the query to "high definition image of a typical [artifact] in [country]", where we include the word typical to generate the most common form of the concept in the specified country. We denote such queries by \(p_{c}\), where \(c\) refers to the country name in question. For the underspecified case, our query is "high definition image of a [artifact]", which we denote by \(p\). We use the same prompt for both the DALL-E \(2\) and Stable Diffusion models. Questionnaire Details. For each of the \(10\) nouns, we generate \(8\) images, \(4\) using DALL-E \(2\) and \(4\) from Stable Diffusion. Overall, for a given country, our survey comprises \(80\) images. Participants are not privy to the details of the models, and do not know which images were generated from which model. For each image, we ask each participant: "How well does the automatically generated image of this [artifact] reflect the [artifact] in your surroundings in [country]?" (see Figure 1). For each question, participants mark their responses using a \(5\)-point Likert scale, where \(1\) indicates "not at all", and \(5\) represents "to a great extent". After the \(80\) questions, we ask the users to rate the photo-realism of the generated images on a scale of \(1\)-\(5\), and how it impacted their scores about geographical representativeness. We pay AMT participants based on the estimated hourly income of crowdworkers in their respective countries. For participants from Prolific, we pay them a platform-set minimum of \(6.91\) USD per hour. Validating Responses. To verify if the participants answered the questions earnestly, we include \(4\) trick questions which are presented in the same format. Two of these trick questions inquire about apples and milk, whereas the corresponding images are of mangoes and water. Therefore, we expect participants to mark a low score for these two questions. For the other two trick questions, we ask about a pen and the sun, and include images of the same, and expect the users to mark a high score. We discard the responses from participants who do not pass these checks. While the crowdsourcing platforms allow us to target users from a given country, we re-confirm with participants that they indeed reside (or have lived) in the specified countries. Figure 2: **Agreement among participants.** We plot the percentage of participants from each country that choose the most common option (for that country). We see that there is a considerable agreement among respondents, as about half the participants in many countries agree on one out of five options. Inter-rater Agreement. We compute (for each country) the percentage of participants who opted for the most selected option. We observe a high agreement among participants; for \(19\) out of the \(27\) studied countries we see that the most common option is picked by over \(50\%\) of the respondents (Figure 2). The agreement would be (on average) \(20\%\) if participants marked options arbitrarily. The percentages in Figure 2 demonstrate some degree of consensus among participants.
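A minimal sketch of how the per-country agreement percentage and the average GR score could be computed from the raw responses is given below. The table schema and column names are hypothetical (the released data format is not shown here), and the agreement is collapsed over questions for brevity, whereas the study reports it per image.

```python
import pandas as pd

# Hypothetical ratings table; columns are illustrative, not the released schema.
ratings = pd.DataFrame({
    "country": ["US", "US", "US", "India", "India", "India"],
    "noun":    ["house", "house", "flag", "house", "flag", "flag"],
    "model":   ["dalle2", "sd", "dalle2", "sd", "dalle2", "sd"],
    "prompt":  ["unspecified"] * 6,
    "rating":  [4, 3, 5, 3, 4, 4],        # 5-point Likert responses
})

# Mean GR score per country and prompt type (averaged over nouns and models).
gr_scores = ratings.groupby(["country", "prompt"])["rating"].mean()

# Agreement: share of responses equal to the modal option, per country
# (the paper computes this per question; here it is collapsed for brevity).
def modal_share(x):
    return (x == x.mode().iloc[0]).mean()

agreement = ratings.groupby("country")["rating"].apply(modal_share)

print(gr_scores)
print(agreement)
```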
Further, we observe the highest agreement for images of flags (\(81\%\)) and the least agreement for kitchens (\(41\%\)). ## 3 Results In this section, we share the findings of our study. First, we discuss the metrics of interest, and then answer the four research questions posed in Section 2. ### Metrics Below, we define a few notations that we use for evaluating the user ratings. Remember from Section 2 that we defined \(\textbf{GR}(c,m,n,p)\) as the geographical representativeness score assigned by participants from country \(c\) to images generated for noun \(n\) from model \(m\) using prompt \(p\). * \(\textbf{GR}(c,m,\cdot,p_{c})\): Average ratings that participants of a country \(c\) assign for geographical representativeness of images generated by model \(m\) across all nouns in \(\mathcal{N}\). Here, we use a country-specific prompt (\(p_{c}\)). * \(\textbf{GR}(c,m,\cdot,p)\): Average ratings that participants of a country \(c\) assign for geographical representativeness of images generated by model \(m\) across all nouns. The prompt \(p\)**does not** specify the country name. * \(\textbf{GR}(\cdot,m,n,p_{c})\): Average ratings that participants for all countries in \(\mathcal{C}\) assign for geographical representativeness (GR) of images of noun \(n\), generated by model \(m\). Here, we use a country-specific prompt (\(p_{c}\)). * \(\textbf{GR}(\cdot,m,n,p)\): Average ratings that participants from all countries in \(\mathcal{C}\) assign for geographical representativeness of images of noun \(n\), generated by model \(m\). The prompt \(p\)**does not** specify the country name. Analogously, we define \(\textbf{R}(c,m,n,p_{c})\) and \(\textbf{R}(c,m,n,p)\) as the average realism score for generated images using country specific (\(p_{c}\)) and unspecific prompt (\(p\)) respectively. ### Geographical Representativeness Here, we elaborate on the extent to which the generated artifacts are geographically representative (**RQ1** in Section 2). We compute the geographical representativeness scores for each country, averaged over the two models for the images generated by prompts that do not specify the country name, i.e., \(\textbf{GR}(c,\cdot,\cdot,p)\). We present these results in Table 1. From the table, we can see that out of the \(27\) countries, \(25\) have a score lower than \(3\) (on a scale of \(1\) to \(5\)), indicating that participants from most of the studied countries do not feel that the generated images reflect their surroundings to a large extent. The only countries to obtain scores higher than \(3\) are the United States (\(3.35\)) and India (\(3.23\)). Interestingly, for DALL-E \(2\), India obtains the highest score (\(3.44\)) followed by the United States (\(3.24\)). The overall least scores are assigned by participants from Greece (\(1.94\)), Japan (\(1.95\)) and Finland (\(2.03\)). The average score across the studied \(27\) countries is \(2.39\). To answer the follow up questions posed in the **RQ1**, about whether the artifacts generated are more representative of richer and populous nations: 1. We find no correlation between the degree of geographical representativeness of the generated images for the studied countries and their per-capita GDP. The Pearson correlation coefficient, \(\rho\), is \(-0.03\). Moreover, after separating the country pool into the "Rich West" countries4 and others, we evaluate if average GR scores of the two groups are different, but we find no statistically significant difference. 
We acknowledge and speculate that we may observe different trends if the study included participants from many other developing countries. However, significantly improving the coverage of the study is challenging (see Section 5). Footnote 4: As defined per: [https://worldpopulationreview.com/country-rankings/western-countries](https://worldpopulationreview.com/country-rankings/western-countries) 2. We observe that the geographical representativeness scores of the \(27\) countries is positively correlated with their population (\(\rho=0.64\)). This may suggest that the datasets used to pre-train the chosen models contain many images from residents of populous countries. ### Effect of Country-specific Prompts In this subsection, we analyse the geographical representativeness of images generated by including the country name (**RQ2** in Section 2). From Table 1, we observe that for each nation, mentioning its name in the prompt increases the average **GR** score for that country as compared to the under-specified case. We conduct a paired sample t-test to confirm this, and find that indeed there is a statistically significant increase with p-value \(<0.05\). Specifically, adding the country name in the textual query increases the average geographical representativeness score by over \(1.44\) points for DALL-E \(2\) and \(0.75\) for Stable Diffusion. Overall, for \(14\) out of \(27\) countries (despite the increase upon including country names), the geographical representation scores were between \(3\) to \(3.5\), indicating a considerable headroom for future models to generate more representative artifacts. We show illustrative examples of images generated by the unspecified and country-specific prompts in Figure 3. Specifically, we show images for \(5\) countries: Brazil, Mexico, Italy, Japan and South Korea, and \(4\) nouns: house, city, flag and wedding. For each of the nouns, we show images generated by the under-specified prompts first, followed by the ones generated through country specific prompts. In the appendix, we show images generated separately by both DALL-E \(2\)[24] and Stable Diffusion [27] for all the \(10\) nouns, whereas we choose one country from each continent: US, Chile, UK, Japan, South Africa, and Australia. The generated images are presented in Fig. 6 and 7 for DALL-E \(2\), and Fig. 8 and 9 for Stable Diffusion. ### Photo-realism of Generated Images We seek to answer if, and to what degree, does the photo-realism of images impact participants' perceptions of geographical representativeness of a given artifact (**RQ3** in Section 2). We believe that there may be an effect, as unrealistic-looking images might be perceived less geographically appropriate (in the extreme case, unrealistic-looking photos might be hard to even interpret). To answer this question, we ask participants to rate the realism of images generated by DALL-E \(2\) and Stable Diffusion respectively (for both the under-specified and country-specific prompts) on a Likert-scale of \(1\) to \(5\). Additionally, in the exit survey, we ask participants to self assess the impact that the realism of images had on the scores they assigned for geographical representativeness of images. First, we find that geographical representativeness and realism scores are correlated, with a Pearson correlation of \(0.62\) for Stable Diffusion (unspecified case), and \(0.47\) for the case with country names. 
For DALL-E \(2\) the correlation is not as large (\(0.21\) and \(0.57\) for unspecified and country \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{Countries} & \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{DALL-E \(2\)} & \multicolumn{2}{c}{Stable Diffusion} \\ \cline{2-7} & w/ country & Unspecified & w/ country & Unspecified & w/ country & Unspecified \\ & \(\textbf{GR}(c,\cdot,\cdot,p)\) & \(\textbf{GR}(c,\cdot,\cdot,p_{c})\) & \(\textbf{GR}(c,\text{D2},\cdot,p)\) & \(\textbf{GR}(c,\text{D2},\cdot,p_{c})\) & \(\textbf{GR}(c,\text{SD},\cdot,p)\) & \(\textbf{GR}(c,\text{SD},\cdot,p_{c})\) \\ \hline US & \(3.54\pm 0.23\) & \(3.35\pm 0.18\) & \(3.56\pm 0.29\) & \(3.24\pm 0.27\) & \(3.51\pm 0.25\) & \(3.46\pm 0.27\) \\ India & \(3.74\pm 0.26\) & \(3.24\pm 0.41\) & \(4.00\pm 0.22\) & \(3.44\pm 0.49\) & \(3.48\pm 0.49\) & \(3.03\pm 0.41\) \\ Canada & \(3.62\pm 0.40\) & \(2.82\pm 0.51\) & \(3.78\pm 0.55\) & \(2.73\pm 0.59\) & \(3.47\pm 0.52\) & \(2.91\pm 0.66\) \\ South Africa & \(3.25\pm 0.30\) & \(2.74\pm 0.40\) & \(3.49\pm 0.57\) & \(2.70\pm 0.44\) & \(3.02\pm 0.52\) & \(2.78\pm 0.58\) \\ Brazil & \(3.70\pm 0.26\) & \(2.69\pm 0.55\) & \(4.00\pm 0.38\) & \(2.65\pm 0.78\) & \(3.40\pm 0.23\) & \(2.72\pm 0.56\) \\ UK & \(3.82\pm 0.38\) & \(2.65\pm 0.49\) & \(4.14\pm 0.53\) & \(2.41\pm 0.61\) & \(3.48\pm 0.56\) & \(2.88\pm 0.80\) \\ Mexico & \(3.83\pm 0.26\) & \(2.59\pm 0.56\) & \(4.18\pm 0.30\) & \(2.74\pm 0.72\) & \(3.49\pm 0.57\) & \(2.45\pm 0.64\) \\ Spain & \(3.44\pm 0.29\) & \(2.46\pm 0.44\) & \(3.62\pm 0.38\) & \(2.29\pm 0.65\) & \(3.26\pm 0.53\) & \(2.63\pm 0.66\) \\ Portugal & \(3.73\pm 0.29\) & \(2.46\pm 0.47\) & \(4.02\pm 0.40\) & \(2.47\pm 0.73\) & \(3.44\pm 0.61\) & \(2.45\pm 0.54\) \\ Italy & \(3.58\pm 0.47\) & \(2.40\pm 0.49\) & \(3.66\pm 0.66\) & \(2.40\pm 0.70\) & \(3.50\pm 0.66\) & \(2.39\pm 0.63\) \\ Belgium & \(3.49\pm 0.43\) & \(2.40\pm 0.52\) & \(3.76\pm 0.71\) & \(2.28\pm 0.57\) & \(3.21\pm 0.61\) & \(2.52\pm 0.80\) \\ France & \(3.32\pm 0.34\) & \(2.34\pm 0.44\) & \(3.54\pm 0.67\) & \(2.38\pm 0.70\) & \(3.09\pm 0.47\) & \(2.30\pm 0.52\) \\ Poland & \(3.62\pm 0.30\) & \(2.29\pm 0.44\) & \(4.14\pm 0.39\) & \(2.23\pm 0.59\) & \(3.10\pm 0.66\) & \(2.35\pm 0.70\) \\ Germany & \(3.64\pm 0.35\) & \(2.26\pm 0.45\) & \(4.03\pm 0.30\) & \(2.04\pm 0.46\) & \(3.26\pm 0.70\) & \(2.49\pm 0.78\) \\ Australia & \(3.35\pm 0.45\) & \(2.26\pm 0.46\) & \(3.55\pm 0.74\) & \(2.10\pm 0.49\) & \(3.15\pm 0.66\) & \(2.41\pm 0.65\) \\ Czech Republic & \(3.43\pm 0.48\) & \(2.25\pm 0.52\) & \(3.68\pm 0.50\) & \(2.18\pm 0.64\) & \(3.18\pm 0.79\) & \(2.31\pm 0.66\) \\ Hungary & \(3.41\pm 0.49\) & \(2.24\pm 0.55\) & \(3.65\pm 0.59\) & \(2.06\pm 0.52\) & \(3.18\pm 0.74\) & \(2.42\pm 0.76\) \\ New Zealand & \(3.10\pm 0.44\) & \(2.23\pm 0.36\) & \(3.10\pm 0.76\) & \(2.24\pm 0.70\) & \(3.11\pm 0.49\) & \(2.22\pm 0.39\) \\ Estonia & \(3.36\pm 0.25\) & \(2.22\pm 0.33\) & \(3.89\pm 0.58\) & \(2.18\pm 0.51\) & \(2.84\pm 0.49\) & \(2.26\pm 0.49\) \\ Slovenia & \(3.29\pm 0.46\) & \(2.21\pm 0.43\) & \(3.48\pm 0.49\) & \(2.19\pm 0.45\) & \(3.10\pm 0.69\) & \(2.23\pm 0.65\) \\ Chile & \(3.12\pm 0.40\) & \(2.15\pm 0.42\) & \(3.62\pm 0.64\) & \(2.26\pm 0.69\) & \(2.62\pm 0.57\) & \(2.04\pm 0.58\) \\ Israel & \(3.14\pm 0.39\) & \(2.15\pm 0.49\) & \(3.62\pm 0.67\) & \(2.10\pm 0.67\) & \(2.66\pm 0.59\) & \(2.19\pm 0.64\) \\ South Korea & \(3.49\pm 0.24\) & \(2.10\pm 0.39\) & \(3.92\pm 0.45\) & \(2.24\pm 0.66\) & \(3.06\pm 0.47\) & \(1.96\pm 0.44\) \\ Latvia & \(3.52\pm 0.34\) & \(2.10\pm 
0.49\) & \(4.11\pm 0.44\) & \(1.87\pm 0.52\) & \(2.93\pm 0.46\) & \(2.32\pm 0.67\) \\ Finland & \(3.62\pm 0.30\) & \(2.03\pm 0.34\) & \(3.93\pm 0.54\) & \(1.95\pm 0.44\) & \(3.30\pm 0.59\) & \(2.10\pm 0.48\) \\ Japan & \(3.55\pm 0.32\) & specific prompts respectively). This is also concordant with the self-evaluation provided by participants, where we note that participants, on average, indicate that the realism influenced their ratings on geographical representativeness to a moderate extent (average score of \(3.5\) on a scale of \(1\)-\(5\)). Interestingly, we find that that the average realism score assigned by participants is lower (averaged over all countries) when the prompt excludes the country name (this difference is statistically significant with p value \(<0.05\)). Albeit, we do see that for some countries, e.g., the United States and Brazil, the realism scores decreases upon including the country names in the prompt. More details and country-wise statistics on the realism values (for all \(27\) countries) can be found in Table 2. ### Comparison of DALL-E \(2\) and Stable Diffusion We compare DALL-E \(2\) vs Stable Diffusion models to see which model produces more geographically representative images (Figure 4). We find that (i) for country-specific prompts, the geographical representativeness of images generated through DALL-E \(2\) are higher than those from Stable Diffusion by about \(0.6\) points (and this difference is statistically significant as per a paired t-test with a p-value \(<0.05\)); and (ii) for country agnostic prompts, the differences are not statistically significant (see Figure 4). ## 4 Feasibility of Automating the Evaluation Evaluating geographical representativeness of text-to-image models through user studies is labor intensive, expensive and not easily reusable (for future models). It would be ideal to automatically quantify the geographical representativeness of unseen test images. In this section, we analyse the feasibility of such automatic evaluation (**RQ4** in Section 2). Particularly, we explore automatically estimating the geographical representativeness using two different approaches: (i) using CLIP (a text-image alignment model) to obtain the similarity between the country-specific textual Figure 4: **DALL-E 2 vs Stable Diffusion:** Average geographical representativeness scores for images generated by DALL-E \(2\) and Stable Diffusion, with and without country-specific prompts. Figure 3: Qualitative examples of images of four common nouns generated by DALL-E \(2\) and Stable Diffusion models. Through these examples and others, we see that the default generations often reflect artifacts from US and Canada. For example, the average score (in unspecified case) for the images of houses generated through DALL-E \(2\) is \(3.95\) for US and Canada, and \(2.09\) for the remaining countries. prompt and the test image; and (b) using the similarity of the test image to already annotated images, i.e., via a \(k\)-nearest neighbor model. We elaborate these schemes below: ### CLIP-based Similarity One of the common techniques used to automatically quantify biases in the text-to-image models is to use CLIP-based similarity as a proxy [21]. For instance, CLIP similarity scores have been previously used to evaluate gender, racial, ethnic and cultural biases in text-to-image models [8, 3, 38]. Further, it has also been used to evaluate cross-lingual coverage of a concept in text-to-image models [31]. 
To assess if the CLIP model could be a useful tool for automatically estimating the geographical representativeness scores for a given country-noun pair, we use it to obtain the un-normalized similarity score between the image and a query of the form "high definition image of a typical [noun] in [country]", and compare it to the geographical representativeness score assigned by participants from our study. We evaluate if we could reach the same findings (as in SS3) by using the CLIP similarity scores. ResultsOverall, we find that the images generated through country-specific prompts have higher CLIP-based similarity scores than those generated by country-agnostic prompts (p-value \(<0.001\)), for both DALL-E \(2\) and Stable Diffusion. Of all the cases where DALL-E \(2\) images generated using country-specific prompts have a higher score than images generated without country names, \(98.7\%\) of the times the CLIP similarity scores are also higher. For the Stable Diffusion model, the corresponding percentage is \(96.4\%\). These high-level findings are consistent with the user study. However, when we compare the scores of DALL-E \(2\) and Stable Diffusion models, CLIP-based similarity suggests that there is no statistically significant dif \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Countries} & \multicolumn{2}{c}{DALL-E \(2\)} & \multicolumn{2}{c}{Stable Diffusion} & \multirow{2}{*}{Self-Assessed} \\ \cline{2-2} \cline{4-6} & w/ country & & & & \\ \hline United States & 3.82 \(\pm 0.37\) & 4.00 \(\pm 0.36\) & 3.88 \(\pm 0.40\) & 4.12 \(\pm 0.32\) & 3.88 \(\pm 0.46\) \\ Canada & 4.06 \(\pm 0.27\) & 3.88 \(\pm 0.49\) & 3.06 \(\pm 0.44\) & 3.19 \(\pm 0.40\) & 3.31 \(\pm 0.68\) \\ Mexico & 4.23 \(\pm 0.72\) & 3.31 \(\pm 0.58\) & 3.23 \(\pm 0.63\) & 2.54 \(\pm 0.66\) & 3.62 \(\pm 0.77\) \\ Brazil & 4.07 \(\pm 0.34\) & 4.27 \(\pm 0.39\) & 3.40 \(\pm 0.45\) & 3.27 \(\pm 0.34\) & 3.53 \(\pm 0.58\) \\ Chile & 4.37 \(\pm 0.33\) & 4.16 \(\pm 0.47\) & 3.05 \(\pm 0.45\) & 2.26 \(\pm 0.54\) & 3.58 \(\pm 0.45\) \\ United Kingdom & 4.46 \(\pm 0.34\) & 3.92 \(\pm 0.58\) & 3.38 \(\pm 0.55\) & 3.54 \(\pm 0.51\) & 3.77 \(\pm 0.65\) \\ Italy & 3.73 \(\pm 0.50\) & 3.20 \(\pm 0.62\) & 3.47 \(\pm 0.52\) & 2.60 \(\pm 0.58\) & 3.40 \(\pm 0.69\) \\ Spain & 4.11 \(\pm 0.37\) & 4.00 \(\pm 0.75\) & 3.67 \(\pm 0.53\) & 3.11 \(\pm 0.65\) & 2.89 \(\pm 0.72\) \\ Greece & 4.32 \(\pm 0.29\) & 3.74 \(\pm 0.48\) & 4.00 \(\pm 0.39\) & 3.47 \(\pm 0.55\) & 3.42 \(\pm 0.51\) \\ Poland & 4.73 \(\pm 0.22\) & 4.13 \(\pm 0.41\) & 3.93 \(\pm 0.34\) & 2.8 \(\pm 0.55\) & 3.27 \(\pm 0.68\) \\ Portugal & 4.35 \(\pm 0.40\) & 3.88 \(\pm 0.56\) & 4.11 \(\pm 0.46\) & 3.29 \(\pm 0.51\) & 3.47 \(\pm 0.77\) \\ Belgium & 3.75 \(\pm 0.39\) & 3.75 \(\pm 0.44\) & 3.40 \(\pm 0.32\) & 2.80 \(\pm 0.45\) & 3.05 \(\pm 0.47\) \\ Czech Republic & 3.67 \(\pm 0.38\) & 3.44 \(\pm 0.56\) & 3.11 \(\pm 0.43\) & 2.72 \(\pm 0.48\) & 3.56 \(\pm 0.52\) \\ Hungary & 4.26 \(\pm 0.35\) & 3.84 \(\pm 0.47\) & 4.00 \(\pm 0.36\) & 3.53 \(\pm 0.42\) & 3.21 \(\pm 0.65\) \\ Slovenia & 3.78 \(\pm 0.29\) & 3.61 \(\pm 0.51\) & 3.27 \(\pm 0.37\) & 2.61 \(\pm 0.47\) & 3.33 \(\pm 0.56\) \\ Germany & 4.17 \(\pm 0.28\) & 3.83 \(\pm 0.38\) & 3.50 \(\pm 0.38\) & 3.33 \(\pm 0.44\) & 4.11 \(\pm 0.40\) \\ Latvia & 4.56 \(\pm 0.28\) & 3.33 \(\pm 0.58\) & 2.72 \(\pm 0.40\) & 2.39 \(\pm 0.47\) & 3.67 \(\pm 0.58\) \\ Estonia & 4.21 \(\pm 0.31\) & 3.47 \(\pm 0.42\) & 3.00 \(\pm 0.36\) & 3.11 \(\pm 0.44\) & 3.74 \(\pm 0.50\) \\ Finland & 4.16 \(\pm 0.39\) & 3.95 
\(\pm 0.42\) & 3.26 \(\pm 0.46\) & 3.00 \(\pm 0.51\) & 3.74 \(\pm 0.60\) \\ France & 4.00 \(\pm 0.39\) & 3.63 \(\pm 0.47\) & 3.31 \(\pm 0.39\) & 3.37 \(\pm 0.44\) & 3.00 \(\pm 0.55\) \\ India & 4.31 \(\pm 0.26\) & 3.92 \(\pm 0.47\) & 3.62 \(\pm 0.47\) & 3.85 \(\pm 0.37\) & 4.16 \(\pm 0.44\) \\ Japan & 3.89 \(\pm 0.37\) & 3.67 \(\pm 0.46\) & 2.94 \(\pm 0.52\) & 2.78 \(\pm 0.55\) & 3.22 \(\pm 0.50\) \\ South Korea & 4.32 \(\pm 0.29\) & 3.42 \(\pm 0.47\) & 3.21 \(\pm 0.47\) & 3.05 \(\pm 0.61\) & 3.89 \(\pm 0.38\) \\ Israel & 4.54 \(\pm 0.34\) & 4.15 \(\pm 0.36\) & 2.92 \(\pm 0.58\) & 3.31 \(\pm 0.69\) & 2.62 \(\pm 0.75\) \\ Australia & 3.93 \(\pm 0.43\) & 3.80 \(\pm 0.62\) & 3.93 \(\pm 0.54\) & 3.47 \(\pm 0.52\) & 3.67 \(\pm 0.71\) \\ New Zealand & 3.75 \(\pm 0.44\) & 3.40 \(\pm 0.56\) & 2.85 \(\pm 0.38\) & 2.65 \(\pm 0.49\) & 3.35 \(\pm 0.52\) \\ South Africa & 4.06 \(\pm 0.61\) & 3.94 \(\pm 0.50\) & 3.06 \(\pm 0.66\) & 3.44 \(\pm 0.55\) & 3.50 \(\pm 0.65\) \\ \hline Average & 4.14 \(\pm 0.08\) & 3.73 \(\pm 0.10\) & 3.38 \(\pm 0.10\) & 3.09 \(\pm 0.11\) & 3.50 \(\pm 0.12\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Country wise photo-realism scores.** We present how the realism scores of images generated from DALL-E \(2\) (D2) and Stable Diffusion (SD) improve when the country name is specified in the text prompt. Additionally, in the last column we include the scores that users assign when asked about how the realism of images influenced their ratings about geographical representativeness. ference in the geographical representativeness of images generated with country name, which contradicts the results from the participants (they find images generated from DALL-E \(2\) with country-specific prompts to be more geographically representative than ones from Stable Diffusion). Moreover, for images generated without the country name, the CLIP similarity scores are higher for Stable Diffusion than DALL-E \(2\) unlike the human ratings, for which there is no statistically significant difference. Next, we study if we could obtain finer-grained findings similar to what we observe through a human study. For this, we first compute the Pearson's correlation coefficient, \(\rho\), between country-wise geographical representativeness scores and CLIP similarity scores. We find no correlation across all nouns for images generated with country names (\(\rho=0.01\)), and weak correlation for images with country-agnostic prompts (\(\rho=0.34\)). Further, we curate a benchmark comprising pairs of images, and evaluate how often do human preferences (about which of the two images is more geographically representative) match with the one selected through CLIP-based similarity. We note that the agreement is merely \(52.4\%\), where random chance agreement would be \(50\%\). These results indicate that the CLIP-based similarity is an inadequate proxy for the geographical representativeness. ### Estimation using Nearest Neighbors We further explore the viability of estimating the geographical representativeness of a given test image (possibly generated by a future text-to-image generation model) using the existing ratings collected for images from DALL-E \(2\) and Stable Diffusion. For a test image \(X_{n}^{T}\) of a given noun \(n\), we define \(\mathcal{X}_{n}^{c}\) as the set of images of \(n\) annotated by participants of country \(c\). Since a given image may be reflective of surroundings in multiple countries, we attempt to estimate the **GR** scores corresponding to all the studied countries. 
For \(X_{n}^{T}\), we find its \(k\) nearest neighbors by extracting the feature vectors of \(X_{n}^{T}\) and the images in \(\mathcal{X}_{n}^{c}\) from the vision model used by CLIP, and then computing the cosine similarities between the corresponding features. The predicted **GR** score of \(X_{n}^{T}\) for country \(c\) is the average of the human ratings corresponding to the obtained nearest neighbors. Specifically, we use the participant ratings of DALL-E \(2\) as the training data and those of Stable Diffusion for testing. Therefore, for noun \(n\) and country \(c\), \(|\mathcal{X}_{n}^{c}|=4\), as we have \(4\) annotated images per noun for a given country, \(2\) generated with country-specific prompts, the other \(2\) generated without the country-specific prompts. For example, to evaluate the **GR** score of an image of a house in India generated by Stable Diffusion, we find its \(k\) nearest neighbors among the images that are generated through DALL-E \(2\)_and_ annotated by Indians. The estimated score is then compared to the true ratings of Indian participants. **Results.** Given that \(|\mathcal{X}_{n}^{c}|=4\), we set \(k=1\) for all our experiments. We find that the average correlation coefficient, the correlation between the human marked scores and the estimated scores is moderate (\(\rho=0.46\)) over all the countries in the unspecified case, however, we find no correlation (\(\rho=0.01\)) in the case of country-specific prompts. Further, the mean squared error (MSE) between the human and estimated scores is \(1.39\) for images with country-agnostic prompts and \(1.56\) for images with country-specific prompts. As a reference, we also check the MSE for a baseline value of \(3.0\) for all the test images across all countries (as \(3\) falls in the middle of \(1\)-\(5\) scale). For this reference, the MSE is \(1.18\) for unspecified case and \(0.83\) for country specific case--both these error values are lower than the corresponding values obtained using the estimates from the \(k\) nearest neighbor model. These values point to the infeasibility of using this approach for automatically estimating the geographical representativeness, at least in the current form. We believe that this is partly due to the fact that we only have a few annotated images in the training corpus to match with. We also speculate that the image feature extractors (used for similarity computation) may not extract features that differentiate images along the geographical lines. We further present the MSE scores of the nearest neighbor method by varying the underlying pretrained feature extractor in Table 3. We note that for both the country unspecified and the country specific cases, the MSE values for the predicted **GR** scores with respect to all the feature extractors are higher than that of values obtained using the baseline score of \(3.0\). This further underscores that automatically estimating geographical representativeness of images is challenging. Both the investigated approaches for estimating geographical representativeness turn out to be inadequate. We are able to reach similar high-level conclusions using CLIP-based similarity, but the similarity scores contradicted finer-grained findings. Overall, it is fundamentally challenging to automatically estimate the representativeness of images. 
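For concreteness, the two automatic proxies examined in this section can be sketched as follows. The snippet assumes the publicly available openai/clip-vit-base-patch32 checkpoint from the Hugging Face transformers library as a stand-in for the authors' exact CLIP setup, and uses dummy images and made-up reference ratings; only the prompt template is taken from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Dummy inputs standing in for a generated test image and annotated references.
test_image = Image.new("RGB", (224, 224), color=(128, 128, 128))
reference_images = [Image.new("RGB", (224, 224), color=(c, c, c)) for c in (60, 200)]
reference_scores = [4.2, 2.5]  # made-up GR ratings for the reference images

# (a) CLIP-based proxy: un-normalized similarity of the test image to a
#     country-specific prompt, as in Sec. 4.1.
prompt = "high definition image of a typical house in India"
inputs = processor(text=[prompt], images=[test_image], return_tensors="pt", padding=True)
with torch.no_grad():
    clip_score = model(**inputs).logits_per_image.item()

# (b) Nearest-neighbour proxy, as in Sec. 4.2: embed images with CLIP's vision
#     tower and copy the human rating of the closest annotated image (k = 1).
def image_features(images):
    pixel_values = processor(images=images, return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        feats = model.get_image_features(pixel_values=pixel_values)
    return torch.nn.functional.normalize(feats, dim=-1)

test_feat = image_features([test_image])
ref_feats = image_features(reference_images)
nearest = (ref_feats @ test_feat.T).squeeze(1).argmax().item()
predicted_gr = reference_scores[nearest]

print(f"CLIP similarity: {clip_score:.2f}, 1-NN predicted GR: {predicted_gr}")
```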
\begin{table} \begin{tabular}{l l l} \hline Approach & w/o country & w/ country \\ \hline Reference (\(=3.0\)) & 1.18 & 0.83 \\ \hline Feature extractors: & & \\ VGG16 [35] & 1.55 & 1.52 \\ ResNet18 [12] & 1.67 & 1.77 \\ ResNet50 [12] & 2.04 & 1.62 \\ ViT [10] & 1.81 & 1.51 \\ CLIPVision [21] & 1.38 & 1.56 \\ \hline \end{tabular} \end{table} Table 3: Evaluating the estimated geographical representativeness using \(k\)-nearest neighbor approach. We find the the Mean Squred Errors (MSE) for all the feature extractors are too high to be useful. ## 5 Limitations & Future Directions There are several important limitations of our work. Despite our efforts to reach out to participants from \(88\) countries, we received sufficient responses from users only in \(27\) countries, and hence our study is limited to only \(27\) countries. We received **less than \(5\) responses** from participants in Nepal (1), Bangladesh (2), Malaysia (2), Turkey (5), Singapore (2), Argentina (1), Kenya (3), Venezuela (1), Pakistan (1), Indonesia (2), Nigeria (2), Romania (2), Colombia (3), Namibia (1), and **zero responses** from Laos, Armenia, Yemen, Thailand, Vietnam, Sri Lanka, Kazakhstan, Ukraine, Sierra Leone, Burkina Faso, Morocco, Senegal, Philippines, Egypt, Peru, Ethiopia, Mozambique, Kyrgyz Republic, Tanzania, Mali, Ecuador, Myanmar, Cambodia, Russia, Andorra, Finland, Tunisia, Gabon, Angola, Algeria, Libya, Botswana, and Seychelles. As past surveys note, internet is not uniformly accessible across the globe [6, 2]. The lack of access disproportionately impacts marginalized and poor nations, which further limits the voice residents of marginalized countries have on the internet. Systems trained on the internet data run the risk of excluding such communities. Perhaps due to internet access issues, crowd-sourcing platforms have few (or no) participants from many developing countries, which further exacerbates inclusive development and evaluation of machine learning models (country-wise details can be found in Figure 5). Another weakness of our work is that we evaluate generated images for only \(10\) common nouns. As we evaluate two different models with two different kinds of prompts and use multiple images per noun, we end up with a survey comprising \(80\) images per participant. Including additional nouns would have resulted in longer (or more) surveys and likely lower participation. However, we will open-source the code and required tools for future work to reproduce and extend similar studies. An interesting future direction is to examine techniques to aggregate images (for a given noun and a country) to speed and scale up the evaluation. To improve the models, and the geographical representativeness of the generated images, we believe that more work is required to better document the sources of image-text pairs in the training data so as to understand the distributions of different objects and countries. Further, we need to collect and augment more data from the under-represented countries--there have been some past attempts at scraping more diverse image data [23]. Lastly, we call for improving the participation from under-represented countries in development and evaluation of machine learning models. ## 6 Related Work **Text-to-image Generation.** Over the last few years, models that convert any input text to images have gained significant traction. Initial text-to-image generation model used Generative Adversarial Networks [45, 26, 40, 48] and Generative RNNs [18]. 
Recent advancements in transformers [42] and diffusion models [27], and their application to text-to-image generation, has improved the quality of generated images. Autoregressive models encode the image as a grid of latent codes and train a multimodal transformer language model to generate the image tokens [25, 7, 44]. Another line of work employs diffusion models for image generation [29, 24, 20, 30]. A different line of work fuses the diffusion models with autoregressive transformers [11]. For our study, we pick DALL\(\cdot\)E \(2\)[24], a diffusion based model released by OpenAI, and Stable Diffusion [27], an open-source latent text-to-image diffusion model, as there are increasing concerns that generated images from these models exhibit and amplify societal biases [4, 24, 44], since they are trained on a large number of text-image pairs scrapped from the web and other sources. **Societal Biases.** There is a growing body of work that critically analyzes the outputs of deep learning models in an attempt to discover and measure societal biases for various downstream applications including image classification [41, 22, 46], image captioning [47, 13], language generation [34, 33], face recognition [5, 9], image search [19], and art creation [36]. A recent study investigates generated Figure 5: The number of participants available for research studies on Prolific are heavily skewed, and have few (or no) participants from many poor and developing nations. Such disparity is a serious challenge for inclusive model development and evaluation. images from DALL-E v1 for gender and racial biases[8], and another work examines the outputs of Stable Diffusion models for stereotypes associated with gender, class, and ethnicity [4]. The latter study [4] showcases several instances of dangerous biases exhibited by these models, and cautions against widespread adoption of such models. Our study is similar in spirit to prior studies that aim to measure societal biases but analyzes--an oft-overlooked aspect of inclusive representation--geographical representation. ## 7 Conclusion In this work, we investigated how well the images generated by two popular text-to-image models (DALL-E \(2\) and Stable Diffusion) reflect surroundings across the world. We conducted a user study involving \(540\) participants from \(27\) countries, wherein we asked participants the degree to which generated images of common nouns reflect their surroundings. We found that when the input prompt does not include any specific country name, users from \(25\) out of \(27\) countries felt that the generated images were less representative of the artifacts, with an average score of \(2.39\). However, ratings increased to \(3.49\) on an average when we included the country name in the text prompts. These results also highlight how there is considerable room for models to generate more geographically representative content. When comparing DALL-E \(2\) with the Stable Diffusion model, we found that DALL-E \(2\) outperformed Stable Diffusion when using country specific inputs, but in other cases, these two models received similar scores. We also explored the feasibility of automating our study, and noted that the explored approaches were inadequate. Lastly, we highlighted key limitations and discussed ideas for future work to scale up the study and improve the geographical representativeness. ## Acknowledgements We thank all the participants for their time and effort in scoring the images. 
We are grateful to Vinodkumar Prabhakaran, Sameer Singh, Preethi Seshadri and the members of the Vision and AI Lab, Indian Institute of Science, for their valuable feedback.
2301.03173
Popcorn transitions and approach to conformality in homogeneous holographic nuclear matter
We study cold and dense nuclear matter by using the gauge/gravity duality. To this end we use the Witten-Sakai-Sugimoto model and the V-QCD models with an approach where the nuclear matter is taken to be spatially homogeneous. We focus on the ''popcorn'' transitions, which are phase transitions in the nuclear matter phases induced by changes in the layer structure of the configuration on the gravity side. We demonstrate that the equation of state for the homogeneous nuclear matter becomes approximately conformal at high densities, and compare our results to other approaches.
Jesús Cruz Rojas, Tuna Demircik, Matti Järvinen
2023-01-09T05:13:50Z
http://arxiv.org/abs/2301.03173v1
# Popcorn transitions and approach to conformality in homogeneous holographic nuclear matter ###### Abstract We study cold and dense nuclear matter by using the gauge/gravity duality. To this end we use the Witten-Sakai-Sugimoto model and the V-QCD models with an approach where the nuclear matter is taken to be spatially homogeneous. We focus on the "popcorn" transitions, which are phase transitions in the nuclear matter phases induced by changes in the layer structure of the configuration on the gravity side. We demonstrate that the equation of state for the homogeneous nuclear matter becomes approximately conformal at high densities, and compare our results to other approaches. + Footnote †: preprint: APCTP Pre2022 - 030 ## I Introduction Recent observations of neutron star mergers by the LIGO/Virgo collaboration have opened a new window for studying dense matter in quantum chromodynamics (QCD). In particular, the gravitational and electromagnetic waves observed from the GW170817 merger event [1] already set highly nontrivial constrains for the QCD equation of state [2] at low temperatures and high densities. This progress has boosted interest in theoretical studies of dense QCD, which is a challenging topic as standard theoretical and computational tools do not work in extensive regions of the phase diagram (see the overview in [3]). These include the region of dense nuclear matter, i.e., a nucleon liquid at baryon number densities well above the nuclear saturation density \(\rho_{s}\approx 0.16\,\text{fm}^{-3}\). The difficulty of solving the properties of dense matter calls for new methods. A possibility is to use the gauge/gravity duality. Indeed, applications of holographic QCD models to dense matter have received a lot of interest recently. There has been progress in developing models both for quark matter [4; 5; 6; 7; 8; 9], for nuclear matter [10; 11; 12; 13; 14], and for other phases (such as color superconducting or quarkyonic phases) [15; 16; 17; 18]. See also the reviews [19; 20]. The natural starting point for describing nuclear matter is to study the holographic duals for nucleons. The standard approach [21] boils down to describing them as solitonic "instanton" solutions of bulk gauge fields, i.e., the gauge fields living in the higher dimensional gravity theory. These solitons are localized both in the spatial directions and in the holographic direction (but not in time). Solitons that are duals of isolated nucleons have been solved in various holographic models [22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Constructing more complicated solutions, and eventually the holographic dual of dense nucleon matter out of these bulk solitons, is however challenging. Some results, which use instanton gases without interactions, are available [32; 33; 34; 35; 13; 36] and also with two-body interactions included [10]. Moreover, at large \(N_{c}\) the nuclear matter is a crystal rather than a liquid of nucleons [37]. Such crystals have been studied by using different toy models and approximations [38; 39; 40; 41; 42; 43]. In this article, we focus on a simpler approach, which treats dense nuclear matter as a homogeneous configuration of the non-Abelian gauge fields in the bulk. This approach was applied to the Witten-Sakai-Sugimoto (WSS) model [44; 45; 46], in [47], and argued to be a reasonable approximation at high density.1 It was further developed in [49; 50] and applied to other models in [11; 14]. 
Interestingly, dense (and cold) homogeneous holographic nuclear matter was seen to have a high speed of sound, clearly above the value \(c_{s}^{2}=1/3\) of conformal theories [11] (see also [51; 52]). That is, the equation of state is "stiff". This is important as it helps to construct models which pass observational bounds [53; 54; 55]. Footnote 1: An even simpler approach is to treat the baryons as point-like sources in the bulk, which may be a better approximation at low density [47; 48]. Changes in the structure of dense nuclear matter may give rise to transitions within the nuclear matter phase. At large \(N_{c}\), nuclear matter is a crystal of Skyrmions, solitons of the low energy chiral effective theory [56]. As the density increases, the Skyrmion crystal is expected to undergo a transition into half-solitons, where each node of the crystal carries baryon number of one half [57; 58; 59; 60]. Similar structures have been studied by using the gauge/gravity duality in [39]. This topology changing transition has been studied extensively by using an effective field theory approach, which introduces the \(\sigma\) meson of QCD as a pseudo-Nambu-Goldstone mode of broken scale invariance, and vector mesons through the hidden local symmetry approach [61; 62]. This approach is supported by the analysis of the nucleon axial coupling \(g_{A}\) for heavy nuclei [63]. Above the transition density, it was found that the speed of sound rapidly approaches the conformal value \(c_{s}^{2}=1/3\)[64]. At the same time, the polytropic index \(\gamma=d\log p/d\log\epsilon\) takes small values [65; 66] as compared to what is usually found in nuclear theory models [67; 68]. The transition has also been argued [69] to be indicative of quark-hadron continuity [70], which states that there is no phase transition between nuclear and quark matter. Whether the continuity is a feasible possibility is a matter of ongoing debate, see [71; 72; 73; 74] (see also [15] for a holographic discussion). A closely related transition realised in holographic setups is the transition from a single-layer configuration into a double layer configuration: In the low density limit, the location of each soliton is found by individually minimizing its energy, so that the solitons form a single layer at a specific value of the holographic coordinate. For dense configurations, however, the repulsive interactions between solitons will eventually force them out of this layer, which leads to a double layer or a more complicated configuration. This transition was coined the "popcorn" transition in [40]. If interactions between the solitons are attractive at large distances (as is the case for real QCD) the picture is more complicated as the solitons clump together even at low densities, but the transition may still be present. Various phases appear as the density increases further [41; 42] in setups motivated by the WSS model. The simplest case of the transition is however the separation of a single layer into two layers. This kind of transition was also found to take place in the WSS model in various approximations: when the instantons were approximated as point-like objects [17], when including finite widths [36], and when using a homogeneous approach [50]. Indications of such a transition were also seen when using a homogeneous Ansatz for nuclear matter in the hard wall model of [14], where it was interpreted as a transition to a quarkyonic phase [75]. 
In this article, we study the popcorn transitions within cold homogeneous holographic nuclear matter by using two different models, the top-down WSS model and the bottom-up V-QCD model [19; 76]. These two are arguably the most developed holographic top-down and bottom-up models for QCD at finite temperature and density. For the WSS model a similar analysis was carried out in [50]. This reference used an approach which is slightly different from ours: In their case, a zero curvature condition for the non-Abelian gauge fields in the Lagrangian density is imposed before approximating the density to be homogeneous. We use a somewhat simpler approach where the fields are assumed to be homogeneous to start with. In our case, as we will discuss in detail below, a discontinuity of the gauge fields as a function of the holographic coordinate is required to have nonzero baryon density [47]. This may appear to be a weakness of the simpler approach, but we remark that the discontinuity is actually well motivated, as it can be seen to arise from non-analyticity of the instanton solutions at their centers after smearing over the spatial dimensions [19]. The main goal in this article is to analyze the softening of the equation of state at the phase transition. The main indicators for this are the speed of sound and the polytropic index \(\gamma\). We compute these quantities in both holographic models and compare to results in other setups. In particular, we find interesting similarities with the effective theory approach for the topology changing transition [62; 64]. The rest of the article is organized as follows. In section II we review the setup with homogeneous nuclear matter for the WSS model, and in section III we do the same for the V-QCD model. In section IV we discuss the numerical results for the solutions, the phase transitions, and the equation of state. Finally, we discuss our findings in section V. ## II Homogeneous nuclear matter in the Witten-Sakai-Sugimoto model The phase diagram of QCD has been studied by using several holographic "top-down" models, i.e., models directly based on string theory, such as the Witten-Sakai Sugimoto model [45; 46]. In this model, Witten's non-supersymmetric model for low-energy QCD [44] has been successfully applied to study the spectra and the properties of mesons and baryons. In the WSS model, the pure glue physics of the QFT is described by the dual gravitational background and it is sourced by \(N_{c}\)\(D4\)-branes in type-IIA superstring theory. Fundamental degrees of freedom are included by adding \(N_{f}\) pairs of \(D8\) and \(\overline{D8}\)-branes, such that the strings connecting \(D4-D8\) and \(D4-\overline{D8}\) branes are associated with left and right-handed fermions. Witten's model includes a phase transition involving a topologically nontrivial change in geometry from a low temperature "cigar" geometry to a high temperature black hole geometry [77]. We focus here in the low temperature geometry which we will give explicitly below. In the low temperature geometry the \(D8\) and \(\overline{D8}\)-branes join at the tip of the cigar (see figure 1), which locks together the flavor transformations on the branes, indicating chiral symmetry breaking. As shown in the figure we assume the simplest case where the \(D8\) and \(\overline{D8}\)-branes are antipodal, i.e., at located at exactly opposite curves on the cigar. 
A chemical potential for the baryon number can be turned on by adding a nonzero source for the temporal component of the Abelian gauge field on the \(D8\)-branes. ### Expanding the Dirac-Born-Infeld action The 10-dimensional metric of the confined low temperature geometry in the Witten model can be written as [78; 79] \[ds^{2}= \left(\frac{U}{R}\right)^{3/2}\left[dx_{\mu}dx^{\mu}+f(U)dx_{4}^{2}\right]\] \[+\left(\frac{R}{U}\right)^{3/2}\left[\frac{dU^{2}}{f(U)}+U^{2}d\Omega_{4}^{2}\right] \tag{1}\] where \(R\) is the curvature radius, \[f(U)=1-\frac{U_{KK}^{3}}{U^{3}} \tag{2}\] with \(U_{KK}\) denoting the end of space, and \(d\Omega_{4}^{2}\) is the metric of \(S^{4}\). For the Minkowski metric \(dx_{\mu}dx^{\mu}\) we use mostly plus conventions, and the \(x_{4}\) coordinate is compactified on a circle. The dilaton is given by \[e^{\phi}=g_{s}\left(\frac{U}{R}\right)^{3/4}\, \tag{3}\] where \(g_{s}\) is the string coupling. In the (\(x_{4}\), \(U\))-coordinates this geometry takes the form of a cigar, and the regularity at the tip of the cigar links the radius of compactification \(R_{4}\) of the \(x_{4}\) coordinate to the Kaluza-Klein scale characterized by \(U_{KK}\), as \(R_{4}=\frac{4\pi R^{3/2}}{3\sqrt{U_{KK}}}\). The simplest \(D8\) brane embedding within the cigar geometry is the antipodal one, given (for example) by \(x_{4}=0\) and \(x_{4}=\pi R_{4}\). By changing the coordinates to \(U=U_{KK}(1+z^{2})^{1/3}\) the induced metric on the brane can be written as \[ds_{\rm ind}^{2}= \left(\frac{U_{KK}}{R}\right)^{3/2}\sqrt{1+z^{2}}\;dx_{\mu}dx^{\mu}\] \[+\left(\frac{R}{U_{KK}}\right)^{3/2}\frac{4dz^{2}}{9\left(1+z^{2}\right)^{5/6}}\] \[+R^{3/2}\sqrt{U_{KK}}\sqrt[6]{1+z^{2}}d\Omega_{4}^{2}. \tag{4}\] Here the coordinate \(z\) takes both positive and negative values on different branches of the brane. The boundary is at \(z=\pm\infty\) and the tip of the cigar at \(z=0\). See Figure 1 for illustration. We work in units where \(U_{KK}=1\) and \(R^{3}=9/4\). We start from the Dirac-Born-Infeld action \[S_{\rm DBI}=-\tau_{8}\int d^{9}x\;e^{-\phi}{\rm tr}\sqrt{-\det(g+{\cal F})}\, \tag{5}\] where the trace is over flavor indices and the brane tension is given by \[\tau_{8}=\frac{1}{(2\pi)^{8}l_{s}^{9}}=\frac{\lambda^{9/2}}{157464\sqrt{2}\pi^{8}}. \tag{6}\] Here \(l_{s}\) is the string length, \(\lambda\) is the 't Hooft coupling, \({\cal F}\) is the field strength tensor of the gauge field \({\cal A}\), and we used the relations \(R^{3}=\pi g_{s}N_{c}l_{s}^{3}\) and \(2\pi l_{s}g_{s}N_{c}=\lambda\). We use a similar expansion as in the case of V-QCD below [11], so that the non-Abelian components of the gauge fields are treated as small but the Abelian terms are kept unexpanded. To do so, we separate the gauge field into non-Abelian and Abelian components: \({\cal A}=A+\hat{A}\) where \(A\) is non-Abelian and \(\hat{A}\) is Abelian, i.e., proportional to the unit matrix in flavor space (and similarly \({\cal F}=F+\hat{F}\) for the field strengths). We take only the temporal component of the Abelian gauge field to be nonzero, assume that it depends only on the holographic coordinate \(z\), and assume no dependence on the angular coordinates of \(\Omega_{4}\) for all fields, so that these coordinates can be integrated out. Figure 1: Setup in the WSS model. The coordinate \(z\) runs between \(z=-\infty\) and \(z=\infty\) between the two boundaries of the \(D8\) brane embedding as indicated in the figure. 
The blobs show the locations of the discontinuities for the single layer configuration (left) and for the double layer configuration (right). Then the five-dimensional effective action for the gauge fields, to leading order in the non-Abelian \(F^{2}\), is given as \[S=S_{\rm DBI}^{(0)}+S_{\rm DBI}^{(1)}+S_{\rm CS} \tag{7}\] where the terms arising from the Dirac-Born-Infeld action read \[S_{\rm DBI}^{(0)}=-\frac{\lambda^{3}N_{c}N_{f}}{19683\pi^{5}}\int d ^{5}x\sqrt[3]{1+z^{2}}\sqrt{\left(1+z^{2}\right)^{2/3}-\left(1+z^{2}\right) \Phi^{\prime}(z)^{2}} \tag{8}\] and \[S_{\rm DBI}^{(1)}= -\frac{\lambda N_{c}}{216\pi^{5}}\int d^{5}x\,\operatorname{tr} \left[-\frac{F_{tz}^{2}\left(1+z^{2}\right)}{\left(1-\sqrt[3]{1+z^{2}}\Phi^{ \prime}(z)^{2}\right)^{3/2}}-\frac{F_{ti}^{2}}{\sqrt[3]{1+z^{2}}\sqrt{1-\sqrt[ 3]{1+z^{2}}\Phi^{\prime}(z)^{2}}}\right.\] \[\left.+\frac{F_{ij}^{2}\sqrt{1-\sqrt[3]{1+z^{2}}\Phi^{\prime}(z)^ {2}}}{2\sqrt[3]{1+z^{2}}}+\frac{F_{zi}^{2}\left(1+z^{2}\right)}{\sqrt{1-\sqrt[ 3]{1+z^{2}}\Phi^{\prime}(z)^{2}}}\right]\,. \tag{9}\] Notice that the general Dirac-Born-Infeld action is ambiguous for non-Abelian fields, but up to second order in the expansion the action is non-ambiguous. The Chern-Simons term is \[S_{\rm CS} =\frac{N_{c}}{24\pi^{2}}\int\left\{\omega_{5}+d\left[\hat{A} \wedge\operatorname{tr}\left(2A\wedge F+\frac{i}{2}A^{3}\right)\right]\right.\] \[\left.+3\hat{A}\wedge\operatorname{tr}\left(F\wedge F\right)\right\} \tag{10}\] with the Abelian gauge field normalized as \(\Phi=2\lambda\hat{A}_{t}/(\sqrt{729}\pi).\) Here \[\omega_{5}=\operatorname{tr}\left(A\wedge F^{2}+\frac{i}{2}A^{3}\wedge F-\frac {1}{10}A^{5}\right) \tag{11}\] gives the standard Chern-Simons term for the brane. We used conventions where \(F=dA-iA\wedge A\). Notice that in (10) the Abelian field couples to the instanton density in the bulk as expected (see the last term). Indeed, notice that \(S_{\rm DBI}^{(0)}\) and \(S_{\rm DBI}^{(1)}\) depend on the Abelian gauge field only through its \(z\)-derivative, and only \(S_{\rm CS}\) contains non-derivative dependence on this field. The total baryon charge density is defined as \[\rho_{0}=-\left(\frac{\delta S}{\delta\hat{A}_{t}^{\prime}}\right)_{\rm bdry}= \int dz\,\frac{\delta S}{\delta\hat{A}_{t}}, \tag{12}\] according to holographic dictionary, where only the Chern-Simons term contributes to the last expression. Therefore the baryon charge is given by the coupling of the non-Abelian field to \(\hat{A}_{t}\) in \(S_{\rm CS}\). In other words, the Chern-Simons term determines how the solitons source baryonic charge. We also remark that the construction of the precisely consistent Chern-Simons term is actually rather involved, in general [80], but in the simple case considered here complications do not arise. ### The homogeneous Ansatz Then as the next step, we set \(N_{f}=2\) and insert the homogeneous Ansatz \[A^{i}=h(z)\sigma^{i} \tag{13}\] where \(h(z)\) is a scalar function and \(\sigma^{i}\) are the Pauli matrices. The non-Abelian \(A_{t}\) and \(A_{z}\) components are set to zero. We then find that \[F_{zi}=h^{\prime}(z)\sigma_{i}\,\qquad F_{ij}=2h(z)^{2}\epsilon_{ijk}\sigma^{k} \tag{14}\] while other components of the field strength are zero. 
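Two of the intermediate steps above, the \(z\)-\(z\) component of the induced metric (4) and the field strengths (14) following from the homogeneous Ansatz (13), can be verified with a short symbolic computation. The following SymPy sketch is purely illustrative: it uses the units \(U_{KK}=1\), \(R^{3}=9/4\) and the convention \(F=dA-iA\wedge A\) stated in the text, and all variable names are ours.

```python
import sympy as sp

z = sp.symbols('z', real=True)
R = sp.Rational(9, 4) ** sp.Rational(1, 3)          # units with U_KK = 1, R^3 = 9/4

# --- z-z component of the induced metric, Eq. (4) ---
U = (1 + z**2) ** sp.Rational(1, 3)                 # coordinate change U = (1 + z^2)^(1/3)
f = 1 - 1 / U**3                                    # f(U) of Eq. (2) with U_KK = 1
g_zz = (R / U) ** sp.Rational(3, 2) * sp.diff(U, z) ** 2 / f
g_zz_eq4 = R ** sp.Rational(3, 2) * 4 / (9 * (1 + z**2) ** sp.Rational(5, 6))
assert sp.simplify(g_zz - g_zz_eq4) == 0

# --- field strengths of the homogeneous Ansatz, Eq. (14) ---
h = sp.Function('h')(z)
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]              # Pauli matrices
A = [h * s for s in sigma]                          # A^i = h(z) sigma^i, Eq. (13)

for i in range(3):
    # F_zi = partial_z A_i = h'(z) sigma_i
    assert (sp.diff(A[i], z) - sp.diff(h, z) * sigma[i]).expand() == sp.zeros(2, 2)
    for j in range(3):
        # A_i is constant in the spatial directions, so F_ij = -i [A_i, A_j]
        F_ij = -sp.I * (A[i] * A[j] - A[j] * A[i])
        expected = sp.zeros(2, 2)
        for k in range(3):
            expected += 2 * h**2 * sp.LeviCivita(i + 1, j + 1, k + 1) * sigma[k]
        assert (F_ij - expected).expand() == sp.zeros(2, 2)

print("Eqs. (4) and (14) reproduced")
```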
We obtain \[S_{\rm DBI}^{(1)}= -\frac{\lambda N_{c}}{36\pi^{5}}\int d^{5}x\Bigg{[}\frac{4h(z)^{ 4}\sqrt{1-\sqrt[3]{1+z^{2}}\Phi^{\prime}(z)^{2}}}{\sqrt[3]{1+z^{2}}}\] \[+\frac{\left(h^{\prime}(z)\right)^{2}\left(1+z^{2}\right)}{\sqrt{ 1-\sqrt[3]{1+z^{2}}\Phi^{\prime}(z)^{2}}}\Bigg{]} \tag{15}\] and the Chern-Simons action contributes as \[S_{\rm CS}=\frac{3N_{c}}{\pi^{2}}\int h(z)^{2}h^{\prime}(z)\hat{A}\wedge dz \wedge dx_{1}\wedge dx_{2}\wedge dx_{3} \tag{16}\] as well as a boundary term \[S_{\rm CS,bdry}=\frac{3N_{c}}{4\pi^{2}}\int_{\rm bdry}h(\pm\infty)^{3}\hat{A }\wedge dx_{1}\wedge dx_{2}\wedge dx_{3} \tag{17}\] which however will vanish when it is evaluated on the solution in our case, because the solution for \(h(z)\) will vanish on the boundary. ### The single layer solution In order to have explicit parity invariance, we assume that \(h(z)=-h(-z)\). Following [47], we assume that the field \(h\) has a discontinuity at \(z=0\), denoted by the blob in figure 1 (left), and approaches different constant values as \(z\to 0\) either from above or from below. As we mentioned above, the discontinuity is required to have a non-vanishing baryon density. Defining the bulk charge density as \[\rho(z,x^{\mu})=-\frac{\delta S}{\delta\,\partial_{z}\hat{A}_{t}(z,x^{\mu})} \tag{18}\] the equation of motion for \(\hat{A}\) implies \[\rho^{\prime}(z)=\frac{3N_{c}}{\pi^{2}}h(z)^{2}h^{\prime}(z). \tag{19}\] The continuous and symmetric solution is given by \[\rho(z)=\left\{\begin{array}{cc}\rho_{0}+\frac{N_{c}}{\pi^{2}}h(z)^{3}\,&(z<0)\\ -\rho_{0}+\frac{N_{c}}{\pi^{2}}h(z)^{3}\,&(z>0)\end{array}\right. \tag{20}\] where \[\rho_{0}=\frac{N_{c}}{\pi^{2}}\lim_{z\to 0+}h(z)^{3}=-\frac{N_{c}}{\pi^{2}} \lim_{z\to 0-}h(z)^{3} \tag{21}\] is the boundary charge density. Notice that as expected, it is sourced by the discontinuity of \(h\). This solution is identified as the single layer solution. To finalize the construction, we require that \(h\) satisfies the equation of motion arising from minimizing the action, except at \(z=0\) where the discontinuity is located. ### The double layer solution A slightly more general solution than the single layer solution however exists: it is possible that the discontinuity of the \(h\) field does not take place at the tip but at a generic value of the holographic coordinate. The simplest of such solutions, which still respects the symmetry \(h(z)=-h(-z)\), is where \(h(z)\) vanishes when \(-z_{c}<z<z_{c}\) so that the discontinuity is located at \(z=\pm z_{c}\), see figure 1 (right). Similar solutions were considered in [17] for point-like instantons. In this case, the solution for the bulk charge density is given by \[\rho(z)=\left\{\begin{array}{cc}\rho_{0}+\frac{N_{c}}{\pi^{2}}h(z)^{3}\,&(z<-z_{c})\\ -\rho_{0}+\frac{N_{c}}{\pi^{2}}h(z)^{3}\,&(z>z_{c})\\ 0\,&(-z_{c}<z<z_{c})\end{array}\right. \tag{22}\] where \[\rho_{0}=\frac{N_{c}}{\pi^{2}}\lim_{z\to z_{c}+}h(z)^{3}=-\frac{N_{c}}{\pi^{2 }}\lim_{z\to(-z_{c})-}h(z)^{3}. \tag{23}\] This solution is identified as the double layer solution. ### Legendre transform to canonical ensemble To determine the phase diagram, one needs to determine the free energy, the energy densities, and the grand potential for the different phases. Thus, we first need to compute the free energy of the baryonic phase. For this purpose we minimize the action for \(h\) to determine the location of the discontinuity. It is convenient to work at fixed baryonic charge rather than chemical potential. 
To this end, we perform a Legendre transformation for the action (7): \[\widetilde{S}=S+\int\frac{d}{dz}\left(\hat{A}_{t}\,\rho\right)dz \tag{24}\] For convenience we rescale \(\rho\) as \(\rho\to\frac{4\lambda^{4}}{53441\pi^{6}}\rho\equiv\hat{\rho}\). Expanding to first nontrivial order in \(h(z)\) and \(h^{\prime}(z)\), and using equation (18), we can solve for \(\Phi^{\prime}(z)\): \[\Phi^{\prime}(z)=-\frac{\hat{\rho}}{R(z,\hat{\rho})(1+z^{2})N_{c}}-\frac{2187 \hat{\rho}\left(-4h(z)^{4}(1+z^{2})^{\frac{2}{3}}+h^{\prime}(z)^{2}(1+z^{2})^ {2}R(z,\hat{\rho})^{2}\right)}{8\lambda^{2}N_{c}(1+z^{2})^{\frac{2}{3}}R(z, \hat{\rho})^{3}}\, \tag{25}\] where we define \[R(z,\hat{\rho})=\sqrt{1+\frac{\hat{\rho}^{2}}{\left(1+z^{2}\right)^{5/3}N_{c} ^{2}}}. \tag{26}\] Then the Legendre transformed action is \[\widetilde{S}= -N_{c}\int d^{5}x\Bigg{[}\frac{2\lambda^{3}(1+z^{2})^{\frac{2}{3} }R(z,\hat{\rho})}{19683\pi^{5}}+\frac{\lambda\left(4h(z)^{4}(1+z^{2})^{\frac{ 2}{3}}+h^{\prime}(z)^{2}\left((1+z^{2})^{2}R(z,\hat{\rho})^{2}\right)\right)}{ 36\pi^{5}\left((1+z^{2})R(z,\hat{\rho})\right)}\Bigg{]}. \tag{27}\] Now we can find the equation of motion for \(h(z)\) and solve it. For this purpose, we need to find the appropriate asymptotics of the field \(h\) at the boundary: \[h(z)\simeq\frac{h_{1}}{z}\, \tag{28}\] with \(h_{1}\) remaining as a free parameter. ## III Homogeneous nuclear matter in the V-QCD model V-QCD is bottom-up holographic model which contains both glue and flavor sectors. The glue sector is given by the improved holographic QCD framework [81; 82] in which a dilaton field and the potential depending on it are used to implement the essential features of the related QCD sector, i.e. asymptotic freedom, and confinement to deconfinement phase transition. The flavour sector arises from a pair of dynamical space filling flavor branes [83; 84]. In V-QCD, the full back-reaction of the flavor branes is taken into account via the Veneziano limit [85] in which both \(N_{c}\) and \(N_{f}\) are large but their ratio is kept \(\mathcal{O}(1)\) as it is in real QCD [76]. In the V-QCD flavor sector, a tachyon field is used to realize the breaking/restoration of the chiral symmetry. In both sectors, the model parameters are also fixed by considering perturbative QCD results (running of coupling constant and quark mass) at weak coupling [81; 76; 82], by requiring qualitative agreement with QCD (e.g. confinement and discrete spectrum) at strong coupling [86], and by fitting to QCD data (e.g. meson and glueball masses and the equation of state at finite temperature) [87; 88; 89; 6; 31]. For the more complete review about the construction of the V-QCD model, the fit to fix the potentials and comparison with the data, we refer the reader to [19]. In this article, we use one of the models defined in [6] (potentials 7a). The parameter \(b\) appearing in the Chern-Simons action [11] is set to \(b=10\). There are two possible geometries in V-QCD: a horizon-less geometry ending in a "good" kind of singularity [90] (dual to a chirally broken confined phase) and a geometry of charged "planar" black hole [91; 92] (dual to a chirally symmetric deconfined phase). In this article, we focus on the former geometry which is the relevant geometry for cold and dense nuclear matter. This phase also includes chiral symmetry breaking which is induced by the condensate os a scalar field \(\tau\) (the "tachyon") in the bulk. 
In order to discuss nuclear matter, we will employ here an approach which is essentially the same as the homogeneous approach introduced for the WSS model above [11]. This approach has been improved by combining the predictions of V-QCD with other models [93; 94; 95; 96]. The resultant equation of states have been widely investigated. It was shown that the resultant equations of state are feasible in the sense of being consistent with neutron star observations [93; 94; 96; 97; 98; 99; 100]. They were also used in phenomenological applications such as modeling spinning neutrons stars [98] and neutron star merger simulations [99; 100; 100]. In the first two subsections below, we outline the implementation of homogeneous Ansatz in V-QCD and discuss the single layer solution. For more details we refer to [19]. In the last two subsections, we present the generalization to double layer solution and investigate the possibility of a transition from the single layer to a double layer configuration. ### The homogeneous Ansatz For V-QCD, we use the action with finite baryon density which can be written as \[S_{V-QCD}=S_{\rm glue}+S_{\rm DBI}+S_{\rm CS}. \tag{29}\] The explicit expression for the action can be found in [11]. The renormalization group flow of QCD is modeled through a nontrivial evolution of the geometry between the weak coupling (ultraviolet, UV) and strong coupling (infrared, IR) regions. We will be using here the conformal coordinate \(r\) in the holographic direction [81; 82] for which the UV boundary is located at \(r=0\) while the IR singularity is at \(r=\infty\). As in the case of the WSS model above, we separate the gauge field into non-Abelian and Abelian components: \[\mathcal{A}_{L/R}=A_{L/R}+\hat{A}_{L/R}. \tag{30}\] Here the left and right handed fields arise from \(D4\) and \(\overline{D4}\) branes, respectively [83; 84]. Similarly as in the case of the WSS model above, we turn on temporal component of the vectorial Abelian gauge field \[\hat{A}_{L}=\hat{A}_{R}=\mathbb{I}_{N_{f}\times N_{f}}\Phi(r)dt. \tag{31}\] Then on top this background, the non-Abelian baryonic terms are treated as a perturbation. We expand the DBI action up to a first nontrivial order in the non-Abelian fields (quadratic in the field strengths \(F_{(L/R)}\)). After the expansion, we insert the homogeneous Ansatz for non-Abelian gauge field, i.e. \[A_{L}^{i}=-A_{R}^{i}=h(r)\sigma^{i} \tag{32}\] where \(h(r)\) is a smooth function and \(\sigma^{i}\) are Pauli matrices introducing non-trivial flavor dependence, \(SU(2)\). As result, the action for the flavour sector is written as \[S_{h}=S_{\rm DBI}^{(0)}+S_{\rm DBI}^{(1)}+S_{\rm CS} \tag{33}\] where \(S_{\rm DBI}^{(0)}\) is the DBI action in the absence of solitons, \(S_{\rm DBI}^{(1)}\) is the expansion of the DBI action with homogeneous Ansatz at the second order, \(S_{\rm CS}\) is Chern-Simons term with the homogeneous Ansatz (the explicit expressions are given in [11]). ### The single layer solution The solution for the bulk charge density is found by considering the \(\Phi\) equation of motion [11] \[\rho^{\prime}=-\frac{d}{dr}\frac{\delta S_{h}}{\delta\Phi^{\prime}}=-\frac{\delta S _{h}}{\delta\Phi}=\frac{2N_{c}}{\pi^{2}}\frac{d}{dr}\left[e^{-b\tau^{2}}h^{3}(1- 2b\tau^{2})\right], \tag{34}\] where \(b\) is a parameter in the Chern-Simons term, \(\rho\) is the bulk charge density, and \(\tau\) is the tachyon field. However, the solution for \(\rho\) implied by this equation vanishes both in the UV and in the IR. 
That is to say, the diverging tachyon in the IR sets the solution to zero via the exponential factor, and the boundary condition for \(h\) in the UV requires it to vanish (since there is no external baryon source). Therefore, as was the case with the WSS model above, the baryon density is zero, unless we impose an abrupt discontinuity in the field \(h\). Motivated by these considerations, we write the "single layer" solution for V-QCD as [11] \[\rho=\left\{\begin{array}{cc}\rho_{0}+\frac{2N_{c}}{\pi^{2}}e^{-b\tau^{2}}h^{3}(1-2b\tau^{2}),&(r<r_{c})\\ \frac{2N_{c}}{\pi^{2}}e^{-b\tau^{2}}h^{3}(1-2b\tau^{2}),&(r>r_{c})\end{array}\right. \tag{35}\] where \(\rho_{0}\) is the boundary baryon charge density (the physical density) and \(r_{c}\) is the location of the discontinuity. The explicit expression for \(\rho_{0}\) is \[\rho_{0}=\frac{2N_{c}}{\pi^{2}}e^{-b\tau(r_{c})^{2}}(1-2b\tau^{2}(r_{c}))\text{Disc}(h^{3}(r_{c})), \tag{36}\] where we use the notation \(\text{Disc}(g(r_{c}))\equiv\lim_{\epsilon\to 0+}(g(r_{c}+\epsilon)-g(r_{c}-\epsilon))\). For future convenience, we briefly discuss the asymptotics of the field \(h\). In the UV, \(h\) has the asymptotics typical for gauge fields: \[h\simeq h_{1}+h_{2}r^{2}. \tag{37}\] We require that non-Abelian sources vanish and therefore \(h_{1}=0\), but \(h_{2}\) remains as a free parameter (which also determines \(r_{c}\) for given \(\rho_{0}\), see appendix A.2). Following [11], we set \(h(r)=0\) for \(r>r_{c}\). ### The double layer solution In this subsection, we generalize the single layer solution for the baryon field \(h\) to have two discontinuities, i.e., \[\rho=\left\{\begin{array}{cc}\rho_{01}+\frac{2N_{c}}{\pi^{2}}e^{-b\tau^{2}}h^{3}(1-2b\tau^{2}),&(r<r_{c1})\\ \rho_{02}+\frac{2N_{c}}{\pi^{2}}e^{-b\tau^{2}}h^{3}(1-2b\tau^{2}),&(r_{c1}<r<r_{c2})\\ \frac{2N_{c}}{\pi^{2}}e^{-b\tau^{2}}h^{3}(1-2b\tau^{2}),&(r>r_{c2})\end{array}\right. \tag{38}\] which we will be calling the double layer solution. There is also a continuity condition on \(\rho\) that is to be satisfied, which is given as \[\rho_{02}=-\frac{2N_{c}}{\pi^{2}}(1-2b\tau^{2})e^{-b\tau^{2}}\text{Disc}\,h^{3}|_{r=r_{c2}}\,\] \[\rho_{01}-\rho_{02}=-\frac{2N_{c}}{\pi^{2}}(1-2b\tau^{2})e^{-b\tau^{2}}\text{Disc}\,h^{3}|_{r=r_{c1}}. \tag{39}\] Therefore, summing the equalities above, we identify the boundary baryon charge density \(\rho_{0}\) as \(\rho_{01}\): \[\rho_{0}= \rho(r=0)\] \[= \rho_{01}=-\frac{2N_{c}}{\pi^{2}}\sum_{i=1}^{2}(1-2b\tau^{2})e^{-b\tau^{2}}\text{Disc}\,h^{3}|_{r=r_{ci}}. \tag{40}\] We stress however that even though we call this solution by the same name as the double layer solution for the WSS model, these solutions are quite different. In particular, the double layer V-QCD solution has discontinuities at two values of the holographic coordinate whereas the WSS solution only has discontinuities at a single value. Actually the single layer solution of the V-QCD model is closer to the double layer solution of the WSS model than the double layer solution of the V-QCD model. We will discuss this difference in more detail below. The double layer solution depends on four parameters at fixed \(\rho_{0}\): there is one additional parameter from the location of the extra discontinuity with respect to the single layer solution, and as the solution for \(h\) in the second interval \(r_{c1}<r<r_{c2}\) is independent of the solution in the first interval, there are two additional integration constants from the solution of \(h\). 
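To make the bookkeeping of Eqs. (38)-(40) concrete, the following minimal Python sketch reconstructs the piecewise bulk density and the boundary density \(\rho_{0}\) from the discontinuities of \(h\). The profile functions `h` and `tau` stand for the actual numerical solution and the tachyon background; all names and defaults (we use \(N_{c}=3\) and the Chern-Simons parameter \(b=10\) quoted above) are illustrative.

```python
import numpy as np

def disc(g, r0, eps=1e-6):
    """Disc g(r0) = lim_{eps -> 0+} [g(r0 + eps) - g(r0 - eps)], cf. the text below Eq. (36)."""
    return g(r0 + eps) - g(r0 - eps)

def _prefactor(r, tau, Nc, b):
    # Common factor (2 N_c / pi^2) e^{-b tau^2} (1 - 2 b tau^2) appearing in Eqs. (35)-(40)
    return 2 * Nc / np.pi**2 * np.exp(-b * tau(r)**2) * (1 - 2 * b * tau(r)**2)

def boundary_density(h, tau, r_cs, Nc=3, b=10.0):
    """Boundary baryon density rho_0 = rho(r=0), summing the discontinuities as in Eq. (40)."""
    return -sum(_prefactor(rc, tau, Nc, b) * disc(lambda r: h(r)**3, rc) for rc in r_cs)

def rho_profile(r, h, tau, r_cs, Nc=3, b=10.0):
    """Piecewise bulk charge density of Eq. (38): the smooth h^3 part plus the constant
    offset accumulated from all discontinuities lying deeper in the bulk (at larger r)."""
    smooth = _prefactor(r, tau, Nc, b) * h(r)**3
    offset = -sum(_prefactor(rc, tau, Nc, b) * disc(lambda rr: h(rr)**3, rc)
                  for rc in r_cs if rc > r)
    return smooth + offset
```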
Finally, the generalization to triple layer or even to a solution with a higher number of flavors is straightforward. One only needs to modify the piecewise solution for the charge density \(\rho\) with addition of new intervals. This will introduce three new parameters for each interval. ### Legendre transform to canonical ensemble As in the analysis of the WSS model above, it is convenient to work in the canonical ensemble. The Legendre transformed action for V-QCD becomes [11] \[\widetilde{S}_{h}=-\int d^{5}xV_{\rho}G\sqrt{1+\frac{\rho^{2}}{(V _{\rho}we^{-2A})^{2}}}\] \[\times\left[1+\frac{6w^{2}e^{-4A}h^{4}+6\kappa\tau^{2}e^{-2A}h^{ 2}}{1+\rho^{2}(V_{\rho}we^{-2A})^{-2}}+\frac{3w^{2}e^{-4A}f(h^{\prime})^{2}}{ 2G^{2}}\right]. \tag{41}\] ## IV Results ### Second order transition in the Witten-Sakai-Sugimoto model We start by analyzing the configurations in the WSS model. We set \(\lambda=16.63\)[46] and analyse the solutions numerically (see appendix A). As a function of the chemical potential, we find three phases: 1. Vacuum for \(\mu<\mu_{c}\) with \(\mu_{c}\simeq 0.205\). 2. Single layer phase for \(\mu_{c}<\mu<\mu_{l}\) with \(\mu_{l}\simeq 0.342\) 3. Double layer phase for \(\mu>\mu_{l}\). The phase transition at \(\mu=\mu_{c}\) (\(\mu=\mu_{l}\)) is of first (second) order. Here the second order transition (at the higher value of the chemical potential, \(\mu=\mu_{l}\)), is identified as the popcorn transition. Notice that in the approach of [50], which used a different variation of the homogeneous approach, both the vacuum to nuclear and popcorn transitions were of first order. Even though we are not attempting a serious comparison to QCD data, we note that setting \(M_{KK}=949\) MeV as determined by the mass of the \(\rho\) meson [46], we have (for the quark chemical potential) \(\mu_{c}\simeq 195\) MeV and \(\mu_{l}\simeq 325\) MeV, i.e., numbers in the correct ballpark. We note that \(\mu_{l}/\mu_{c}\simeq 1.67\). Denoting the density of the single layer configuration at \(\mu=\mu_{c}\) as \(\rho_{c}\) (i.e., the analogue of the saturation density), the density \(\rho_{l}\) at the second order transition satisfies \(\rho_{l}/\rho_{c}\simeq 3.4\). Here we are mostly interested in the second order transition from the single to double layer phase. We show the relevant configurations in figure 2 for a choice of densities \(\rho_{0}\) around the critical value \(\rho_{l}\simeq 2.52\times 10^{-4}\). Recall that the single layer configuration is unique for fixed \(\rho_{0}\), whereas the double layer configuration also depends on \(z_{c}\). We show here the double layer profiles which minimize the free energy. They are separate from the single layer configuration only for \(\rho_{0}>\rho_{l}\) (the three highest values in the figure), where they have lower free energies than the single layer solutions. Interestingly, the single and double layer solutions at the same \(\rho_{0}\) are close: The functions \(h(z)\) deviate by at most a few percent in the region \(z>z_{c}\). The deviations for \(\rho(z)\) are slightly higher, and the single layer solution can be viewed as a smoothed out version of the double layer solution. That is, even if we were not considering the double layer solutions explicitly, their presence could be guessed from the single layer solutions. In both cases, deviation is largest close to \(z_{c}\). 
We also remark that the single layer profiles \(h(z)\) appear to be qualitatively similar to the solutions found in the approach of [50] (see figure 4 in this reference), up to a shift by a constant. ### Analysis of configurations in V-QCD We construct the double layer and single layer solution by the procedure which is outlined in appendix A.2. The essence of the procedure is the minimization of the free energy density at fixed \(\rho_{0}\) depending on the free parameters. In the case of the single layer, there is only one parameter: \(r_{c}\) or equivalently \(h_{2}\), and it is straightforward to solve the equation of state in this case. For the double layer solution, there are four parameters which would make the numerical minimization procedure challenging in contrast to single layer solution. Therefore, while we perform minimization of single layer solution for large domain of \(\rho_{0}\) values, we investigate presence of lower the free energy density of the double layer solution only for solutions obtained by gluing together single layer solutions for some representative values of \(\rho_{0}\) changing from \(0.8\) to \(2.5\). Denoting \(\Delta h_{i}=\) Disc \(h(r_{ci})\), we investigate three qualitatively different configurations: we consider \(\Delta h_{1}<0\) and \(\Delta h_{1}>0\) for double layer solution and \(\Delta h_{1}>0\), \(\Delta h_{2}>0\) for a triple layer solution. For boundary baryon number charge we consider the values of \(\rho_{0}=0.5\), \(\rho_{0}=0.8\) and \(\rho_{0}=2.5\), which will correspond to Figure 2: The profile of the gauge field \(h(z)\) (left) and the bulk charge density \(\rho(z)\) (right) for the single layer (solid curves) and double layer (dashed curves) configurations for various values of the charge density. The vertical dashed lines in the left hand plot denote the discontinuities of the double layer solutions at \(z=z_{c}\). \(\mu/\mu_{c}=1.65\), \(\mu/\mu_{c}=2.04\) and \(\mu/\mu_{c}=3.57\) for the thermodynamics determined by the single layer solution, respectively. While the first choice roughly corresponds to chemical potential values in which double layer solutions is WSS become dominant (as it is seen from figure 3), the other two choices are even much larger than that. In figure 3, the results for the three representative case are shown. The baryon field profile \(h(r)\) and corresponding baryon number densities \(\rho_{0}(r)\) in the bulk are shown in the first and second column respectively. In each plot, the single layer solution minimizing the free energy is shown with gray dashed curves whose parameters are Figure 3: The profile of the gauge field \(h(r)\) (left) and the bulk charge density \(\rho(r)\) (right) for double layer with \(\Delta h_{1}<0\) (first row), double layer with \(\Delta h_{1}>0\) (second row), triple layer solution with \(\Delta h_{1}>0\) and \(\Delta h_{2}>0\) (third row) configurations. The single layer configuration with same boundary charge density \(\rho_{0}=0.8\) is showed with the gray dashed curve in each plots. The parameters \(r_{ci}\) and \(\rho_{0i}\) that characterize the multilayer configurations are shown by blobs. The values of \(r_{ci}\), \(h_{2i}\), \(\rho_{0i}\) and \(f\) are given in table 1. given in the first row of table 1. The red, blue and green solid curves show \(\Delta h_{1}<0\) and \(\Delta h_{1}>0\) double layer and (\(\Delta h_{1}>0\), \(\Delta h_{2}>0\)) triple layer solutions. 
The parameters \(r_{ci}\), \(h_{2i}\), \(\rho_{0i}\), where \(h_{2i}\) are the asymptotic constants \(h_{2}\) for the single layer solutions that were glued together to obtain the multilayer solutions, and the corresponding free energy densities \(f\) are shown in table 1. The locations of the discontinuities and \(\rho_{0i}\) are also shown in the figures with the blobs. We were able to find double layer solutions which have lower free energy than the single layer solution for the cases of \(\Delta h_{1}>0\) i.e., the second row of figure 3. However, we were not able to find double solutions with \(\Delta h_{1}<0\) that would have lower free energy than the single layer solution (configurations in the first row of the figure). Notice that having solutions with \(\Delta h_{1}>0\) means that contributions to the total charge from the two discontinuities have opposite signs. This means that in the instanton picture, the discontinuities must arise from smearing instantons with opposite charges. This suggests that proton-antiproton pairs are created, which should be forbidden due to the large energy required for such a pair creation. Therefore the configuration of the first row is not physically sound. We suspect that it appears because the homogeneous approximation works poorly with configurations with discontinuities at several values of the holographic coordinate. We also show the example of a triple layer configuration with \(\Delta h_{1}>0\) and \(\Delta h_{2}>0\) on the third row of the plot. ### Speed of sound and polytropic index We now study the physical implications of the phase transition. To this end, we plot the speed of sound and the polytropic index \(\gamma=d\log p/d\log\epsilon\) for the WSS and V-QCD models in figure 4. In these plots, the chemical potential was normalized using the value at the vacuum to nuclear matter transition. In both models, the speed of sound is below the value \(c_{s}^{2}=1/3\) of conformal theories right above the transition to nuclear matter. When \(\mu\) increases, however, the speed of sound crosses this value and reaches values well above it [11]. The speed of sound has a maximum in both model. Even though the location of the maximum is different between the models, the maximal values are rather close: the maximum of \(c_{s}^{2}\) is \(0.463\) (at \(\mu/\mu_{c}=1.355\)) for the WSS model and \(0.504\) (at \(\mu/\mu_{c}=2.246\)) for V-QCD. Eventually at higher densities, the speed of sounds decreases to values closer to the conformal value. This is clearer in the WSS than in the V-QCD model. In the WSS model, where the popcorn transition from a single to a double layer configuration is found, the speed of sound drops to a roughly constant value which closely agrees with the conformal value in the double layer phase: the speed of sound squared is about one per cent higher than the conformal value \(1/3\). Similar results are found for the polytropic index \(\gamma\) in the right hand plot of figure 4. In both models, \(\gamma\) decreases with \(\mu\) in the (single layer) nuclear matter phase. This decrease is fast in the sense that \(\gamma\) drops below the value of \(\gamma=1.75\), which was used as a criterion to separate nuclear matter from quark matter in [67; 68], where equations of state obtained as interpolations between known results from nuclear theory at low density and perturbation theory at high density were considered. For \(\mu/\mu_{c}>1.5\) the results from both model are below this value. 
At the popcorn transition of the WSS model, \(\gamma\) drops to a value close to one. Our findings indicate that the homogeneous holographic nuclear matter behaves approximately conformally at high densities, i.e., at densities well above the nuclear saturation density (see also [101]). This is particularly clear for the WSS model, which becomes approximately conformal at the popcorn transition. These findings are consistent with earlier studies of homogeneous nuclear matter in the WSS (see, e.g., [102]) and the V-QCD (see, e.g., [94]) models. They also agree with the results found in the effective theory approach of [62; 64]. This agreement in strikingly good for the WSS model, where the results both for the speed of sound (see [64]) and for the polytropic index (see [65]) have been computed. For example, our results for the maximal value of the speed of sound (our value is \(c_{s,\text{max}}\approx 0.68\)) and the density at the popcorn transition (we found \(n_{l}/n_{c}\approx 3.4\)) agree rather well with those of these references - our value for the speed of sound (transition density) is a bit below (above) the values of the effective theory approach. We also remark that the non-monotonic behavior for the speed of sound in the WSS model qualitatively agrees with that found in the point-like instanton gas approach in [15], albeit with a different embedding for the \(D8\) branes. The maximal value found in this reference is also close to the maximal value obtained here. This agree \begin{table} \begin{tabular}{||c c c c||} \hline \(r_{c}\) & \(h_{2}\) & \(\rho_{0}\) & \(f\) \\ \hline \hline \(0.570\) & \(2.991\) & \(0.80\) & \(1.745\) \\ \hline \(\{0.483,0.539\}\) & \(\{3.90,3.10\}\) & \(\{0.80,0.59\}\) & \(2.003\) \\ \hline \(\{0.487,0.525\}\) & \(\{2.90,4.10\}\) & \(\{0.80,0.72\}\) & \(1.626\) \\ \hline \(\{0.476,0.498,0.533\}\) & \(\{2.90,3.20,3.90\}\) & \(\{0.80,0.74,0.64\}\) & \(1.642\) \\ \hline \end{tabular} \end{table} Table 1: The values of \(\{r_{c},\,h_{2},\,\rho_{0},\,f\}\) for the single layer configuration (first row) and \(\{r_{ci},\,h_{2i},\,\rho_{0i},\,f\}\) for the multi layer configurations (second-forth rows) that is shown in figure 3. ment is interesting as it obtained in a completely different approach, which is expected to be reliable at lower densities. Moreover we compare our results to the different approach of homogeneous nuclear matter derived in [50] in appendix B, and mostly find qualitative agreement. ## V Conclusions In this article, we analyzed nuclear matter using a homogeneous approach in two different holographic models: in the top-down WSS model and in the bottom-up V-QCD model. We focused on two topics: the popcorn transitions, where the layer structure of the nuclear matter changes in the bulk, and approach to conformal behavior at high densities. We found a second order popcorn transition in the WSS model, and signs of approach to conformality in both holographic setups. We have several remarks about our results. Firstly, the results in the WSS and V-QCD models appeared to be quite different: in particular, the popcorn transition was only found to take place in the WSS model. This is however not surprising at all and can be seen to follow from the differences in the geometry and the realisation of chiral symmetry breaking between the models as we now explain. 
Recall that in the WSS model, the geometry ends at the tip of the cigar in the confined phase as shown in figure 1, and chiral symmetry breaking is realised by the joining of the two branches of flavor branes at the tip. In the V-QCD picture there is no cigar structure, and chiral symmetry breaking arises from a condensate of a bulk scalar field. In the WSS model, nuclear matter at low densities is seen to arise from instantons located at the tip, and it is not possible to assign such instantons to be left or right handed. In V-QCD, however, nuclear matter is stabilized at a nontrivial value of the holographic coordinate due to interaction with the bulk scalar field [11], and by definition always contains left and right handed components. Therefore, in V-QCD separate configurations analogous the single and double layer configurations of the WSS in figure 1 do not exist. The configurations of this figure map to the same configuration in V-QCD, which is what we called the single layer configuration. The double layer configuration in V-QCD defined in (38) would map to a more complicated configuration in the WSS model where discontinuities of the \(h\) field are found at two distinct values of \(z\). We found that the results for the equation of state near the popcorn transition of the WSS model closely resemble those obtained by the framework of [62; 64], where effective theory was used to analyse the transition of the Skyrmion crystal to a crystal of half-Skyrmions. This suggests that the transition in the holographic model should be identified with the topology changing transition where half-Skyrmions appear.2 It is however difficult to say anything definite about this because the holographic approach which we used does not contain individual instantons. Moreover, in [50] it was argued that the topology changing transition should not be identified as the transition between the single and double layer solutions, but should take place between solutions of qualitatively different behavior within the single layer solution. Another point is that chiral symmetry should be restored globally at the topology changing transition (meaning that the averages of the condensate over large regions should vanish). This however will not happen for any of the configurations in the WSS approach because the \(D8\) brane action is treated in the probe approximation, and the embedding of the brane is independent of the density. Nevertheless we remark that, as seen from the expressions for the single and double layer configurations in (20) and in (22), the bulk charge density has Figure 4: The speed of sound (left) and the polytropic index \(\gamma=d\log p/d\log\epsilon\) (right). The solid red, dashed green, and dot-dashed blue curves are the results for the single layer configuration in the WSS model, double layer configuration in the WSS model, and the V-QCD model, respectively. support near the tip of the cigar only for the single layer configuration, where the flavor branes join, breaking chiral symmetry. Therefore the double layer configuration can also exist in chirally symmetric backgrounds. Examples of such chirally symmetric double layer configurations were indeed found in [17] (the chirally symmetric quarkyonic matter phase of this reference). Finally, we demonstrated that the homogeneous nuclear matter becomes approximately conformal at high densities, i.e., above few times the nuclear saturation density. 
That is, the values of the speed of sound lie close to the value \(c_{s}^{2}=1/3\) of conformal theories, and similarly \(\gamma\) takes values close to \(\gamma=1\). In particular, both in the V-QCD model and in the WSS model the polytropic index reached values well below the value \(\gamma=1.75\) that has been used to classify equations of state for nuclear and quark matter in the approach of [67; 68]. That is, part of the single layer phase and all of the double layer phase would be classified as quark matter in this approach. This appears consistent with the interpretation that the double layer phase is smoothly connected to quark matter [69]. In the V-QCD setup, however, there is a separate strong first order phase transition from nuclear to quark matter at higher densities [94; 11]. In the WSS model there is a separate quark matter phase also, but in this case the transition is weak and even continuity between the phases is a possibility [10]. ###### Acknowledgements. We thank Mannque Rho for the invitation to contribute to the special issue "Symmetries and Ultra Dense Matter in Compact Stars" in Symmetry. We also thank Elias Kiritsis, Nicolas Kovensky, Yong-Liang Ma, and Andreas Schmitt for discussions and correspondence. This work benefited from discussions during the APCTP focus program "QCD and gauge/gravity duality". J. C. R. and M. J. have been supported by an appointment to the JRG Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government. J. C. R. and M. J. have also been supported by the Korean Local Governments - Gyeongsangbuk-do Province and Pohang City - and by the National Research Foundation of Korea (NRF) funded by the Korean government (MSIT) (grant number 2021R1A2C1010834). T.D. acknowledges the support of the Narodowe Centrum Nauki (NCN) Sonata Bis Grant No. 2019/34/E/ST3/00405. ## Appendix A Numerical details ### Constructing the solution in the Witten-Sakai-Sugimoto setup Here we summarize the basic steps we follow to find the free energy and the equation of state for the case of the simple profile for the charge density (20): 1. We derive from the action (27) the equation of motion for \(h(z)\). After plugging in the baryon charge density \(\rho\) and fixing \(N_{c}=3\) and \(\lambda=16.63\), the only free parameter is the boundary charge density \(\rho_{0}\). Then we can simply solve the equation for \(h(z)\) for fixed \(\rho_{0}\) from the UV boundary (we still need to fix \(h_{1}\)). 2. We fix the value of \(h_{1}\) by solving for \(h\) at the given fixed \(\rho_{0}\) and choosing \(h_{1}\) such that \(\rho=0\) at \(z=0\); the bulk charge density profile \(\rho(z)\) is then determined from (20) (a minimal numerical sketch of this shooting step is given below). 3. The free energy density is given by explicit integration of (27) from zero to a large cut-off. At this step, we (re)normalize the free energy by subtracting \(\widetilde{S}\) in the absence of baryons from the original \(\widetilde{S}\). 4. From the tabulated data \(\{\rho_{0},F\}\), we can construct \(F(\rho_{0})\) and find at which value of \(\rho\) the transition to nuclear matter happens. The corresponding chemical potential and grand potential can be obtained via \(\mu=dF/d\rho_{0}\) and \(\Omega=F-\rho_{0}\mu=-p\). For the case of the more general solution (22), we need to find the value \(z_{c}\) where the charge density vanishes, but then the procedure to find the energy as a function of \(\rho_{0}\) is analogous to the single layer solution above. 
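The shooting referred to in step 2 can be sketched as follows. This is only an illustrative skeleton: the function `eom_rhs`, which must return \((h^{\prime},h^{\prime\prime})\) for the equation of motion derived from the action (27), is assumed to be supplied by the user, and all names and numerical settings are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

NC = 3
Z_UV = 50.0        # large-z cutoff standing in for the UV boundary (illustrative)

def rho_at_tip(h1, rho0, eom_rhs):
    """Integrate h(z) from the UV towards the tip and evaluate rho(0+) via Eq. (20)."""
    # UV asymptotics h ~ h1/z (Eq. (28)) fixes the initial data at z = Z_UV
    y0 = [h1 / Z_UV, -h1 / Z_UV**2]
    sol = solve_ivp(eom_rhs, (Z_UV, 0.0), y0, args=(rho0,), rtol=1e-10, atol=1e-12)
    h_tip = sol.y[0, -1]
    return -rho0 + NC / np.pi**2 * h_tip**3   # rho(0+) of the single layer profile (20)

def fix_h1(rho0, eom_rhs, bracket=(1e-3, 10.0)):
    """Step 2: tune h1 so that the bulk charge density vanishes at the tip z = 0."""
    return brentq(lambda h1: rho_at_tip(h1, rho0, eom_rhs), *bracket)
```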
However one difference with respect to the previous single layer solution is that the value of \(h_{1}\) that minimizes the energy changes for densities larger than a critical value. From the comparison of the free energy we can see that there is a second order phase transition at this critical density \(\rho_{c}\) from the single layer solution to the double layer solution. ### Constructing the solution in the V-QCD setup In this subsection, we summarize and outline the calculation of free energy density and minimization procedure: 1. We work in the probe limit. We first construct the thermal gas background solution for the geometry [76] in the absence of the baryons. 2. Then, from (41) we derive equations of motion for \(h\). After plugging background fields and baryon charge density \(\rho\), the only free parameter is the boundary charge density \(\rho_{0}\). So we can simply solve the equation of motion for \(h\) for fixed \(\rho_{0}\) by from UV boundary. 3. After solving for \(h\) for given fixed \(\rho_{0}\) and chosen \(h_{2}\), we can determine bulk charge density \(\rho\) profile by considering (35). Note that the vanishing point of bulk density profile gives the location of the soliton, i.e \(\rho(r_{c})=0\). 4. The free energy density is given by explicit integration of (41) from boundary to the location of the discontinuity. At this step, we also subtract \(\widetilde{S}_{h}\) in the absence of baryons from original \(\widetilde{S}_{h}\) to (re)normalize the free energy. 5. Now, we can return to our main purpose of minimizing free energy at fixed \(\rho_{0}\) depending on free parameter \(r_{c}\) or equivalently \(h_{2}\). We can simply perform above mention procedure with a loop over \(h_{2}\) values. 6. From the tabulated data, we can construct \(F(h_{2})\) and minimize it. The corresponding chemical potential and grand potential can be obtained via \(\mu=dF/d\rho_{0}\) and \(\Omega=F-\rho_{0}\mu=-p\). For the case of the multi-layer configurations, the number of parameters which should be used in the minimization procedure increase and this makes the similar analysis numerically expensive. This is beyond the scope of this project. Therefore, we decide to analyze the situation by considering some representative situations (the details of them are given in the main text in subsection IV.2) and searching for solutions with lower \(f\) than that of single layer configurations. ## Appendix B Comparison to a different homogeneous approach In this appendix we compare our results to those obtained by employing the homogeneous approach of [50], where one uses a zero curvature condition before taking the system to be homogeneous in the WSS model. We set the parameter \(\Lambda=8\lambda/(27\pi)\) to the value \(\Lambda\approx 1.568\) for consistent comparison with our approach. The results are shown in figure 5. We see that the maximal value of the speed of sound is higher in the approach of [50] than in the other approaches, and the value of \(\mu\) at the popcorn transition is likewise higher than in the approach we used here. For the ratio of transition densities of single layer nuclear matter, we find \(\rho_{l}/\rho_{c}\approx 8.0\). Similarly, the value of the polytropic index \(\gamma\) is relatively high when using the approach of [50]. This also means that the agreement with the effective theory model of [62; 64], which was discussed in the main text, is less good for this approximation.
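For completeness, the post-processing of the tabulated data \(\{\rho_{0},F\}\) of step 4 in appendix A.1 into the observables shown in figure 4 can be sketched as below. This is only a rough finite-difference version; it assumes a zero-temperature equation of state, so that the energy density equals the free energy density \(F\), and all names are illustrative.

```python
import numpy as np

def eos_from_free_energy(rho0, F):
    """Turn tabulated {rho_0, F} (1D arrays on a fine density grid) into the
    equation-of-state quantities discussed in section IV (illustrative sketch)."""
    mu = np.gradient(F, rho0)                       # mu = dF/drho_0
    p = rho0 * mu - F                               # p = -Omega = rho_0 mu - F
    eps = F                                         # energy density at zero temperature
    cs2 = np.gradient(p, eps)                       # speed of sound squared, c_s^2 = dp/deps
    gamma = np.gradient(np.log(p), np.log(eps))     # polytropic index, d log p / d log eps
    return mu, p, eps, cs2, gamma
```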
2304.08820
Motion-state Alignment for Video Semantic Segmentation
In recent years, video semantic segmentation has made great progress with advanced deep neural networks. However, there still exist two main challenges, i.e., information inconsistency and computation cost. To deal with the two difficulties, we propose a novel motion-state alignment framework for video semantic segmentation to keep both motion and state consistency. In the framework, we first construct a motion alignment branch armed with an efficient decoupled transformer to capture dynamic semantics, guaranteeing region-level temporal consistency. Then, a state alignment branch composed of a stage transformer is designed to enrich feature spaces for the current frame to extract static semantics and achieve pixel-level state consistency. Next, by a semantic assignment mechanism, the region descriptor of each semantic category is gained from dynamic semantics and linked with pixel descriptors from static semantics. Benefiting from the alignment of these two kinds of effective information, the proposed method picks up dynamic and static semantics in a targeted way, so that video semantic regions are consistently segmented to obtain precise locations with low computational complexity. Extensive experiments on Cityscapes and CamVid datasets show that the proposed approach outperforms state-of-the-art methods and validates the effectiveness of the motion-state alignment framework.
Jinming Su, Ruihong Yin, Shuaibin Zhang, Junfeng Luo
2023-04-18T08:34:46Z
http://arxiv.org/abs/2304.08820v1
# Motion-state Alignment for Video Semantic Segmentation ###### Abstract In recent years, video semantic segmentation has made great progress with advanced deep neural networks. However, there still exist two main challenges \(\mathrm{i.e.}\), information inconsistency and computation cost. To deal with the two difficulties, we propose a novel motion-state alignment framework for video semantic segmentation to keep both motion and state consistency. In the framework, we first construct a motion alignment branch armed with an efficient decoupled transformer to capture dynamic semantics, guaranteeing region-level temporal consistency. Then, a state alignment branch composed of a stage transformer is designed to enrich feature spaces for the current frame to extract static semantics and achieve pixel-level state consistency. Next, by a semantic assignment mechanism, the region descriptor of each semantic category is gained from dynamic semantics and linked with pixel descriptors from static semantics. Benefiting from the alignment of these two kinds of effective information, the proposed method picks up dynamic and static semantics in a targeted way, so that video semantic regions are consistently segmented to obtain precise locations with low computational complexity. Extensive experiments on Cityscapes and CamVid datasets show that the proposed approach outperforms state-of-the-art methods and validates the effectiveness of the motion-state alignment framework. ## 1 Introduction Semantic segmentation [5, 13, 25], as a dense prediction task, assigns every pixel with the category. Based on image-based semantic segmentation, video semantic segmentation [23, 39, 40] introduces temporal information for each frame, which is very challenging and has attracted lots of attention. Recently, video semantic segmentation has made good progress and been widely applied to autonomous driving [15], video surveillance [19], and other fields. For video semantic segmentation, the key is how to utilize temporal information to improve the accuracy and consistency of segmentation results across frames. Towards this end, various methods based on deep learning [18, 28, 31] have been proposed to conduct video semantic segmentation recently. From the perspective of feature representation and use, these existing methods can be summarized into two main categories, _i.e._, direct methods and indirect methods. Direct methods (as shown in Fig. 1 (a)) [12, 39, 40] usually use a separately pretrained optical flow network [11, 17, 29, 34] to propagate information between adjacent frames without fine-tune on semantic segmentation dataset, owing to the lack of optical flow annotations. However, there is a domain shift between the datasets for flow and datasets for segmentation, so this method leads to Figure 1: Comparison of different methods for video semantic segmentation. (a) Direct methods explicitly learn the pixel movement depending on the pretrained optical flow network, which leads to information inconsistency. (b) Indirect methods implicitly model the relationship between all pixels with the attention mechanism, which usually results in high computation costs. (c) The proposed motion-state alignment framework decouples the task into motion alignment, state alignment, and semantic assignment, which ensures consistent and efficient learning of semantics. warp inaccuracy and information inconsistency. 
Additionally, optical flow estimation has difficulties in handling occlusion and targets moving out of the frame, which is harmful to segmentation quality, and the errors of the optical flow network propagate to other frames, further degrading segmentation performance. To tackle this dilemma, indirect methods (Fig. 1 (b)) [15, 23, 31, 33] propose to calculate a relationship matrix by an attention-based mechanism to implicitly align features across multiple frames. Different from direct methods, indirect methods can implicitly extract temporal features. However, these methods calculate the similarity between all pixels across frames, which results in high computation cost and low inference speed. This limits their use in real-world applications, like autonomous driving. Therefore, video semantic segmentation still faces two main difficulties, _i.e._, information inconsistency and computation cost. Information inconsistency concerns modeling temporal consistency between adjacent frames. In video semantic segmentation, applying a segmentation network to each frame separately generates inconsistent results, but the relation between frames can increase information and decrease uncertainty during training and inference. Therefore, instead of using one single frame, encoding the temporal relation between frames plays an important role in improving performance. Another difficulty is that deep and complex methods based on the attention mechanism [30, 33] have high computation costs, although they have succeeded in achieving high accuracy. It is hard to extend these outstanding segmentation methods to real-world applications, like autonomous driving. Therefore, methods with low computation cost are essential in real-world scenes. To address the problems of information inconsistency and high computation cost, we put forward a novel motion-state alignment framework for video semantic segmentation as shown in Fig. 2. We divide the inter-frame information in one video into dynamic and static features, and construct corresponding motion and state alignment branches, which are responsible for maintaining the consistency of dynamic and static semantics, respectively. In the motion alignment branch, an efficient spatial-temporal decoupled Transformer is established to align motion features across adjacent frames, which can maintain the motion consistency between temporal frames with low computation cost. With a stage Transformer, the state alignment branch is constructed to strengthen static features and maintain state consistency. Additionally, a semantic assignment mechanism is proposed to fuse both dynamic and static semantics with semantic partition. In the end, all pixels are partitioned into different regions with semantic categories as the final segmentation results. In the proposed framework, we construct different modules to align dynamic and static information respectively, which can effectively keep the information consistent across adjacent frames. Experimental results on two challenging video segmentation datasets have shown the effectiveness and efficiency of the proposed framework. Our contributions are as follows: (1) We propose a novel motion-state alignment framework for video semantic segmentation to address information inconsistency and high computation costs from the new motion-state perspective. (2) Motion alignment mechanism is constructed to align information between temporal sequences and keep motion consistency. 
State alignment mechanism is built to conduct pixel-to-pixel alignment between different fea Figure 2: Framework of the proposed motion-state alignment framework. We first extract the independent features for each frame in one video with the shared feature extractor. Then, the dynamic semantics and static semantics about the current frame are aligned via motion and state alignment branches to produce region descriptors and pixel descriptors. Finally, each pixel descriptor is linked to the corresponding region descriptor in a partitioned way to obtain the final segmentation result. tures and keep static consistency. (3) Our method achieves high accuracy on two challenging datasets: 81.5% mIoU on Cityscapes and 78.8% on CamVid, which is superior to the state-of-the-art methods in video semantic segmentation. In addition, our lightweight model with ResNet18 can reach 24.39 FPS on the Cityscapes dataset with competitive mIoU 75.8%. ## 2 Related Work In this section, we review the related works that aim to deal with the challenges of video semantic segmentation. ### Direct Methods In order to realize temporal consistency, direct methods [12, 39, 40] adopt an optical flow network to predict per-pixel correspondence between the current frame and other frames. NetWarp [12] computes the flow between adjacent frames and warps intermediate features. VPLR [40] leverages an optical flow network and a spatio-temporal transformer recurrent layer to propagate information between frames. EFC [9] jointly trains segmentation network and optical flow network. It takes a lot of time to calculate optical flow between every two frames. To reduce computation cost and speed up the inference, some methods adopt the idea of keyframes and share the feature maps of sparse keyframes. For example, DFF [39] only runs the expensive feature extraction network on keyframes and then warps the feature maps to non-key frames. Accel [18] propagates high-detail features on a reference branch by means of optical flow estimation. Although direct methods are able to capture temporal information successfully, they still cannot be constrained by the disadvantages of optical flow networks. ### Indirect Methods Indirect methods [15, 23, 31] are another kind of method to gain temporal consistency. They can avoid the use of optical flow networks. TMANet [31] explores long-range temporal dependencies by means of the self-attention mechanism, which computes a heavy relation matrix between each pixel of neighboring frames. ETC [23] introduces temporal loss and temporal consistency knowledge distillation methods to improve the temporal consistency. TDNet [15] divides the complex feature network into shallow sub-networks for each frame and performs an attention propagation module to achieve temporal consistency. In indirect methods, although the dimension of feature maps is small, they still compute the high-dimension relation matrix between pixels. Besides, there is a much closer relationship in pixels of consecutive temporal or spatial context. ### Video Transformer In video understanding, like video super-resolution and video classification, it is a new trend to excavate long-sequence dependencies by ways of Transformer architecture [6, 10, 24, 30]. TimeSformer [3] adopts Transformer over the space-time volume on the action classification task. ViViT [1] develops several factorized Transformer blocks on spatial and temporal dimensions. 
VSR-Transformer [35] uses a spatial-temporal attention layer to exploit spatial-temporal information in video super-resolution. Transformer is very efficient to capture spatial and temporal relations. However, at present, no method keeps eye on applying Transformer to video semantic segmentation. ## 3 The Proposed Approach In the section, we first introduce the novel motion-state perspective, and then explain each part of the framework. ### Motion-state Perspective To deal with the task of video semantic segmentation, we rethink this task from a novel motion-state perspective, in which semantic information in the video is divided into two parts, namely dynamic semantics and static semantics. Dynamic semantics refers to semantic information that can be easily extracted from motion, which is usually beneficial to semantic region location. Static semantics refers to semantic information that can be easily extracted at each state point during motion and contain more details of the semantic area. In this perspective, information consistency consists of motion consistency and state consistency. Based on the motion-state perspective, we propose a novel motion-state alignment framework for video semantic segmentation (named as **MSAF**) as depicted in Fig. 2, to address the difficulties (, information inconsistency and computation cost). In this framework, we first extract independent features for each frame of one video. Then, through the motion alignment mechanism and state alignment mechanism, the dynamic semantics and static semantics about the current frame are aligned to guarantee information consistency. Next, pixel descriptors from the state alignment are linked to the corresponding region descriptors from the motion alignment in a semantic partition strategy. Thus, all pixels can be assigned to different regions with precise semantic categories and obtain the final segmentation results. In this way, our proposed method can learn both dynamic and static semantics, thus the video semantic regions are consistently segmented to obtain precise locations with low computational complexity. Details of the proposed approach are described as follows. ### Feature Extractor To extract features for each frame of one video, we take ResNet [14] as the feature extractor, which removes the last two layers (_i.e_., classification and global average pooling layers) for the pixel-level prediction task. Feature extractor has five stages for feature encoding, named as \(\mathcal{F}_{s}(\pi_{s})\) with parameters \(\pi_{s}\), where \(s=1,2,\dots,5\) represent the \(s\)th stage of the feature extractor. For convenience, we use \(\mathcal{F}_{s}\) to represent \(\mathcal{F}_{s}(\pi_{s})\). To obtain larger feature maps, the strides of all convolutional layers in the residual modules \(\mathcal{F}_{4}\) and \(\mathcal{F}_{5}\) are set to 1. To further enlarge the receptive fields of high-level features, we set the dilation rates [36] to 2 and 4 for convolution layers in \(\mathcal{F}_{4}\) and \(\mathcal{F}_{5}\), respectively. Therefore, after the feature extractor, for a \(H\times W\) input image, we can get a feature map with the size of \(\frac{H}{8}\times\frac{W}{8}\) by the feature extractor. Note that we use the same shared feature extractor to capture features for every frame. ### Motion Alignment In one video, there are two important benefits of frame information. Firstly, information across adjacent frames can increase perception and decrease uncertainty. 
Secondly, the information can make it easy to capture the motion regions, which is beneficial to semantic region location. In order to deal with the information inconsistency in many existing methods, we propose a motion alignment mechanism to capture the motion information in the video and maintain the motion consistency, as shown in Fig. 2. Specifically, in order to model the motion consistency between frames, with reference to optical flow that models a learnable "flow" to match pixels between frames, we propose an efficient decoupled Transformer to learn the "flow" as depicted in Fig. 3. **Decoupled Transformer.** Transformer [30] is beneficial to long-term and global dependence, which is the main advantage compared with the ordinary convolutional operation. It is a promising choice to model the long-distance relationship in both time and space dimensions by utilizing Transformer, but it has the disadvantage in a lot of computation. Specifically, we represent the extracted features as \(\mathcal{F}_{5}^{t}\) with the size of \(\mathcal{C}_{\mathcal{F}_{5}}\times H_{\mathcal{F}_{5}}\times W_{\mathcal{F}_ {5}}\), where \(t,\mathcal{C}_{\mathcal{F}_{5}},H_{\mathcal{F}_{5}}\) and \(W_{\mathcal{F}_{5}}\) means the frame index, number of the channel, width and height of the feature map. In this paper, we empirically choose three frames with the feature \(\mathcal{F}_{5}^{t-2}\), \(\mathcal{F}_{5}^{t-1}\) and \(\mathcal{F}_{5}^{t}\) as an example. Given a naive explanation, ignoring the effect of scaled dot-product and multi-head on reducing the amount of computation, Transformer can model the relationship between all pixels in a feature map. Thus, when Transformer is utilized to model spatio-temporal information, the computation cost of Transformer is as follows: \[C(\text{Transformer})=3n\times n\times n=3n^{3}, \tag{1}\] where \(n=\mathcal{C}_{\mathcal{F}_{5}}\times H_{\mathcal{F}_{5}}\times W_{\mathcal{F} _{5}}\) means the number of pixels in a feature map. Considering frames \(t-2\), \(t-1\), and \(t\), the computation cost that constructs the relation between one pixel with all other pixels in spatio-temporal dimension is \(n\times n\). Therefore, the total computation cost between frame \(t\) and \(t-1\) as well as \(t-2\) is \(n^{3}\). Therefore, the computation cost of vanilla Transformer is \(3n^{3}\). Usually, this computation cost is tolerable, unless the size of feature graph is relatively large. To reduce the computation cost, we consider decoupling the spatio-temporal relationship and propose a decoupled Transformer (D-Transformer) as shown in Fig. 3. D-transformer consists of two modules _i.e_., pyramid spatial transformer (PST) and aligned temporal transformer (ATT). In PST, features captured by the feature extractor for each frame are as the input. Next, features are downsampled to reduce the amount of computation. The distant frames in the temporal dimension provide more abstract information with a larger sampling rate (\(\times 4\) in frame \(t-2\)), while the near frames provide more specific information (\(\times 2\) in frame \(t-1\)). Then, features after sampling are fed to the multi-head self-attention module (MHSA) [30] to model global dependency in the spatial dimension. After that, the corresponding upsampling operation is adopted to ensure the precise alignment in spatial dimensions. 
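To make this pipeline concrete before the formal definition given next, the following minimal PyTorch-style sketch illustrates one way a PST block could be organised: temporally distant frames are down-sampled more aggressively (×4 for frame \(t-2\), ×2 for frame \(t-1\)), spatial multi-head self-attention is applied per frame, and the results are up-sampled back to the original resolution. It is an illustrative sketch rather than the authors' implementation; the pooling operator, head count, and MLP expansion ratio are assumptions made here for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialMHSA(nn.Module):
    """Pre-norm multi-head self-attention plus MLP over the spatial positions of one frame."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # (B, H*W, C)
        y = self.norm1(tokens)
        tokens = tokens + self.attn(y, y, y, need_weights=False)[0]
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class PSTBlock(nn.Module):
    """Pyramid spatial transformer: stronger down-sampling for temporally distant frames."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.ModuleList([SpatialMHSA(dim, num_heads) for _ in range(3)])
        self.rates = [4, 2, 1]                          # sampling rates for frames t-2, t-1, t

    def forward(self, feats):                           # feats: [F_{t-2}, F_{t-1}, F_t], each (B, C, H, W)
        outs = []
        for f, attn, rate in zip(feats, self.attn, self.rates):
            size = f.shape[-2:]
            x = F.avg_pool2d(f, rate) if rate > 1 else f          # down-sample distant frames
            x = attn(x)                                           # spatial multi-head self-attention
            if rate > 1:                                          # up-sample back for spatial alignment
                x = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
            outs.append(x)
        return outs
```

In this decoupled form, each frame only attends within its own (down-sampled) spatial grid, which is what brings the attention cost down relative to a full spatio-temporal Transformer, as quantified in the cost analysis below.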
PST block is formally defined as: \[\begin{split}\mathcal{D}_{\text{PST}}^{t-2}&=\text{ US}^{t-2}(\text{MHSA}_{spa}^{t-2}(\text{DS}^{t-2}(\mathcal{F}_{5}^{t-2})))\\ \mathcal{D}_{\text{PST}}^{t-1}&=\text{US}^{t-1}( \text{MHSA}_{spa}^{t-1}(\text{DS}^{t-1}(\mathcal{F}_{5}^{t-1}))),\\ \mathcal{D}_{\text{PST}}^{t}&=\text{MHSA}_{spa}^{t}( \mathcal{F}_{5}^{t})\end{split} \tag{2}\] where US and DS mean up-sampling and down-sampling operation. \(\text{MHSA}_{spa}\) means multi-head self-attention module in the spatial dimension with the following formulation: \[\begin{split} X^{\prime}&=\text{Attention}(\text{ LN}(X))+X\\ \text{MHSA}_{spa}&=\text{MLP}(\text{LN}(X^{\prime}) )+X^{\prime}\end{split} \tag{3}\] where \(X\) is the input tensor, LN is layer normalization layer [2], MLP is multi-layer perceptron, and Attention is the self-attention module referring to [30] for details. In this Figure 3: Structure of decoupled Transformer. way, spatial features of neighboring frames are extracted in a pyramid way, and the computation cost is as follows (note we ignore the reduction of computation caused by down-sampling for fair): \[C(\text{PST})=3\times n^{2}=3n^{2}. \tag{4}\] In ATT, three feature maps of different frames after PST are used as input. Due to motion, a pixel at the current frame may not match the pixel at the same location of adjacent frames. Therefore, it is necessary to first align the spatial information pixel by pixel in the temporal dimension. For this purpose, we introduce deformable convolutional layers [8] in adjacent frames to extract the offset of each pixel caused by motion. The detail is as follows: \[Y(p)=\sum_{k=1}^{K\times K}w_{k}\cdot X(p+p_{k}+\Delta p_{k}), \tag{5}\] where \(K\) defines the number of points that may correspond to the pixel at this position in the adjacent frame, \(X(p)\) is the pixel at location \(p\) of input feature \(X\), and \(Y(p)\) represents the pixel at location \(p\) of input feature \(Y\). \(p_{k}\) means the location range of \(K\times K\) centered on the current position, \(w_{k}\) is the weight value and \(\Delta p_{k}\) is the learned offset. In this way, each pixel can learn the position of the associated pixel in adjacent frames. Here we set \(K=3\) to perceive multiple positions and ensure the information alignment between frames. Then, each pixel at the current frame and its associated pixels in adjacent frames are passed together to MHSA to model the temporal long-term dependency in the spatial dimension. The ATT block is formally defined as: \[\mathcal{D}_{\text{ATT}}=\text{MHSA}_{tem}(\text{DCN}^{t-2}(\mathcal{D}_{ \text{PST}}^{t-2}),\text{DCN}^{t-1}(\mathcal{D}_{\text{PST}}^{t-1}),\mathcal{ D}_{\text{PST}}^{t}), \tag{6}\] where DCN is the deformable convolutional operation and MHSA\({}_{tem}\) means multi-head self-attention in the temporal dimension. DCN is only used in neighbouring two frames to align a pixel in the current frame with pixel in neighbouring frames. In this way, we can get the computation cost of ATT as follows (note the computation cost of DCN can be regarded as \(K^{2}n\)): \[C(\text{ATT})=2\times 3n+2\times 9n=24n. 
\tag{7}\] Therefore, the whole computation cost of D-transformer is: \[C(\text{D-Transformer}) =C(\text{PST})+C(\text{ATT})\] \[=3n^{2}+24n \tag{8}\] Comparing Equation (1) with (8), we can clearly see that the value of C(Transformer) - C(D-Transformer) increases monotonously and linearly in the range \([\frac{1+\sqrt{33}}{2},+\infty)\), and \(\lim_{n\rightarrow+\infty}\frac{C(\text{Transformer})}{C(\text{D-Transformer})}=n\). Obviously, \(n\gg\frac{1+\sqrt{33}}{2}\), which means that the computation cost of D-Transformer is much smaller than original Transformer and D-Transformer is much more efficient. Therefore, D-Transformer can construct the spatio-temporal relationship between video frames, and implicitly ensure the alignment of motion information between frames, which is beneficial to ensure motion consistency. For convenience, motion alignment branch is denoted as \(\mathcal{M}(\pi_{\mathcal{M}})\) with parameters \(\pi_{\mathcal{M}}\). The output of \(\mathcal{M}\) including three features _i.e_., \(\mathcal{M}^{t-2}\), \(\mathcal{M}^{t-1}\) and \(\mathcal{M}^{t}\) that all have the same shape with \(\mathcal{F}_{5}^{t}\), which captures the dynamic information of one video. In order to obtain semantics, \(\mathcal{M}\) is supervised by the ground-truth semantic mask, and the supervision signal will be mentioned later. Therefore, \(\mathcal{M}\) carries rich dynamic semantics. In this manner, motion alignment efficiently maintains motion consistency. ### State Alignment Except for dynamic information, static information also plays an important role in video semantic segmentation. The static semantics can be easily extracted from a frame described as a motion state at a certain moment, which is usually more conducive to restoring the details of semantic areas. In order to ensure the expressive ability of state information in feature space, we align the state information of different feature sources and propose a state alignment mechanism as depicted in Fig. 2. **Stage Transformer.** As neural network goes deeper, low-level features contain more details of images, and high-level features have more semantic information. To contain both detail and semantic information, in the state alignment branch, we collect features at different levels of the current frame (regarded as state features) and feed them to the proposed stage Transformer (S-Transformer), which can gain detailed information and model the relationship of corresponding pixels between features of different levels in the stage dimension. The S-Transformer block is formally defined as: \[\mathcal{S}_{TR}=\text{MHSA}_{sta}(\mathcal{F}_{3}^{t},\mathcal{F}_{4}^{t}, \mathcal{F}_{5}^{t},\mathcal{M}^{t}), \tag{9}\] where MHSA\({}_{sta}\) is the multi-head self-attention module in the stage dimension. Moreover, to enhance the features of current frames, we also add the features \(\mathcal{M}^{t}\) of current frames from motion alignment to S-Transformer. In this way, static information from different stage and dynamic information of current frame are aligned together and modeled for the state consistency. **Pixel Descriptor.** We define the state alignment branch as \(\mathcal{S}(\pi_{\mathcal{S}})\) with parameters \(\pi_{\mathcal{S}}\), The output of \(\mathcal{S}\) then goes through two convolutional layers to gain pixel descriptor. For convenience, we name the pixel descriptor as \(\mathcal{P}\). 
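To summarise the state alignment branch just described, the sketch below stacks the multi-level features \(\mathcal{F}_{3}^{t},\mathcal{F}_{4}^{t},\mathcal{F}_{5}^{t}\) and the motion-aligned feature \(\mathcal{M}^{t}\) along a stage axis, attends over that axis at every pixel, and applies a two-layer convolutional head to produce the pixel descriptors \(\mathcal{P}\). It is a minimal PyTorch-style illustration rather than the authors' code: it assumes the four inputs have already been projected to a common channel width, and the mean-pooling fusion after attention and the head sizes are choices made here for brevity. The supervision of \(\mathcal{P}\) is described next.

```python
import torch
import torch.nn as nn


class StateAlignment(nn.Module):
    """Stage transformer over multi-level features of the current frame, followed by a
    two-layer convolutional head that outputs per-pixel descriptors."""

    def __init__(self, dim, num_heads=8, desc_dim=256):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, desc_dim, kernel_size=1),
        )

    def forward(self, f3, f4, f5, m_t):
        # all inputs assumed to share the shape (B, dim, H, W)
        b, c, h, w = f5.shape
        stages = torch.stack([f3, f4, f5, m_t], dim=2)                    # (B, C, 4, H, W)
        tokens = stages.permute(0, 3, 4, 2, 1).reshape(b * h * w, 4, c)   # one 4-token sequence per pixel
        y = self.norm(tokens)
        tokens = tokens + self.attn(y, y, y, need_weights=False)[0]       # attention over the stage axis
        fused = tokens.mean(dim=1).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.head(fused)                                           # pixel descriptors P: (B, desc_dim, H, W)
```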
To obtain the semantic information, \(\mathcal{P}\) is supervised by the ground-truth mask of semantic segmentation (represented as \(G\)) by minimizing the loss: \[\mathcal{L}_{pd}=CE(\mathcal{P},G), \tag{10}\] where \(CE(\cdot,\cdot)\) is the cross-entropy loss function with the following formulation: \[CE(P,G)=-\sum_{i=1}^{H\times W}\sum_{j=1}^{C}G_{i,j}\mathrm{log}P_{i,j}, \tag{11}\] where \(P_{i,j}\) and \(G_{i,j}\) represents the predicted and real probabilities that the \(i\)th (\(i\in\mathbb{R}^{H\times W}\)) pixel belongs to \(j\)th (\(1\leq j\leq C\)) class, respectively. Under the supervision, we regarded each pixel \(pix\in\mathbb{R}^{\mathcal{C}_{\mathcal{F}_{\mathcal{S}}}\times H_{\mathcal{ F}_{\mathcal{S}}}\times W_{\mathcal{F}_{\mathcal{S}}}}\) in \(\mathcal{P}\) as the pixel descriptor, which represents the static semantics and state details of each pixel. ### Semantic Assignment After obtaining the motion features \(\mathcal{M}^{t}\) and state features \(\mathcal{P}\), we propose the semantic assignment mechanism to produce the final segmentation results as shown in Fig. 2. **Region Descriptor.** From motion information, it is easy to extract region-level information, which is beneficial to semantic region location. Therefore, we introduce the region descriptors to characterize the region-level features. To generate region descriptors, we first add two convolutional layers after \(\mathcal{M}^{t}\) to obtain dynamic information. After the two convolutional layers, the supervision from the ground-truth semantic mask \(G\) is imposed to produce the soft mask \(\mathcal{M}^{t^{\prime}}\) by minimizing the loss: \[\mathcal{L}_{rd}=CE(\mathcal{M}^{t^{\prime}},G). \tag{12}\] Then, we weight \(\mathcal{M}\) with each logit (feature map before softmax) of \(\mathcal{M}^{t}\), and average the result by global pooling in the spatial dimension to obtain region descriptors \(\mathcal{R}\in\mathbb{R}^{Cls\times 1\times 1}\) with \(Cls\) is the number of semantic categories. Each region descriptor \(r\in\mathcal{R}\) represents the region of one semantic category. **Semantic Partition.** With region descriptors and pixel descriptors with similar semantics, we propose the semantic partition strategy to link pixel descriptors to corresponding region descriptors as shown in Fig. 2. In order to judge the association between pixels and regions, we use the minimum cosine distance as the evaluation metric. Specifically, we compute all the distance between pixel descriptor \(pix_{i}\in\mathcal{P},i=1,2,\ldots,H_{\mathcal{F}_{\mathcal{S}}}\times W_{ \mathcal{F}_{\mathcal{S}}}\) with each region descriptor \(r_{j}\in\mathcal{R},j=1,2,\ldots,Cls\), and then the index with the minimum distance is taken as the semantic category. This processing is formulated as: \[\begin{split} CLASS_{pix_{i}}&=\text{argmin}_{j}(d _{1},d_{2},\ldots,d_{j},\ldots,d_{Cls})\\ d_{j}&=1-\cos(pix_{i},r_{j})\\ \cos(pix_{i},r_{j})&=\frac{pix_{i}\cdot r_{j}}{| pix_{i}||r_{j}|}\end{split}, \tag{13}\] where \(|\cdot|\) means the magnitude of a vector. In Equation (13), \(pix_{i}\cdot r_{j}\) is dot product of \(pix_{i}\) and \(r_{j}\). For each \(d_{j}\), the ignoring magnitude operation about \(|pix_{i}|\) doesn't affect the loss value because of the same \(pix_{i}\), and we add normalization operation to \(r_{j}\) for each region descriptors. 
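As a concrete illustration of this assignment step, the sketch below normalises the region descriptors, computes their dot products with all pixel descriptors in one batched matrix multiplication, and takes the arg-max of the resulting similarity maps, which is equivalent to the arg-min over the cosine distances \(d_{j}\) in Eq. (13). The shapes and batched form are illustrative assumptions; the reduction itself is made explicit in the next paragraph.

```python
import torch
import torch.nn.functional as F


def semantic_partition(pixel_desc: torch.Tensor, region_desc: torch.Tensor) -> torch.Tensor:
    """Assign each pixel to the semantic region with minimum cosine distance (Eq. 13).

    pixel_desc:  (B, C, H, W) pixel descriptors from the state alignment branch
    region_desc: (B, Cls, C)  one descriptor per semantic category from the motion branch
    returns:     (B, H, W)    predicted category index for every pixel
    """
    # normalising r_j turns the cosine similarity into a plain dot product
    region_desc = F.normalize(region_desc, dim=-1)
    b, c, h, w = pixel_desc.shape
    pixels = pixel_desc.flatten(2)                                   # (B, C, H*W)
    scores = torch.bmm(region_desc, pixels).reshape(b, -1, h, w)     # (B, Cls, H, W)
    # arg-min of (1 - cos) equals arg-max of the similarity scores
    return scores.argmax(dim=1)
```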
In this way, Equation (13) can be reduced to a matrix multiplication (_i.e_., the matrix multiplication of \(\mathcal{P}\) and \(\mathcal{R}\)) followed by an argmin operation along the channel dimension. After this, we obtain the final result \(M\), which is expected to approximate the ground-truth mask \(G\) by minimizing the loss: \[\mathcal{L}_{M}=CE(M,G). \tag{14}\] For convenience, semantic assignment branch is denoted as \(\mathcal{A}(\pi_{\mathcal{A}})\) with parameters \(\pi_{\mathcal{A}}\). By taking the losses of Eqs. (10), (12), and (14), the overall learning objective can be formulated as follows: \[\min_{\mathbb{P}}\mathcal{L}_{pd}+\mathcal{L}_{rd}+\mathcal{L}_{M}, \tag{15}\] where \(\mathbb{P}\) is the set of \(\{\pi_{s}\}_{s=1}^{5}\), \(\pi_{\mathcal{M}}\), \(\pi_{\mathcal{S}}\), and \(\pi_{\mathcal{A}}\). ## 4 Experiments and Results ### Experimental Setup **Datasets.** To verify the effectiveness of our proposed method, we evaluate results on two public video semantic segmentation datasets, _i.e_., the Cityscapes [7] and CamVid [4] datasets. The Cityscapes dataset consists of a large set of video frames from 50 European cities. The dataset provides 5,000 fine annotations (_i.e_., 2,975 for training, 500 for validation, and 1,525 for testing) and 20,000 coarse annotations. In Cityscapes, there are 19 semantic categories, and the size of images is 1024x2048. In our experiments, we only use the 5,000 finely-annotated images for training and inference. In addition, CamVid has 4 videos with 11-class annotations, which are divided into 367/101/233 for training/validation/testing, respectively. The image resolution in CamVid is 640x960. **Evaluation Metrics.** Our method is evaluated on accuracy, efficiency, and temporal consistency. We apply mean Intersection over Union (mIoU) to report the accuracy. mIoU averages the IoU value over all valid semantic categories of the dataset. Frames per second (FPS) is used to present the efficiency of methods. Temporal consistency (TC) [20] is adopted to evaluate the temporal stability in video tasks, which calculates the mean flow warping error between two neighboring frames. **Training and Inference.** We verify the robustness of our method on a series of backbones, _i.e_., ResNet18 [14], ResNet50, ResNet101, PSPNet18 [38], PSPNet50, PSPNet101, and HRNet [32]. During training, our method is optimized by AdamW [26] with a batch size of 2 per GPU. We apply a "poly" learning rate policy, and the initial learning rate is 0.00006. A linear warm-up strategy is used in the first 1,500 iterations. We train the model for 80,000 iterations in total. The images are randomly cropped to 769x769 on Cityscapes and 640x640 on CamVid. Data augmentation, like flipping and resizing, is adopted in our method. ### Comparisons with state-of-the-art methods Our method is compared with state-of-the-art methods including PEARL [19], NetWarp [12], LMA [28], DFF [39], GRFP [27], TMANet [31], LVS [22], MC [16], Accel [18], ETC [23], TDNet [15], EFC [9], STT [21] and PC [37]. In Table 1, we compare our method with recent state-of-the-art methods on the Cityscapes dataset. Our method shows great improvement in accuracy on multiple backbones, which demonstrates its robustness. Our method based on ResNet18 achieves an mIoU of 78.5% on the test set, which outperforms other lightweight methods on both the val and test sets. Additionally, our ResNet18-based method with input size 512x1024 reaches 24.39 FPS, which can achieve real-time inference.
With backbone ResNet50 and PSPNet50, our method also shows great accuracy and speed. In addition, on complex backbones, our method achieves a mIoU of 80.0%, 81.1%, 81.5% with backbone of ResNet101, HRNet48, PSPNet101 respectively. Our method with a complex backbone also has advantages in inference time. As listed in Table 2, we also validate the robustness of our method on the CamVid dataset. We can see that our method with different backbones (, ResNet18, ResNet50, ResNet101, PSPNet101) has achieved significant improvement on both accuracy and efficiency. In particular, compared with Accel in different backbones, our method has significantly improved and runs 2-4 times faster. In Table 1, we also compare Temporal Consistency (TC) with state-of-the-art methods. The TC of our MSAF is higher than other methods, which verifies that our MSAF can achieve alignment between frames implicitly without the help of optical flow algorithms. In Fig. 4, we compare the qualitative results with the state-of-the-art method. We can see that our proposed method can segment the images much more precisely in some complex scenes. These show that our MSAF can extract better semantics information for the segmentation task. (named baseline+TR in Table 3), which has more parameters than the whole framework MSAF. Although baseline+TR has shown an increase of 3.0% compared with baseline (74.5%), our method only with motion alignment (Motion only) achieves better accuracy (79.1%). This explains that our improvement is not due to the use of the Transformer module, but attributed to the design of motion alignment with decoupled transformer for video semantic segmentation. To provide evidence for this further, we remove the motion alignment (w/o Motion) in our MSAF. In this way, there is no temporal information in "w/o Motion". Compared with w/o Motion, MSAF gains an improvement by mIoU 3.1%, which shows that motion alignment is helpful to guarantee the temporal consistency in video semantic segmentation. By the way, DCN brings a gain of 0.6%. We also conduct an experiment with state alignment only (State only) and without state alignment (w/o State) to validate the effectiveness of state alignment. The feature maps from motion alignment and backbone are concatenated and forwarded to semantic alignment. In Table 3, we can see state alignment can improve the accuracy from 79.7% to 80.4%. This also shows that static information is important for video semantic segmentation. To validate the effectiveness of semantic assignment, we remove the semantic assignment (w/o Semantics) and average the masks of motion alignment and state alignment as the final result. We find that the method with semantic assignment achieves 0.6% improvement. ## 5 Conclusion In this paper, we rethink consistency in video semantic segmentation from the perspective of motion and state semantics, and propose a motion-state alignment framework to achieve both dynamic and static consistency. In this framework, we build a motion alignment branch to encode temporal relations with dynamic semantics. State alignment branch is proposed to enhance features at the pixel level to obtain rich static semantics. Finally, semantic assignment links pixel descriptors from state alignment with region descriptors from motion alignment. Experiments on challenging datasets show that our approach outperforms recent state-of-the-art methods in both accuracy and efficiency. Figure 4: Qualitative results on Cityscapes validation set. 
From top to bottom: original images, the ground truth, the results of LMA, and the segmentation results of our ResNet101-based model. \begin{table} \begin{tabular}{c|c c c|c} \hline & MonA & StaA & SemA & mIoU \\ \hline Baseline & & & & 74.5 \\ Baseline + TR & & & & 77.5 \\ Motion only & ✓ & & & 79.1 \\ State only & & ✓ & & 76.5 \\ w/o Motion & & ✓ & ✓ & 77.3 \\ w/o State & ✓ & & ✓ & 79.7 \\ w/o Semantics & ✓ & ✓ & & 79.8 \\ \hline w/o DCN & ✓ & ✓ & ✓ & 79.8 \\ MSAF & ✓ & ✓ & ✓ & **80.4** \\ \hline \end{tabular} \end{table} Table 3: Performance of different settings of the proposed method (ResNet-50 based) on the Cityscapes validation dataset. MonA, StaA, and SemA denote the motion alignment, state alignment, and semantic assignment modules, respectively.
2302.05052
Debiasing Recommendation by Learning Identifiable Latent Confounders
Recommendation systems aim to predict users' feedback on items not exposed to them. Confounding bias arises due to the presence of unmeasured variables (e.g., the socio-economic status of a user) that can affect both a user's exposure and feedback. Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure. However, they cannot guarantee the identification of counterfactual feedback, which can lead to biased predictions. In this work, we propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables (e.g., observed user features) to resolve the aforementioned non-identification issue. The proposed iDCF is a general deconfounded recommendation framework that applies proximal causal inference to infer the unmeasured confounders and identify the counterfactual feedback with theoretical guarantees. Extensive experiments on various real-world and synthetic datasets verify the proposed method's effectiveness and robustness.
Qing Zhang, Xiaoying Zhang, Yang Liu, Hongning Wang, Min Gao, Jiheng Zhang, Ruocheng Guo
2023-02-10T05:10:26Z
http://arxiv.org/abs/2302.05052v2
# Debiasing Recommendation by Learning Identifiable Latent Confounders ###### Abstract. Recommendation systems aim to predict users' feedback on items not exposed to them. Confounding bias arises due to the presence of unmeasured variables (e.g., the socio-economic status of a user) that can affect both a user's exposure and feedback. Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure. However, they cannot guarantee the identification of counterfactual feedback, which can lead to biased predictions. In this work, we propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables (e.g., observed user features) to resolve the aforementioned non-identification issue. The proposed iDCF is a general deconfounded recommendation framework that applies proximal causal inference to infer the unmeasured confounders and identify the counterfactual feedback with theoretical guarantees. Extensive experiments on various real-world and synthetic datasets verify the proposed method's effectiveness and robustness. recommendation systems, causal inference, confounding bias + Footnote †: [FOOTNOTE:12]Footnote †: [FOOTNOTE:12]Footnote 1: footnotemark: + Footnote †: [FOOTNOTE:12]Footnote 1: footnotemark: backdoor adjustment [20] and inverse propensity reweighting [23] to mitigate the specific bias [27, 34, 35]. But in practice, it is more common for there to be unmeasured confounders that cannot be directly accessed from the recommendation datasets due to various reasons, such as privacy concerns, e.g., users usually prefer to keep their socio-economic statuses private from the system. Alternatively, in most real-world recommendation scenarios, one even do not know what or how many confounders exist. In general, it is impossible to obtain an unbiased estimate of the potential outcome, i.e., the user's counterfactual feedback, without additional related information about unmeasured confounders [14]. As a result, previous methods have relied on additional assumptions regarding unmeasured confounders. For example, RD-IPS [4] assumes the bounded impact of unmeasured confounders on item exposure and performs robust optimization for deconfounding. Invariant Preference Learning [30] relies on the assumption of several abstract environments as the proxy of unmeasured confounders and applies invariant learning for debiasing. However, these methods heavily rely on assumptions about unmeasured confounders and do not provide a theoretical guarantee of the identification [20] of the potential outcome. Another line of methods, such as [24, 33, 37], assume the availability of an additional instrumental variable (IV), such as search log data, or mediator, such as click feedback to perform classical causal inference, such as IV-estimation and front door adjustment [20]. However, it is hard to find and collect convincing instrumental variables or mediators that satisfy the front door criteria [8, 20] from recommendation data. Different from previous methods, Deconfounder [29] does not require additional assisted variables and approximates the unmeasured confounder with a substitute confounder learned from the user's historical exposure records. Nevertheless, it has the inherent non-identification issue [3, 6], which means Deconfounder cannot yield a unique prediction of the user's feedback given a fixed dataset. 
Figure 1 shows such an example where the recommender model yields different feasible predictions of users' feedback due to the non-identification issue. Hence, a glaring issue in the current practice in the recommender systems falls onto the identifiability of the user's counterfactual feedback (potential outcome) in the presence of unmeasured confounders. This paper focuses on identifying the potential outcome by mitigating the unmeasured confounding bias. As the user's exposure history is helpful but not enough to infer the unmeasured confounder and identify the counterfactual feedback, additional information is required. Fortunately, such information can be potentially accessed through users' features and historical interactions with the system. Using the previous example, while the user's socio-economic status (unmeasured confounder) cannot be directly accessed, we can access the user's consumption level from his recently purchased items, whose prices will be beneficial in inferring the user's socio-economic status. To this end, we formulate the debiasing recommendation problem as a causal inference problem with multiple treatments (different items to be recommended), and utilize the proximal causal inference technique [26] which assumes the availability of a proxy variable (e.g., user's consumption level), that is a descendant of the unmeasured confounder (e.g., user's socio-economic status). Theoretically, the proxy variable can help infer the unmeasured confounder and the effects of exposure and confounders on the feedback. This leads to the identification of the potential outcome (see our Theorem 4.3), which is crucial for accurate predictions of users' counterfactual feedback to items that have not been exposed. Practically, we choose user features as proxy variables since they are commonly found in recommender system datasets and the theoretical requirement of proxy variables is easier to be satisfied compared with the instrumental variables and mediators [17]. Specifically, we propose a novel approach to address unmeasured confounding bias in the task of debiasing recommender system, referred to as the _identifiable deconfounder_ (iDCF). The proposed method is feedback-model-agnostic and can effectively handle situations where unmeasured confounders are present. iDCF utilizes proxy variables and exposure history to infer the distribution of unmeasured confounders and then uses this information to train the feedback prediction model that estimates the combined effect of confounders and exposure on the user's feedback. In the inference stage, the adjustment method [21] is applied to adjust for confounding bias. We evaluate the effectiveness of iDCF on a variety of datasets, including both real-world and synthetic, which shows its superior performance and robustness regarding different confounding effects and data density in predicting user feedback. Moreover, on the synthetic dataset with the ground truth of the unmeasured confounder known, we also explicitly show that iDCF can learn a better latent confounder in terms of identifiability. Our main contributions are summarized as follows: * We highlight the importance of identification of potential outcome distribution in the task of debiasing recommendation systems. Moreover, we demonstrate the non-identification issue of the Deconfounder method, which can finally lead to inaccurate feedback prediction due to confounding bias. 
* We propose a general recommendation framework that utilizes proximal causal inference to address the non-identification issue Figure 1. When predicting the user’s feedback, the non-identification of the user’s counterfactual feedback will make the recommendation method yield different feasible predictions (probabilities in the interval) that are compatible with the given dataset and will not converge, even with infinite data, leading to the uncertainty of the user’s feedback. See Example 3.1 for more details. in the task of debiasing recommendation systems and provides theoretical guarantees for mitigating the bias caused by unmeasured confounders. * We conduct extensive experiments to show the superiority and robustness of our methods in the presence of unmeasured confounders. ## 2. Related Work ### Deconfounding in Recommendation As causal inference becomes a popular approach in debiasing recommendation systems and examining relationships between variables (Han et al., 2017; Chen et al., 2017), researchers are focusing more on the challenge of confounding bias. Confounding bias is prevalent in recommendation systems due to various confounding factors. For example, item popularity can create a popularity bias and be considered a confounder. Several studies have addressed specific confounding biases, such as item popularity (Han et al., 2017; Chen et al., 2017; Li et al., 2018), video duration (Li et al., 2018), video creator (Chen et al., 2017), and selection bias (Krause et al., 2018). However, many unmeasured confounders may also exist that cannot be observed, which makes the classical deconfounding methods like inverse propensity weighting (IPW) not applicable. To deal with the confounding bias in the presence of unmeasured confounders, (Beng et al., 2017) assumes a bounded confounding effect on the exposure and applies robust optimization to improve the worst-case performance of recommendation models, (Han et al., 2017; Li et al., 2018; Li et al., 2018) take additional signals as mediators or instrumental variables to eliminate confounding bias. (Li et al., 2018) assumes the existence of several environments to apply invariant learning. As shown in experiments, these additional assumptions on unmeasured confounders may lead to sub-optimal recommendation performance. Moreover, they also fail to provide a theoretical guarantee of the identification of users' counterfactual feedback. There is also another line of work (Han et al., 2017; Li et al., 2018; Li et al., 2018) that considers the multiple-treatment settings (Han et al., 2017) and infers a substitute confounders from the user's exposure to incorporate them into the preference prediction models. However, these methods cannot guarantee the identification of the user's preference, which may lead to inconsistent, thus poor recommendation performances. ### Proximal Causal Inference Proximal causal inference (Han et al., 2017; Chen et al., 2017; Li et al., 2018; Li et al., 2018) assumes the existence of proxy variables of unmeasured confounders in the single-treatment regime, and the goal is to leverage proxy variables to identify causal effects. Kuroki and Pearl (Kuroki and Pearl, 2018) study the identification strategy in the different causal graphs. Miao et al. (Miao et al., 2018) generalize their strategy and show nonparametric identification of the causal effect with two independent proxy variables. Miao et al. (Miao et al., 2018) further use negative control exposure/outcome to explain the usages of proxy variables intuitively. 
However, these methods usually rely on informative proxy variables to infer the unmeasured confounder while our method formulates the recommendation problem in the multiple-treatment setting, which enables us to leverage the information from the user's exposure to infer the unmeasured confounder, relax the requirement on the proxy variables, and still theoretically guarantees the identification of the potential outcome (Li et al., 2018). ## 3. Problem Formulation In this section, we first analyze the recommendation problem from a causal view in the presence of unmeasured confounders. Then we show that Deconfounder(Han et al., 2017), one of the widely-used methods for recommendations with unobserved confounder, suffers the non-identification issue, i.e., it cannot predict the user's preference consistently, through an illustrative example. This observation motivates our method, which we will detail in the next section. ### Notations We start with the notations used in this work. Let scalars and vectors be signified by lowercase letters (e.g., \(a\)) and boldface lowercase letters (e.g., \(a\)), respectively. Subscripts signify element indexes. For example, \(\mathbf{a}_{i}\) is the \(i\)-th element of the vector \(\mathbf{a}\). The superscript of a potential outcome denotes its corresponding treatment (e.g., \(r_{ui}^{\mathbf{a}}\)). We adopt the potential outcome framework (Han et al., 2017) with multiple treatments (Han et al., 2017) to formulate the problem. The causal graph is shown in Figure 2. Let \(\mathcal{U}=\{u\}\) and \(\mathcal{I}=\{i\}\) denote the set of users and items, respectively with \(|\mathcal{U}|=m,|\mathcal{I}|=n\). We define the following components of the framework: * Multiple treatments: \(\mathbf{a}_{u}=[a_{u1},a_{u2},\ldots,a_{un}]\in\{0,1\}^{n}\) is the observed exposure status of user \(u\), where \(a_{ui}=1\) (\(a_{ui}=0\)) means item \(i\) was exposed to user \(u\) (not exposed to user \(u\)) in history. * Observed outcome: \(r_{ui}\) denotes the observed feedback of the user-item pair \((u,i)\) and \(\mathbf{r}_{u}=[r_{u1},...,r_{un}]\) signifies the observed feedbacks of user \(u\). * Potential outcome: \(r_{ui}^{\mathbf{a}}\) denotes the potential outcome 1 that would be observed if the user's exposure had been set to the vector value \(\mathbf{a}\in\{0,1\}^{n}\). Following previous work (Han et al., 2017), we assume \(r_{ui}^{\mathbf{a}}\) is only affected by the exposure of item \(i\) to user \(u\). Footnote 1: The distribution of potential outcome \(r_{ui}^{\mathbf{a}}\) is equivalent to \(p(r_{ui}|d\mathbf{o}(\mathbf{a}))\) in the structural causal model (SCM) framework. * Unmeasured confounder: \(z_{u}\in\mathbb{R}^{d}\) (e.g., the user's socio-economic status) denotes the d-dimensional unmeasured confounder that causally influences both user's exposures \(\mathbf{a}_{u}\) and feedbacks \(\mathbf{r}_{u}\). * Given observational data \(\{\mathbf{a}_{u},\mathbf{r}_{u}\}_{u\in\mathcal{U}}\), a recommendation algorithm aims to accurately predict the feedback of user \(u\) on item \(i\) if the item had been exposed to \(u\), i.e., the expectation of the potential outcome \(E[r_{ui}^{\mathbf{a}}]\), where \(\mathbf{a}_{i}=1\). Practically, for a user \(u\), items are ranked by the predicted \(E[r_{ui}^{\mathbf{a}}]\) such that the user will likely give positive feedback to items ranked in top positions. 
However, in real-world scenarios, as the data of the recommendation system is naturally collected as users interact with the recommended items without randomized controlled trials, there usually exists some confounder, \(\mathbf{z}_{u}\) as shown in Figure 2, which affects both the user \(u\)'s exposure status \(\mathbf{a}_{u}\) (i.e., the treatment) and the user's feedback on items \(\mathbf{r}_{u}\) (i.e., the outcome), resulting in the spurious correlation when the user's feedback is simply estimated by \(p(r_{ui}|\mathbf{a}_{u})\). For instance, the example we discussed in Section 1. Previous work (Han et al., 2017; Li et al., 2018; Li et al., 2018) takes \(\mathbf{z}_{u}\) as a specific factor, for example, item popularity, video duration, video creators, etc. But under most real-world circumstances, we cannot access the complete information of \(\mathbf{z}_{u}\). Thus, this work focuses on a more general problem setting where \(\mathbf{z}_{u}\) is an unmeasured confounder. As shown in Figure 2a, \(\mathbf{z}_{u}\) is a latent variable represented by a shaded node. ### Identification with unmeasured confounder To learn the counterfactual feedback from user \(u\) on item \(i\), i.e., \(E[r_{ui}^{\mathbf{a}}]\), the identification of the potential outcome distribution \(p(r_{ui}^{\mathbf{a}})\) from observational data is required. In general, accurately predicting a user's feedback through data-driven models is only possible when causal identifiability has been established. When all confounders are _measured_, \(p(r_{ui}^{\mathbf{a}})\) can be identified through the classical g-formula2(K Eq. (4), \(p(r_{ui}^{\mathbf{a}})\) can be uniquely determined, which motivates the usage of proximal causal inference [26]. Inspired by the above intuition, we reformulate the recommendation problem with the unmeasured confounder from the view of proximal causal inference. Specifically, we assume that one can observe additional information of user \(u\), called proxy variable \(w_{u}\), which is directly affected by the unmeasured confounder \(z_{u}\) and independent of the feedback \(r_{ui}\) given the unmeasured confounder \(z_{u}\) and the exposure vector \(\mathbf{a}_{u}\), i.e.,: \[\mathbf{w}_{u}=g(\mathbf{z}_{u}),\mathbf{w}_{u}\perp\mathbf{r}_{ui}|(z_{u},\mathbf{a}_{u}),\] where \(g\) is an unknown function. Fortunately, in the recommendation scenario, such a proxy variable can be potentially accessed through the user's features, including user portraits summarized from interaction history. For example, when the unmeasured confounder \(\mathbf{z}_{u}\) is the user's socio-economic status, which usually cannot be directly accessed, possibly due to privacy concerns, one can take the proxy variable as the average price of items that the user recently purchased, which is pretty helpful in inferring the user's socio-economic status since high consumption often implies high socio-economic status. Moreover, such a proxy variable will not directly affect the user's feedback if the user's socio-economic status is already given. We first show that the user's counterfactual feedback \(p(r_{ui}^{\mathbf{a}})\) in Example 3.1 can be identified with a proxy variable \(\mathbf{w}_{u}\). 
**Example 4.1** (_Success in identifying \(p(r_{ui}^{\mathbf{a}})\) with proxy variable_): _Following the settings in Example 3.1, we introduce an observable proxy variable \(w_{u}\) that indicates the user's consumption level affected by the socio-economics status \(z_{u}\) in the recommendation platform, the corresponding causal graph is shown in Figure 1(b). We assume \(w_{u}\) is a Bernoulli random variable with mean \(\mu(\mathbf{z}_{u})\in(0,1)\) and \(w_{u}\) is correlated with \(z_{u}\) condition on \(\mathbf{a}_{u}\)._ Similar to Example 3.1, \(p(r_{ui}=1|\mathbf{a},w_{u})\), the probability that user \(u\) will give positive feedback to item \(i\) with given exposure status \(\mathbf{a}\) and consumption level \(w_{u}\), can be inferred from the given dataset, and \(p(\hat{z}_{u}|\mathbf{a},w_{u})\) is assumed to be uniquely determined by factor models. Again, for ease of illustration, we denote \(\pi_{r_{ui}=1|\mathbf{a},w}\coloneqq p(r_{ui}=1|\mathbf{a}_{u}=\mathbf{a},w_{u}=w)\) and \(\pi_{\hat{z}_{u}=1|\mathbf{a}_{u}}\coloneqq p(\hat{z}_{u}=1|\mathbf{a}_{u}=\mathbf{a},w_{ u}=w)\). Now, while there are still four unknown entries \(\{p_{\mathcal{I}u},z,r\in\{0,1\}\}\) as in Eq. (4), the number of constraints increases from three to four with the two conditional marginal distributions \(\pi_{r_{ui}=1|\mathbf{a},w_{u}=0}\) and \(\pi_{r_{ui}|\mathbf{a},w=1}\), i.e., \[\begin{array}{ll}\sum_{z}\sum_{r}p_{\mathcal{I}r|\mathbf{a}}=1,&\sum_{z}p_{ \mathcal{I}z1|\mathbf{a}}\frac{\pi_{\hat{z}_{u}=\mathbf{a},w=1}}{\pi_{\hat{z}_{u}=\bm {a},w=1}}=\pi_{r_{ui}=1|\mathbf{a},w=1},\\ \sum_{r}p_{\mathcal{I}r|\mathbf{a}}=\pi_{\hat{z}_{u}=1|\mathbf{a}_{u}}&\sum_{z}p_{ \mathcal{I}z1|\mathbf{a}}\frac{\pi_{\hat{z}_{u}=\mathbf{a},w=0}}{\pi_{\hat{z}_{u}=\bm {a},w=0}}=\pi_{r_{ui}=1|\mathbf{a},w=0}.\end{array} \tag{6}\] The following lemma shows the identification result of \(p(r_{ui}^{\mathbf{a}})\). **Lemma 4.2**: _There exists a unique solution of \(\{p_{\mathcal{I}u},z,r\in\{0,1\}\}\) from Eq. (6), leading to the identification of the potential outcome \(p(r_{ui}^{\mathbf{a}})\) calculated from Eq. (2)._ **General framework of identifying the user's counterfactual feedback \(p(r_{ui}^{\mathbf{a}})\) with proxy variables.** Next, we show how to identify \(p(r_{ui}^{\mathbf{a}})\) with proxy variables in general. Observing that \[p(r_{ui}^{\mathbf{a}})=E_{\hat{z}_{u}}[p(r_{ui}|\mathbf{a},\hat{z}_{u})]=\int_{\hat{z} }p(\hat{z}_{u}=z)p(r_{ui}|\mathbf{a},\hat{z}_{u}=z)dz, \tag{7}\] * **Learning Latent Confounder:** This stage aims to learn a latent confounder \(\hat{z}_{u}\) with the help of proxy variables, such that the learned \(\hat{z}_{u}\) is equivalent to the true unmeasured confounder \(z_{u}\) up to some transforms [9, 17] and can provide additional constraints to infer the user's feedback \(r_{ui}\), which cannot be achieved by the substitute confounder in Deconfounder. Specifically, we aim to learn its prior distribution, i.e., \(p(\hat{z}_{u})\). 
Since \[p(\hat{z}_{u}=z)=E_{\mathbf{a}_{u},w_{u}}[p(\hat{z}_{u}|\mathbf{a}_{u},w_{u})],\] (8) and \(p(\mathbf{a}_{u},w_{u})\) is measured from the dataset, thus the main challenge is to learn \(p(\hat{z}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\), which can be learned by reconstructing the exposure vector \(\mathbf{a}_{u}\) based solely on \(\mathbf{w}_{u}\), since: \[p(\mathbf{a}_{u}|\mathbf{w}_{u})=\int_{\hat{z}}p(\mathbf{a}_{u}|\hat{z}_{u}=z)p(\hat{z}_{u} =z|\mathbf{a}_{u},\mathbf{w}_{u})dz.\] (9) For example, we can apply the widely-used iVAE [9] model, then \(p(\hat{z}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) and \(p(\mathbf{a}_{u}|\hat{z}_{u})\) are estimated by the encoder and the decoder respectively. * **Feedback with given latent confounder:** This stage aims to learn \(p(r_{ui}|\mathbf{a}_{u},\hat{z}_{u})\), i.e., user \(u\)'s feedback on item \(i\) under the fixed exposure vector \(\mathbf{a}_{u}\) and latent confounder \(\hat{z}_{u}\). With the help of \(p(\hat{z}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) learned in the first stage, one can infer it by directly fitting the observed users' feedback \(p(r_{ui}|\mathbf{a}_{u},\mathbf{w}_{u})\), since: \[p(r_{ui}|\mathbf{a}_{u},\mathbf{w}_{u})=\int_{\hat{z}}p(r_{ui}|\hat{z}_{u}=z,\mathbf{a}_{u} )p(\hat{z}_{u}=z|\mathbf{a}_{u},\mathbf{w}_{u})dz.\] (10) Then the potential outcome (i.e., the user's counterfactual feedback) distribution \(p(r_{ui}^{\mathbf{a}})\) is identified by applying Eq. (7). The following theorem shows the general theoretical guarantee on identification of \(p(r_{ui}^{\mathbf{a}})\) through the aforementioned two-step procedure. **Theorem 4.3** (Identification with proxy variable [17]): _Under the consistency, ignorability, positivity, exclusion restriction, equivalence, and completeness assumptions, for any latent joint distribution \(p(\mathbf{a}_{u},\hat{z}_{u}|\mathbf{w}_{u})\) that solves \(p(\mathbf{a}_{u}|\mathbf{w}_{u})=\int_{\hat{z}}p(\mathbf{a}_{u},\hat{z}_{u}=z|\mathbf{w}_{u})dz\), there exists a unique solution \(p(r_{ui}|\hat{z}_{u},\mathbf{a}_{u})\) to the equation Eq. (10) and the potential outcome distribution is identified by Eq. (7)._ **Remark 1** (About assumptions): _Note that Theorem 4.3 relies on several assumptions: consistency, ignorability, positivity, exclusion Figure 3. The framework of the proposed method iDCF. restriction, equivalence, and completeness. The first 3 assumptions are standard assumptions in causal inference (Zhou et al., 2017; Zhang et al., 2018). Informally, exclusion restriction requires the proxy variable to be independent of the user's feedback condition on the confounder and exposure, which can be reasonable in a recommendation system since the proxy variable (e.g., user's consumption level) is mainly used to implicitly infer the hidden confounder (e.g., user's income) that directly affects user's feedback. Equivalence requires the unmeasured confounder can be identified from the dataset up to a one-to-one transform, which is also feasible with various factor models (Zhou et al., 2017; Zhang et al., 2018). 
Completeness requires that the proxy variable contains enough information to guarantee the uniqueness of the statistic about the hidden confounder, which can also be feasible in recommendation scenarios since the variability in the unmeasured confounders (e.g., user's socio-economics status) is usually captured by variability in the user features (e.g., user's consumption level)._ ### Practical Implementation Next, we describe how the proposed iDCF implements the identification steps described in Section 4.1 practically. We need to specify: **Training Stage:** (1) How to learn the latent confounder, i.e., \(p(\hat{\mathbf{z}}_{u})\) in Eq. (8)? As discussed in Section 4.1, the main challenge is to learn \(p(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\). (2) How does our method learn users' feedback with the given latent confounder, i.e., \(p(r_{ui}|\mathbf{a}_{u},\hat{\mathbf{z}}_{u})\)? **Inference Stage.** With \(p(\hat{\mathbf{z}}_{u})\) and \(p(r_{ui}|\mathbf{a}_{u},\hat{\mathbf{z}}_{u})\) learned in the training stage, how does the proposed iDCF framework infer the unbiased feedback of users following Eq. (7)? **Learning Latent Confounder.** We use iVAE (Zhou et al., 2017) to learn the latent confounder, since it is widely used to assist identification of latent variables by leveraging auxiliary variables which are equivalent to proxies. Specifically, we simultaneously learn the deep generative model and approximate posterior \(q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) of the true posterior \(p_{\theta}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) by maximizing \(\mathcal{L}(\theta,\phi)\), which is the evidence lower bound (ELBO) of the likelihood \(\log p_{\theta}(\mathbf{a}_{u}|\mathbf{w}_{u})\): \[\begin{split} E&\{\log p_{\theta}(\mathbf{a}_{u}|\mathbf{ w}_{u})\}\geq\mathcal{L}(\theta,\phi):=\\ =& E[E_{q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{ w}_{u})}[\log p_{\theta}(\hat{\mathbf{z}}_{u}|\mathbf{w}_{u})-\log q_{\phi}(\hat{ \mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})]\\ &+E_{q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})}[\log p_{ \theta}(\mathbf{a}_{u}|\hat{\mathbf{z}}_{u})]]\\ \end{split} \tag{11}\] where according to the causal graph in Figure 2b, \(\log p_{\theta}(\mathbf{a}_{u},\hat{\mathbf{z}}_{u}|\mathbf{w}_{u})\) is further decomposed as follows: \[\begin{split}\log p_{\theta}(\mathbf{a}_{u},\hat{\mathbf{z}}_{u}|\mathbf{w}_{ u})=&\log p_{\theta}(\mathbf{a}_{u}|\hat{\mathbf{z}}_{u},\mathbf{w}_{u})+\log p_{ \theta}(\hat{\mathbf{z}}_{u}|\mathbf{w}_{u})\\ =&\log p_{\theta}(\mathbf{a}_{u}|\hat{\mathbf{z}}_{u})+\log p_{ \theta}(\hat{\mathbf{z}}_{u}|\mathbf{w}_{u}).\end{split} \tag{12}\] Following (Zhou et al., 2017), we choose the prior \(p_{\theta}(\hat{\mathbf{z}}_{u}|\mathbf{w}_{u})\) to be a Gaussian location-scale family, and use the reparameterization trick (Kang et al., 2017) to sample \(\hat{\mathbf{z}}_{u}\) from the approximate posterior \(q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) as \[\begin{split} p_{\theta}(\hat{\mathbf{z}}_{u}|\mathbf{w}_{u})& :=N(\mu_{w}(\mathbf{w}_{u}),v_{w}(\mathbf{w}_{u})),\\ q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})& :=N(\mu_{w}(\mathbf{a}_{u},\mathbf{w}_{u}),a_{w}(\mathbf{a}_{u},\mathbf{w}_{u})), \end{split} \tag{13}\] where \(\mu_{w},v_{w},\mu_{w},\mu_{w},\nu_{w}\) are modeled by 4 different MLP models. To this end, the calculation of the expectation \(I\) of Eq. 
(11) can be converted to the calculation of the Kullback-Leibler divergence of two Gaussian distributions: \[\begin{split} E_{q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u}) }[\log p_{\theta}(\hat{\mathbf{z}}_{u}|\mathbf{w}_{u})-\log q_{\phi}(\hat{\mathbf{z}}_{u}| \mathbf{a}_{u},\mathbf{w}_{u})]\\ =&-KL(N(\mu_{uw}(\mathbf{a}_{u},\mathbf{w}_{u}),a_{uw}(\mathbf{a}_ {u},\mathbf{w}_{u}),N(\mu_{w}(\mathbf{w}_{u}),v_{w}(\mathbf{w}_{u}))).\end{split} \tag{14}\] As for \(II\), since the hidden confounder directly affects each element of the exposure vector, we use a factorized logistic model as \(p_{\lambda}(\mathbf{a}_{u}|\mathbf{z}_{u})\), i.e., \(p_{\lambda}(\mathbf{a}_{u}|\mathbf{z}_{u})=\prod_{i=1}^{n}Bernoulli(\mathbf{a}_{ui}|\mathbf{z}_{ u})\), which is also modeled by a MLP \(\mu_{z}(\mathbf{z})\). Then the log-likelihood \(\log p_{\theta}(\mathbf{a}_{u}|\hat{\mathbf{z}}_{u})\) becomes the negative binary cross entropy: \[\log p_{\theta}(\mathbf{a}_{u}|\hat{\mathbf{z}}_{u})=\sum_{i=1}^{n}\mathbf{a}_{ui}\log(\mu_ {z}(\mathbf{z}_{u})_{i})+(1-\mathbf{a}_{ui})\log(1-\mu_{z}(\mathbf{z}_{u})_{i}).\] Then, through maximizing Eq. (11), we are able to obtain the approximate posterior of latent confounder \(q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\). **Feedback with given latent confounder.** As shown in Eq. (9), with \(q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) estimated through iVAE, the user's feedback on item \(i\) with the latent confounder \(\hat{\mathbf{z}}_{u}\), i.e., \(p(r_{ui}|\mathbf{a}_{u},\hat{\mathbf{z}}_{u})\), can be learned by fitting a recommendation model on the observed users' feedback. Following the assumption in Section 3.1 where \(r_{ui}^{a}\) is only affected by the exposure of item \(i\) to user \(u\), we use a point-wise recommendation model \(f(u,i,\mathbf{z}_{u};\eta)\) parameterized by \(\eta\) to estimate \(p(r_{ui}|\mathbf{a}_{u},\hat{\mathbf{z}}_{u})\). Specifically, we adopt a simple additive model \(f(u,i,\mathbf{z}_{u};\eta)=f_{1}(u,i)+f_{2}(\hat{\mathbf{z}}_{u},i)\) that models the user's intrinsic preference and the effect of the latent confounder separately. The corresponding loss function is: \[\mathcal{L}_{iDCF}(\eta)=\frac{1}{|\mathcal{D}|}\sum_{(u,l)\in\mathcal{D}}I(E_{q _{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})}[f(u,i,\hat{\mathbf{z}}_{u};\eta) ],r_{ui}), \tag{15}\] where \(l(\cdot,\cdot)\) is one of the commonly-used loss functions for recommendation systems, e.g., MSE loss and BCE loss. **Inference Stage.** In practice, for most real-world recommendation datasets, the user's feature \(\mathbf{w}_{u}\) is invariant in the training set and test set. Therefore, identifying \(p(r_{ui}^{a})\) is equivalent to identifying \(p(r_{ui}^{a}|\mathbf{w}_{u})\) since \(p(r_{ui}^{a})=\int_{w}p(r_{ui}^{a}|\mathbf{w}_{u}=w)p(\mathbf{w}_{u}=w)dw\) and \(p(\mathbf{w}_{u}=w)=1\) for those specific w associated with user \(u\). 
The corresponding identification formula becomes: \[\begin{split} p(r_{ui}^{a}|\mathbf{w}_{u})&=\int_{z}p(\hat{\mathbf{z}}_{u}=z|\mathbf{w}_{u})p(r_{ui}|\mathbf{a},\hat{\mathbf{z}}_{u}=z)dz\\ &=E_{\hat{\mathbf{z}}_{u}|\mathbf{w}_{u}}[p(r_{ui}|\mathbf{a},\hat{\mathbf{z}}_{u})],\end{split} \tag{16}\] where \(p(r_{ui}|\mathbf{a},\hat{\mathbf{z}}_{u}=z)\) is estimated by the learned recommendation model \(f(u,i,\hat{\mathbf{z}}_{u};\eta)\) and \(p(\hat{\mathbf{z}}_{u}=z|\mathbf{w}_{u})\) is approximated by the encoder \(q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\). ## 5. Experiments In this section, we conduct experiments on both real-world and synthetic datasets to answer the following research questions: * **RQ1** How does the proposed iDCF perform compared with existing deconfounding methods on real-world datasets? * **RQ2** What is the performance of iDCF under different confounding effects and dense ratios of the exposure matrix? * **RQ3** How does the identification of latent confounders impact the performance of iDCF? ### Experiment Settings In this subsection, we describe the datasets, the baselines, the implementation details, the evaluation metrics, and how we search for optimal hyper-parameters. **Dataset.** Following previous work (Cai et al., 2018; Wang et al., 2019; Wang et al., 2019), we perform experiments on three real-world datasets: Coat3, Yahoo!R34 and KuaiRand5, collected from different recommendation scenarios. Each dataset consists of a biased dataset of normal user interactions and an unbiased uniform dataset collected by a randomized trial such that users interact with randomly selected items. We use all biased data as the training set, 30% of the unbiased data as the validation set, and the remaining unbiased data as the test set. For Coat and Yahoo!R3, the feedback from a user to an item is a rating ranging from 1 to 5 stars. We take the ratings \(\geq 4\) as positive feedback, and the others as negative feedback. For KuaiRand, the positive samples are defined according to the signal "IsClick" provided by the platform. Footnote 3: [https://wwwsco.semdbex.yahoo.com/](https://wwwsco.semdbex.yahoo.com/) Footnote 4: [https://kaiurand.com/](https://kaiurand.com/) Moreover, to answer RQ2 and RQ3, we also generate a synthetic dataset with the ground truth of the unmeasured confounder known, for an in-depth analysis of iDCF. **Baselines.** We compare our method 6 with the corresponding base models and the state-of-the-art deconfounding methods that can alleviate the confounding bias in recommendation systems in the presence of unmeasured confounders. Footnote 6: [https://anonymous.depen.science/r/iDCF-64B7/](https://anonymous.depen.science/r/iDCF-64B7/) * MF (Fan et al., 2018) & MF with feature (MF-WF). We use the classical Matrix Factorization (MF) as the base recommendation model. Since our method utilizes user features, for a fair comparison, we consider MF-WF, a variant of the MF model augmented with user features. * DCF (Wang et al., 2019). Deconfounder (DCF) addresses the unmeasured confounder by learning a substitute confounder to approximate the true unmeasured confounder and applying the g-formula for debiasing. However, as discussed before, it fails to guarantee the identification of users' feedback, leading to inconsistent predictions of users' feedback. * IPS (Wang et al., 2019) & RD-IPS (Cai et al., 2018). IPS is a classical propensity-based deconfounding method that ignores the unmeasured confounder and directly leverages the exposure to estimate propensity scores to reweight the loss function. RD-IPS is a recent IPS-based deconfounding method that assumes a bounded confounding effect of the unmeasured confounders to derive bounds on the propensity scores and applies robust optimization for robust debiasing.
The implementation of the two methods leverages a small proportion of unbiased data to get more accurate propensity scores. * InvPref (Wang et al., 2019). InvPref assumes the existence of multiple environments as proxies of unmeasured confounders and applies invariant learning (Beng et al., 2018; Chen et al., 2018) to learn the user's invariant preference. * DeepDCF-MF. DeepDCF (Wang et al., 2019) extends DCF by applying deep models and integrating the user's features into the feedback prediction model to control the variance of the model. For a fair comparison, we adapt their model with MF as the backbone model. * iDCF-W. iDCF-W is a variant of iDCF that does not leverage proxy variables. We adopt VAE (Kuai et al., 2019) to learn the substitute confounder in such a scenario, with the other parts staying the same as iDCF. **Implementation Details.** _Outcome Model._ Our method is model-agnostic in the sense that it works with any outcome prediction model. For ease of comparison, we follow the recent work on unmeasured confounders (Wang et al., 2019), and adopt matrix factorization (MF) as the backbone model. Specifically, we take \(f(u,i,\hat{\mathbf{z}}_{u};\eta)=f_{1}(u,i)+f_{2}(\hat{\mathbf{z}}_{u},i)\) in Eq. (15), where \[f_{1}(u,i)=\mathbf{e}_{u}^{T}\mathbf{e}_{i}+b_{u}+b_{i},\quad f_{2}(\hat{\mathbf{z}}_{u},i)=\hat{\mathbf{z}}_{u}^{T}\mathbf{c}_{i}, \tag{17}\] where \(\mathbf{e}_{i},\mathbf{c}_{i}\) are different embeddings of item \(i\), \(\mathbf{e}_{u}\) is the embedding representation of user \(u\), and \(b_{u},b_{i}\) are the user preference bias term and the item preference bias term, respectively. During training, \(\hat{\mathbf{z}}_{u}\) is sampled from \(q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) to approximate the integral in Eq. (9). In the inference phase, we directly take \(\bar{\mathbf{z}}_{u}=E_{q_{\phi}(\hat{\mathbf{z}}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})}[\hat{\mathbf{z}}_{u}]\) and estimate the user's feedback on item \(i\) as follows: \[\hat{r}_{ui}=\mathbf{e}_{u}^{T}\mathbf{e}_{i}+\bar{\mathbf{z}}_{u}^{T}\mathbf{c}_{i}+b_{u}+b_{i}. \tag{18}\] _Hyper-parameter search._ For all recommendation models, we use grid search to select the hyper-parameters based on the model's performance on the validation dataset. The learning rate is searched from {1e-3, 5e-4, 1e-4, 5e-5, 1e-5}, and the weight decay is chosen from {1e-5, 1e-6}. For the baselines with a public implementation, we adopt their codes and follow the suggested range of hyper-parameters. The public implementation of IPS and RD-IPS (Cai et al., 2018) relies on a small set of unbiased data to obtain the propensity scores; we follow their procedure and extract the same proportion of unbiased data from the validation set. For a fair comparison, we use Adam (Kingma et al., 2014) for the optimization of all models. \begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & \#User & \#Item & \#Biased Data & \#Unbiased Data \\ \hline Coat & 290 & 300 & 6,960 & 4,640 \\ Yahoo! R3 & 5,400 & 1,000 & 129,179 & 54,000 \\ KuaiRand & 23,533 & 6,712 & 1,413,574 & 954,814 \\ \hline \hline \end{tabular} \end{table} Table 1. The statistics of Coat, Yahoo!R3, and KuaiRand. **Evaluation Metrics.** The evaluation metrics are \(NDCG@K\) and \(Recall@K\). For each method, we report the average value and standard deviation over 10 different random seeds. The p-value of the t-test between our method and the best baseline is also reported. ### Performance Comparison (RQ1) The experimental results on the three real-world datasets are shown in Table 2.
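Before turning to the results, the two training stages described above — learning the latent confounder with the iVAE objective in Eqs. (11)-(14), and fitting the MF-based outcome model with the loss in Eq. (15) — can be summarized in the following PyTorch-style sketch. All module names, layer sizes, and the choice of the BCE loss as \(l(\cdot,\cdot)\) are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(d_in, d_out, d_h=128):
    return nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU(), nn.Linear(d_h, d_out))

class IVAEConfounder(nn.Module):
    """Stage 1: learn q_phi(z|a,w) with conditional prior p_theta(z|w) and decoder p_theta(a|z)."""
    def __init__(self, n_items, d_w, d_z):
        super().__init__()
        self.prior_mu, self.prior_logvar = mlp(d_w, d_z), mlp(d_w, d_z)                  # mu_w, v_w
        self.enc_mu, self.enc_logvar = mlp(n_items + d_w, d_z), mlp(n_items + d_w, d_z)  # mu_uw, v_uw
        self.dec = mlp(d_z, n_items)                           # logits of the factorized Bernoulli

    def posterior(self, a, w):
        aw = torch.cat([a, w], dim=-1)
        return self.enc_mu(aw), self.enc_logvar(aw)

    def elbo(self, a, w):
        mu_q, logvar_q = self.posterior(a, w)
        mu_p, logvar_p = self.prior_mu(w), self.prior_logvar(w)
        z = mu_q + torch.exp(0.5 * logvar_q) * torch.randn_like(mu_q)   # reparameterization trick
        # Term I (Eq. (14)): KL between the two diagonal Gaussians, in closed form.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0).sum(-1)
        # Term II: Bernoulli reconstruction log-likelihood (negative binary cross entropy).
        rec = -F.binary_cross_entropy_with_logits(self.dec(z), a, reduction="none").sum(-1)
        return (rec - kl).mean()          # maximize this ELBO

class DeconfoundedMF(nn.Module):
    """Stage 2: additive outcome model f(u,i,z) = e_u^T e_i + b_u + b_i + z^T c_i."""
    def __init__(self, n_users, n_items, d_emb, d_z):
        super().__init__()
        self.e_u, self.e_i = nn.Embedding(n_users, d_emb), nn.Embedding(n_items, d_emb)
        self.c_i = nn.Embedding(n_items, d_z)
        self.b_u, self.b_i = nn.Embedding(n_users, 1), nn.Embedding(n_items, 1)

    def forward(self, u, i, z):
        f1 = (self.e_u(u) * self.e_i(i)).sum(-1) + self.b_u(u).squeeze(-1) + self.b_i(i).squeeze(-1)
        return f1 + (z * self.c_i(i)).sum(-1)

def outcome_loss(mf, ivae, u, i, r, a, w):
    """Eq. (15) with BCE as l(.,.), using one posterior sample of z during training."""
    mu_q, logvar_q = ivae.posterior(a, w)
    z = mu_q + torch.exp(0.5 * logvar_q) * torch.randn_like(mu_q)
    return F.binary_cross_entropy_with_logits(mf(u, i, z), r.float())

@torch.no_grad()
def predict(mf, ivae, u, i, a, w):
    """Inference: plug in the posterior mean of z instead of a sample."""
    z_bar, _ = ivae.posterior(a, w)
    return mf(u, i, z_bar)
```

Both stages would be optimized with Adam as stated above; at inference time the posterior mean returned by `ivae.posterior` plays the role of \(\bar{\mathbf{z}}_{u}\) in the feedback estimate.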
We can observe that: * The proposed iDCF consistently outperforms the baselines with statistical significance suggested by low p-values w.r.t. all the metrics across all datasets, showing the gain in empirical performance due to the identifiability of counterfactual feedback by inferring identifiable latent confounders. This is further verified by experimental results in the synthetic dataset (see Section 5.3). * DCF, DeepDCF-MF, iDCF-W and iDCF achieve better performance than the base models (MF and MF-WF) in Yahoo!R3 and KuaiRand. This implies that leveraging the inferred hidden confounders to predict user preference can improve the model performance when the sample size is large enough. Moreover, deep latent variable models (VAE, iVAE) perform better than the simple Poisson factor model in learning the hidden confounder with their ability to capture nonlinear relationships between the treatments and hidden confounders. * However, the poor performance of DeepDCF-MF, iDCF-W, and DCF in Coat shows the importance of the identification of the feedback through learning the identifiable latent confounders. While the proposed iDCF provides the guarantee on the identification of the counterfactual feedback in general, these methods cannot guarantee the identification of the feedback. Therefore, iDCF outperforms DeepDCF-MF in all cases, even though they take the same input and use similar MF-based models for feedback prediction. * MF-WF slightly outperforms MF in all cases, showing that incorporating user features into the feedback prediction model improves the performance. Moreover, DeepDCF-MF outperforms iDCF-W in all datasets except Yahoo!R3. Note that DeepDCF-MF incorporates user features into the feedback prediction model while iDCF-W does not. This implies that the effectiveness of incorporating user features into feedback prediction depends on whether the user features are predictive of the user preference. For example, in Yahoo!R3, the user features are from a question-naire that contains questions about users' willingness to rate different songs that might influence their exposure but are not directly related to their feedback. DeepDCF-MF directly incorporates such user features into the feedback prediction model, which introduces useless noise. This may explain why DeepDCF-MF is outperformed by iDCF-W in this dataset. ### In-depth Analysis with Synthetic Data (RQ2 & RQ3) Our method relies on the inference of the unmeasured confounder. However, in real-world datasets, the ground truth of unmeasured confounders is inaccessible. To study the influence of learning identifiable latent confounders on the recommendation performance, we create a synthetic dataset (see Appendix A for details) to provide the ground truth of the unmeasured confounder. There are three important hyper-parameters in the data generation process: \(\alpha\) controls the density of the exposure vector, a larger \(\alpha\) means a denser exposure vector. \(\beta\) is the weight of the confounding effect of the user's preference, a larger \(\beta\) means the confounder has a stronger effect on the user's feedback. \(\gamma\) controls the weight of the random noise in the user's exposure, a larger \(\gamma\) means the user's exposure is more random. Similar to the real-world datasets, for each user, we randomly select 15 items and collect these data as the unbiased dataset. The data pre-processing is the same as the experiments on real-world dataset in Section 5.1-5.2. 
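For reference, the ranking metrics reported throughout these experiments, \(NDCG@K\) and \(Recall@K\), can be computed per user as in the short sketch below, assuming binary relevance labels on the held-out unbiased interactions; the function and variable names are illustrative.

```python
import numpy as np

def recall_at_k(ranked_items, relevant_items, k=5):
    """Fraction of the user's relevant items that appear in the top-k ranking."""
    if not relevant_items:
        return None
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k=5):
    """Binary-relevance NDCG@k: DCG of the ranking divided by the ideal DCG."""
    rels = [1.0 if item in relevant_items else 0.0 for item in ranked_items[:k]]
    dcg = sum(rel / np.log2(rank + 2) for rank, rel in enumerate(rels))
    ideal = sum(1.0 / np.log2(rank + 2) for rank in range(min(k, len(relevant_items))))
    return dcg / ideal if ideal > 0 else None

# Example: rank items for one user by predicted score, then evaluate.
scores = {"i1": 0.9, "i2": 0.4, "i3": 0.7, "i4": 0.1}
ranking = sorted(scores, key=scores.get, reverse=True)    # ['i1', 'i3', 'i2', 'i4']
print(recall_at_k(ranking, {"i3", "i4"}, k=2))            # 0.5
print(ndcg_at_k(ranking, {"i3", "i4"}, k=2))              # ~0.387
```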
**RQ2: Performance of iDCF under different confounding effects and dense ratio of the exposure matrix.** We conduct experiments on the simulated data to study the robustness of our method. The results show that iDCF is robust and can still perform well under varying confounding effects and dense ratios. **Effect of confounding weight.** We fix the dense ratio \(\alpha=0.1\) and the exposure noise weight \(\gamma=0\), then vary the confounding weight \(\beta\). Recall a larger \(\beta\) means a stronger confounding effect. The result is shown in Table 3 and we find that: * The proposed method iDCF outperforms the baselines in all cases with small standard deviations. * As the confounding effect \(\beta\) increases, the performance gap between iDCF and the best baselines becomes more significant, measured by both the mean NDCG@5, Recall@5 and the p-value. This justifies the effectiveness of deconfounding of iDCF. **Effect of density of exposure vector.** Next, we investigate the performance of iDCF under different dense ratios \(\alpha\) by fixing \(\beta=2.0,\gamma=0\). Due to space limit, we only report NDCG@5 of the best four methods in Table 3 in Figure 3(a). It can be found that: * Overall, all the recommendation methods achieve better performances with less sparse data as \(\alpha\) increases. * Similar to the observations in the Coat dataset, iDCF is more robust than the baselines when exposure becomes highly sparse. At the same time, iDCF-W and DeepDCF-MF achieve very poor performance with highly sparse data with small \(\alpha\). This further verifies the efficacy of learning identifiable latent confounders. **RQ3: Influence of learning identifiable latent confounders.** The synthetic dataset enables us to visualize the true unmeasured confounder and study the influence of the identifiability of the learned confounders on the model performance. Here, we show the identification of the latent confounder by visualization, and conduct experiments to study the robustness of iDCF against different exposure noise weights \(\gamma\) with fixed \(\alpha=0.1\) and \(\beta=2.0\). The empirical results show that our method can better identify the unmeasured confounder, leading to more accurate feedback predictions. **Visualization of the learned latent confounder.** Figure 4(a) shows the conditional distributions of the two-dimensional _ground truth_ of the unmeasured confounder \(P(z_{u}|\mathbf{w}_{u})\) with the exposure noise weight \(\gamma=0\). We use iDCF and iDCF-W to learn the corresponding latent confounders, respectively, and we plot the posterior distributions \(p(\hat{z}_{u}|\mathbf{a}_{u},\mathbf{w}_{u})\) and \(p(\hat{z}_{u}|\mathbf{a}_{u})\) in Figures 4(b) and 4(c). It can be shown that iDCF can identify a better latent confounder than iDCF-W does, which helps to explain the observation that iDCF is better than iDCF-W in previous experiments. **Impact of the exposure noise on the learned confounder.** Next, we vary the exposure noise weight \(\gamma\) to study the impact of the \(\gamma\) on the learned latent confounder. The intuition behind this experiment is that as the weight of the noise increases, there will be more randomness in the exposure vectors, making it more challenging to infer the unmeasured confounders. To assess the accuracy of the learned confounders in approximating the ground truth, we compute the mean correlation coefficients (MCC) between the learned confounders and the ground truth. 
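One common way to compute MCC — absolute Pearson correlations between every pair of true and learned latent dimensions, followed by an optimal one-to-one matching of dimensions — is sketched below. The use of `scipy.optimize.linear_sum_assignment` for the matching step is an assumption here and not necessarily the exact protocol used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mean_corr_coef(z_true, z_hat):
    """MCC between true latents z_true (n, d) and learned latents z_hat (n, d)."""
    d = z_true.shape[1]
    corr = np.abs(np.corrcoef(z_true.T, z_hat.T)[:d, d:])   # (d, d) cross-correlation block
    row, col = linear_sum_assignment(-corr)                  # match dimensions to maximize correlation
    return corr[row, col].mean()

# Example: a permuted and rescaled copy of the true latents is perfectly identified (MCC = 1).
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 2))
z_hat = np.stack([2.0 * z[:, 1], -0.5 * z[:, 0]], axis=1)
print(mean_corr_coef(z, z_hat))   # ~1.0
```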
MCC is a widely accepted metric in the literature for evaluating the identifiability of learned latent variables [9]. The results are presented in Table 4. The results suggest that as the noise level increases, it becomes increasingly difficult to approximate the ground truth using the learned confounders, while iDCF is much more robust to the increasing noise level compared to iDCF-W. **Impact of exposure noise on the feedback prediction.** Moreover, we conduct experiments to investigate how the performance of iDCF varies with the exposure noise weight \(\gamma\). We choose MF and iDCF-W as the baselines because (1) MF is an empirically stable recommendation model and (2) iDCF-W is the same as iDCF except it does not guarantee the identifiability of the learned confounders. We report NDCG@5 in Figure 3(b). The results indicate that, in general, as the exposure noise increases, it becomes more challenging to identify latent confounders, which in turn makes it more difficult to predict counterfactual feedback. These results, \begin{table} \begin{tabular}{c|c|c|c|c} \hline Model & \(\gamma=0\). & \(\gamma=5.0\) & \(\gamma=10.0\) & \(\gamma=15.0\) \\ \hline iDCF-W & 0.6050 & 0.3374 & 0.1023 & 0.1001 & 0.0682 \\ iDCF (ours) & 0.8394 & 0.8052 & 0.6955 & 0.6914 & 0.6062 \\ \hline \end{tabular} \end{table} Table 4. The mean correlation coefficients (MCC) between the true confounder and the latent confounders learned by the iDCF and iDCF-W models. A larger MCC means a larger correlation with the true unmeasured confounder. Figure 4. Recommendation performance on the simulated datasets with different (a) exposure density ratios and (b) exposure noise weights. A larger \(\alpha\) means denser user exposure. A larger \(\gamma\) means the exposure contains more random noise.
\begin{table} \begin{tabular}{c|c c|c c|c c} \hline \multirow{2}{*}{Datasets} & \multicolumn{2}{c|}{Coat} & \multicolumn{2}{c|}{Yahoo!R3} & \multicolumn{2}{c}{KuaiRand} \\ & NDCG@5 & RECALL@5 & NDCG@5 & RECALL@5 & NDCG@5 & RECALL@5 \\ \hline MF & \(0.5524\pm 0.0144\) & \(0.5294\pm 0.0227\) & \(0.5629\pm 0.0100\) & \(0.7129\pm 0.0106\) & \(0.3748\pm 0.0018\) & \(0.3247\pm 0.0013\) \\ MF-WF & \(0.5529\pm 0.0101\) & \(0.5341\pm 0.0143\) & \(0.5649\pm 0.0073\) & \(0.7144\pm 0.0086\) & \(0.3762\pm 0.0014\) & \(0.3255\pm 0.0013\) \\ IPS & \(0.5450\pm 0.0161\) & \(0.5260\pm 0.0191\) & \(0.5490\pm 0.0058\) & \(0.6967\pm 0.0096\) & \(0.3696\pm 0.0011\) & \(0.3224\pm 0.0009\) \\ RD-IPS & \(0.5448\pm 0.0147\) & \(0.5240\pm 0.0157\) & \(0.5550\pm 0.0051\) & \(0.7020\pm 0.0068\) & \(0.3690\pm 0.0016\) & \(0.3207\pm 0.0011\) \\ InvPref & \(0.5405\pm 0.0135\) & \(0.5295\pm 0.0225\) & \(0.5928\pm 0.0038\) & \(0.7414\pm 0.0052\) & \(0.3778\pm 0.0020\) & \(0.3283\pm 0.0014\) \\ DCF & \(0.5509\pm 0.0093\) & \(0.5329\pm 0.0152\) & \(0.5675\pm 0.0047\) & \(0.7116\pm 0.0059\) & \(0.3751\pm 0.0015\) & \(0.3243\pm 0.0012\) \\ DeepDCF-MF & \(0.5373\pm 0.0066\) & \(0.5141\pm 0.0113\) & \(0.6395\pm 0.0044\) & \(0.7729\pm 0.0056\) & \(0.4078\pm 0.0013\) & \(0.3491\pm 0.0010\) \\ iDCF-W & \(0.5255\pm 0.0137\) & \(0.4971\pm 0.0183\) & \(0.6410\pm 0.0029\) & \(0.7712\pm 0.0033\) & \(0.4072\pm 0.0009\) & \(0.3481\pm 0.0011\) \\ iDCF (ours) & \(0.5744\pm 0.0122\) & \(0.5504\pm 0.0126\) & \(0.6455\pm 0.0023\) & \(0.7837\pm 0.0035\) & \(\mathbf{0.4093\pm 0.0004}\) & \(\mathbf{0.3513\pm 0.0009}\) \\ \hline p-value & \(7e^{-4}\) & \(2e^{-2}\) & \(2e^{-3}\) & \(1e^{-4}\) & \(5e^{-3}\) & \(1e^{-4}\) \\ \hline \end{tabular} \end{table} Table 2. Recommendation performances on Coat, Yahoo!R3 and KuaiRand. The p-value of the t-test between iDCF and the best baseline on each dataset is displayed.
\begin{table} \begin{tabular}{c|c c|c c|c c} \hline \multirow{2}{*}{Datasets} & \multicolumn{2}{c|}{Coat} & \multicolumn{2}{c|}{Yahoo!R3} & \multicolumn{2}{c}{KuaiRand} \\ & NDCG@5 & RECALL@5 & NDCG@5 & RECALL@5 & NDCG@5 & RECALL@5 \\ \hline MF & \(0.7911\pm 0.0022\) & \(0.6724\pm 0.0020\) & \(0.8029\pm 0.0018\) & \(0.6593\pm 0.0018\) & \(0.8217\pm 0.0026\) & \(0.6423\pm 0.0023\) \\ MF-WF & \(0.7914\pm 0.0021\) & \(0.6716\pm 0.0020\) & \(0.8028\pm 0.0023\) & \(0.6588\pm 0.0019\) & \(0.8220\pm 0.0030\) & \(0.6414\pm 0.0029\) \\ DCF & \(0.7904\pm 0.0019\) & \(0.6720\pm 0.0013\) & \(0.8024\pm 0.0025\) & \(0.6593\pm 0.0020\) & \(0.8223\pm 0.0031\) & \(0.6423\pm 0.0035\) \\ IPS & \(0.7890\pm 0.0026\) & \(0.6706\pm 0.0017\) & \(0.7982\pm 0.0014\) & \(0.6552\pm 0.0020\) & \(0.8159\pm 0.0038\) & \(0.6379\pm 0.0028\) \\ RD-IPS & \(0.7878\pm 0.0058\) & \(0.6694\pm 0.0029\) & \(0.8001\pm 0.0022\) & \(0.6569\pm 0.0027\) & \(0.8193\pm 0.0026\) & \(0.6396\pm 0.0021\) \\ InvPref & \(0.7953\pm 0.0027\) & \(\mathbf{0.6761\pm 0.0033}\) & \(0.7985\pm 0.0029\) & \(0.6556\pm 0.0036\) & \(0.8144\pm 0.0051\) & \(0.6358\pm 0.0039\) \\ DeepDCF-MF & \(0.7917\pm 0.0017\) & \(0.6715\pm 0.0019\) & \(0.8060\pm 0.0032\) & \(0.6601\pm 0.0029\) & \(0.8220\pm 0.0026\) & \(0.6421\pm 0.0028\) \\ iDCF-W & \(0.7901\pm 0.0010\) & \(0.6703\pm 0.0015\) & \(0.8050\pm 0.0029\) & \(0.6590\pm 0.0040\) & \(0.8226\pm 0.0015\) & \(0.6420\pm 0.0008\) \\ iDCF (ours) & \(\mathbf{0.7973\pm 0.0023}\) & \(0.6735\pm 0.0020\) & \(\mathbf{0.8168\pm 0.0013}\) & \(\mathbf{0.6683\pm 0.0015}\) & \(\mathbf{0.8368\pm 0.0019}\) & \(\mathbf{0.6549\pm 0.0025}\) \\ \hline \end{tabular} \end{table} along with those in Table 4, show that a better approximation of the ground truth confounders often leads to better estimation of the true user feedback. ## 6. Conclusion and Future Work In this work, we studied how to identify the user's counterfactual feedback by mitigating the unmeasured confounding bias in recommendation systems. We highlight the importance of identification of the user's counterfactual feedback by showing the non-identification issue of the Deconfounder method, which can ultimately lead to inconsistent feedback predictions. To this end, we propose a general recommendation framework that utilizes proximal causal inference to address the non-identification issue and provide theoretical guarantees for mitigating the bias caused by unmeasured confounders. We conduct extensive experiments to show the effectiveness and robustness of our methods on real-world and synthetic datasets. This work leverages proxy variables to infer the unmeasured confounder and users' feedback. In the future, we are interested in exploring other feasible proxy variables besides user features and in how to combine different proxy variables to achieve better performance. It would also make sense to apply our framework to more applications in recommendation systems.
2308.10362
Vehicle Cameras Guide mmWave Beams: Approach and Real-World V2V Demonstration
Accurately aligning millimeter-wave (mmWave) and terahertz (THz) narrow beams is essential to satisfy reliability and high data rates of 5G and beyond wireless communication systems. However, achieving this objective is difficult, especially in vehicle-to-vehicle (V2V) communication scenarios, where both transmitter and receiver are constantly mobile. Recently, additional sensing modalities, such as visual sensors, have attracted significant interest due to their capability to provide accurate information about the wireless environment. To that end, in this paper, we develop a deep learning solution for V2V scenarios to predict future beams using images from a 360 camera attached to the vehicle. The developed solution is evaluated on a real-world multi-modal mmWave V2V communication dataset comprising co-existing 360 camera and mmWave beam training data. The proposed vision-aided solution achieves $\approx 85\%$ top-5 beam prediction accuracy while significantly reducing the beam training overhead. This highlights the potential of utilizing vision for enabling highly-mobile V2V communications.
Tawfik Osman, Gouranga Charan, Ahmed Alkhateeb
2023-08-20T20:43:11Z
http://arxiv.org/abs/2308.10362v1
# Vehicle Cameras Guide mmWave Beams: Approach and Real-World V2V Demonstration ###### Abstract Accurately aligning millimeter-wave (mmWave) and terahertz (THz) narrow beams is essential to satisfy reliability and high data rates of 5G and beyond wireless communication systems. However, achieving this objective is difficult, especially in vehicle-to-vehicle (V2V) communication scenarios, where both transmitter and receiver are constantly mobile. Recently, additional sensing modalities, such as visual sensors, have attracted significant interest due to their capability to provide accurate information about the wireless environment. To that end, in this paper, we develop a deep learning solution for V2V scenarios to predict future beams using images from a 360 camera attached to the vehicle. The developed solution is evaluated on a real-world multi-modal mmWave V2V communication dataset comprising co-existing 360 camera and mmWave beam training data. The proposed vision-aided solution achieves \(\approx 85\%\) top-5 beam prediction accuracy while significantly reducing the beam training overhead. This highlights the potential of utilizing vision for enabling highly-mobile V2V communications. Deep learning, computer vision, beam tracking, mmWave, vehicle-to-vehicle. ## I Introduction Millimeter wave (mmWave) and terahertz (THz) communications adopt large antenna arrays and use narrow directive beams to guarantee sufficient receive power [1]. Accurately aligning the narrow beams is crucial for achieving high data rates in highly-mobile applications such as vehicle-to-vehicle (V2V) communication scenarios. However, selecting the optimal beams for these systems with large antenna arrays is typically associated with a large training overhead, making it challenging for mmWave/THz communication systems to support these future applications. Prior work on reducing the mmWave beam training overhead have typically focused on constructing adaptive beam codebooks [2], designing beam tracking techniques [3], and leveraging the channel sparsity and efficient compressive sensing tools [4, 5]. However, these classical approaches typically can only reduce the training overhead by one order of magnitude, which might not be sufficient for systems with large antenna arrays and highly mobile applications. This motivates exploring new approaches to overcome this beam training overhead and enable highly-mobile mmWave/THz V2V communication systems. The use of machine learning (ML) techniques to address the beam prediction task has become increasingly popular in recent years [6, 7, 8, 9, 10]. These solutions primarily aim to leverage additional information to enhance the awareness of the wireless environment. Furthermore, the additional sensing modalities have been shown to be instrumental in the development of real-time digital twins (digital replicas) of the physical environments that can be utilized for making efficient communication and sensing decisions [11, 12]. In [6], the authors propose to utilize the receive wireless signature to predict the optimal beam indices at the basestation. Another approach is to utilize position information, as demonstrated in [7], to predict the optimal beam index. Although incorporating location data can help reduce the training overhead, relying solely on GPS data may lead to inaccurate predictions due to inherent errors. To eliminate the beam training overhead, some researchers leverage other sensing modalities such as cameras, LiDAR, and radar. 
For instance, [8] utilizes RGB images captured by a camera at the basestation to guide beam prediction, while [9] uses LiDAR point cloud data to facilitate beam prediction and tracking. Furthermore, [10] proposes using radar installed at the basestation to predict the optimal beam index. However, all these solutions are designed for vehicle-to-infrastructure (V2I) scenarios and are limited to a single-user setting. In this paper, we propose to leverage visual data captured by cameras installed on the vehicles to realize beam tracking in V2V communication scenarios. Given a sequence of recent RGB images of the wireless environment and the initial optimal receive power vector, the objective is to predict the optimal beam corresponding to the latest data capture. The main contributions of the paper can be summarized as follows: * Formulating the vision-aided V2V beam tracking problem in mmWave/THz wireless networks considering practical visual and communication models. * Given a sequence of image samples and the receive power vector corresponding to the first sample in the sequence, developing a machine learning-based solution that is capable of (i) detecting objects of interest in the Fig. 1: An illustration of the adopted system model. The receiver vehicle utilizes the visual data to aid mmWave beam tracking. wireless environment and extracting the relevant features, (ii) identifying the transmitter in the scene, and (iii) efficiently predicting the optimal beam for the future sample. * Providing the first real-world evaluation of vision-aided V2V beam tracking based on our large-scale dataset, DeepSense 6G [13], that consists of co-existing multi-modal sensing and wireless communication data. Based on the adopted real-world dataset, the developed solution achieves \(98\)% transmitter identification accuracy. Further, the proposed solution achieves a top-5 prediction accuracy of \(\approx 85\)% while reducing the beam training overhead. This highlights the capability of the proposed sensing-aided beam tracking approaches to reduce the beam training overhead significantly. ## II System Model and Problem Formulation This work considers a V2V communication scenario consisting of a vehicle acting as the transmitter and another vehicle acting as the receiver in a real wireless communication environment as shown in Fig. 1. In particular, the communication system model consists of: (i) A mobile mmWave receiver equipped with a set of \(M\)-element Uniform Linear Array (ULA), each directed towards a different direction to provide \(360^{\circ}\) coverage, a \(360^{\circ}\) RGB camera, and an RTK GPS receiver. (ii) A mobile transmitter equipped with an omni-directional antenna. In this section, we first present the adopted wireless communication model. Then, we formulate the sensing-aided V2V beam tracking problem. ### _Communication model_ The system model adopted in this paper constitutes a receiver vehicle that employs four mmWave transceivers, each with \(M\) antennas to communicate with the transmitting vehicle, equipped with a single quasi-omnidirectional mmWave transceiver. Let \(d\in\{front,back,left,right\}\) denote the communication and sensing direction of the receiver vehicle. 
Considering a geometric channel model, the channel \(\mathbf{h}_{d}\in\mathbb{C}^{M\times 1}\) between the transmitter and receiver can be written as: \[\mathbf{h}_{d}=\sum_{\ell=1}^{L_{d}}\alpha_{\ell,d}\mathbf{a}\left(\theta_{ \ell,d}^{az},\phi_{\ell,d}^{el}\right), \tag{1}\] where \(L_{d}\) is the number of channel paths, \(\theta_{\ell,d}^{az}\), \(\phi_{\ell,d}^{el}\) are the azimuth and the elevation angle of arrival, respectively, of the \(\ell\)th channel path. \(\alpha_{\ell,d}\) is the complex path gain. The communication system adopted in this work uses a predefined beamforming codebook \(\boldsymbol{\mathcal{F}}=\left\{\mathbf{f}_{m}\right\}_{m=1}^{Q}\), where \(\mathbf{f}_{m}\in\mathbb{C}^{M\times 1}\) and \(Q\) is the total number of beamforming vectors in the codebook. It is important to note that the total number of beamforming vectors is \(4Q\), with \(Q\) beams in each direction. Now, given the geometric channel model \(\mathbf{h}_{d}\), assume that the transmitter transmits the complex data symbol \(x\in\mathbb{C}\) to the receiver, and the \(d\)-directional antenna array of the receiver receives the signal via the beamforming vector \(\mathbf{f}_{d,m}\). The received signal \(y_{d}\) can then be written as \[y_{d}=\mathbf{f}_{d,m}^{H}\mathbf{h}_{d}x+n_{k}, \tag{2}\] where \(n_{k}\) is a noise sample drawn from a complex Gaussian distribution \(\mathcal{N}_{\mathbb{C}}(0,\sigma^{2})\). The transmitted complex symbol \(x\in\mathbb{C}\) needs to satisfy the following constraint \(\mathbb{E}\left[\left|x\right|^{2}\right]=P\), where \(P\) is the average symbol power. The optimal beam index can be represented as the results of the maximization problem given as \[\underset{d,m}{\text{max}}\ |\mathbf{f}_{d,m}^{H}\mathbf{h}_{d}|^{2}, \tag{3}\] where the solution to the maximization problem can be obtained by an exhaustive search over the beam codebook. ### _Problem formulation_ As presented in (3), the optimal beam can be computed by either utilizing the explicit channel knowledge, which is generally hard to acquire or by performing an exhaustive beam search that results in large beam training overhead. In this work, instead of following the conventional beam training approach, we propose to utilize additional sensing modalities such as \(360^{\circ}\) RGB images to predict the optimal beam index. In particular, at any given time \(t\), we aim to predict the optimal beam index based on the current and previous visual data captured by the camera installed at the receiver vehicle. It is important to note here that the highly mobile nature of the V2V communication scenarios necessitates frequent updates to the communication beam after the initial connection has been established. These frequent updates will further increase the beam training overhead impacting the reliability and latency of the wireless communication systems. One promising solution to minimize this overhead is to track the transmitter vehicle over time using the RGB images and then predict the optimal beam, eliminating the need for any additional beam training. However, to track the transmitter vehicle over time, it is crucial first to identify the transmitter in the RGB images. For this, in addition to the RGB images, we propose to utilize the previous optimal receive power vector, which can be estimated when the connection between the vehicles is first established. 
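As a concrete reference point for the exhaustive-search baseline in Eq. (3), the sketch below builds an over-sampled DFT-style codebook for a single \(M\)-element ULA (azimuth only, for simplicity), draws a synthetic multipath channel of the form in Eq. (1), and picks the beam with the largest receive power. The array size, codebook size, and path statistics are illustrative assumptions.

```python
import numpy as np

M, Q = 16, 64                      # ULA elements and beams per array, as in the adopted testbed

def ula_steering(theta, m=M):
    """Array response of a half-wavelength-spaced ULA for azimuth angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

# Over-sampled DFT-style beam codebook F = {f_1, ..., f_Q} covering [-pi/2, pi/2).
angles = np.arcsin(np.linspace(-1, 1, Q, endpoint=False))
codebook = np.stack([ula_steering(a) / np.sqrt(M) for a in angles])   # (Q, M)

# Synthetic geometric channel, Eq. (1): a few paths with random complex gains and angles of arrival.
rng = np.random.default_rng(1)
L = 3
gains = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
aoas = rng.uniform(-np.pi / 2, np.pi / 2, size=L)
h = sum(g * ula_steering(a) for g, a in zip(gains, aoas))             # (M,)

# Exhaustive search, Eq. (3): receive power |f_m^H h|^2 for every beam, then argmax.
powers = np.abs(codebook.conj() @ h) ** 2
best_beam = int(np.argmax(powers))
print(best_beam, powers[best_beam])
```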
In this work, to facilitate the processing of the captured \(360^{\circ}\) image, we split it into two \(180^{\circ}\) field-of-view (FoV) images, covering the front and back sides, respectively. This allows us to focus on each side of the receiver vehicle independently and reduce the computational complexity of our approach. Let \(\mathbf{X}_{f}[t]\in\mathbb{R}^{W\times H\times C}\)and \(\mathbf{X}_{b}[t]\in\mathbb{R}^{W\times H\times C}\) denote the front and back RGB images of the environment, respectively, captured at time instant \(t\), where \(W\), \(H\), and \(C\) are the width, height, and the number of color channels for the image. Further, let \(\mathbf{p}[t]\in\mathbb{R}^{1\times 4Q}\) denote the mmWave receive power vector at the receiver vehicle. At any time instant \(\tau\in\mathbb{Z}\), the basestation captures a sequence of RGB images, and the mmWave receive power vector corresponding to the time instant of the first image capture, \(\mathbf{S}[\tau]\), defined as \[\mathbf{S}[\tau]=\left\{\left\{\mathbf{X}[t]\right\}_{t=\tau-r+1}^{\tau}, \mathbf{p}[\tau-r+1]\right\}, \tag{4}\] where \(\mathbf{X}[t]\in\left\{\mathbf{X}_{f}[t],\mathbf{X}_{b}[t]\right\}\) and \(r\in\mathbb{Z}\) is the length of the input sequence or the observation window to predict the optimal beam index. In particular, at any given time instant \(\tau\), the goal in this work is find a mapping function \(f_{\Theta}\) that utilizes the available sensory data samples \(\mathbf{S}[\tau]\) to predict (estimate) the future optimal beam index \(\mathbf{\hat{f}}[\tau+1]\in\boldsymbol{\mathcal{F}}\) with high fidelity. The mapping function can be formally expressed as \[f_{\Theta}:\mathbf{S}[\tau]\rightarrow\mathbf{\hat{f}}[\tau+1]. \tag{5}\] In this work, we propose to utilize a machine learning-based model to learn this prediction function \(f_{\Theta}\). The objective is to maximize the number of correct predictions over all the sample in \(\mathcal{D}=\left\{(\mathbf{S}_{u},\mathbf{f}_{u}^{*})\right\}_{u=1}^{U}\), where \(U\) is the total number of samples in the dataset. This can be formally written as \[f_{\Theta^{*}}=\underset{f_{\Theta}}{\text{argmax}}\prod_{u=1}^{U}\mathbb{P} \left(\mathbf{\hat{f}}_{u}=\mathbf{f}_{u}^{*}|\mathbf{S}_{u}\right), \tag{6}\] where the joint probability in (6) is factored out to convey the identical and independent (i.i.d.) nature of the samples in dataset \(\mathcal{D}\). ## III Vision-Aided V2V Beam Tracking: Proposed Solution In this section, we propose a sensing-aided V2V beam tracking solution that operates through a sequence of four distinct sub-tasks. The first sub-task is an object detection solution, and it involves the detection of relevant objects of interest in the field of view (FoV) of the mmWave basestation situated on the receiver vehicle, using the visual data. The second sub-task is user identification, which involves utilizing the mmWave receive power vector to identify the transmitter within the wireless environment [14]. In the third sub-task, we utilize the coordinates predicted in the second task to track the object of the transmitter vehicle across the subsequent image samples. The final sub-task implements a beam prediction solution to predict the optimal beam index based on the sequence of bounding box center coordinates obtained from the preceding sub-tasks. **Object Detection:** The visual data contains detailed information about the wireless environment, including vehicles that may house the transmitter unit. 
The first step in our solution involves identifying all objects belonging to the same visual class as the transmitter unit. To accomplish this, we employ a pre-trained object detection model called YOLOv7 [15]. The model has been adapted and fine-tuned to specifically detect relevant objects within the surrounding environment, including cars, trucks, and buses. We extract a 4-dimensional vector consisting of the bottom-left coordinates \([x_{1},y_{1}]\) and the top-right coordinates \([x_{2},y_{2}]\) for each object detected by the YOLOv7 model. These vectors are then processed to obtain the center coordinates of the objects in the 2D vector space, which are subsequently normalized to fall within the range of \([0,1]\). The normalized center coordinates are further transformed into polar coordinates \((d,\theta)\), where \(d\) is the radius and \(\theta\) is the angle in degrees. This transformation is performed using the bottom-center coordinate as the reference point. Furthermore, we concatenated the polar coordinates of all the objects to form one dimensional vector, \(\mathbf{d}\in\mathbb{R}^{2M\times 1}\), where \(M\) is the number of objects detected. The length of the vector, \(\mathbf{d}\) for each data sample, is determined by the count of candidate objects detected in the visual data. It is important to note here that this step is performed for all the \(r\) image samples in the input sequence. **Transmitter Identification:** In the implementation of our single sample-based transmitter identification solution, we execute a two-step process. The first step involves utilizing a fully connected neural network (FCN) that is trained Fig. 2: This figure illustrates the stages of the transmitter identification task. The front and back image is fed into a deep learning model to extract the bounding box coordinates. The mmWave Beam power vector is provided as an input to a fully connected neural network to predict the transmitter coordinates, subsequently the candidate transmitter’s coordinate is selected from the possible coordinates using the nearest distance-based algorithm. through supervised learning to estimate the probable polar coordinate of the transmission candidate using the normalized mmWave beam power vector. To facilitate the training and testing of the FCN model, we manually annotate \(600\) image samples, generating a dataset that includes the ground-truth mmWave beam power and the corresponding transmitter's polar coordinates. The mmWave receive power vector and the transmitter coordinates are normalized using the global normalization and the min-max normalization, respectively. The specific training and testing parameters for the model are outlined in Table I. Next, we employ a nearest distance-based algorithm to identify the potential transmitter coordinates in the polar transformation plane. To achieve this, we calculate the Euclidean distance between the predicted coordinates and all the potential coordinates in \(\mathbf{d}\). The coordinate with the minimum Euclidean distance is then selected as the candidate transmitter coordinate (\(d_{0},\theta_{0}\)) in the first image sample of the input sequence. **Transmitter Tracking:** As outlined in Section II-B, the ground-truth mmWave receive power vector is accessible only for the first sample in the sequence. Consequently, the proposed user (transmitter) identification solution can only be performed for the first data sample in the sequence. 
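Before turning to the tracking stage, the coordinate processing and nearest-distance selection just described can be sketched as follows; the image size, the handling of the bottom-center reference point, and all names are illustrative assumptions rather than the exact released implementation.

```python
import numpy as np

def bbox_to_polar(bbox, img_w, img_h):
    """Map a detection box [x1, y1, x2, y2] to normalized polar coordinates (d, theta).

    The box center is normalized to [0, 1] and expressed relative to the
    bottom-center reference point, as described in Section III.
    """
    x1, y1, x2, y2 = bbox
    cx, cy = ((x1 + x2) / 2) / img_w, ((y1 + y2) / 2) / img_h
    dx, dy = cx - 0.5, 1.0 - cy                 # offset from the bottom-center reference
    return np.hypot(dx, dy), np.degrees(np.arctan2(dy, dx))

def identify_transmitter(pred_coord, candidate_coords):
    """Pick the detected object closest (Euclidean) to the FCN-predicted polar coordinate."""
    cands = np.asarray(candidate_coords, dtype=float)       # (M, 2) array of (d, theta)
    dists = np.linalg.norm(cands - np.asarray(pred_coord, dtype=float), axis=1)
    j = int(np.argmin(dists))
    return j, cands[j]

# Example: three detections, with the FCN predicting the transmitter near (0.42, 95 deg).
boxes = [[100, 300, 220, 420], [600, 250, 760, 400], [900, 400, 980, 480]]
coords = [bbox_to_polar(b, img_w=1024, img_h=512) for b in boxes]
idx, coord = identify_transmitter((0.42, 95.0), coords)
print(idx, coord)            # selects the detection whose polar coordinate is nearest
```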
Next, in order to identify the transmitter in the consecutive \(r-1\) samples, we propose a transmitter-tracking solution. The proposed solution involves tracking the transmitter object over the next \(r-1\) image samples, using the predicted polar coordinates from the transmitter identification model (\(d_{0},\theta_{0}\)) and the vectors [\(\mathbf{d}_{1},\ldots,\mathbf{d}_{r-1}\)] obtained from the object detection task. The solution utilizes the nearest distance algorithm to track the polar coordinates of the potential transmitter over the next \(r-1\) image samples, as shown in Fig. 3. For this, the Euclidean distance between the polar coordinates of all objects in the current sample and the predicted coordinate of the transmitter in the last sample is computed. The object with the smallest distance to the previously identified transmitter coordinate is then selected as the coordinate of the transmitter for the current sample. The details of this bounding box coordinate translation and tracking are depicted in Fig. 3. It is important to note that we have converted the bounding box coordinates from both the front and back images into a single 2D polar plane. This conversion aids in tracking objects as they move between the front and back images. One challenge encountered involved instances where different sections of a single car were detected in both the front and back images. To address this, we devised an algorithm to coherently map the bounding box coordinates onto the 2D polar plane. This ensured that all relevant objects in the \(360^{\circ}\) image were represented as one point on the 2D plane. Fig. 3: This figure presents an overview of the proposed deep learning and distance-based tracking solution. The bounding box coordinates for the front and back images are first transformed into a 2D polar plane. The mmWave beam power is fed into the transmitter identification model to predict the current transmitter coordinates. The nearest distance-based algorithm is employed to track the transmitter coordinates in the 2D polar plane over the next four image samples. The tracked transmitter coordinates are fed to the LSTM and beam prediction sub-networks. **Recurrent Prediction:** In the final task, we leverage a recurrent neural network (RNN) to map the sequential transmitter coordinates tracked over a window of \(r\) image samples in Section III to predict the future optimal mmWave beam index. The recurrent neural network adopted in this work is the Long Short-Term Memory (LSTM) network [16]. The input to the LSTM model is the sequence of \(r\) polar coordinates of the transmitter, {\([d_{0},\phi_{0}]\),...,\([d_{r-1},\phi_{r-1}]\)}, and the output is the future optimal beam index. We designed, trained, and tested a two-layered LSTM model using the parameters stated in Table I. ## IV Testbed Description and Development Dataset To evaluate the effectiveness of the proposed sensing-aided beam tracking solution, we utilize the DeepSense 6G dataset [13]. DeepSense 6G is the first large-scale real-world multi-modal dataset developed for sensing-aided wireless communication applications. It consists of co-existing multi-modal data such as mmWave wireless communication, GPS data, vision, Radar, and LiDAR collected in a real-world wireless environment. In this section, we describe the testbed and the development dataset. **DeepSense 6G Testbed 6:** This paper adopts scenario \(36\) of the DeepSense 6G dataset, designed specifically to study high-frequency V2V communication in the real world.
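Before describing the testbed in detail, the last two stages of Section III — nearest-distance tracking over the remaining \(r-1\) frames and the two-layer LSTM that maps the tracked polar coordinates to a future beam index — can be sketched as follows; the hidden size and variable names are illustrative assumptions, with \(4Q=256\) total beams as stated later for this dataset.

```python
import numpy as np
import torch
import torch.nn as nn

def track_transmitter(init_coord, coords_per_frame):
    """Track the transmitter over the next r-1 frames by nearest-distance association.

    init_coord: (d0, theta0) from the transmitter-identification stage.
    coords_per_frame: list of (M_t, 2) arrays of candidate polar coordinates per frame.
    Returns the (r, 2) trajectory of the transmitter coordinates.
    """
    traj = [np.asarray(init_coord, dtype=float)]
    for cands in coords_per_frame:
        cands = np.asarray(cands, dtype=float)
        j = int(np.argmin(np.linalg.norm(cands - traj[-1], axis=1)))
        traj.append(cands[j])
    return np.stack(traj)

class BeamLSTM(nn.Module):
    """Two-layer LSTM mapping a sequence of r polar coordinates to the future beam index."""
    def __init__(self, n_beams=256, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_beams)

    def forward(self, seq):                  # seq: (batch, r, 2) tracked coordinates
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])         # logits over the 4Q beam indices

# Usage: a tracked trajectory of shape (r, 2) -> predicted future beam (trained with cross-entropy).
model = BeamLSTM()
traj = torch.randn(1, 5, 2)                  # placeholder trajectory, r = 5
print(model(traj).argmax(dim=-1))
```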
The DeepSense testbed \(6\), as shown in Fig. 4 is utilized in the data collections. It consists of two units: (i) Unit \(1\), mobile receiver (vehicle) equipped with four \(60\) GHz mmWave Phased arrays facing four different directions, i.e., front, back, left, right. Each phased array adopts a uniform linear array (ULA) with \(M=16\) elements and utilizes an over-sampled pre-defined codebook of \(Q=64\) beam directions to receive the transmitted signal. It is further equipped with a \(360^{\circ}\) RGB camera, four mmWave single-chip FMCW radars (operating between 76-81GHz), one 3D LiDAR with 32 vertical and 1024 azimuth channels, and one GPS RTK kit. (ii) Unit \(2\), a mobile transmitter equipped with a \(60\) GHz quasi-omni antennas always oriented towards the receiver unit and a GPS receiver to capture the real-time position information. **Development Dataset:** The evaluation of the proposed sensing-aided V2V beam tracking solution requires data collected in a real wireless environment. This paper uses the publicly available scenario \(36\) of the DeepSense 6G dataset. This scenario includes two vehicles traveling close to each other in the same direction of traffic. The collected dataset includes diverse real-world vehicular scenarios such as passing, lane changes, \(4\)-way intersections, and stop signs. The data was collected continuously with a sampling rate of \(10\) samples/s, resulting in an initial raw dataset of \(\approx 31\)k samples. This raw dataset was subsequently processed by synchronizing and aligning data across modalities and cleaned up by manual data filtering. The new dataset after post-processing is made up of \(\approx 21\)k samples. The goal of this work is to observe a sequence of \(r=5\), consisting of the current and next four \(360^{\circ}\) image samples, and predict the optimal beam index for the sixth sample in the sequence. Therefore, to generate the final development dataset for the beam tracking task, the new dataset is further processed using a sliding window to generate a time-series dataset consisting of 5 input images, the normalized mmWave power vector of the first sample in the sequence, and the optimal beam index of the sixth sample. In order to train the proposed transmitter identification model, as described in Section III, we require the ground-truth bounding box center coordinates of the transmitter in the scene. For this, we manually annotate \(600\) image samples, which are further split into training and testing samples with a 70/30 ratio. We adopted the K-fold validation method for the beam tracking task, randomly splitting the \(21\)k data sequences into \(5\) folds, each consisting of \(\approx 4.2\)k sequences. ## V Experimental Setup In this section, we present the neural network training parameters and the adopted evaluation metrics. ### _Network Training_ As described in Section III, we trained and tested two different neural networks in our proposed solution. In the Fig. 4: This figure shows a detailed description of the DeepSense 6G Testbed 6 adopted to collect the real-world multi-modal V2V data samples. The front car (receiver unit) is equipped with four \(60\) GHz mmWave Phased arrays, \(360^{\circ}\) RGB camera, four mmWave FMCW radars, one 3D LiDAR, and a GPS receiver. The back car (transmitter unit) is equipped with a \(60\) GHz quasi-omni antennas and a GPS receiver. 
transmitter identification stage, the mmWave receive power vectors are provided as input to the FCN model to predict the polar coordinates of the transmitter object, which is subsequently utilized to approximate the bounding box center coordinates of the transmitter. The FCN is trained using the mean square error loss function and the AdamW optimizer. In the beam prediction stage, the 2-layered LSTM model is trained using the categorical cross-entropy loss function and the Adam optimizer. The LSTM model takes the sequence of transmitter coordinates from the transmitter tracking algorithm and predicts the future optimal beam. PyTorch and TensorFlow deep learning frameworks were utilized for the training, validation, and testing of the FCN and LSTM models, respectively. All the simulations were performed on a single NVIDIA Quadro 6000 GPU. The detailed design and training hyper-parameters are presented in Table I. ### _Evaluation Metrics_ To evaluate the effectiveness of our proposed solution, we use top-k accuracy as the primary metric. The top-k accuracy measures the percentage of test samples for which the optimal ground-truth beam is included in the top-k predicted beams. In particular, we consider the top-\(1\) and top-\(5\) accuracies to provide a comprehensive evaluation of our proposed solution. ## VI Performance Evaluation Given the experimental setup described in Section V, in this section, we study the beam tracking performance of the proposed vision-aided V2V beam tracking solution. We first evaluate the performance of the proposed transmitter identification solution, followed by an in-depth analysis of the overall beam tracking performance. **Can visual and wireless data be utilized for transmitter identification?** As the proposed solution is a multi-stage algorithm, we first analyze the performance of the transmitter identification model based on evaluation metrics such as the top-1 accuracy and R-squared score between the predicted and ground truth transmitter coordinates, as shown in Table II. The top-1 accuracy is the probability of successfully selecting the true transmitter coordinates based on the output of the FCN model and the nearest distance-based algorithm. It is important to note here that the results for the transmitter identification task are based on \(30\%\) of the manually annotated \(630\) samples discussed in Section III. From Table II, it is observed that the proposed solution achieves \(99.19\%\) transmitter identification accuracy on this small manually annotated dataset. The high accuracy of the proposed approach highlights that the sensing-aided solution can help identify the transmitter in the \(360^{\circ}\) images captured by the receiver vehicle with high fidelity. **Can visual and wireless data be utilized for beam tracking?** As presented in Section IV, the development dataset of \(21\)k data sequences is randomly split into \(5\) folds, each consisting of \(\approx 4.2\)k sequences. This was done to ensure that there is no data leakage that might impact the overall beam-tracking performance of the proposed solution. In Table II, we present the top-\(1\) and top-\(5\) beam tracking performance for the \(5\) different folds.
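For reference, the top-k accuracy reported in Table II and throughout this section can be computed as in the short sketch below; the names are illustrative.

```python
import numpy as np

def topk_accuracy(beam_logits, true_beams, k=5):
    """Fraction of samples whose ground-truth beam is among the k highest-scoring beams."""
    topk = np.argsort(-beam_logits, axis=1)[:, :k]            # (N, k) predicted beam indices
    hits = (topk == true_beams[:, None]).any(axis=1)
    return hits.mean()

# Example: 3 samples over an 8-beam codebook.
logits = np.array([[0.1, 0.9, 0.3, 0.0, 0.2, 0.1, 0.0, 0.0],
                   [0.0, 0.1, 0.2, 0.7, 0.6, 0.0, 0.1, 0.0],
                   [0.5, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3]])
truth = np.array([1, 4, 7])
print(topk_accuracy(logits, truth, k=1))   # 0.333... (only the first sample is correct)
print(topk_accuracy(logits, truth, k=3))   # 1.0
```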
It is observed that the proposed beam tracking solution achieves an average top-\(1\) accuracy of \(\approx 45\%\) and a top-\(5\) accuracy of \(\approx 85\%\). Considering the top-\(5\) beam prediction accuracy, it can be inferred that the proposed sensing-aided beam tracking solution can predict the optimal beam index with notable accuracy while significantly reducing the beam training overhead. One promising way to satisfy the reliability and latency requirements of real-world deployment can be to augment lightweight beam training along with the proposed sensing-aided solution. Therefore, instead of performing an exhaustive beam search over the entire beam codebook (which has a size of \(4Q=256\) for this dataset, considering the four phased arrays), we can obtain the optimal beam by performing beam training over only the top-\(5\) predicted beams. Next, we present an in-depth analysis of the proposed beam-tracking solution. **Impact of beam difference on the beam tracking performance:** The objective of this work is to observe a sequence of \(r=5\) current and previous \(360^{\circ}\) image samples and predict the optimal beam index for the fifth sample in the sequence. In this section, we investigate the beam tracking performance of the proposed solution versus the beam difference between the first and the fifth samples in the sequence. It is important to note that the beam difference between the first and the fifth sample Fig. 5: This figure shows the performance of the proposed V2V beam tracking solution versus the beam difference between the first and the fifth sample in the each sequence. in a sequence captures the relative displacement between the transmitter and the receiver vehicle in the real world. Therefore, a significant beam difference within a sequence (for example, beam difference of \(\geq 3\)) can translate to vehicles overtaking each other, lane changes, and turns. As such, these sequences are generally far more challenging than those with smaller beam differences. We present the performance of the proposed sensing-aided V2V beam tracking solution in Fig. 5 for varying beam differences within a sequence. It is observed that the performance of the model decreases as the beam difference between the first sample and the fifth sample increases. Nonetheless, as shown in this figure, even for beam differences of approximately \(10\), our solution achieves a top-5 beam prediction accuracy of over \(50\%\). This further highlights the following: (i) The proposed solution can predict the optimal beam index with high fidelity for even the most challenging real-world scenarios, and (ii) augmenting our sensing-aided wireless communication systems with a lightweight beam training method is crucial to achieving optimal performance in practical wireless environments. **What is the impact of the relative speed between the transmitter and receiver vehicle?** As presented in Section IV, scenario \(36\) of the DeepSense 6G dataset adopted in this work includes two vehicles (transmitter and receiver vehicle) traveling near each other in the same direction. Our goal is first to identify the transmitter in the scene and then track the transmitter over time to predict the optimal beam index. Given the temporal nature of the problem statement, vehicular speed plays a critical role in determining the efficacy of any proposed solutions. 
Further, given the dynamic nature of the V2V communication dataset, the vehicles travel at different speeds, and their relative speed affects the beam tracking performance. Therefore, we study the impact of the relative speed between the transmitter and receiver on the beam tracking accuracy. Similar to the sequences with significant beam differences, higher relative velocity translates to considerable relative displacement between the transmitter and receiver vehicle. It poses a considerable challenge to the beam tracking task. In Fig. 6, we plot the beam tracking accuracy versus the relative speed between the transmitter and receiver vehicle. It is observed that the increase in relative velocity impacts the overall beam tracking performance. However, as shown in Fig. 6, even when the relative speed is more than \(16\) m/s (\(\approx 57\) km/h), the model can predict the top-5 beams with more than \(40\%\) accuracy, highlighting the efficacy of the proposed solution. **What is the impact of the number of objects detected in the visual scene?** The first two stages of the proposed solution include (i) object detection and (ii) transmitter identification. We adopt a state-of-the-art object detection model (YOLOv7) in the object detection stage to detect and identify the relevant objects in the wireless environment. However, one key challenge here is the missed objects, i.e., the objects that the model does not detect. For instance, if the transmitter is not detected in the first stage, the downstream beam tracking performance will also be adversely impacted. Further, as the number of objects in the visual scene increases, the chances of missed object detection also increase. The increase in the number of objects also impacts the next stage, i.e., transmitter identification. As described in Section III, the proposed transmitter identification solution utilizes the mmWave receive power vector to estimate the approximate center coordinates of the transmitter in the scene. The object in the visual scene with the shortest distance to this predicted value is then picked as the transmitting candidate. Now, as the number of objects increases, the chances of wrongly identifying a non-transmitting candidate as the transmitter also increase. All these highlight that changes in the number of objects in the visual scene can impact the beam tracking performance. In Fig. 7, we plot the beam tracking accuracy versus the average number of objects detected in a sequence. We observe that the proposed beam tracking solution achieves a stable performance irrespective of the average number of relevant objects in the \(360^{\circ}\) camera scene for the samples within a sequence. However, it is important to note that the dataset does not have an evenly distributed number of objects in the samples. The average number of cars in the range \(1-4\) has more samples, approximately \(45\%\), followed by the range \(5-10\) with about \(25\%\). The range \(21-26\) has the least number of samples, which is less than \(1\%\) of the whole dataset. Fig. 6: This figure shows the performance of the proposed V2V beam tracking solution versus the relative speed of the transmitter and receiver vehicle. Fig. 7: This figure shows the performance of the proposed V2V beam tracking solution versus the average number of objects detected in each sample. ## VII Conclusion This paper explores the potential of leveraging visual sensory data for beam tracking in a mmWave V2V communication system.
We formulate the vision-aided V2V beam tracking problem and develop an efficient machine learning-based solution to predict the optimal beam indices. Next, to evaluate the efficacy of the proposed solution, we adopt a real-world multi-modal V2V communication scenario from the DeepSense 6G dataset. The evaluation results demonstrate that the proposed vision-aided solution can learn to identify the transmitter in the visual scene, track the transmitter over a sequence of RGB images, and predict the optimal beam index with high fidelity. In particular, the proposed solution achieves a top-\(1\) accuracy of \(\approx 45\%\) and a top-\(5\) accuracy of \(\approx 85\%\) in predicting the optimal beams. These results highlight the potential of leveraging the visual sensors to enable highly-mobile mmWave V2V communication systems.
2302.11027
Analysis of Real-Time Hostile Activity Detection from Spatiotemporal Features Using Time Distributed Deep CNNs, RNNs and Attention-Based Mechanisms
Real-time video surveillance through CCTV camera systems has become essential for ensuring public safety, which is a priority today. Although CCTV cameras help a lot in increasing security, these systems require constant human interaction and monitoring. To eradicate this issue, intelligent surveillance systems can be built using deep learning video classification techniques that help automate surveillance and detect violence as it happens. In this research, we explore deep learning video classification techniques to detect violence as it is happening. Traditional image classification techniques fall short when it comes to classifying videos, as they attempt to classify each frame separately, for which the predictions start to flicker. Therefore, many researchers are coming up with video classification techniques that consider spatiotemporal features while classifying. However, deploying these deep learning models with methods such as skeleton points obtained through pose estimation and optical flow obtained through depth sensors is not always practical in an IoT environment. Although these techniques ensure a higher accuracy score, they are computationally heavier. Keeping these constraints in mind, we experimented with various video classification and action recognition techniques such as ConvLSTM, LRCN (with both custom CNN layers and VGG-16 as feature extractor), CNN-Transformer, and C3D. We achieved a test accuracy of 80% on ConvLSTM, 83.33% on CNN-BiLSTM, 70% on VGG16-BiLSTM, 76.76% on CNN-Transformer, and 80% on C3D.
Labib Ahmed Siddique, Rabita Junhai, Tanzim Reza, Salman Sayeed Khan, Tanvir Rahman
2023-02-21T22:02:39Z
http://arxiv.org/abs/2302.11027v1
Analysis of Real-Time Hostile Activity Detection from Spatiotemporal Features Using Time Distributed Deep CNNs, RNNs and Attention-Based Mechanisms ###### Abstract Real-time video surveillance, through CCTV camera systems has become essential for ensuring public safety which is a priority today. Although CCTV cameras help a lot in increasing security, these systems require constant human interaction and monitoring. To eradicate this issue, intelligent surveillance systems can be built using deep learning video classification techniques that can help us automate surveillance systems to detect violence as it happens. In this research, we explore deep learning video classification techniques to detect violence as they are happening. Traditional image classification techniques fall short when it comes to classifying videos as they attempt to classify each frame separately for which the predictions start to flicker. Therefore, many researchers are coming up with video classification techniques that consider spatiotemporal features while classifying. However, deploying these deep learning models with methods such as skeleton points obtained through pose estimation and optical flow obtained through depth sensors, are not always practical in an IoT environment. Although these techniques ensure a higher accuracy score, they are computationally heavier. Keeping these constraints in mind, we experimented with various video classification and action recognition techniques such as ConvLSTM, LRCN (with both custom CNN layers and VGG-16 as feature extractor) CNNTransformer and C3D. We achieved a test accuracy of 80% on ConvLSTM, 83.33% on CNN-BiLSTM, 70% on VGG16-BiLstm,76.76% on CNN-Transformer and 80% on C3D. Deep learning; Video classification; Neural network; Attention based encoder; Violence detection; LRCN; ConvLSTM; Transformer; C3D ## I Introduction Globalization resulted in a more advanced world with cutting-edge technologies and innovations. Along with advancements, it also led to an increase in criminal activity around the world. This made crime detection and its prevention a dire need in today's time. Video surveillance systems proved to be an efficient method for monitoring crimes as they can record human activities in real-time. Installing a camera in a location acts as an eye that monitors the area and helps to provide security to those who are within its range. The goal of surveillance systems deployed in schools, hospitals, parks, prisons, banks, markets, streets, etc. is to detect hostile activities and alert the concerning authorities about the situation. These systems give crucial data gathered straight from the action scene. However, the volume of video recordings might quickly overwhelm human operators. Distinguishing between violent and non-violent scenes at times becomes difficult and time-consuming, resulting in little to no manual response at all. Moreover, activities such as jogging, dancing, or face-to-face conversations, appear to be extremely comparable to aggressive conduct. As a result, significant research effort was dedicated to inventing systems that proactively interprets data in an attempt to detect anomalous behavior, alert automatically, and securely delete unnecessary in- formation. Although these researches do not include activities such as fist bumps, high fives, hugs, and so on which results in false positive predictions. A variety of Deep Learning algorithms are developed to study human actions in real life. 
HAR is a popular time series classification problem in which data from multiple time step is used to accurately classify the actions being performed. Moreover, for video classification tasks, developing an image classifier will not be feasible as it tries to predict every single frame, resulting in prediction flickering. On top of that, image classifiers do not take environmental context into consideration. Therefore, keeping these constraints in mind, we proposed deep learning models that can be easily integrated with IOT such as LRCN (both custom CNN and pre-trained), C3D, ConvLSTM, and CNN-Transformers to help automate surveillance systems. These models when deployed, are trained to have high accuracy and help classify non-violent behaviors from violent ones. The key contribution to the research are these models are lightweight and easily deployable, trained on real life gestures that can result on false positive alarms and have lower inference time. The primary objectives of this research are as follows: * To create deep learning models to detect violent or non violent human behavior in an automated way. * To process raw CCTV footage to extract features so that it can help us classify normal human activity from abnormal human activity. * To make a real-time monitoring system with the help of IoT that will be cost-efficient, safe, and more effective than the existing surveillance systems. ## II Related Works: In [1], a deep learning classifier was proposed to detect abnormal crowd activities. To extract the features of videos collected from movies, researchers used Fully Convolutional Neural Networks (FCN). A pre-trained FCN was fed the optical flow of two successive video frames as well as the individual frames, derived from AlexNet to obtain more valuable appearance and motion information. Their method provided both spatial and temporal continuity. They replaced the two-stream CNN architecture with a two-stream FCN one. They designed a simple but effective method for encoding highdimensional feature maps and then used binary codes to find patterns. To determine the degree of abnormality in the video, the abnormal coefficient was calculated using an iterative quantization (ITQ) method based on the feature map from the FCN. In paper [2], the researchers presented a low latency human detector for unmanned aerial vehicles (UAVs) with optical flow and CNNs. The proposed method included quick ROI generation and extraction and a two-stream CNN classifier to detect running people by distinguishing appearance and motion features from walking people or other interferences. They came up with ROIs for human categorization in real time. The optical flow was calculated with every two successive frames to locate the candidate targets quickly. A series of preprocessing techniques were used to extract the ROIs, Including morphological expansion, spatial average filtering, and outer contour extraction. A small-kernel CNN was presented to accurately recognize running humans in varied backgrounds. The small kernel reduced inference time. Field experiments and benchmark testing revealed that their system could recognize moving people at 15 FPS with an accuracy of 81.1% in complex environments for UAV scenes. Two Stacked Denoising Autoencoder (SDAE) network was used to learn appearance and motion features from videos to reduce computational complexity in [3]. The first SDAE was used to capture static appearance clues. The second SDAE extracted motion features. 
Using two SDAEs, they extracted depth appearance features and depth motion features of the trajectories. They used the bag of word method to build two vocabularies. At first, they extracted deep motion features, which were then clustered. To get the most compact vocabulary, the Agglomerative Information Bottleneck approach was used. To reduce the vocabulary and minimize mutual information loss, their approach repeatedly combined two visual terms. Adaptive feature fusion methods improved the distinctiveness of these features. The researchers used Sparse representations to detect abnormal behaviour and improve detection accuracy. In [4], they proposed a model which was able to detect abnormal activity without the presence of an alarming or harmful object itself. The model generated macroblock motion vectors from video compression methods with three purposes which were: requirement of reliability with low false-positive rate, the ability to distinguish between normal and abnormal activity, and identification of abnormal behavior with less computing power. They strictly restricted the use of segmentation and tracking to avoid any kind of semantic interpretation. So, they first converted the video to motion vectors by compressing the data. They used motion vectors to generate motion elements, and the entering frame was assessed to the predictive model, where low probability indicated abnormal activity. They used probability distribution of each frame to prepare a histogram and detected the frame with less occurrence of features below a threshold value as abnormal. The paper [5] suggested a model that was used to detect any suspicious presence with the help of a message alert. Here, they included the background removal technique to improve the detection of the moving object. STIPs are retrieved from depth videos using a compression algorithm that successfully eliminates chaotic readings. The system looked at mainly three limitations like the height, time, and body movement therefore, when all the aspects will be satisfied the person will be identified as a doubtful person. Firstly, the background reduction was done on the raw video data. After which the extracted frame from the video was converted in-frame sequence. This helped them to recognize the actions and when suspicious activities were detected the alarm will go off sending alert messages. In paper[13], Single frame prediction methods were utilized to automate surveillance systems to identify unusual behavior. The research integrated transfer learning on models pre trained on imagenet to classify anomalous behaviors ranging from pickpocketing, burglary, breaking in, and so on. The paper [14] designed spatial-temporal CNN frameworks to generate high-level features from both spatial and temporal dimensions. The model used visual data from a single, static image as well as dynamic volume data from consecutive frames to identify and classify anomalous events in overcrowded surveillance videos. The model was only used on spatial-temporal volumes of interest (SVOI) to optimize operational cost. In another paper, [15] the Deep Belief Networks (DBNs) architecture was used to learn robust features for pattern recognition where an unstructured DBN was developed to retrieve generalized underlying characteristics and a one-class SVM was taught to use the DBN's features that learned to retrieve properties that were taught to compress high-dimensional inputs into a low-dimensional component set using a non-linear dimensionality reduction approach. 
## III Work Plan Our research aims to create deep learning models that can classify CCTV video footage into violent or nonviolent. We aim to train the models to consider classifying the entire video on a specific time frame as opposed to every single frame separately. Single frame classifications result in flickering and cause high false alarms, so we explore techniques that will classify images in a sequence. ## IV Methodology ### _Dataset Description_ We used a research dataset created by Bianculi [6] et al. as our input data. In this dataset, there are 350 clips that are in MP4 format (H.24 codec) where 120 clips contain nonviolent videos and 230 clips contain violent videos. The frame rate per video is 30 fps, and the resolution is fixed to 1920x1080 pixels for all the clips. The videos which are labeled as violent include generic hostile behaviors in public places whereas the nonviolent videos include normal gestures as well as the ones that are similar to the hostile ones. The average length of these videos is 5.63 seconds, where the largest is 14 seconds, and the shortest is 2 seconds. The characters in the videos are actors who acted out as per the requirements. In order to differentiate between the two behaviors, individuals were asked to indulge in actions that include kicking, slapping, punching, beating, firing guns, stabbing, hugging, clapping, being euphoric, exchanging high fives and waving at each other. These actions not only helped to detect abnormal behaviors but also prevented the results from being false positives which in turn assured more accurate results. ### _Algorithm Description_ #### Iv-B1 Long-term Recurrent Convolutional Network The idea of LRCN [7] is to extract spatial features from each frame using convolution neural networks. The outputs of these convolutional networks are passes into a Bi-LSTM network, which fuses temporal features onto extracted spatial features to classify them. For our research, we built a custom CNN feature extractor model as well as experimented with a pre-trained model like VGG-16. Both the models take 90x90 pixels as input. For the custom CNN model, the convolutional layer is modeled to take the frames as inputs, perform operations using convolutional filters which are a matrix (in our case of 3x3 size) with a random set of values that convolve over the image and compute the dot operation and then pushes the output onto the next layer.The following equations (1,2,3,4) summarize an input frame and generate an output matrix by conducting convolution across \(k\) channels: \[A_{o}^{(m)}=g_{m}\underset{k}{\big{(}}\quad\underset{\begin{subarray}{c} \alpha k\\ k\end{subarray}}{W^{(m)}}*A_{k}^{(m-1)}+b_{o}^{(m)}\big{)} \tag{1}\] \[W-ok*A_{k}[s,t]=\underset{pq}{a}*b \tag{2}\] \[a=A_{k}[s+p,t+q] \tag{3}\] \[b=W_{ok}[P-1-p,Q-1-q] \tag{4}\] Following each convolutional layer, max pooling, the number of parameters is shrunk in the network that cuts the convolutional load which is depicted in the preceding equations (5,6). The output is finally flattened. We used the following two activation functions in our model, Rectified Linear Unit (ReLU) [7] depicted in equation (7) and SoftMax [11] which converts a system's output into a probability distribution across expected classes. For the VGG-LSTM model, we used a VGG-16 network trained on ImageNet to extract features by importing the model and excluding the top. 
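A minimal Keras-style sketch of this VGG16-BiLSTM variant is shown below; the layer sizes follow the description in the next sentences, while the optimizer and loss settings are assumptions rather than the authors' exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C, NUM_CLASSES = 25, 90, 90, 3, 2

# Pre-trained VGG-16 backbone (ImageNet weights, top removed) used as a per-frame feature extractor.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_shape=(H, W, C), pooling="avg")
vgg.trainable = False

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, C)),
    layers.TimeDistributed(vgg),              # spatial features, one vector per frame
    layers.Bidirectional(layers.LSTM(256)),   # temporal fusion across the 25 frames
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",               # optimizer choice is an assumption
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```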
Then it was wrapped in a time distributed layer which was then passed to a bidirectional lstm of 256 filter followed by a dense layer of 256 filters with ReLu activation and the output layer with 2 neurons. The output vector from both the models is then passed to the time distributed layer which makes the models do the convolutional operations across a defined time-step so that the Bidirectional LSTM can learn the changes of the spatial features of the video frames and temporal weights across the defined time-step as well as learn the temporal changes better as all the hidden states contain information of both past and future. Finally, the output of the Bi-LSTM is passed to a Fig. 1: The flow chart of the proposed hostile behavior detection model Fig. 2: This is the list of the recorded actions, with the number of occurrences in the dataset dense layer with two heads at the end. The SoftMax activation function is utilized to replicate the 1-0 impulse carried away as an activation function.The layer has a bias parameter, b which is shown in the equations (8,9). \[y=A\underset{i}{\cdot}x+b \tag{5}\] \[y_{i}=\underset{j=1}{\cdot}\left(A_{ij}x_{j}\right)+b_{i} \tag{6}\] \[y=max(0,x) \tag{7}\] \[y=A\underset{i}{\cdot}x+b \tag{8}\] \[y_{i}=\underset{j=1}{\cdot}\left(A_{ij}x_{j}\right)+b_{i} \tag{9}\] #### Iii-B2 Convolutional Long Short Term Memory Since LSTM alone cannot deal with spatial data, we have also used ConvLSTM as it can be used for both the time and spectral domain. This is achieved by using ConvLSTM with 3D tensor inputs, cell outputs, hidden states and gates whose final dimensions are spatial. Although ConvLSTM has the same structure as LSTM, their main difference lies in the input-to-state and state-to-state transitions. In the following equations(10,11,12,13,14,15), the '\(\sigma\)', '\({}^{*}\) and '\({}^{*}\)' denotes the activation function, convolution operator and Hadamard product: \[i_{t}=\sigma(w_{xi}x_{t}+w_{hht-1}+w_{ct}\ \ ^{\circ}\ \ c_{t-1}+b_{i}) \tag{10}\] \[f_{t}=\sigma(w_{yi}x_{t}+w_{hft}t-1+w_{cf}\ \ ^{\circ}\ \ c_{t-1}+b_{f}) \tag{11}\] \[\tau_{t}=tanh(w_{x0}x_{t}+w_{h0}h_{t-1}+b_{c}) \tag{12}\] \[c_{t}=f_{t}\ ^{\circ}\ c_{t-1}+i_{t}\ ^{\circ}\ c_{t} \tag{13}\] \[o_{t}=\sigma(w_{x0}x_{t}+w_{h0}h_{t-1}+w_{co}o\tau_{t}+b_{o}) \tag{14}\] \[h_{t}=o_{t}tanh(c_{t}) \tag{15}\] From the expressions above, we can say \(C_{t-1}\) is the current position where \(X_{t}\) is its input and \(H_{t-1}\) is the state and result of the final neuron. The convolution filter is 2- dimensional with a \(k\) x \(k\) kernel where the dimension of the convolutional kernel is denoted by \(k\). The ConvLSTM takes the frames of the video as the input and the multidimensional convolution operates on each frame to extract the features. Unlike the CNN model, ConvLSTM can transfer and process data in both, interlayer as well as the intro-layer making it more efficient to extract features compared to CNN. #### Iii-B3 3D Convolutional Neural Networks The 3D Convolutional Neural Networks (C3D) [8] extract temporal and spatial features from video clips unlike the 2D-CNNs. This is because 2D convolution on a video segment squeezes the temporal features after convolving which results in an overall feature map with no dynamic depiction. In order to produce the 3D cube to obtain the 3D convolution, a 3D filter kernel is combined by stacking a number of frames together. 
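A compact Keras-style sketch of such a C3D network is given below, following the layer configuration detailed in the next paragraph; the width of the dense layer is an assumption, as it is not specified exactly:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: 25 frames of 90x90 RGB images, i.e. (frames, height, width, channels).
c3d = models.Sequential([
    layers.Input(shape=(25, 90, 90, 3)),
    layers.Conv3D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),     # width assumed; the text only states "dense layers"
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),
])
```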
The 3D CNN is designed in such a way that multiple feature maps can be generated in the later layers by placing them in the same location in the previous layers. The input dimensions for the videos are frames x height x width x channel in the manner: 25 x 90 x 90 x 3. The first 3D convolutional layer has 64 filters followed by a ReLu activation function. This is followed by a max pooling which estimates the highest value within every feature map patch and pools the most prominent feature in each patch. This is followed by another similar 3D convolutional layer as the first one with 32 filters with ReLu activation followed by another similar max pooling layer. The ConvNets extract the graphical properties of an image and organize them in a low-level representation such as a vector. The 3D CNN finds the vector for a stack of images to label the input video correctly. The flatten layers turn the input into a one dimensional vector output and pass it to the dense layers while adding weights to each data and classifying them followed by a dropout of 0.5. #### Iii-B4 Convolutional Neural Network Transformer The Conv2D extracts spatial features, wraps them around a time distributed layer passes the output to the transformer. Transformers [12] use the attention-mechanism and are a sequence to sequence model, with an encoder and a decoder. Unlike other sequence models, they do not not use any Recurrent Neural Networks. The structure of encoder and decoder allows the layering of modules several times on each other. The modules primarily consist of Feed Forward and Multi-Head Attention layers. In the embedded vector representation of each video, it is necessary to add the relative position. The equations (16, 17) define the process explained above where \(V\), \(Q\) and \(K\) refers to the values, query and keys of all the videos in the sequence. The attention module consists of different video sequences for \(V\) and \(Q\) whereas the multi-head attention mode contains the same video sequence for both. The attention module does so by multiplying and adding the values in V with weights, called attention-weights as shown in equation (18): \[Attention(Q,K,V)=softmax\ \ \frac{QK^{T}}{\sqrt[]{d_{k}}}V \tag{16}\] \[a=softmax\ \frac{QK^{T}}{\sqrt[]{d_{k}}}V \tag{17}\] The weights represent the influence of the video sequence of Q on that of \(K\). The purpose of implying the SoftMax function is to create dissemination of 0 and 1. Repeating the attention mechanism several times enables the model to adapt and learn the various orientations of \(V\), \(Q\), and \(K\). While training, the model learns the weight matrices, \(W\), and multiplying them with \(V\), \(Q\) and \(K\) results in the linear orientation of the video sequences. Each position of the attention module contains different matrices of \(V\), \(Q\) and \(K\). This is because at a time, we can focus only on the entire input sequence of the encoder or a portion of input sequence of the decoder. The input sequence from the encoder and decoder is connected together up to a position by the multi-head attention module. The next layer is the feed-forward which is a pointwise layer. This means that the network contains identical parameters at each point, and each video from the provided sequence is a distinct, identical linear transformation. ## V Implementation and Results ### _Implementation_ To train our dataset in the desired models, we had to first preprocess the data. 
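A minimal sketch of this preprocessing step (using OpenCV; the frame-sampling helper and label encoding shown here are illustrative, following the settings described next) could look as follows:

```python
import cv2
import numpy as np

SEQ_LEN, IMG_SIZE = 25, 90   # 25 frames per clip, resized to 90x90 pixels

def extract_frames(video_path):
    """Sample SEQ_LEN evenly spaced frames, resize them, and normalize pixel values to [0, 1]."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // SEQ_LEN, 1)
    frames = []
    for i in range(SEQ_LEN):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * step)
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (IMG_SIZE, IMG_SIZE)) / 255.0)
    cap.release()
    return np.asarray(frames, dtype=np.float32)

def one_hot(label, num_classes=2):
    """Label encoding: 0 = non-violent, 1 = violent."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[label] = 1.0
    return v
```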
This step included extracting frames from each video, rescaling the frames, normalizing the input data and, lastly, applying one-hot encoding. We extracted 25 frames per video to get a good training accuracy and rescaled them to ensure the images are all of the same size. In the preprocessing step, we labelled the nonviolent data as 0 and violent as 1. After this step, the dataset was divided into three sets, which are the train set, test set and validation set. Finally, we embedded the dataset into our models, which are C3D, ConvLSTM, CNN-Transformer and LRCN (CNN-BiLSTM, VGG-BiLSTM). ### _Result Analysis_ #### V-B1 Accuracy and Quality Scores Amongst all the models, the CNN-Transformer model performs the most optimally, with an F1 score of 0.74 for the non-violent class and 0.79 for the violent class. The feature maps show that the convolutional layers extract features such as edges and lines and form a separated representation of the frames. The brightened-up areas of the images are feature maps that are passed on to the top layers so the models can learn the useful information. #### V-C4 Confusion Matrices In Figure 6, (0,0) represents true positives for the non-violent class; (0,1) false positives for the non-violent class; (1,0) false positives for the violent class; and (1,1) true positives for the violent class. The confusion matrix for CNN-Transformer shows that it can successfully identify 23 out of 30 cases, ConvLSTM identifies 24 cases successfully, LRCN and VGG-BiLSTM identify 25 and 21 cases respectively, and C3D identifies 23 cases in total. The models falsely identify the rest of the cases. ## VI Conclusion and Future Plan The importance of surveillance systems has been reduced due to their inability to notify the authorities in a timely manner after detecting a crime. To eradicate this issue, we have proposed models that can help prevent crime by tracking and analyzing the footage from CCTV cameras in real time. Moreover, even if a security officer misses a notification at the moment, the investigation team can afterward use the time at which the notification was sent to find the recording of that exact time. This will make the job of the investigating team easier and help in prevailing justice for the victims. We aim to optimize our models by introducing more samples for the nonviolent class. After optimization, our task is to deploy our deep-learning models on a Jetson Nano rather than a Raspberry Pi, because it has a higher RAM capacity, a better CPU, and 128 CUDA cores, which will be very useful for running the models.
2302.09668
Physics-aware deep learning framework for linear elasticity
The paper presents an efficient and robust data-driven deep learning (DL) computational framework developed for linear continuum elasticity problems. The methodology is based on the fundamentals of the Physics Informed Neural Networks (PINNs). For an accurate representation of the field variables, a multi-objective loss function is proposed. It consists of terms corresponding to the residual of the governing partial differential equations (PDE), constitutive relations derived from the governing physics, various boundary conditions, and data-driven physical knowledge fitting terms across randomly selected collocation points in the problem domain. To this end, multiple densely connected independent artificial neural networks (ANNs), each approximating a field variable, are trained to obtain accurate solutions. Several benchmark problems including the Airy solution to elasticity and the Kirchhoff-Love plate problem are solved. Performance in terms of accuracy and robustness illustrates the superiority of the current framework showing excellent agreement with analytical solutions. The present work combines the benefits of the classical methods depending on the physical information available in analytical relations with the superior capabilities of the DL techniques in the data-driven construction of lightweight, yet accurate and robust neural networks. The models developed herein can significantly boost computational speed using minimal network parameters with easy adaptability in different computational platforms.
Arunabha M. Roy, Rikhi Bose
2023-02-19T20:33:32Z
http://arxiv.org/abs/2302.09668v1
# Physics-aware deep learning framework for linear elasticity ###### Abstract The paper presents an efficient and robust data-driven deep learning (DL) computational framework developed for linear continuum elasticity problems. The methodology is based on the fundamentals of the Physics Informed Neural Networks (PINNs). For an accurate representation of the field variables, a multi-objective loss function is proposed. It consists of terms corresponding to the residual of the governing partial differential equations (PDE), constitutive relations derived from the governing physics, various boundary conditions, and data-driven physical knowledge fitting terms across randomly selected collocation points in the problem domain. To this end, multiple densely connected independent artificial neural networks (ANNs), each approximating a field variable, are trained to obtain accurate solutions. Several benchmark problems including the Airy solution to elasticity and the Kirchhoff-Love plate problem are solved. Performance in terms of accuracy and robustness illustrates the superiority of the current framework showing excellent agreement with analytical solutions. The present work combines the benefits of the classical methods depending on the physical information available in analytical relations with the superior capabilities of the DL techniques in the data-driven construction of lightweight, yet accurate and robust neural networks. The models developed herein can significantly boost computational speed using minimal network parameters with easy adaptability in different computational platforms. Keywords: Physics Informed Neural Networks (PINNs); Artificial neural networks (ANNs); Linear elasticity; Bi-harmonic equations; Deep learning (DL) ## 1 Introduction : In recent years, driven by the advancement of bigdata-based architectures (Khan et al., 2022), deep learning (DL) techniques (LeCun et al., 2015) have shown great promises in computer vision (Voulodimos et al., 2018; Roy and Bhaduri, 2021; Roy et al., 2022; Roy and Bhaduri, 2022; Roy et al., 2022), object detection (Zhao et al., 2019; Chandio et al., 2022; Roy et al., 2022; Singh et al., 2023), image classification (Rawat and Wang, 2017; Irfan et al., 2021; Jamil et al., 2022; Khan et al., 2022), damage detection (Guo et al., 2022; Glowacz, 2022, 2021) brain-computer interfaces (Roy, 2022, 202, 2022, 2023) and across various scientific applications (Butler et al., 2018; Ching et al., 2018; Bose and Roy, 2022). The success of these methods, such as various classes of Neural Networks (NNs), can be largely attributed to their capacity in excavating large volumes of data in establishing complex high-dimensional non-linear relations between input features and output (Kutz, 2017). However, the availability of sufficient data is a major bottleneck for analyzing various complex physical systems (Butler et al., 2018; Ching et al., 2018). Consequently, the majority of state-of-the-art machine learning algorithms lack robustness in predicting these systems. 
Upon availability of sufficient data, these have also garnered considerable success in problems governed by physics, such as dynamical systems (Dana and Wheeler, 2020), geosciences (DeVries et al., 2018; Bergen et al., 2019; Racca and Magri, 2021; Saha et al., 2021; Jahanbakht et al., 2022), material science and informatics (Butler et al., 2018; Ramprasad et al., 2017; Batra et al., 2021; Maatta et al., 2021), fluid mechanics (Kutz, 2017; Brunton et al., 2020), various constitutive modeling (Tartakovsky et al., 2018; Xu et al., 2021), etc. Their applicability however may be further enhanced by utilizing physical information available by mathematical/ analytical means. The recent endeavor of scientific and engineering community has been in attempting to incorporate such physical information within their predictive scheme in small data regimes. The incorporation of physical information into the DL framework may have several advantages. First, as previously mentioned, in absence of sufficient data, it may be possible to solely utilize physical knowledge for solving such problems (Raissi et al., 2019), or to the least, enhance solutions in a data-driven predictive scheme (Raissi et al., 2020; Karniadakis et al., 2021). For example, in Sirignano and Spiliopoulos (2018), a high-dimensional Hamilton-Jacobi-Bellman PDE has been solved by approximating the solution with a DNN trained to satisfy the differential operator, initial condition, and boundary conditions. In incompressible fluid mechanics, the use of the solenoidality condition of the velocity fields restricts the solution space of the momentum equations. Therefore, this condition may be used as a constraint for solving the governing equations (conventional solvers are generally developed in a way to satisfy this constraint through the Poisson equation for pressure), or at least improve the predictions in a data-driven approach. Second, physical systems are often governed by laws that must satisfy certain properties, such as invariance under translation, rotation, reflection, etc. In a purely data-driven approach, it is almost impossible for a DL algorithm to inherit those properties entirely from data without explicit external forcing. Embedding such properties in the DL algorithm might automatically improve the accuracy of the predictions. For example, Ling et al. (2016) used a Tensor-based Neural Network (TBNN) to embed Galilean invariance that improved NN models for Reynolds-averaged Navier Stokes (RANS) simulations for the prediction of turbulent flows. And lastly, any scientific problem is governed by some underlying mechanism dictated by physical laws. Neglect of such physical information in a purely data-driven framework in the current state of affairs is, therefore, an unsophisticated approach, if not an ignorant one. Partial differential equations (PDEs) represent underlying physical processes governed by first principles such as conservation of mass, momentum, and energy. In most cases, analytical solutions to these PDEs are not obtainable. Various numerical methods, such as finite-difference (Sengupta, 2013), finite element (FE) (Zienkiewicz and Taylor, 2005), Chebyshev and Fourier spectral methods (Boyd, 2001), etc are used to obtain approximate solutions. However, such techniques are often computationally expensive and suffer from various sources of errors due to the complex nature of the underlying PDEs, numerical discretization and integration schemes, iterative convergence techniques, etc. 
Moreover, the solution of inverse problems is the current endeavor of the engineering community which requires complex formulations and is often prohibitively expensive computationally. The use of the NNs in solving/modeling the PDEs governing physical processes in a forward/ inverse problem is an important challenge worth pursuing, as these methods have the capacity to provide accurate solutions using limited computational resources in a significantly robust framework relative to the conventional methods. In this paper, we explore the possibility of using NN to obtain solutions to such PDEs governing linear continuum elasticity problems applicable in solid mechanics. There has been a recent thrust in developing machine learning (ML) approaches to obtain the solution of governing PDEs (Karniadakis et al., 2021; von Rueden et al., 2019). The idea is to combine traditional scientific computational modeling with a data-driven ML framework to embed scientific knowledge into neural networks (NNs) to improve the performance of learning algorithms (Lagaris et al., 1998; Raissi and Karniadakis, 2018; Karniadakis et al., 2021). The _Physics Informed Neural Networks_ (PINNs) (Lagaris et al., 1998; Raissi et al., 2019, 2020) were developed for the solution and discovery of nonlinear PDEs leveraging the capabilities of deep neural networks (DNNs) as universal function approximators achieving considerable success in solving forward and inverse problems in different physical problems such as fluid flows (Sun et al., 2020; Jin et al., 2021), multi-scale flows (Lou et al., 2021), heat transfer (Cai et al., 2021; Zhu et al., 2021), poroelasticity (Haghighat et al., 2022), material identification (Shukla et al., 2021), geophysics (bin Waheed et al., 2021, 2022), supersonic flows (Jagtap et al., 2022), and various other applications (Waheed et al., 2020; Bekar et al., 2022). Contrary to traditional DL approaches, PINNs force the underlying PDEs and the boundary conditions in the solution domain ensuring the correct representation of governing physics of the problem. Learning of the governing physics is ensured by the formulation of the loss function that includes the underlying PDEs; therefore labeled data to learn the mapping between inputs and outputs is no more necessary. Such architectural construction can be utilized for complex forward and inverse (finding parameters) solutions for various systems of ODEs and PDEs (Karniadakis et al., 2021). Additionally, the feed-forward neural networks utilize graph-based automated differentiation (AD) (Baydin et al., 2018) to approximate the derivative terms in the PDEs. Various PINNs architectures notably self-adaptive PINNs (McClenny and Braga-Neto, 2020), extended PINNs (XPINN) (Hu et al., 2021; De Ryck et al., 2022) have been proposed that demonstrated superior performance. Moreover, multiple DNN-based solvers such as cPINN (Jagtap et al., 2020), XPINNs (Jagtap and Karniadakis, 2021), and PINNs framework for solid mechanics (Haghighat et al., 2021) have been developed that provide important advancement in terms of both robustness and faster computation. In this regard, (Haghighat et al., 2020, 2021) have been the breakthrough works geared towards developing a DL-based solver for inversion and surrogate modeling in solid mechanics for the first time utilizing PINNs theory. 
Additionally, PINNs have been successfully applied to the solution and discovery in linear elastic solid mechanics (Zhang et al., 2020; Samaniego et al., 2020; Haghighat et al., 2021; Guo and Haghighat, 2020; Vahab et al., 2021; Rezaei et al., 2022; Zhang et al., 2022), elastic-viscoplastic solids (Frankel et al., 2020; Goswami et al., 2022; Arora et al., 2022; Roy and Guha, 2022), brittle fracture (Goswami et al., 2020), and computational elastodynamics (Rao et al., 2021), etc. The solution of PDEs corresponding to elasticity problems can be obtained by minimizing the network's loss function that comprises the residual error of the governing PDEs and the initial/boundary conditions. In this regard, PINNs can be utilized as a computational framework for the data-driven solution of PDE-based linear elasticity problems that can significantly boost computational speed with limited network parameters. The potential of the PINNs framework in achieving computational efficiency beyond the capacity of conventional computational methods for solving complex problems in linear continuum elasticity is the main motivation behind the present work. In the present work, an efficient data-driven deep learning computational framework is presented based on the fundamentals of PINNs for the solution of the linear elasticity problem in continuum solid mechanics. In order to efficiently incorporate physical information for the elasticity problem, an improved multi-objective loss considering additional physics-constrained terms has been carefully formulated; it consists of the residual of the governing PDE, various boundary conditions, and data-driven physical knowledge fitting terms, and it demonstrates the efficacy of the model by accurately capturing the elasticity solution. Several benchmark problems, including the Airy solution to an elastic plane-stress problem for an end-loaded cantilever beam and a simply supported rectangular Kirchhoff-Love thin plate under transverse sinusoidal loading, have been solved, which illustrates the superiority of the proposed model in terms of accuracy and robustness by revealing excellent agreement with analytical solutions. The employed models consist of independent multi-layer ANNs that are separately trained on minimizing the prescribed loss function specific to the problem under consideration. The performance of PINNs has been evaluated for different activation functions and network architectures. Furthermore, we have illustrated the applicability of data-driven enhancement using the smart initialization of a data-driven learning-based approach in reducing training time, while simultaneously improving the accuracy of the model, which is not possible in conventional numerical algorithms. Such an approach would be important in achieving computational efficiency beyond the capacity of conventional computational methods for solving complex linear elasticity problems. The present study also demonstrates the contribution of analytical solutions to the data-driven construction of an accurate and robust PINNs framework that can significantly boost computational speed utilizing minimal trainable network parameters. The paper is organized as follows: Section 2 introduces the background of PINNs theory and the generalized idea of implementing the multi-objective loss function in the PINNs framework; Section 3 presents a brief overview of the theory of linear elasticity; Section 4 introduces the extension of the proposed PINNs framework to the Airy solution of an elastic plane-stress problem for an end-loaded cantilever beam; Section 5 extends the proposed PINNs framework to the solution of the Kirchhoff-Love thin plate governed by the biharmonic PDE; Section 7 deals with the relevant findings and prospects of the current work. Finally, the conclusions are discussed in Section 8. ## 2 Physics-Informed Neural Networks : The key concept in training a NN in the PINNs framework is the construction of the loss function. The loss function is intended to embed the underlying physics which is represented in mathematical terms by the PDEs and the associated boundary conditions.
In this section, we discuss the construction of the proposed multi-object loss functions for embedding a data-driven phys ical model that has been associated with the PINNs framework. Let us consider a fully connected NN defined by \[\mathscr{N}^{k+1}(\mathscr{N}^{k})=\varkappa^{k}(\boldsymbol{W}^{k}\cdot\mathscr{ N}^{k}+\boldsymbol{b}^{k}) \tag{1}\] where \(k\in\{0,1,\cdots,N\}\) represents the layer number of NN. \(\mathscr{N}\) is a nonlinear map defined by \(\mathscr{N}^{m}(\hat{\boldsymbol{x}}^{m}):=\varkappa^{m}(\boldsymbol{W}^{m} \cdot\boldsymbol{x}^{m}+\boldsymbol{b}^{m})\) for \(m^{th}\)-layer where \(\boldsymbol{W}^{m}\) and \(\boldsymbol{b}^{m}\) represents the weights and biases of this transformation, respectively; \(\varkappa(\cdot)\) is the non-linear transformer or activation function acting on a vector element-wise. Therefore, \(k=0\) represents the input layer of the NN taking in the input \(\boldsymbol{x}^{0}\). Also consider a steady state general nonlinear partial differential operator \(\mathscr{G}\) operated on a scalar solution variable \(\phi(\vec{x})\) such that, \[\mathscr{G}\phi(\vec{x})=0\hskip 28.452756pt\vec{x}\in\mathbb{R}^{n_{dim}} \tag{2}\] Since \(\mathscr{G}\) is a differential operator, in general, Eq. 2 is accompanied by appropriate boundary conditions to ensure the existence and uniqueness of a solution. Let us assume, it is subjected to the boundary condition \(\mathscr{B}\,\phi(\partial\vec{\Gamma})=\tau(\partial\vec{x})\) on the boundary \(\vec{\Gamma}\) in domain \(\Omega\in\mathbb{R}^{n_{dim}}\), \(n_{dim}\) being the spatial dimension. In a PINNs framework, the solution to Eq. 2, \(\phi(\boldsymbol{x})\), subjected to the aforementioned boundary condition may be approximated for an input \(\boldsymbol{x}=\vec{x}\) by constructing a feed-forward NN expressed mathematically as (3) where \(\hat{\phi}\) is the approximate solution to Eq. 2; \(\hat{\circ}\) denotes the general compositional construction of the NN; the input to the NN \(\mathscr{N}^{0}:=\boldsymbol{x}^{0}=\vec{x}=(x_{1},x_{2},\cdots x_{n_{dim}})\) is the spatial coordinate at which the solution is sought. Following Eq. 1 and Eq. 3, if \(\boldsymbol{W}^{i}\) and \(\boldsymbol{b}^{i}\) are all collected in \(\theta=\bigcup_{i=0}^{N}(\boldsymbol{W}^{i},\,\boldsymbol{b}^{i})\), the output layer \(\mathscr{N}^{N}\) contains the approximate solution \(\hat{\phi}(\vec{x})\) to the PDE such that \[\mathscr{N}^{k+1}=\hat{\phi}\left[\boldsymbol{x},\theta\right]=\left[\hat{ \phi}_{1},\hat{\phi}_{2},...,\hat{\phi}_{m}\right] \tag{4}\] The spatial dependence of \(\hat{\phi}\) is implicitly contained in the NN parameter \(\theta\). In the internal/ hidden layers of NN, several variations of nonlinear transformer or the activation function \(\varkappa\) may be used, such as, the hyperbolic-tangent function \(\tanh(\xi)\), the sigmoid function \(\varkappa(\xi)=1/(1+e^{-\xi})\), the rectified linear unit (ReLU) \(\varkappa(\xi)=\max(0,\xi)\), etc. The activation in the final layer is generally taken to be linear for regression-type problems considered here. ### Embedding constraints in NN : This section briefly describes the general idea of embedding linear constraints into NN (Lagaris et al., 1998; Du and Zaki, 2021). Let us consider \(\mathbb{U}\) and \(\mathbb{A}\), two complete normed vector spaces, where NN function class \(\mathbb{M}\subset\mathbb{U}\) need to be constrained. 
A linear constraint on \(\phi\in\mathbb{M}\) can be expressed as: \[\mathscr{P}\phi(\mathbf{x})=0,\quad\phi\in\mathbb{M} \tag{5}\] where, \(\mathscr{P}:\mathbb{U}\to\mathbb{A}\) expresses a linear operator on \(\mathbb{U}\). Generally, a such constraint can be realized for solving PDEs in most of the DL framework by minimizing the following functional \[\mathscr{J}_{A}=\|\mathscr{P}\phi\|_{\mathbb{A}},\quad\phi\in\mathbb{M} \tag{6}\] where \(\|\centerdot\|_{\mathbb{A}}\) denotes the norm corresponding to space \(\mathbb{A}\). It is noteworthy to mention that the aforementioned procedure approximately enforces linear constraint in Eq. 5. However, the accuracy of the imposed constraint relies on the relative weighting between the constraint and other objectives involved in the training include the satisfaction of the governing PDEs or the integration of data-driven schemes. ### Multiple objective loss functions : In order to incorporate physical information of the problem, one of the possibilities is to impose Eq. 2 as a _hard constraint_ in \(\mathbf{x}\in\Omega\) while training the NN on the physical data. Mathematically, such a condition is imposed by formulating a constrained optimization problem which can be expressed as (Krishnapriyan et al., 2021), \[\min_{\theta}\Delta_{\mathcal{L}}(\mathbf{x},\theta)\quad\text{s.t.}\quad\mathscr{ G}\phi(\vec{x})=0. \tag{7}\] where \(\Delta_{L}\) represents data-driven physical knowledge fitting term which includes the imposed initial and boundary conditions. \(\mathscr{G}\phi(\vec{x})\) denotes the constraint corresponding to the residual PDE imposing the governing PDE itself. Thus, it is important to carefully impose appropriate constraints for the NN to realize the underlying physics of the problem. In the present work, we propose a multi-objective loss function that consists of residuals of governing PDEs, various boundary conditions, and data-driven physical knowledge fitting terms that can be expressed in the following general form: \[\Delta_{\mathcal{L}}(\boldsymbol{x},\theta)=\varphi\|\mathscr{G}\phi(\boldsymbol {x})-\hat{0}\|_{\Omega}+\beta_{u}\|\mathscr{B}^{\Gamma_{u}}\phi-g^{\,\Gamma_{ u}}\|_{\Gamma_{u}}+\beta_{t}\|\mathscr{B}^{\Gamma_{t}}\phi-g^{\,\Gamma_{t}}\|_{ \Gamma_{t}}+\alpha\|\phi-\tilde{\phi}\|_{\Omega}+\cdots \tag{8}\] where, \(\Delta_{\mathcal{L}}(\boldsymbol{x},\theta)\) is the total loss function; the symbol \(\|\odot\|\) represents the mean squared error norm, i.e., \(\|\bigodot\|=MSE(\bigodot)\) for regression type problem; \(\|\mathscr{G}\phi(\boldsymbol{x})-\hat{0}\|_{\Omega}\) denotes the residual of the governing differential relation in Eq. 2 for \(\boldsymbol{x}\in\Omega\); \(\Gamma_{u}\) and \(\Gamma_{t}\) are the Dirichlet and Neumann boundaries subjected to conditions \(\mathscr{B}^{\Gamma_{u}}\phi=g^{\,\Gamma_{u}}\) and \(\mathscr{B}^{\Gamma_{t}}\phi=g^{\,\Gamma_{t}}\), respectively. The values of \(g^{\,\Gamma_{u}}\) and \(g^{\,\Gamma_{t}}\) are specific to the problem under consideration, and therefore, pre-specified as inputs to the problem/ loss function. Note \(\varphi\), \(\beta_{u}\), and, \(\beta_{t}\), are weights associated with each loss term regularizing the emphasis on each term (the higher the relative value, the more emphasis on satisfying the relation). The remaining task is to utilize standard optimization techniques to tune the parameters of the NN minimizing the proposed objective/ loss function \(\Delta_{\mathcal{L}}(\boldsymbol{x},\theta)\) in Eq. 8. 
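To make the structure of Eq. 8 concrete, the following PyTorch-style sketch assembles such a composite loss for a simple one-dimensional Poisson problem, with the PDE residual obtained through automatic differentiation; the weights, helper names, and the choice of model problem are illustrative and not specific to the elasticity formulation developed later:

```python
import torch

def poisson_residual(net, x, f):
    """Mean-squared residual of d2(phi)/dx2 - f(x) = 0 at collocation points x."""
    x = x.clone().requires_grad_(True)
    phi = net(x)
    dphi = torch.autograd.grad(phi, x, torch.ones_like(phi), create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi, x, torch.ones_like(dphi), create_graph=True)[0]
    return ((d2phi - f(x)) ** 2).mean()

def total_loss(net, x_int, f, x_bc, g_bc, x_data, phi_data,
               w_pde=1.0, w_bc=1.0, w_data=1.0):
    """Weighted sum of PDE-residual, boundary, and data-fitting terms, mirroring Eq. 8."""
    l_pde = poisson_residual(net, x_int, f)
    l_bc = ((net(x_bc) - g_bc) ** 2).mean()          # Dirichlet-type boundary term
    l_data = ((net(x_data) - phi_data) ** 2).mean()  # data-driven fitting term
    return w_pde * l_pde + w_bc * l_bc + w_data * l_data
```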
However, even with a large volume of training data, such an approach may not guarantee that the NN strictly obeys the conservation/governing equations in Eq. 2. Thus, additional loss terms to fit the observation data can be introduced. Hence, in the proposed objective loss function, additional loss terms such as \(\|\phi-\bar{\phi}\|_{\Omega}\) have been included that represent the data-driven physical knowledge fitting term for the state variable \(\phi(\vec{x})\). Here, \(\bar{\phi}\) is the true (target) value of \(\phi\) provided from either the analytical solution (if available), numerical simulation, or experimental observations. \(\alpha\) is the weight associated with the data-driven physical knowledge fitting term for \(\phi(\vec{x})\). In the NN approximation, various degrees of differentials of the state variable \(\phi(\boldsymbol{x})\) (i.e., \(\phi^{{}^{\prime}}(\boldsymbol{x})\), \(\phi^{{}^{\prime\prime}}(\boldsymbol{x}),\cdots\) ) can also be included (if known) for stronger coupling in the data-driven approach. The partial differentials of \(\phi(\boldsymbol{x})\) may be evaluated utilizing the graph-based automatic differentiation (Baydin et al., 2018) with multiple hidden layers representing the nonlinear response in PINNs. Following the same steps, the initial conditions can also be incorporated in Eq. 8. The loss from the initial conditions is not included herein due to the quasi-static nature of the elasticity problem. In a more general case, the additional loss term \(\|\phi_{0}-\hat{\phi}_{0}\|_{\Omega}^{t=t_{0}}\) should be added for the loss contribution from the initial condition. Finally, the optimal network parameters of NN \(\tilde{\theta}\) can be obtained by optimizing the loss function in Eq. 8 as \[\tilde{\theta}=\arg\min_{\theta\subset\mathbb{R}^{N^{t}}}\Delta_{\mathcal{L}}( \boldsymbol{\tilde{X}},\theta). \tag{9}\] where, \(\tilde{\theta}:=\bigcup_{i=0}^{N}(\boldsymbol{\tilde{W}}^{i},\boldsymbol{ \tilde{b}}^{i})\) is the set of optimized network parameters; \(N^{t}\) is the total number of trainable parameters; and \(\boldsymbol{\tilde{X}}\in\mathbb{R}^{N_{c}\times N^{t}}\) is the set of \(N_{c}\) collocation points used for optimization. ## 3 Theory of linear elastic solid: Consider an undeformed configuration \(\mathcal{B}\) of an elastic body bounded in the domain \(\Omega\subset\mathbb{R}^{n_{dim}}\) (\(1\leq n_{dim}\leq 3\)) with boundary \(\Gamma=\Gamma_{u}\cup\Gamma_{t}\) where \(\Gamma_{u}\neq\emptyset\) is the Dirichlet boundary, \(\Gamma_{t}\) is the Neumann boundary, and \(\Gamma_{u}\cap\Gamma_{t}=\emptyset\). With respect to the undeformed surface, the elastic body can be subjected to a prescribed displacement \(\boldsymbol{\bar{u}}\) on \(\Gamma_{D}\), and a prescribed surface traction \(\boldsymbol{\bar{t}}\in[\mathcal{L}^{2}(\Gamma_{t})]^{n_{dim}}\). Additionally, a body force of density \(\boldsymbol{B}\in[\mathcal{L}^{2}(\Omega)]^{n_{dim}}\) in \(\Omega\) can be prescribed with respect to the undeformed volume. Using a standard basis \(\{\boldsymbol{e_{i}}\}\) in \(\mathbb{R}^{n_{dim}}\), we can express the displacement, \(\boldsymbol{u}=u_{i}\boldsymbol{e_{i}}\), and its gradient, \(\nabla\boldsymbol{u}=\frac{1}{2}\left(u_{i,j}+u_{j,i}\right)\boldsymbol{e_{i}} \otimes\boldsymbol{e_{j}}\); where, \(\otimes\) denotes the tensor products. 
Second order symmetric tensors are linear transformations in \(\mathbb{S}\), defined as \(\mathbb{S}:=\left\{\boldsymbol{\xi}:\mathbb{R}^{n_{dim}}\rightarrow\mathbb{R} ^{n_{dim}}|\,\boldsymbol{\xi}=\boldsymbol{\xi^{T}}\right\}\) with inner product \(\boldsymbol{\xi}:\boldsymbol{\xi}=\text{tr}\,\left[\boldsymbol{\xi} \boldsymbol{\xi^{T}}\right]\equiv\xi_{ij}\xi_{ij}\). Therefore, the stress tensor can be expressed as \(\boldsymbol{\sigma}:=\sigma_{ij}\boldsymbol{e_{i}}\otimes\boldsymbol{e_{j}}\). For infinitesimal strain, displacement gradient tensor \(\nabla\boldsymbol{u}\) can be expressed as: \(\nabla\boldsymbol{u}=\boldsymbol{\varepsilon}+\boldsymbol{\omega}\) where \(\boldsymbol{\varepsilon}:=\frac{1}{2}\left[\nabla\boldsymbol{u}+\nabla( \boldsymbol{u})^{\mathsf{T}}\right]\) is the infinitesimal strain tensor with \(\nabla\times\boldsymbol{\varepsilon}=e_{ijk}\,\varepsilon_{rj,i}\,\boldsymbol {e_{k}}\otimes\boldsymbol{e_{r}}\), and \(\boldsymbol{\omega}:=\frac{1}{2}\left[\nabla\boldsymbol{u}-\nabla(\boldsymbol {u})^{\mathsf{T}}\right]\) is the infinitesimal rotation tensor. ## 3.1 Compatibility condition: In the context of infinitesimal strain theory, we seek to find \(\boldsymbol{u}:\Omega\rightarrow\mathbb{R}^{n_{dim}}\) and corresponding \(\boldsymbol{\varepsilon}:\Omega\rightarrow\mathbb{R}^{n_{dim}\times n_{dim}}\), and \(\boldsymbol{\sigma}:\Omega\rightarrow\mathbb{R}^{n_{dim}\times n_{dim}}\) for a given infinite elastic solid satisfying the following compatibility conditions (Marsden and Hughes, 1994): \[\boldsymbol{R:}=\nabla\times(\nabla\times\boldsymbol{\varepsilon})^{\mathsf{ T}}=\boldsymbol{0}; \tag{10}\] where, \(R\) is Saint-Venant compatibility tensor. Alternatively, the elastic solid should satisfy the Navier-Cauchy equations which can be expressed as (Lurie, 2010): \[\begin{array}{c}(\lambda+\mu)\nabla(\nabla\cdot\textbf{{u}})+\mu\textbf{{ \Delta}}\textbf{{u}}+\textbf{{B}}=\textbf{0},\hskip 14.226378pt\mbox{in }\Omega\\ \textbf{{u}}\mid_{\Gamma_{D}}=\bar{\textbf{{u}}};\end{array} \tag{11}\] where \(\textbf{{u}}=(u_{1},u_{2},...,u_{n_{dim}})\) is the unknown displacement field; \(\mu>0\) and \(\lambda>-\mu\) are Lame constants; \(\nabla\), \(\mathbf{{\Delta}}\), and \(\nabla\) represent the gradient, the Laplacian, and the divergence operators, respectively. Equation 11 satisfies the continuity of the displacement field \(\textbf{{u}}\) and Dirichlet boundary condition. ### Equilibrium condition: In addition, the equilibrium condition and the Neumann boundary condition should be satisfied which can be expressed as (Marsden and Hughes, 1994): \[\begin{array}{c}\nabla\cdot\textbf{{\sigma}}+\textbf{{B}}=\textbf{0},\hskip 14.226378pt \mbox{in }\Omega\\ \textbf{{t}}:=\mathbb{T}\textbf{{u}}=\bar{\textbf{{t}}},\hskip 14.226378pt\mbox{on }\Gamma_{t} \hskip 14.226378pt\textbf{{\sigma}}\mid_{\Gamma_{t}}\textbf{{\hat{n}}}=\bar{ \textbf{{t}}}\end{array} \tag{12}\] where, \(\bar{\textbf{{t}}}\) is a prescribed function on \(\Gamma_{t}\); \(\hat{\textbf{n}}\) is the field normal to \(\Gamma_{t}\). 
Equation 12 satisfies the momentum equation and the Neumann boundary condition, where \(\mathbb{T}\) is the conormal derivative (traction) operator such that (Atkin and Fox, 2005) \[\mathbb{T}\boldsymbol{u}=\lambda(\nabla\cdot\boldsymbol{u})\,\hat{\boldsymbol{n}}+2\mu\frac{\partial\boldsymbol{u}}{\partial\hat{\boldsymbol{n}}}+\mu\hat{\boldsymbol{n}}\times(\nabla\times\boldsymbol{u}) \tag{13}\] ### Constitutive relation: Subsequently, the elastic constitutive relation can be expressed from generalized Hooke's law (Timoshenko, 1970) as: \[\boldsymbol{\sigma}=\boldsymbol{C}:\boldsymbol{\varepsilon} \tag{14}\] where the fourth-order stiffness tensor \(\boldsymbol{C}=C_{ijkl}\boldsymbol{e}_{i}\otimes\boldsymbol{e}_{j}\otimes\boldsymbol{e}_{k}\otimes\boldsymbol{e}_{l}\) denotes the constitutive relation that maps the infinitesimal strain \(\boldsymbol{\varepsilon}\) to the Cauchy stress tensor \(\boldsymbol{\sigma}\). For an isotropic linearly elastic material, \(C_{ijkl}=\lambda\delta_{ij}\delta_{kl}+\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\), where \(\delta_{ij}\) is the Kronecker delta. The components of the stress tensor \(\boldsymbol{\sigma}\) and the strain tensor \(\boldsymbol{\varepsilon}\) are expressed as: \[\sigma_{ij}(\boldsymbol{u})=\lambda\delta_{ij}\sum_{k=1}^{n_{dim}}\varepsilon_{kk}(\boldsymbol{u})+2\mu\varepsilon_{ij}(\boldsymbol{u}),\qquad\varepsilon_{ij}(\boldsymbol{u})=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right),\qquad i,j=1,2,...,n_{dim}. \tag{15}\] Note that \(\boldsymbol{\sigma}\) is the Cauchy stress tensor of linear elasticity, applicable under small deformation. The compatibility condition (Eq. 10) can alternatively be expressed in terms of the strain components as \[\varepsilon_{ij,kl}+\varepsilon_{kl,ij}-\varepsilon_{ik,jl}-\varepsilon_{jl,ik}=0\qquad i,j,k,l\in 1,2,...,n_{dim}. \tag{16}\] The equations governing a linear elastic boundary value problem (BVP) are defined by Eqs. 11-16, where the field variables \(\boldsymbol{u},\boldsymbol{\sigma},\boldsymbol{\varepsilon}\) can be obtained for given material constants (Atkin and Fox, 2005; Lurie, 2010). ## 4 PINNs formulation for continuum linear elasticity: The proposed PINNs framework is applied to linearly elastic solids. A two-dimensional (\(n_{dim}=2\)) problem is considered. The input features (variables) to the models are the spatial coordinates \(\boldsymbol{x}=(x,y)\). A separate NN is used to approximate each output field variable. As shown in Fig. 1, the displacement \(\boldsymbol{u}(\boldsymbol{x})\), stress \(\boldsymbol{\sigma}(\boldsymbol{x})\), and strain \(\boldsymbol{\varepsilon}(\boldsymbol{x})\) fields are obtained by densely connected independent ANNs. 
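As a concrete illustration of this one-network-per-field setup, the following minimal sketch builds three independent densely connected networks for \(\boldsymbol{u}\), \(\boldsymbol{\sigma}\), and \(\boldsymbol{\varepsilon}\). It is written in plain TensorFlow/Keras for readability (the study itself uses the SciANN wrapper introduced later); the width, depth, and tanh activation match the baseline architecture (\(\mathscr{N}=20\), \(L_{n}=5\)) examined in the experiments, and all function and variable names are illustrative only.

```python
import tensorflow as tf

def make_field_net(n_out, width=20, depth=5, activation="tanh"):
    """Densely connected network mapping spatial coordinates (x, y) to a field."""
    inputs = tf.keras.Input(shape=(2,))            # input features: x = (x, y)
    h = inputs
    for _ in range(depth):
        h = tf.keras.layers.Dense(width, activation=activation)(h)
    outputs = tf.keras.layers.Dense(n_out)(h)      # linear output layer
    return tf.keras.Model(inputs, outputs)

# One independent network per output field, as in Fig. 1.
# Symmetric tensors are represented by their independent components.
u_net   = make_field_net(n_out=2)   # (u_x, u_y)
sig_net = make_field_net(n_out=3)   # (sigma_xx, sigma_yy, sigma_xy)
eps_net = make_field_net(n_out=3)   # (eps_xx, eps_yy, eps_xy)
```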
For \(n_{dim}=2\), considering the symmetry of the stress and strain tensors, the \(\mathbf{u}(\mathbf{x})\), \(\mathbf{\sigma}(\mathbf{x})\), and \(\mathbf{\varepsilon}(\mathbf{x})\) fields can be approximated as: \[\mathbf{u}(\mathbf{x})\simeq\Xi_{\mathbf{u}}^{\sf NN}(\mathbf{x})=\begin{bmatrix}\tilde{u}_{x}^{\sf NN}(\mathbf{x})\\ \tilde{u}_{y}^{\sf NN}(\mathbf{x})\end{bmatrix} \tag{17}\] \[\mathbf{\sigma}(\mathbf{x})\simeq\Xi_{\mathbf{\sigma}}^{\sf NN}(\mathbf{x})=\begin{bmatrix}\tilde{\sigma}_{xx}^{\sf NN}(\mathbf{x})&\tilde{\sigma}_{xy}^{\sf NN}(\mathbf{x})\\ \tilde{\sigma}_{yx}^{\sf NN}(\mathbf{x})&\tilde{\sigma}_{yy}^{\sf NN}(\mathbf{x})\end{bmatrix};\qquad\mathbf{\varepsilon}(\mathbf{x})\simeq\Xi_{\mathbf{\xi}}^{\sf NN}(\mathbf{x})=\begin{bmatrix}\tilde{\varepsilon}_{xx}^{\sf NN}(\mathbf{x})&\tilde{\varepsilon}_{xy}^{\sf NN}(\mathbf{x})\\ \tilde{\varepsilon}_{yx}^{\sf NN}(\mathbf{x})&\tilde{\varepsilon}_{yy}^{\sf NN}(\mathbf{x})\end{bmatrix} \tag{18}\] Here \(\Xi_{\mathbf{u}}^{\sf NN}(\mathbf{x})\), \(\Xi_{\mathbf{\sigma}}^{\sf NN}(\mathbf{x})\), and \(\Xi_{\mathbf{\xi}}^{\sf NN}(\mathbf{x})\) denote the NN approximations for \(\mathbf{u}(\mathbf{x})\), \(\mathbf{\sigma}(\mathbf{x})\), and \(\mathbf{\varepsilon}(\mathbf{x})\), respectively. ### Loss function: To define the loss function for the linear elasticity problem, the governing equations, including the compatibility conditions, equilibrium conditions, constitutive relations, and boundary conditions that fully describe the problem, have been considered. Additionally, as in a data-driven approach, the field variables in Eq. 8 have been included. The generalized multi-objective loss functional \(\Delta_{\mathcal{L}}\) can be expressed as: \[\Delta_{\mathcal{L}}(\mathbf{x},\theta)=\varphi\,\Delta_{\mathcal{L}}^{\Omega}+\varphi_{e}\,\Delta_{\mathcal{L}}^{e}+\varphi_{c}\,\Delta_{\mathcal{L}}^{c}+\beta_{u}\,\Delta_{\mathcal{L}}^{\Gamma_{u}}+\beta_{t}\,\Delta_{\mathcal{L}}^{\Gamma_{t}}+\alpha_{\mathbf{u}}\,\Delta_{\mathcal{L}}^{\mathbf{u}}+\alpha_{\mathbf{\sigma}}\Delta_{\mathcal{L}}^{\mathbf{\sigma}}+\alpha_{\mathbf{\varepsilon}}\Delta_{\mathcal{L}}^{\mathbf{\xi}} \tag{19}\] where \(\Delta_{\mathcal{L}}^{e}\), \(\Delta_{\mathcal{L}}^{c}\), and \(\Delta_{\mathcal{L}}^{\Omega}\) are the loss components from the equilibrium condition (Eq. 12), the constitutive relation (Eq. 14), and the compatibility condition (Eq. 16), respectively; \(\Delta_{\mathcal{L}}^{\Gamma_{u}}\) and \(\Delta_{\mathcal{L}}^{\Gamma_{t}}\) represent the loss components computed at the Dirichlet boundary \(\Gamma_{u}\) and the Neumann boundary \(\Gamma_{t}\) (Eq. 11), respectively; \(\Delta_{\mathcal{L}}^{\mathbf{u}}\), \(\Delta_{\mathcal{L}}^{\mathbf{\sigma}}\), and \(\Delta_{\mathcal{L}}^{\mathbf{\xi}}\) are the loss components for the fields \(\mathbf{u}(\mathbf{x})\), \(\mathbf{\sigma}(\mathbf{x})\), and \(\mathbf{\varepsilon}(\mathbf{x})\), respectively, when a data-driven approach is pursued. The coefficients \(\varphi,\varphi_{e},\varphi_{c},\beta_{u},\beta_{t},\alpha_{\mathbf{u}},\alpha_{\mathbf{\sigma}}\), and \(\alpha_{\mathbf{\varepsilon}}\) are the weights associated with each loss term and dictate the emphasis on each penalty term. Evidently, the terms in the cost function are measures of the errors in the displacement and stress fields, the momentum balance, and the constitutive law. 
The explicit expression for each term in \(\Delta_{\mathcal{L}}(\mathbf{x},\theta)\) is, \[\Delta_{\mathcal{L}}^{\Omega} = \frac{1}{N_{c}^{\Omega}}\sum_{l=1}^{N_{c}^{\Omega}}\lVert\nabla \cdot\mathbf{\Xi}_{\mathbf{\sigma}}^{\mathsf{NN}}(\mathbf{x}_{l|\Omega})+\mathbf{B}(\mathbf{x}_{l| \Omega})\rVert \tag{20}\] \[\Delta_{\mathcal{L}}^{c} = \frac{1}{N_{c}^{\Omega}}\sum_{l=1}^{N_{c}^{\Omega}}\lVert\mathbf{\Xi }_{\mathbf{\sigma}}^{\mathsf{NN}}(\mathbf{x}_{l|\Omega})-\mathbf{C}\left[\nabla\cdot\Xi_{ \mathbf{u}}^{\mathsf{NN}}(\mathbf{x}_{l|\Omega})\right]\rVert\] (21) \[\Delta_{\mathcal{L}}^{\Gamma_{u}} = \frac{1}{N_{c}^{\Gamma_{u}}}\sum_{k=1}^{N_{c}^{\Gamma_{u}}} \lVert\mathbf{\Xi}_{\mathbf{u}}^{\mathsf{NN}}(\mathbf{x}_{k|\Gamma_{u}})-\mathbf{\tilde{u}}( \mathbf{x}_{k|\Gamma_{u}})\rVert\] (22) \[\Delta_{\mathcal{L}}^{\Gamma_{t}} = \frac{1}{N_{c}^{\Gamma_{t}}}\sum_{j=1}^{N_{c}^{\Gamma_{t}}} \lVert\mathbf{\Xi}_{\mathbf{\sigma}}^{\mathsf{NN}}(\mathbf{x}_{j|\Gamma_{t}})\mathbf{\hat{n}} -\mathbf{\tilde{t}}(\mathbf{x}_{j|\Gamma_{t}})\rVert\] (23) \[\Delta_{\mathcal{L}}^{\mathbf{u}} = \frac{1}{N_{c}^{\Omega}}\sum_{l=1}^{N_{c}^{\Omega}}\lVert\mathbf{\Xi }_{\mathbf{u}}^{\mathsf{NN}}(\mathbf{x}_{l|\Omega})-\mathbf{\hat{u}}(\mathbf{x}_{l|\Omega})\rVert\] (24) \[\Delta_{\mathcal{L}}^{\mathbf{\sigma}} = \frac{1}{N_{c}^{\Omega}}\sum_{l=1}^{N_{c}^{\Omega}}\lVert\mathbf{\Xi }_{\mathbf{\sigma}}^{\mathsf{NN}}(\mathbf{x}_{l|\Omega})-\mathbf{\hat{\sigma}}(\mathbf{x}_{l| \Omega})\rVert\] (25) \[\Delta_{\mathcal{L}}^{\mathbf{\xi}} = \frac{1}{N_{c}^{\Omega}}\sum_{l=1}^{N_{c}^{\Omega}}\lVert\mathbf{\Xi }_{\mathbf{\xi}}^{\mathsf{NN}}(\mathbf{x}_{l|\Omega})-\mathbf{\hat{\varepsilon}}(\mathbf{x}_{l| \Omega})\rVert \tag{26}\] where, \(\left\{\mathbf{x}_{1|\Omega},...,\mathbf{x}_{N_{c}^{\Omega}|\Omega}\right\}\) are randomly chosen collocation points over the domain \(\Omega\); \(\left\{\mathbf{x}_{1|\Gamma_{u}},...,\mathbf{x}_{N_{c}^{\Gamma_{u}}|\Gamma_{u}}\right\}\) and \(\left\{\mathbf{x}_{1|\Gamma_{t}},...,\mathbf{x}_{N_{c}^{\Gamma_{t}}|\Gamma_{t}}\right\}\) are those chosen randomly along the boundaries \(\Gamma_{u}\) and \(\Gamma_{t}\), respectively. The terms \(\hat{\mathbf{u}}(\mathbf{x}_{l|\Omega})\), \(\hat{\mathbf{\sigma}}(\mathbf{x}_{l|\Omega})\), and \(\mathbf{\hat{\varepsilon}}(\mathbf{x}_{l|\Omega})\) represent the true (target) value obtained by means of analytical solution or high-fidelity simulation. The weights \(\varphi,\varphi_{e},\varphi_{c}\in\mathbb{R}^{+}\) are the weights corresponding to the compatibility, equilibrium, and constitutive relations, respectively. In general, these coefficients can be prescribed as 1 for solving a relatively less complex problem, whereas, \(\beta_{u}\) and \(\beta_{t}\) are the binary (i.e., either 0 or 1) integers. The weights \(\alpha_{i}=1;\ \forall\ i=\mathbf{u},\mathbf{\sigma},\mathbf{\varepsilon}\) for a complete data driven approach for \(\mathbf{u}(\mathbf{x})\), \(\mathbf{\sigma}(\mathbf{x})\), and \(\mathbf{\varepsilon}(\mathbf{x})\), respectively at the collocation points \(N_{c}^{\Omega}\). However, we prescribe \(\alpha_{i}=0\ \forall\ (i=\mathbf{u},\mathbf{\sigma},\mathbf{\varepsilon})\) as labeled training data is unavailable, which may not guarantee the accuracy of PINNs solutions. The forward problem is studied herein, where the displacement, stress, and strain fields are obtained as the PINNs solutions assuming material properties \(\lambda\) and \(\mu\) remain constant. However, the loss functional in Eq. 
19 can also be utilized in an inverse problem for parameter identification, where \(\lambda\) and \(\mu\) can be treated as network outputs which may vary during training (Fig. 1). For the network construction in the PINNs framework, SciANN (Haghghat and Juanes, 2021), a convenient high-level Keras (Chollet et al., 2015) wrapper for PINNs is used. ### Solution for linear elasticity problem : For this study, an end-loaded isotropic linearly elastic cantilever beam of height \(2a\), length \(L\), thickness \(b\) (assuming \(b\ll a\)) has been considered to ensure a state of plane-stress condition as shown in Fig. 2. The left edge of the beam is subjected to a resultant force \(P\). Whereas, the right-hand end is clamped. The top and bottom surfaces of the beam, \(y=\pm a\) are traction free. An approximate solution to the problem can be obtained from the Airy function discussed next. #### 4.2.1 The Airy solution to the end-loaded cantilever beam: The Airy solution in Cartesian coordinates \(\Omega\subset\mathbb{R}^{2}\) can be found from the Airy potential \(\phi(x,y)\) that satisfies (Bower, 2009), \[\nabla\phi=\frac{\partial^{4}\phi}{\partial x^{4}}+2\frac{\partial^{4}\phi}{ \partial x^{2}\partial y^{2}}+\frac{\partial^{4}\phi}{\partial y^{4}}=\mathbf{ C}(\nu)(\frac{\partial b_{x}}{\partial x}+\frac{\partial b_{y}}{\partial y}) \tag{27}\] where, \[\mathbf{C}(\nu)=\left\{\begin{array}{ll}\frac{1-\nu}{1-2\nu}&\text{(plane strain)}\\ \frac{1}{1-\nu}&\text{(plane stress)}\end{array}\right. \tag{28}\] Figure 2: (a) Elastic plane-stress problem for an end-loaded cantilever beam of length \(L\), height \(2a\) and out-of-plane thickness \(b\) which has been clamped at \(x=L\); (b) distributions of total collocations points \(N_{c}=5,000\) on the problem domain and various boundaries during PINNs training. Here, the body forces \(b_{x}\), \(b_{y}\) have the form \(\rho_{0}b_{x}=\frac{\partial\Omega}{\partial x},\;\rho_{0}b_{y}=\frac{\partial \Omega}{\partial y}\) ; \(\Omega(x,y)\) is the positional scalar function. The solution of the Airy function can be expressed in the polynomial form \(\phi(x,y)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}A_{mn}\,x^{m}y^{n}\). For \(m+n\leq 3\), the terms automatically satisfy the biharmonic equation for any \(A_{mn}\). Additionally, \(\phi\) must satisfy the following traction boundary conditions on \(\Omega\). \[\frac{\partial^{2}\phi}{\partial y^{2}}n_{x}-\frac{\partial^{2}\phi}{\partial x \partial y}n_{y}=t_{x};\quad\frac{\partial^{2}\phi}{\partial x^{2}}n_{y}- \frac{\partial^{2}\phi}{\partial x\partial y}n_{y}=t_{y} \tag{29}\] Here, \((n_{x},n_{y})\) are the components of a unit vector normal to the boundary. For the end-loaded cantilever beam, the Airy function can be formulated as, \[\phi=-\frac{3P}{4ab}xy+\frac{P}{4a^{3}b}xy^{3} \tag{30}\] where, \(\sigma_{xx}=\frac{\partial^{2}\phi}{\partial y^{2}}-\Omega;\quad\sigma_{yy}= \frac{\partial^{2}\phi}{\partial x^{2}}-\Omega;\quad\sigma_{xy}=\sigma_{yx}=- \frac{\partial^{2}\phi}{\partial x\partial y}\) with \(\Omega=0\). At the clamped end, \(x_{1}=L\), displacement boundary conditions are \(u_{x}=u_{y}=\partial u_{y}/\partial x=0\). The top and bottom surfaces of the beam (i.e., \(y=\pm a\)) are traction free, \(\sigma_{ij}n_{i}=0\), that requires \(\sigma_{yy}=\sigma_{xy}=0\). Whereas, the resultant of the traction acting on the surface at \(x=0\) is \(-Pe_{y}\) with traction vector \(t_{i}=\sigma_{ij}n_{j}=-\sigma_{xy}\delta_{iy}=-\frac{3P}{4ab}(1-\frac{y^{2}}{ a^{2}})\delta_{iy}\). 
The resultant force can be obtained as : \(F_{i}=b\int_{-a}^{a}-\frac{3P}{4ab}(1-\frac{y^{2}}{a^{2}})\delta_{iy}dx_{2}=-P \delta_{iy}\). On satisfaction of the aforementioned conditions, approximate analytical solutions for the displacements \(u_{x}\), \(u_{y}\), the strain fields \(\varepsilon_{xx}\), \(\varepsilon_{yy}\), \(\varepsilon_{xy}\) and the stress fields \(\sigma_{xx}\), \(\sigma_{yy}\), \(\sigma_{xy}\) can be expressed as: \[u_{x} = \frac{3P}{4Ea^{3}b}x^{2}y-(2+\mu)\frac{P}{4Ea^{3}b}y^{3}+3(1+\mu) \frac{Pa^{2}}{2Ea^{3}b}y-\frac{3PL^{2}}{4Ea^{3}b}y \tag{31}\] \[u_{y} = -\frac{3\mu P}{4Ea^{3}b}xy^{2}-\frac{P}{4Ea^{3}b}x^{3}+\frac{3PL^ {2}}{4Ea^{3}b}x-\frac{PL^{3}}{2Ea^{3}b}\] (32) \[\varepsilon_{xx} = \frac{3P}{2Ea^{3}b}xy;\quad\varepsilon_{yy}=-\frac{3P\mu}{2Ea^{3} b}xy;\quad\varepsilon_{xy}=\frac{3P(1+\mu)}{4Eab}\left(1-\frac{y^{2}}{a^{2}}\right)\] (33) \[\sigma_{xx} = \frac{3P}{2a^{3}b}xy;\quad\sigma_{yy}=0;\quad\sigma_{xy}=\frac{3P }{4ab}\left(1-\frac{y^{2}}{a^{2}}\right) \tag{34}\] These analytical solutions for \(\textbf{{u}}(\textbf{{x}})\), \(\boldsymbol{\sigma}(\textbf{{x}})\), and \(\boldsymbol{\varepsilon}(\textbf{{x}})\) have been used as \(\hat{\textbf{{u}}}(\textbf{{x}}_{l|\Omega})\), \(\hat{\boldsymbol{\sigma}}(\textbf{{x}}_{l|\Omega})\), and \(\hat{\boldsymbol{\varepsilon}}(\textbf{{x}}_{l|\Omega})\) at the collocation points for data-driven enhancement in Eqs. 24-26, respectively, for solving the field variables in the proposed PINNs framework. **4.2.2 PINNs solutions for linear elasticity problem:** For the benchmark, end-loaded cantilever beam problem, \(L=3\) m, \(a=0.5\) m, and \(b=0.001\) m have been considered. The material properties are, Young's modulus \(E=1\) GPa, and the Poisson ratio \(\nu=0.25\) as shown in Fig. 2 -(a). Unless otherwise stated, a total of \(N_{c}=5,000\) randomly distributed collocation points over the domain and boundaries have been used for training the PINNs model as shown in Fig. 2 -(a). During training, the optimization loop was run for 500 epochs using the Adam optimization scheme with a learning rate of 0.001, and a batch size of 32 for optimal accuracy and faster convergence. The Airy solutions for various fields including displacements \(u_{x}\), \(u_{y}\), stresses \(\sigma_{xx}\), \(\sigma_{yy}\), \(\sigma_{xy}\), and strains \(\varepsilon_{xx}\), \(\varepsilon_{yy}\), \(\varepsilon_{xy}\) as in Eqs. 31-34 are shown in Fig. 3-(a). The corresponding PINNs approximations using the tanh activation function are shown in Fig. 3 -(b). Additionally, in Fig. 3 -(c), the absolute error between the Airy solutions and PINNs predictions for each field variable is shown. The overall results from PINNs are in excellent agreement with the Airy solutions. The PINNs approximations attained satisfactory accuracy with low absolute errors for all field variables. For the displacement fields, the absolute error is relatively high near to clamped edge for \(u_{x}\). For \(u_{y}\), the absolute error is maximum at the midsection and near the horizontal edges as shown in Fig. 3 -(c). This is due to the approximate nature of the Airy solutions at clamped end \(x_{1}=L\) for the displacement boundary conditions \(u_{x}=u_{y}=\partial u_{y}/\partial x=0\). Such differences also propagate through the solutions of stress and strain fields, where PINNs predictions slightly deviate from the Airy solutions, in particular, near the free vertical and horizontal edges as shown in Fig. 3 -(c). 
However, according to Saint-Venant's principle, these deviations do not sufficiently influence the solution far from the end, which is reflected in the result. Overall, the proposed PINNs model can capture the distributions of various fields accurately from the solution of the Airy stress function. #### 4.2.3 Suitable activation function : The impact of the use of various activation functions on training the PINNs models in predicting field variables and the epoch evolution of various components of the loss function is explored. The ReLU, sigmoid, and tanh activation functions are compared; the network architecture remains the same: the number of neurons in each layer \(\mathscr{N}=20\) with the total number of hidden layers \(L_{n}=5\) in the PINNs model. The evolution of the total loss \(\Delta_{\mathcal{L}}\), and the Figure 3: (a) The Airy solutions for displacements \(u_{x}\), \(u_{y}\), stresses \(\sigma_{xx}\), \(\sigma_{yy}\), \(\sigma_{xy}\), strains \(\varepsilon_{xx}\), \(\varepsilon_{yy}\), \(\varepsilon_{xy}\); (b) corresponding PINNs solutions for \(\tilde{\mathfrak{u}}_{x}^{\sf NN}\), \(\tilde{\mathfrak{u}}_{y}^{\sf NN}\), \(\tilde{\sigma}_{xx}^{\sf NN}\), \(\tilde{\mathfrak{g}}_{yy}^{\sf NN}\), \(\tilde{\mathfrak{g}}_{xy}^{\sf NN}\), \(\tilde{\mathfrak{g}}_{xx}^{\sf NN}\), \(\tilde{\mathfrak{g}}_{yy}^{\sf NN}\), and \(\tilde{\mathfrak{g}}_{xy}^{\sf NN}\). (c) absolute error between the Airy solutions and PINNs predictions associated with each field variables for an end-loaded cantilever beam. constitutive loss \(\Delta^{\Omega}_{\mathcal{L}}\) are depicted in Fig. 4. Additionally, values of the various loss components and training times \(t_{tr}\) at the end of training are compared in Table. 1. Evidently, the \(\tanh\) activation provides the best performance in terms of the value of the total loss at the end of training. The final constitutive loss with \(\tanh\) activation is significantly lower compared to the other two activations illustrating the suitability of the use of the \(\tanh\) activation for the PINNs model for solving the elasticity problem herein. In addition, all other loss components obtained are lowest upon using the \(\tanh\) activation as shown in Table 1. Comparing the evolution of \(\Delta_{\mathcal{L}}\), the convergence characteristics for the ReLU activation are better compared to the \(\tanh\) with fewer fluctuations and rapid decrease in loss values as shown in Fig. 4-(a). However, the \(\tanh\) illustrates better adaptability in the constitutive loss with an excellent convergence rate in Fig. 4-(b). Out of the three activations, ReLU performs the worst possibly due to its derivative being discontinuous. However, the total loss for all three activations is negligible (loss value in the range below \(10^{-4}\) to \(10^{-5}\)) within 200 epochs indicating the adaptability of the proposed PINNs framework to any of these activations provided the models are trained sufficiently long. In comparing the training time, the \(\tanh\) activation takes longer for the same number of epochs compared to the other two. This coincides with the fact that the evolution of the total loss has a higher degree of discontinuity. However, the model with the ReLU activation trains the fastest possibly due to its linear nature. 
From the comparison, it can be concluded that although \(\tanh\) is the best in terms of accuracy, however, ReLU can be an optimal choice of activation considering both accuracy and training time for solving elasticity equation in the proposed PINNs framework. #### 4.2.4 Influence of network complexity: It is worth mentioning that the PINNs approximations are sensitive to network architecture including the depth of the hidden layer and the number of network parameters. In this section, the influence of network architecture parameters, i.e., the number of neurons in each hidden layer \(\mathscr{N}\), and the number of hidden layers \(L_{n}\) on the accuracy and the efficiency of the PINNs solution are explored. Since the tanh activation performs the best in terms of accuracy (see previous section), it is chosen as the activation for different networks used in the following experiments. In the current study, four different networks considering the combinations \(\mathscr{N}=20,40\), and \(L_{n}=5,10\) are tested, and values of different loss components at the end of the training, train \begin{table} \begin{tabular}{c c c c c c c c c c} \hline Activation Function & \(\Delta_{\mathcal{L}}^{\Omega}\) & \(\Delta_{\mathcal{L}}^{c}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{u}}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{t}}\) & \(\Delta_{\mathcal{L}}^{\boldsymbol{u}}\) & \(\Delta_{\mathcal{L}}^{\boldsymbol{\sigma}}\) & \(\Delta_{\mathcal{L}}^{\boldsymbol{\xi}}\) & \(\Delta_{\mathcal{L}}\) & \(t_{tr}\) \\ & & & & & & & & & (\(min\)) \\ \hline ReLU & 107.16 & 43.43 & 14.51 & 36.75 & 24.97 & 1.07 & 5.48 & 233.37 & 9.4 \\ Sigmoid & 30.96 & 54.33 & 517.38 & 126.14 & 37.85 & 124.51 & 592.82 & 1483.99 & 13.8 \\ tanh & 4.56 & 0.73 & 31.47 & 25.64 & 3.11 & 9.60 & 10.45 & 85.56 & 15.7 \\ \hline \end{tabular} \end{table} Table 1: Influence of different activation functions on the final values of various loss components (in \(10^{-09}\)) and training times \(t_{tr}\) in the proposed PINNs model for solving linear elastic beam problem. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline Network & \(n_{p}\) & \(t_{tr}\) & \(\Delta_{\mathcal{L}}^{\Omega}\) & \(\Delta_{\mathcal{L}}^{c}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{u}}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{t}}\) & \(\Delta_{\mathcal{L}}^{\boldsymbol{u}}\) & \(\Delta_{\mathcal{L}}^{\boldsymbol{\sigma}}\) & \(\Delta_{\mathcal{L}}^{\boldsymbol{\xi}}\) & \(\Delta_{\mathcal{L}}\) \\ identifier & & (\(min\)) & & & & & & & & \\ \hline N-1 (\(\mathscr{N}=20\), \(L_{n}=5\)) & 22,706 & 15.7 & 4.56 & 0.73 & 31.47 & 25.64 & 3.11 & 9.60 & 10.45 & 85.56 \\ N-2 (\(\mathscr{N}=40\), \(L_{n}=5\)) & 113,530 & 23.8 & 2.21 & 90.39 & 77.73 & 59.58 & 4.29 & 24.16 & 78.39 & 336.75 \\ N-3 (\(\mathscr{N}=20\), \(L_{n}=10\)) & 54,494 & 18.3 & 6.89 & 0.89 & 12.73 & 65.42 & 13.01 & 17.19 & 4.67 & 120.8 \\ N-4 (\(\mathscr{N}=40\), \(L_{n}=10\)) & 272,472 & 32.3 & 2.78 & 3.67 & 18.78 & 12.63 & 24.19 & 43.10 & 2.49 & 107.64 \\ \hline \end{tabular} \end{table} Table 2: Influence of network parameters \(\mathscr{N}\) and \(L_{n}\) on training times \(t_{tr}\) and final values various loss components (in \(10^{-09}\)) for tanh activation. ing duration (\(t_{tr}\)), along with model complexities in terms of network parameters (\(n_{p}\)) for these architectures are presented in Table. 2. For fair comparison, \(N_{c}=5,000\) for all experiments. The evolution of the total loss \(\Delta_{\mathcal{L}}\) and the constitutive loss \(\Delta_{\mathcal{L}}^{\Omega}\) for these networks are shown in Fig. 5. 
From the comparisons, for the chosen number of collocation points relatively shallow network \(\mathscr{N}=20\), \(L_{n}=5\) provides the best performance in terms of \(\Delta_{\mathcal{L}}\) and \(\Delta_{\mathcal{L}}^{\Omega}\) at the end of training. Additionally, the time required for training is faster due to a significantly lower number of network parameters. However, for a relatively deeper network, \(\mathscr{N}=20\), \(L_{n}=10\) with increased network complexity, the performance of the model degrades with respect to loss values as shown in Table. 2 possibly due to an increase in variability and reduction in bias. Interestingly, an increase in the number of neurons \(\mathscr{N}=40\) while maintaining the depth of the network (\(L_{n}=5\)) leads to the worst performance which can be attributed to over-fitting (Bilbao and Bilbao, 2017; Jabbar and Khan, 2015). The epoch evolution of the loss for various network architectures demonstrates the efficacy of a relatively shallow network with significantly faster training for solving elasticity problems in the proposed PINNs framework. ## 5 PINNs formulation for linear elastic plate theory : In this section, the PINNs framework is expanded for the solution of the classical Kirchhoff-Love thin plate (Timoshenko and Woinowsky-Krieger, 1959) subjected to a transverse loading in linearly elastic plate theory. In the subsequent section, the Kirchhoff-Love theory has been Figure 5: Comparison of (a) total loss \(\Delta_{\mathcal{L}}^{\Omega}\); (b) constitutive loss \(\Delta_{\mathcal{L}}^{\Omega}\) for various combinations of network parameters \(\mathscr{N}\) and \(L_{n}\) considering tanh activation function. briefly described; PINNs formulation for solving the governing fourth-order biharmonic partial differential equation (PDE) for the solution of the thin plate is elaborated. For a benchmark problem, the proposed PINNs approach is applied for the solution of a simply supported rectangular plate under a transverse sinusoidal loading condition. ### Kirchhoff-Love thin plate theory : Thin plates are structurally planar elements that have small thickness relative to their in-plane dimensions which can be simplified as a two-dimensional plate problem. According to the Kirchhoff-Love theory, the kinetics of a thin plate under the effect of a distributed transverse loading \(q=q(x,y)\) can be described by a fourth-order differential equation (Timoshenko and Woinowsky-Krieger, 1959; Reddy, 2006). \[\mathbf{\Delta}(\mathscr{D}\mathbf{\Delta}w)=q \tag{35}\] When the elastic plate is bounded in the domain \(\Omega\subset\mathbb{R}^{2}\), Eq. 35 is known as the Kirchhoff-Love equation. In Cartesian coordinates, \(w=w(x,y)\) represents the transverse displacement field, \(\mathscr{D}=\mathscr{D}(x,y)\) is the bending stiffness of the plate, and \(\mathbf{\Delta}=\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}\) is the Laplace operator. Considering a homogeneous and isotropic plate (i.e., \(\mathscr{D}\equiv\) constant ), Eq. 35 becomes the biharmonic equation (Timoshenko and Woinowsky-Krieger, 1959; Szilard and Nash, 1974) \[\mathscr{D}\mathbf{\Delta}^{2}w=\mathscr{D}\left(\frac{\partial^{4}w}{\partial x ^{4}}+2\frac{\partial^{4}w}{\partial x^{2}\partial y^{2}}+\frac{\partial^{4}w }{\partial y^{4}}\right)=q \tag{36}\] Under appropriate boundary conditions, and with \(\mathscr{D}(x,y)>0\) and \(q(x,y)\geq 0\), both being known, the problem possesses a unique solution for the displacement \(w(x,y)\). 
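In a PINNs treatment of Eq. 36, this fourth-order operator is evaluated directly on the network approximation of \(w\) by automatic differentiation. The following minimal sketch (an illustration under assumed names and a plain TensorFlow interface, not the implementation used in this work) shows how the biharmonic residual can be formed with nested gradient tapes.

```python
import tensorflow as tf

def biharmonic_residual(w_net, xy, q, D):
    """Residual of Eq. 36, D*(w_xxxx + 2*w_xxyy + w_yyyy) - q, at points xy."""
    xy = tf.convert_to_tensor(xy, dtype=tf.float32)
    x, y = xy[:, 0:1], xy[:, 1:2]
    with tf.GradientTape(persistent=True) as t4:
        t4.watch([x, y])
        with tf.GradientTape(persistent=True) as t3:
            t3.watch([x, y])
            with tf.GradientTape(persistent=True) as t2:
                t2.watch([x, y])
                with tf.GradientTape() as t1:
                    t1.watch([x, y])
                    w = w_net(tf.concat([x, y], axis=1))
                w_x, w_y = t1.gradient(w, [x, y])
            w_xx = t2.gradient(w_x, x)
            w_yy = t2.gradient(w_y, y)
        w_xxx = t3.gradient(w_xx, x)
        w_xyy = t3.gradient(w_yy, x)   # d^3 w / dx dy^2
        w_yyy = t3.gradient(w_yy, y)
    w_xxxx = t4.gradient(w_xxx, x)
    w_xxyy = t4.gradient(w_xyy, x)     # d^4 w / dx^2 dy^2
    w_yyyy = t4.gradient(w_yyy, y)
    return D * (w_xxxx + 2.0 * w_xxyy + w_yyyy) - q
```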
The set of solution variables includes the primitive variable deflection \(w\), and the derived quantities, moments \(M_{xx}\), \(M_{yy}\), \(M_{xy}=-M_{yx}\), and shearing forces \(Q_{xx}\), \(Q_{yy}\). The expressions for the derived fields are, \[M_{xx}=-\mathscr{D}\left(\frac{\partial^{2}w}{\partial x^{2}}+\nu\frac{\partial ^{2}w}{\partial y^{2}}\right);\quad M_{yy}=-\mathscr{D}\left(\frac{\partial^{ 2}w}{\partial y^{2}}+\nu\frac{\partial^{2}w}{\partial x^{2}}\right);\quad M_{ xy}=-\mathscr{D}(1-\nu)\left(\frac{\partial^{2}w}{\partial x\partial y}\right) \tag{37}\] \[Q_{xx}=\frac{\partial M_{yx}}{\partial y}+\frac{\partial M_{xx}}{\partial x} =-\mathscr{D}\frac{\partial}{\partial x}\left(\frac{\partial^{2}w}{\partial x ^{2}}+\frac{\partial^{2}w}{\partial y^{2}}\right);\quad Q_{yy}=\frac{\partial M _{yy}}{\partial y}-\frac{\partial M_{xy}}{\partial x}=-\mathscr{D}\frac{ \partial}{\partial y}\left(\frac{\partial^{2}w}{\partial x^{2}}+\frac{ \partial^{2}w}{\partial y^{2}}\right) \tag{38}\] ### 5.2 PINNs formulation for the Biharmonic equation: For solving the Biharmonic equation using the PINNs framework, the input features are the spatial coordinates \(\mathbf{x}:=(x,y)\); the field variables, \(w(\mathbf{x})\), \(\mathbf{M}(\mathbf{x})\), and \(\mathbf{Q}(\mathbf{x})\) are obtained using multiple densely connected independent ANNs, with each network approximating one of the outputs (Fig. 7). Different field variables approximated by the NNs are as follows: \[w(\mathbf{x})\simeq\Xi_{w}^{\sf NN}=\tilde{\sf w}^{\sf NN}(\mathbf{x}) \tag{39}\] \[\mathbf{M}(\mathbf{x})\simeq\Xi_{M}^{\sf NN}=\begin{bmatrix}\tilde{\sf M }_{xx}^{\sf NN}(\mathbf{x})&\tilde{\sf M}_{xy}^{\sf NN}(\mathbf{x})\\ \tilde{\sf M}_{yx}^{\sf NN}(\mathbf{x})&\tilde{\sf M}_{yy}^{\sf NN}(\mathbf{x})\end{bmatrix}; \qquad\mathbf{Q}(\mathbf{x})\simeq\Xi_{\mathbf{Q}}^{\sf NN}=\begin{bmatrix}\tilde{\sf Q}_{ xx}^{\sf NN}(\mathbf{x})\\ \tilde{\sf Q}_{yx}^{\sf NN}(\mathbf{x})\end{bmatrix} \tag{40}\] where, \(\Xi_{w}^{\sf NN}\), \(\Xi_{\mathbf{M}}^{\sf NN}\), and \(\Xi_{\mathbf{Q}}^{\sf NN}\) are the neural network appoxinations. From the NN approximations of the fields, the muti-objective loss function \(\Delta_{\mathcal{L}}(\mathbf{x},\theta)\) can be defined as: \[\Delta_{\mathcal{L}}(\mathbf{x},\theta)=\varphi\,\Delta_{\mathcal{L}}^{\Omega}+ \beta_{u}\,\Delta_{\mathcal{L}}^{\Gamma_{u}}+\beta_{t}\,\Delta_{\mathcal{L}} ^{\Gamma_{t}}+\alpha_{w}\,\Delta_{\mathcal{L}}^{w}+\alpha_{M}\,\Delta_{ \mathcal{L}}^{M}+\alpha_{Q}\,\Delta_{\mathcal{L}}^{Q} \tag{41}\] where, \(\Delta^{\Omega}_{\cal L}\), \(\Delta^{\Gamma_{u}}_{\cal L}\), \(\Delta^{\Gamma_{t}}_{\cal L}\) are the losses in the domain \(\Omega\), and along the boundaries \(\Gamma_{u}\) and \(\Gamma_{t}\), respectively. 
Their expressions are, \[\Delta^{\Omega}_{\cal L} = \frac{1}{N^{\Omega}_{c}}\sum_{l=1}^{N^{\Omega}_{c}}\lVert\nabla^{ \mathbf{2}}\nabla^{\mathbf{2}}w-\frac{\hat{q}}{\mathscr{D}}\rVert \tag{42}\] \[\Delta^{\Gamma_{u}}_{\cal L} = \frac{1}{N^{\Gamma_{u}}_{c}}\sum_{k=1}^{N^{\Gamma_{u}}_{c}} \lVert\mathbf{\Xi^{\mbox{\scriptsize{NN}}}_{\bf w}}(\mbox{\boldmath $x$}_{k|\Gamma_{u}})-\bar{w}(\mathbf{x}_{k|\Gamma_{u}})\rVert\] (43) \[\Delta^{\Gamma_{t}}_{\cal L} = \frac{1}{N^{\Gamma_{t}}_{c}}\sum_{j=1}^{N^{\Gamma_{t}}_{c}} \lVert\mathbf{\Xi^{\mbox{\scriptsize{NN}}}_{\bf M}}(\mbox{\boldmath $x$}_{j|\Gamma_{t}})-\mathbf{\bar{M}}(\mathbf{x}_{j|\Gamma_ {t}})\rVert \tag{44}\] where, \(\left\{\mathbf{x}_{1|\Omega},...,\mathbf{x}_{N^{\Omega}_{c} |\Omega}\right\}\), \(\left\{\mathbf{x}_{1|\Gamma_{u}},...,\mathbf{x}_{N^{\Gamma _{u}}_{c}|\Gamma_{u}}\right\}\), \(\left\{\mathbf{x}_{1|\Gamma_{t}},...,\mathbf{x}_{N^{\Gamma _{t}}_{c}|\Gamma_{t}}\right\}\) are the collocation points over the domain \(\Omega\), and along the boundaries \(\Gamma_{u}\) and \(\Gamma_{t}\), respectively; \(\varphi\in\mathbb{R}^{+}\) is the penalty coefficient for imposing the biharmonic relation in Eq. 36. Additionally, data driven estimates of \(w(\mathbf{x})\), \(\mathbf{M}(\mathbf{x})\), and \(\mathbf{Q}(\mathbf{x})\) at the collocation points across \(\Omega\) are used to define \(\Delta_{\cal L}(\mathbf{x},\theta)\). \[\Delta^{w}_{\cal L} = \frac{1}{N^{\Omega}_{c}}\sum_{l=1}^{N^{\Omega}_{c}}\lVert\mathbf{\Xi^{\mbox{\scriptsize{NN}}}_{\bf w}}(\mathbf{x}_{l| \Omega})-\hat{w}(\mathbf{x}_{l|\Omega})\rVert \tag{45}\] \[\Delta^{M}_{\cal L} = \frac{1}{N^{\Omega}_{c}}\sum_{l=1}^{N^{\Omega}_{c}}\lVert\mathbf{\Xi^{\mbox{\scriptsize{NN}}}_{\bf M}}(\mathbf{x}_{l| \Omega})-\mathbf{\hat{M}}(\mathbf{x}_{l|\Omega})\rVert\] (46) \[\Delta^{Q}_{\cal L} = \frac{1}{N^{\Omega}_{c}}\sum_{l=1}^{N^{\Omega}_{c}}\lVert\mathbf{\Xi^{\mbox{\scriptsize{NN}}}_{\bf Q}}(\mathbf{x}_{l| \Omega})-\mathbf{\hat{Q}}(\mathbf{x}_{l|\Omega})\rVert \tag{47}\] Here, \(\hat{w}(\mathbf{x}_{l|\Omega})\), \(\mathbf{\hat{M}}(\mathbf{x}_{l|\Omega})\), and \(\mathbf{\hat{Q}}(\mathbf{x}_{l|\Omega})\) are obtained by means of analytical or high-fidelity numerical solutions. Note, \(\alpha_{i}=1\); \(\forall\;i=w,M,Q\) for data-driven enhancement coupled with physics-informed regression by forcing the PDE constraints in Eqs. 36-38. Whereas, \(\alpha_{i}=0\) switches off the data-driven enhancement of accuracy of the NN approximations. The loss function in Eq. 41 can either be used for obtaining PINNs approximations of \(w(\mathbf{x})\), \(\mathbf{M}(\mathbf{x})\), and \(\mathbf{Q}(\mathbf{x})\) (i.e., forward problem ), or identification of model parameters \(\lambda\) and \(\mu\) (i.e., inverse problem ). ### Simply supported Kirchhoff-Love plate: A simply supported rectangular plate of size (\(a\times b\)) under a sinusoidal load \(q(x,y)=q_{0}\sin\frac{\pi x}{a}\sin\frac{\pi y}{b}\) is considered in Cartesian coordinates as shown in Fig. 7. Here, \(q_{0}\) is the intensity of the load at the center of the plate. The following boundary conditions are applied at the simply supported (SS) edges: \[w = 0;\hskip 28.452756pt\frac{\partial^{2}w}{\partial x^{2}}=0\qquad \text{for $x=0$ and $x=a$} \tag{48}\] \[w = 0;\hskip 28.452756pt\frac{\partial^{2}w}{\partial y^{2}}=0\qquad \text{for $y=0$ and $y=b$} \tag{49}\] **5.3.1 Analytical solution:** Along with the governing equation in Eq. 36 and the boundary conditions in Eqs. 
48- 49, the analytical solutions of \(w\) are obtained as: \[w=\frac{q_{0}}{\pi^{4}(\frac{1}{a^{2}}+\frac{1}{b^{2}})^{2}}\sin\frac{\pi x}{a }\sin\frac{\pi y}{b} \tag{50}\] Utilizing Eqs. 37-38, analytical solutions for the moments \(M_{xx}\), \(M_{yy}\), \(M_{xy}\) and the shearing forces, \(Q_{xx}\), \(Q_{yy}\) are obtained as: \[M_{xx} = \frac{q_{0}}{\pi^{2}\left(\frac{1}{a^{2}}+\frac{1}{b^{2}}\right) ^{2}}\left(\frac{1}{a^{2}}+\frac{\nu}{b^{2}}\right)\sin\frac{\pi x}{a}\sin \frac{\pi y}{b} \tag{51}\] \[M_{yy} = \frac{q_{0}}{\pi^{2}\left(\frac{1}{a^{2}}+\frac{1}{b^{2}}\right) ^{2}}\left(\frac{\nu}{a^{2}}+\frac{1}{b^{2}}\right)\sin\frac{\pi x}{a}\sin \frac{\pi y}{b}\] (52) \[M_{xy} = \frac{q_{0}(1-\nu)}{\pi^{2}\left(\frac{1}{a^{2}}+\frac{1}{b^{2}} \right)^{2}ab}\left(\frac{\nu}{a^{2}}+\frac{1}{b^{2}}\right)\cos\frac{\pi x}{a }\cos\frac{\pi y}{b}\] (53) \[Q_{xx} = \frac{q_{0}}{\pi a\left(\frac{1}{a^{2}}+\frac{1}{b^{2}}\right)} \cos\frac{\pi x}{a}\sin\frac{\pi y}{b}\] (54) \[Q_{yy} = \frac{q_{0}}{\pi a\left(\frac{1}{a^{2}}+\frac{1}{b^{2}}\right)} \sin\frac{\pi x}{a}\sin\frac{\pi y}{b} \tag{55}\] Figure 7: Benchmark problem setup for Kirchhoff-Love plate: (a, b) simply supported rectangular plate of \(a=200\) cm and \(b=300\) cm with thickness \(t=1\) cm subjected to transverse sinusoidal loading of intensity \(q_{0}=9.806\times 10^{-4}\) MPa; (b) distributions of total collocations points \(N_{c}=10,000\) on the problem domain and various boundaries during PINNs training. These analytical solutions, \(w(\mathbf{x})\), \(\mathbf{M}(\mathbf{x})\), and \(\mathbf{Q}(\mathbf{x})\) have been utilized as \(\hat{w}(\mathbf{x}_{l|\Omega})\), \(\mathbf{\hat{M}}(\mathbf{x}_{l|\Omega})\), and \(\mathbf{\hat{Q}}(\mathbf{x}_{l|\Omega})\) for data driven enhancement in Eqs. 45-47, respectively for the PINNs approximations of the field variables. ### PINNs solutions for the Biharmonic equation: For the benchmark problem, a rectangular plate (\(a=200\) cm, \(b=300\) cm) with thickness \(t=1\) cm is considered with the following material properties: Young's modulus of elasticity \(E\)= 202017.03 MPa, Poisson's ratio \(\nu=0.25\), and flexural rigidity \(\mathscr{D}\)= 17957 N-m. The sinusoidal load intensity \(q_{0}=9.806\times 10^{-4}\) MPa is presribed as shown in Fig. 7. A similar problem has been also solved in the recent work(Vahab et al., 2021). Unless otherwise stated, the total number of randomly distributed collocation points, \(N_{c}=10,000\) is used during the training of the PINNs model. Additionally, a learning rate of 0.001, and a batch size of 50 were prescribed for optimal accuracy and faster convergence of the optimization scheme. For better accuracy during training, the Adam optimization scheme is employed with 1000 epochs. In the present study, three different activation functions were tested (see section 5.4.1). In Fig. 8(a-f), the analytical solution for various fields including plate deflection \(w\), moments \(M_{xx}\), \(M_{yy}\), \(M_{xy}\), and shearing forces \(Q_{xx}\), and \(Q_{yy}\) in Eqs. 50-55 are shown. Corresponding approximations from PINNs for various activation functions are shown in Fig. 8 (a-f) which illustrate the efficacy of the proposed model in terms of accuracy and robustness as excellent agreement with the analytical solutions is evident. #### 5.4.1 Influence of the activation function: The accuracy of the field variables and epoch evolution of the loss functions are explored for various activation functions for solving the fourth-order biharmonic PDE. 
To this end, three different activations, i.e., ReLU, sigmoid, and tanh are selected; the network used is defined by \(\mathscr{N}=20,L_{n}=5\). The corresponding results are depicted in Fig. 8 (g-l). Based on the results, all the activations perform well as the NN approximations are in good agreement with the analytical solutions both qualitatively and quantitatively. For further insight into the Figure 8: Solution of field variables obtained from (a-f) analytical solutions (left to right): \(w\), \(M_{xx}\), \(M_{yy}\), \(M_{xy}\), \(Q_{xx}\), and \(Q_{yy}\) ; (g-l) proposed PINNs results (left to right): \(\tilde{\mathsf{w}}^{\mathsf{NN}}\), \(\tilde{\mathsf{M}}^{\mathsf{NN}}_{xx}\), \(\tilde{\mathsf{M}}^{\mathsf{NN}}_{xy}\), \(\tilde{\mathsf{M}}^{\mathsf{NN}}_{yy}\), \(\tilde{\mathsf{Q}}^{\mathsf{NN}}_{xx}\), and \(\tilde{\mathsf{Q}}^{\mathsf{NN}}_{yy}\) for activation functions (i) ReLU, (ii) sigmoid, and (iii) tanh. Figure 10: Comparison of (a) total loss \(\Delta_{\mathcal{L}}\); (b) constitutive loss \(\Delta_{\mathcal{L}}^{\Omega}\) during training for \(\tanh\), sigmoid and ReLU activation functions for network parameters \(\mathcal{N}=20,L_{n}=5\). influence of an activation function on the accuracy of the solutions, the absolute error between the analytical solutions and the PINNs approximations for each field variable is compared for the solutions obtained with different activations in Fig. 9 (a-f). From the comparison, ReLU provides the least absolute error distributions in solving the Biharmonic equation for the simply supported plate. Although, the sigmoid activation provides the best result for \(|M_{xy}-\tilde{\mathsf{M}}_{xy}^{\mathsf{NN}}|\), the absolute error for the rest of the fields is higher compared to the solutions obtained with ReLU. Because of the sinusoidal nature of the solution, it was expected that tanh activation might be specifically suitable for this problem. Surprisingly, tanh provides worse results compared to ReLU and sigmoid activations. This can be due to the complex nature of the solution space, where ReLU can provide better adaptability during training. Furthermore, in Fig. 10, the epoch evolution of the total loss \(\Delta_{\mathcal{L}}^{\Omega}\), and constitutive loss \(\Delta_{\mathcal{L}}^{\Omega}\) is compared for different activation functions. For a particular epoch, ReLU performs better than the other two activations for \(\Delta_{\mathcal{L}}\). For \(\Delta_{\mathcal{L}}^{\Omega}\), tanh activation shows better convergence and the lowest loss value at the end of training due to the sinusoidal nature of the solution of the Biharmonic PDE. However, the fluctuations in the loss curve for tanh have a relatively higher variance compared to ReLU and sigmoid. As reported in Table 3, overall, performance in terms of various loss components at the end of training is superior for the ReLU activation for solving the Biharmonic PDE using the proposed PINNs framework. Additionally, the model with the ReLU activation requires the least training time \(t_{tr}\), indicating better convergence and faster computation of the forward and backpropagation steps. As was found for the linear elasticity problem, PINNs solutions are sensitive to the NN architecture. Various parameters that influence the NN architectures, the number of neurons in each hidden layer \(\mathscr{N}\), and the total number of hidden layers \(L_{n}\), on the accuracy of the model and the efficiency of training the model have been explored herein. 
Because of its superior performance for the problem, ReLU is chosen as the activation function. Four different networks with combinations \(\mathscr{N}=20,40\), and \(L_{n}=5,10\) were trained. Corresponding network parameters (\(n_{p}\)), model training time (\(t_{tr}\)), and values of different loss components at the end of training have been presented in Table. 4. The comparisons of the absolute error between the analytical solutions and the PINNs approximations for each field are shown in Fig. 11. Comparisons of the total loss \(\Delta_{\mathcal{L}}\), the constitutive loss \(\Delta_{\mathcal{L}}^{\Omega}\) for various combinations of network parameters, \(\mathscr{N}\) and \(L_{n}\) are shown in Fig. 12. Based on the comparisons shown in Fig. 11, increased network depth improves the accuracy of the PINNs approximations for all variables. Predictions by both networks with \(L_{n}=10\) are superior compared to the analytical solutions for the chosen number of collocation points. On the other hand, an increase in the number of neurons in each layer increases model prediction variance which is reflected in the higher absolute error comparisons for \(\mathscr{N}=20,40\) and \(L_{n}=10\). Similar conclusions may be drawn based on Fig. 12 and Table. 4. The total and constitutive losses are minimum for \(\mathscr{N}=40\) and \(L_{n}=10\) at the end of training. However, the approximations by this model have higher variance. Expectedly, more complex models (higher \(L_{n}\)), or with larger \(n_{p}\), require longer training time \(t_{tr}\). For the chosen number of collocation points, \(L_{n}=10\) is optimal. #### 5.4.3 Smart initialization of data-driven enhancement: In this section, we explore the applicability of data-driven enhancement in the proposed PINNs framework to improve the accuracy of the solution. Initially, the network is trained with relatively low \(N_{c}=10,000\). The pre-trained model is then trained for the higher number of collocation datasets \(N_{c}=15,000\) and \(N_{c}=20,000\) to further improve the model accuracy. Figure 12: Comparison of (a) total loss \(\Delta_{\mathcal{L}}^{\Omega}\); (b) constitutive loss \(\Delta_{\mathcal{L}}^{\Omega}\) for various combinations of network parameters \(\mathscr{N}\) and \(L_{n}\) considering ReLU activation. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline Network & \(n_{p}\) & \(t_{tr}\) & \(\Delta_{\mathcal{L}}^{\Omega}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{u}}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{t}}\) & \(\Delta_{\mathcal{L}}^{w}\) & \(\Delta_{\mathcal{L}}^{M}\) & \(\Delta_{\mathcal{L}}^{Q}\) & \(\Delta_{\mathcal{L}}\) \\ identifier & & \((min)\) & & & & & & & \\ \hline N-1 (\(\mathscr{N}=20\), \(L_{n}=5\)) & 12,940 & 23.1 & 5.34 & 132.31 & 1672.91 & 278.43 & 498.76 & 101.36 & 2689.11 \\ N-2 (\(\mathscr{N}=40\), \(L_{n}=5\)) & 52,760 & 29.8 & 0.47 & 35.13 & 467.34 & 128.38 & 198.11 & 40.29 & 869.72 \\ N-3 (\(\mathscr{N}=20\), \(L_{n}=10\)) & 32,056 & 31.7 & 0.07 & 82.15 & 86.84 & 77.82 & 298.01 & 10.17 & 555.06 \\ N-4 (\(\mathscr{N}=40\), \(L_{n}=10\)) & 126,224 & 42.8 & 0.009 & 0.67 & 5.12 & 4.21 & 0.53 & 0.17 & 10.709 \\ \hline \end{tabular} \end{table} Table 4: Influence of network parameters \(\mathscr{N}\) and \(L_{n}\) on training times \(t_{tr}\) and final values of various loss components (in \(10^{-05}\)) for \(\tanh\) activation. The idea is to speed up the training by utilizing pre-trained weights; the initial states of the PINNs models in the later phases of training are not random anymore. 
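A schematic sketch of this warm-start procedure is given below. It assumes a compiled Keras-style PINN model and a helper that assembles the training targets at the sampled points (both are placeholders, not part of the original implementation); the plate dimensions and the epoch schedule follow the values reported in this section.

```python
import numpy as np

def sample_collocation(n_points, a=200.0, b=300.0, seed=None):
    """Uniform random collocation points over the plate domain [0, a] x [0, b]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(low=[0.0, 0.0], high=[a, b], size=(n_points, 2))

def staged_training(pinn_model, make_targets, batch_size=50):
    """Warm-started training: phase 1 from scratch, later phases reuse weights.

    `pinn_model` is any compiled Keras-style model and `make_targets` builds the
    physics/data targets at the sampled points (both placeholders here).
    """
    schedule = [(10_000, 1000), (15_000, 250), (20_000, 250)]  # (N_c, epochs)
    for n_c, epochs in schedule:
        xy = sample_collocation(n_c)
        # Weights learned in the previous phase are kept, so later phases
        # start from an informed state and need far fewer epochs to converge.
        pinn_model.fit(xy, make_targets(xy), epochs=epochs, batch_size=batch_size)
    return pinn_model
```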
The speed-up is reflected in Figs. 13-(a, b) when the convergence of the loss curves (\(\Delta_{\mathcal{L}}\) and \(\Delta_{\mathcal{L}}^{\Omega}\)) for the pre-trained models corresponding to \(N_{c}=15,000\) and \(N_{c}=20,000\) are much improved compared to the first training phase with \(N_{c}=10,000\). In Fig. 13-(c), the absolute errors between the approximations and analytical solutions are shown which demonstrate significant improvement of the PINNs approximations with the increase in \(N_{c}\). Additionally, parameters related to the efficiency of the network training processes with initialization of data-driven enhancement are reported in Tab. 5. The loss terms quickly reduce by orders of magnitude in the second training phase which indicates that for the considered network architecture, \(N_{c}=15000\) is possibly optimal. ## 6 Discussions : In the current study, a generalized PINNs framework for solving problems in linear continuum elasticity in the field of solid mechanics is presented. The fundamentals of the PINNs framework involve a construction of the loss function for physics-informed learning of the NNs through the embedding of the linear constraint during training. Following the PINNs philosophy to solve the linear elastic problem accurately, a multi-objective loss function has been formulated and implemented. The proposed multi-objective loss function consists of the residual of the governing PDE, various boundary conditions, and data-driven physical knowledge fitting terms. Additionally, weights corresponding to the terms in the loss function dictate \begin{table} \begin{tabular}{c c c c c c c c c c} \hline Network & \(N_{c}\) & Epochs & \(\Delta_{\mathcal{L}}^{\Omega}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{t}}\) & \(\Delta_{\mathcal{L}}^{\Gamma_{u}}\) & \(\Delta_{\mathcal{L}}^{w}\) & \(\Delta_{\mathcal{L}}^{M}\) & \(\Delta_{\mathcal{L}}^{Q}\) & \(\Delta_{\mathcal{L}}\) & \(t_{tr}\) \\ identifier & & & & & & & & & \((min)\) \\ \hline N-1 & 10000 & 1000 & 5.34 & 132.31 & 1672.91 & 278.43 & 498.76 & 101.36 & 2689.11 & 23.1 \\ N-TL1 & 15000 & 250 & 0.025 & 1.31 & 17.34 & 1.43 & 13.11 & 9.89 & 43.11 & 5.1 \\ N-TL2 & 20000 & 250 & 0.005 & 0.71 & 2.96 & 2.01 & 2.56 & 0.87 & 9.11 & 7.2 \\ \hline \end{tabular} \end{table} Table 5: Network parameters, training time, and the component of loss for different smart initialization of data-driven enhancement models. the emphasis on satisfying the specific loss terms. To demonstrate the efficacy of the framework, the Airy solution to an end-loaded cantilever beam and the Kirchhoff-Love plate theory governed by fourth-order Biharmonic PDE has been solved. The proposed PINNs framework is shown to accurately solve different fields in both problems. Parametric investigations on activation functions and network architectures highlight the scope of improvement in terms of solution accuracy and performance. Data-driven enhancement of the PINNs approximations using analytical solutions significantly boosts accuracy and speed only using minimal network parameters. Therefore, such an approach can be employed to enhance solution accuracy for complex PDEs. Additionally, the applicability of a smart initialization of data-driven enhancement learning-based approach quickening the training process and also improving model accuracy have been illustrated. Such an approach would be key in achieving computational efficiency beyond conventional computational methods for solving linear continuum elasticity. 
The proposed PINNs elasticity solvers utilize Tensorflow as the backend which can be easily deployed in CPU/ GPU clusters, whereas, conventional algorithms lack such adaptability. Thus, it opens new possibilities for solving complex elasticity problems that have remained unsolved by conventional numerical algorithms in the regime of continuum mechanics. It is however worth noting that exploitation of the computational advantages of the PINNs framework depends on various factors including the choice of the network architectures, hyperparameter tuning, sampling techniques (distribution) of collocation points, etc. It has been shown that appropriate combinations of such factors significantly improve the training process and the trained models. In the present study, random sampling of the collocation points has been considered which is simple, yet powerful, that can lead to a significantly better reconstruction of the elastic fields. Importantly, this approach does not increase computational complexity, and it is easy to implement. However, in elastic/elastoplastic PDE problem which exhibits local behavior (e.g., in presence of sharp, or very localized, features) or problems with singularities the performance of PINNs may vary drastically with various sampling procedures (Daw et al., 2022; Leiteritz and Pfluger, 2021). To overcome such an issue, a failure-informed adaptive enrichment strategy such as failure-informed PINNs (FI-PINNs) can be employed that adopts the failure probability as the posterior error indicator to generate new training points in the failure region (Gao et al., 2022). Furthermore, the basic resampling scheme can be further improved with a gradient-based adaptive scheme to relocate the collocation points through a cosine-annealing to areas with higher loss gradient, without increasing the total number of points that demonstrated significant improvement under relatively fewer number of collocation points and sharper forcing function (Subramanian et al., 2022). In addition, the evolutionary sampling (Evo) method (Daw et al., 2022) that can incrementally accumulate collocation points in regions of high PDE residuals can be an efficient choice for solving various time-dependent PDEs with little to no computational overhead. Instead of using a random approach such as Latin Hypercube sampling, in the future, different deterministic and pseudo-random sampling strategies such as Sparse Grid sampling or Sobol Sequences can be employed to further improve the performance of the model. Furthermore, it is critical to obtain the statics of saturation along different parts of the solution domain during the training of DNNs (Glorot and Bengio, 2010; Rakitianskaia and Engelbrecht, 2015). The saturation occurs when the hidden units of a DNN predominantly output values close to the asymptotic ends of the activation function range which reduces the particular PINNs model to a binary state, thus limiting the overall information capacity of the NN (Rakitianskaia and Engelbrecht, 2015; Bai et al., 2019). The saturated units can make gradient descent learning slow and inefficient due to small derivative values near the asymptotes which can hinder the training PINNs efficiently (Bai et al., 2019). Thus, in the future, NN saturation can be studied quantitatively in relation to the ability of NNs to learn, generalize, and the degree of regression accuracy. In addition, various weighting coefficients of the loss terms in Eq. 
8 and implementation of second-order optimization techniques (Tan and Lim, 2019) can accelerate the training significantly. Based on the performance of the PINNs framework herein, further studies quantifying the computational gains of the PINNs approach compared to conventional numerical methods are in order. The proposed approach can be extended to the solution of various computational mechanics problems such as soil plasticity (Chen and Baladi, 1985; Bousshine et al., 2001), strain-gradient plasticity (Guha et al., 2013, 2014), composite modeling (Roy, 2021), etc. Furthermore, the present model can be employed to predict microstructure evolution in the phase-field (PF) approach, including various solid-solid phase transitions (PTs) (Levitas et al., 2013; Levitas and Roy, 2015; Roy, 2020) and solid-solid PT via 
intermediate melting (Levitas and Roy, 2016; Roy, 2021a,f,e,d,b, 2022d), etc. ## 7 Conclusions : Summarizing, the current work presents a deep learning framework based on the fundamentals of PINNs theory for the solution of linear elasticity problems in continuum mechanics. A multi-objective loss function is proposed for the linear elastic solid problems that include governing PDE, Dirichlet, and Neumann boundary conditions across randomly chosen collocation points in the problem domain. Multiple deep network models trained to predict different fields result in a more accurate representation. Traditional ML/ DL approaches that only rely on fitting a model that establishes complex, high-dimensional, non-linear relationships between the input features and outputs, are unable to incorporate rich information available through governing equations/ physics-based mathematical modeling of physical phenomena. Conventional computational techniques on the other hand rely completely on such physical information for prediction. The PINNs approach combines the benefits of the DL techniques in the extraction of complex relations from data with the advantages of the conventional numerical techniques for physical modeling. The proposed method may be extended to nonlinear elasticity, viscoplasticity, elastoplasticity, and various other mechanics and material science problems. The present work builds a solid foundation for new promising avenues for future work in machine learning applications in solid mechanics. **Acknowledgements:** The support of the Aeronautical Research and Development Board (Grant No. DARO/08/1051450/M/I) is gratefully acknowledged. **Competing interests:** The author declares no competing interests.
2307.14781
Contrastive Knowledge Amalgamation for Unsupervised Image Classification
Knowledge amalgamation (KA) aims to learn a compact student model to handle the joint objective from multiple teacher models that are each specialized for their own tasks. Current methods focus on coarsely aligning teachers and students in the common representation space, making it difficult for the student to learn the proper decision boundaries from a set of heterogeneous teachers. Besides, the KL divergence in previous works only minimizes the probability distribution difference between teachers and the student, ignoring the intrinsic characteristics of teachers. Therefore, we propose a novel Contrastive Knowledge Amalgamation (CKA) framework, which introduces contrastive losses and an alignment loss to achieve intra-class cohesion and inter-class separation. Contrastive losses within and across models are designed to widen the distance between representations of different classes. The alignment loss is introduced to minimize the sample-level distribution differences of teacher-student models in the common representation space. Furthermore, the student learns heterogeneous unsupervised classification tasks through soft targets efficiently and flexibly in the task-level amalgamation. Extensive experiments on benchmarks demonstrate the generalization capability of CKA in the amalgamation of a specific task as well as multiple tasks. Comprehensive ablation studies provide further insight into our CKA.
Shangde Gao, Yichao Fu, Ke Liu, Yuqiang Han
2023-07-27T11:21:14Z
http://arxiv.org/abs/2307.14781v1
# Contrastive Knowledge Amalgamation for Unsupervised Image Classification ###### Abstract Knowledge amalgamation (KA) aims to learn a compact student model to handle the joint objective from multiple teacher models that are specialized for their own tasks respectively. Current methods focus on coarsely aligning teachers and students in the common representation space, making it difficult for the student to learn the proper decision boundaries from a set of heterogeneous teachers. Besides, the KL divergence in previous works only minimizes the probability distribution difference between teachers and the student, ignoring the intrinsic characteristics of teachers. Therefore, we propose a novel Contrastive Knowledge Amalgamation (CKA) framework, which introduces contrastive losses and an alignment loss to achieve intra-class cohesion and inter-class separation. Intra- and inter-model contrastive losses are designed to widen the distance between representations of different classes. The alignment loss is introduced to minimize the sample-level distribution differences of teacher-student models in the common representation space. Furthermore, the student learns heterogeneous unsupervised classification tasks through soft targets efficiently and flexibly in the task-level amalgamation. Extensive experiments on benchmarks demonstrate the generalization capability of CKA in the amalgamation of a specific task as well as multiple tasks. Comprehensive ablation studies provide further insight into our CKA. Keywords: Knowledge amalgamation, Contrastive learning. ## 1 Introduction Reusing pre-trained models to obtain lightweight ones and reduce the computation cost of training a new model from scratch has been a trending research topic in recent years [8, 7]. Knowledge Distillation (KD) methods [6] train a light-weight target model (the _"student"_ model) by learning from a well-trained cumbersome model (the _"teacher"_ model), which improves the performance of students with any architecture compared to models trained from scratch. Knowledge amalgamation (KA) [13, 11] aims to train a versatile student model by transferring knowledge from multiple pre-trained teachers. The above method requires mapping teachers and the student to a common representation space. The student learns similar intermediate features through the aggregated cues from the pre-trained teachers. Further, by integrating probability knowledge from the pre-trained teachers using KL divergence, the student can predict over the joint label set of the teachers. However, complex optimization designs are required for heterogeneous teachers in previous works. Besides, direct application of previous KA methods to downstream tasks causes severe performance degradation because of domain shifts, additional noise, as well as information loss in feature projections. Moreover, due to the imperfection of pre-trained teachers and the absence of human annotation, the supervision signals for the student are ambiguous. In this work, we endeavor to explore an efficient and effective KA scheme for unsupervised image classification. We aim to transfer knowledge as much as possible from pre-trained teachers who specialize in heterogeneous unsupervised image classification tasks to a compact and versatile student. For example, if one teacher classifies cars and the other classifies airplanes, the student should be able to classify both cars and airplanes. To achieve this, we first extend the contrastive learning paradigm to the knowledge fusion environment, for two reasons. 
Firstly, CL can effectively push positive sample pairs together and pull negative sample pairs apart without the need for manual annotations. Additionally, different teacher and student models are natural augmentation schemes, and their combination significantly increases the number of positive and negative samples for training the student. Secondly, supervised contrastive loss models have been shown to outperform traditional cross-entropy losses[9]. Thus, they can be effectively used in teacher pre-training to alleviate the incompleteness and unreliability of supervising teacher models. We propose a novel **C**ontrastive **K**nowledge **A**malgamation, refered to as CKA, by implementing the CKA framework via DNNs for unsupervised classification. Concretely, we first construct a common representation space based on the shared multilayer perceptron (MLP), and design contrastive and alignment losses to achieve intra-class cohesion and inter-class separation of samples. As a way of unsupervised learning, the contrastive loss intra- and inter- models aims to enlarge the distance between feature representations of different sample categories and reduce the distance between feature representations of the same sample category. Besides, alignment losses are proposed to minimize the sample-level distribution difference between different models. Apart from learning the teachers' features, a soft target distillation loss finally is designed to effectively and flexibly transfer probability knowledge from pre-trained teachers to a student, enabling the student to make inferences similar to or the same as the teachers' during task-level amalgamation. The contributions of this work are summarized as follows: * We propose a novel model reuse paradigm to supervise the student model without annotations, named CKA, which introduces contrastive losses and an alignment loss to achieve intra-class cohesion and inter-class separation. * We design a soft target distillation loss to effectively transfer knowledge from the pre-trained teachers to a student in the output probability space. * Extensive experiments on standard benchmarks demonstrate that CKA provides more accurate supervision, and is generalizable for amalgamating heterogeneous teachers. ## 2 Related Works ### Knowledge Distillation & Knowledge Amalgamation Knowledge distillation (KD) [6, 16] is a method of transferring knowledge from one model to another. However, existing approaches are still performed under a single teacher-student relationship with a sharing task, are not applicable to multiple and heterogeneous teachers. Knowledge amalgamation (KA) aims to acquire a compact student model capable of handling the comprehensive joint objective of multiple teacher models, each specialized in its own task. There are two kinds of approaches: (1) _Homogeneous_ KA, where all teachers and students have identical network architectures [13]. (2) _Heterogeneous_ KA, where each teacher has different architecture and specializes in its own class set [14, 11]. Among these, [14] matches the outputs of students to the corresponding teachers, while [11] aligns the features of students and teachers in a shared latent space by minimizing the maximum mean discrepancy. However, when facing with the imperfect teachers with unreliable supervisions, previous studies suffer from conflicting supervisions in the student training process, which significantly harms the performance of the student model. 
To the best of our knowledge, this is the first work to explore the CKA paradigm for unsupervised classification tasks. ### Contrastive Learning Contrastive Learning is an unsupervised learning method where supervision is automatically generated from the data. Currently, contrastive learning (CL) has achieved state-of-the-art performance in representation learning [4, 1, 2]. SimCLR [1] performs data augmentation on the raw input data and maps it to a feature space, constructing a contrastive loss (i.e., the InfoNCE loss) to maximize the similarity between positive pairs and minimize the similarity between negative pairs. BYOL [4] and SimSiam [2] extend the work by designing their losses to measure the similarity between positive samples, effectively eliminating the need for negative samples. However, all of these approaches are tailored to a single-model, single-task setting. Our method extends the concept of CL to a knowledge amalgamation environment, designing intra- and inter-model contrastive losses to explore model-agnostic semantic similarity and further apply them to downstream unsupervised multi-classification tasks. ## 3 Problem Formulation We define the problem of knowledge amalgamation as follows. Assume that we are given \(\mathcal{T}=\left\{\mathcal{T}_{t}\right\}_{t=1}^{N}\) well pre-trained teachers, where each teacher \(\mathcal{T}_{t}\) specializes in a distinct classification task, i.e., a set of fully labeled classes \(\mathcal{T}_{t}=(\mathcal{D}_{t};\mathcal{Y}^{t})\). Our proposal is to learn a versatile student with an unlabeled dataset \(\mathcal{D}=\bigcup_{t=1}^{N}\mathcal{D}_{t}\), which is able to perform predictions over the comprehensive class set of the distinct-task teachers, \(\mathcal{Y}=\bigcup_{t=1}^{N}\mathcal{Y}^{t}\). In our KA setting, the \(N\) tasks \(\mathcal{T}=\left\{\mathcal{T}_{t}\right\}_{t=1}^{N}\) can be built on either the same dataset or across datasets. Without loss of generality, we assume that for any two tasks \(\mathcal{T}_{i},\mathcal{T}_{j}\in\mathcal{T}\), their specialties are totally disjoint, i.e., \(\mathcal{Y}^{i}\cap\mathcal{Y}^{j}=\oslash\). ## 4 Approach This work aims to build a contrastive knowledge amalgamation framework and implement it with DNNs for unsupervised image classification. Knowledge amalgamation is particularly challenging when teacher-student structures are heterogeneous and data annotation is not available. To tackle this difficulty, we first leverage the distance between feature representations of the samples, and introduce contrastive and alignment losses to achieve intra-class coherence and inter-class separation of the feature representations. Additionally, we design a soft-target distillation loss to effectively transfer the soft-target probability knowledge from the pre-trained teachers to the student. The overview of the proposed CKA is shown in Figure 1, in which the knowledge of the pre-trained teachers is fixed. By training the student model on downstream tasks, the student is capable of making inferences that are similar or identical to those of its teachers. ### Margin-based Intra- and Inter-model Contrast As no annotated data are available, we make novel use of contrastive learning (CL) to construct supervision for guiding the student. CL aims to maximize the similarities of positive pairs while minimizing those of negative ones [1]. The characteristics of pairs can be defined by different criteria. 
Motivated by this, we develop two types of contrastive losses, a margin-based student-internal contrast (intra-model contrast) and a distance-based teacher-student contrast (inter-model contrast), to increase the distance between feature representations of different sample classes and decrease the distance between feature representations of the same sample class. The overall schematic is shown in Figure 2. Figure 1: The overview of contrastive knowledge amalgamation. #### Margin-based Intra-model Contrast To begin with, we describe the standard contrastive loss term, following the most popular setup of SimCLR [1], which is defined as: \[\mathcal{L}(\tilde{x},\hat{x})=-\log\frac{e^{s(\tilde{z},\hat{z})/\tau}}{e^{s(\tilde{z},\hat{z})/\tau}+\sum_{z^{-}\in\Lambda^{-}}e^{s(\tilde{z},z^{-})/\tau}} \tag{1}\] Here, by way of randomized data augmentation \(\text{Aug}(\cdot)\), two different views \(\tilde{x}\) and \(\hat{x}\) for the input sample \(x\) are generated. The two images are then fed into an encoder network \(\mathcal{E}(x)\), followed by a two-layer nonlinear projection head MLP \(h(\cdot)\), yielding a pair of \(L_{2}\)-normalized positive embeddings \(\hat{z}=h(\mathcal{E}(\hat{x}))\) and \(\tilde{z}=h(\mathcal{E}(\tilde{x}))\). \(z^{-}\in\Lambda^{-}\) represents a negative sample embedding in the mini-batch. \(s(\cdot,\cdot)\) denotes the _cosine similarity_ for measuring the relationship between the embedding pair \(\tilde{z}\) and \(\hat{z}\) (resp. \(\tilde{x}\) and \(\hat{x}\)), formulated as: \[s\left(\tilde{z},\hat{z}\right)=\frac{\tilde{z}\,\hat{z}^{\top}}{\|\tilde{z}\|\cdot\|\hat{z}\|} \tag{2}\] To prevent the loss from being dominated by easy negatives (samples of different classes with little similarity), a constant margin \(\alpha\) is introduced so that only negative pairs with similarity larger than \(\alpha\) contribute to the contrastive loss in Eqn. 1. Formally, the margin-based intra-model contrastive loss for training the student model is denoted as: \[\mathcal{L}_{intra}=\left(1-s\left(\tilde{z},\hat{z}\right)\right)+\sum_{z^{-}\in\Lambda^{-}}\left(s\left(\tilde{z},z^{-}\right)-\alpha\right) \tag{3}\] #### Distance-based Inter-model Contrast For inter-model contrast, data across models are embedded as point distributions in high-dimensional vector spaces. To measure the inter-model distance between two such point distributions, we model two _metric measure spaces_ (mm-spaces) \(\mathcal{X}=(X,d_{X},\mu)\) and \(\mathcal{Y}=(Y,d_{Y},\nu)\), where the data \(X\) (resp. \(Y\)) is a complete separable set endowed with a distance \(d_{X}\) and a positive Borel measure \(\mu\in\mathcal{M}_{+}(X)\). Those two mm-spaces are considered up to isometry (denoted \(\mathcal{X}\sim\mathcal{Y}\)), meaning that there is a bijection \(\psi:\text{spt}(\mu)\rightarrow\text{spt}(\nu)\) (where \(\text{spt}(\mu)\) is the support of \(\mu\)) such that \(d_{X}(x,y)=d_{Y}(\psi(x),\psi(y))\) and \(\psi_{\sharp}\mu=\nu\). Here \(\psi_{\sharp}\) is the push-forward operator. Figure 2: Illustrations of intra- and inter-model contrast loss via the teacher-student pair. 
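Before continuing with the inter-model construction, a minimal PyTorch-style sketch of the margin-based intra-model loss in Eqns. (1)-(3) may be useful. The function name, the batch-level averaging, and the hinge used to encode that only negatives above the margin \(\alpha\) contribute are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def margin_intra_model_loss(z_a: torch.Tensor, z_b: torch.Tensor, alpha: float = 0.4) -> torch.Tensor:
    """Sketch of the margin-based intra-model contrast (Eqns. 1-3).

    z_a, z_b: (B, d) embeddings of two augmented views of the same mini-batch
    (tilde-z and hat-z in the text); row i of each forms a positive pair, while
    all other rows of z_b act as negatives (Lambda^-).
    """
    z_a = F.normalize(z_a, dim=1)            # L2 normalisation, so the dot
    z_b = F.normalize(z_b, dim=1)            # product below is cosine similarity (Eqn. 2)
    sim = z_a @ z_b.t()                      # (B, B) pairwise similarities
    pos = sim.diag()                         # s(tilde-z_i, hat-z_i)

    # Positive term: pull matched views together.
    loss_pos = (1.0 - pos).mean()

    # Negative term: following the text, only negatives whose similarity
    # exceeds the margin alpha contribute (hard negatives); averaged over the batch.
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    loss_neg = torch.clamp(sim[off_diag] - alpha, min=0.0).sum() / sim.size(0)
    return loss_pos + loss_neg
```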
Specifically, let \(\mu\in\mathcal{P}\left(\mathbb{R}^{p}\right)\) and \(\nu\in\mathcal{P}\left(\mathbb{R}^{q}\right)\) with \(p\neq q\) to be discrete measures on mm-spaces with \(\mu=\sum_{i=1}^{n}a_{i}\delta_{x_{i}}\) (here \(\delta_{x_{i}}\) is the mass at \(x_{i}\)) and \(\nu=\sum_{i=1}^{n}b_{j}\delta_{y_{j}}\) of supports \(X\) and \(Y\), where \(a\in\Sigma_{n}\) and \(b\in\Sigma_{m}\) are simplex histograms. The distance \(\mathcal{D}\) between those points is defined as: \[\mathcal{D}\left(\mathcal{X},\mathcal{Y}\right)^{q}=\sum_{i,j,k,l}\left|d_{X} \left(x_{i},x_{k}\right)-d_{Y}\left(y_{j},y_{l}\right)\right|^{q}\pi_{i,j}\pi_ {k,l} \tag{4}\] Here \(d_{X}(x_{i},x_{k}):\mathbb{R}^{p}\times\mathbb{R}^{p}\rightarrow\mathbb{R}_{+}\), measures the euclidean distance between sample points \(x_{i}\) and \(x_{k}\) in \(\mu\). The intuition underpinning the definition of this distance is that there exists a fuzzy correspondence map \(\pi\in\mathcal{P}(X\times Y)\) between the points of the distributions, which tends to associate pairs of points with similar distances within each pair: the more similar \(d_{X}(x_{i},x_{k})\) is to \(d_{Y}(y_{j},y_{l})\), the stronger the transport coefficients \(\pi_{i,j}\) and \(\pi_{k,l}\) are. From a semantic perspective, by simultaneously learning the model structures of both the teacher and student, this distance can measure the similarity between samples, reducing the distance between the feature representations of similar sample classes and increasing the distance between feature representations of dissimilar sample classes. Given a mini-batch size \(B\) of feature maps \(P\in\mathbb{R}^{B\times c\times h\times w}\) and \(Q_{t}\in\mathbb{R}^{B\times c\times h\times w}\) extracted from the student encoder and \(t\)-th teacher encoder, where \(c\), \(h\), and \(w\) denote the number of channel, height and width of the feature maps respectively. For simplicity, we omit the superscripts and subscripts and denote the feature maps of two different models as \(P\) and \(Q\). The distance metric on \(P\) and \(Q\) is designed firstly to guide the contrast across different models, i.e., inter-model contrast. To this end, we first reshape \(P\) and \(Q\) to \(\mathbb{R}^{B\times m}\), i.e., \(P=[p_{1},p_{2},\ldots,p_{B}]\) and \(Q=[q_{1},q_{2},\ldots,q_{B}]\), where \(m=c\times h\times w\) is the feature vectors. The transport map \(\pi^{p},\pi^{q}\in\mathbb{R}^{B\times B}\) for \(P\) and \(Q\) can be derived by: \[\pi_{i,j}=\frac{e^{-d(p_{i},p_{j})}}{\sum_{j=1}^{N}e^{-d(p_{i},p_{j})}} \tag{5}\] where \(d(\cdot,\cdot)\) is the mm-space distance between two instances \(p_{i}\) and \(p_{j}\). Unless stated otherwise, euclidean distance is used in our experiments. As for any \(k\)-th row vector in \(\pi^{p}\) and \(\pi^{q}\), \(\pi_{k}^{p}\) and \(\pi_{k}^{q}\) can be termed as positive pairs because they both semantically illustrate the distance of \(k\)-th sample and others in the mini-batch \(N\), regardless of the model representation. Our distance-based inter-model contrastive loss, discovering fine-gained sample similarity matching between the student and each teacher, can be defined as: \[\mathcal{L}_{inter}=\sum_{t=1}^{N}\left(\left(1-s\left(\pi^{p},\pi_{t+}^{q} \right)\right)+\sum_{\pi_{t\cdot}^{q}\in\Lambda^{-}}s\left(\pi^{p},\pi_{t\cdot }^{q}\right)\right) \tag{6}\] where \(\pi_{t\cdot}^{q}\) and \(\pi_{t\cdot}^{q}\) denote the distance-based negative and positive pairs. 
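A hedged sketch may help clarify Eqns. (5)-(6): each model's transport map is a row-wise softmax over negated pairwise distances between instances in the mini-batch, and matching rows of the student and teacher maps act as positive pairs while mismatched rows act as negatives. The names, the Euclidean distance choice, and the batch normalisation below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def transport_map(feats: torch.Tensor) -> torch.Tensor:
    """Row-normalised transport map of Eqn. (5) from a (B, m) batch of
    flattened feature vectors: pi_ij = exp(-d_ij) / sum_j exp(-d_ij)."""
    d = torch.cdist(feats, feats, p=2)       # (B, B) pairwise Euclidean distances
    return F.softmax(-d, dim=1)

def inter_model_contrast(student_feats: torch.Tensor, teacher_feats_list) -> torch.Tensor:
    """Sketch of the distance-based inter-model contrast (Eqn. 6): the k-th rows
    of the student and teacher transport maps are positives, other rows negatives."""
    pi_s = transport_map(student_feats.flatten(1))
    loss = student_feats.new_zeros(())
    for teacher_feats in teacher_feats_list:
        pi_t = transport_map(teacher_feats.flatten(1))
        # Cosine similarity between rows of the two transport maps.
        sim = F.normalize(pi_s, dim=1) @ F.normalize(pi_t, dim=1).t()
        pos = sim.diag()
        off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
        loss = loss + (1.0 - pos).mean() + sim[off_diag].sum() / sim.size(0)
    return loss
```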
### Common Feature Alignment To enable a student to mimic the aggregated hints from heterogeneous teachers, a shared multilayer perceptron (MLP) is designed for mapping all features to a common latent space. Specifically, a \(1\times 1\) kernel convolution is added after the backbone network of each model separately, thereby unifying the outputs of different models into the same channel, which is taken to be the input of MLP and set to 256 in our implementation. As represented in CFL [11], we adopt the Maximum Mean Discrepancy (MMD) to measure the discrepancy between the output features of the student and that of teachers in the unit ball of a reproducing kernel Hilbert space [3]. Take a teacher-student pair as an example, we extract the mini-batch common space features with the designed shared MLP and represent them as \(f_{\mathcal{S}},f_{\mathcal{T}}\in\mathbb{R}^{B\times d}\), of which \(d\) denotes the output dimension of the MLP and is set to 128 in our implementation. An empirical \(l_{2}\)_norm_ approximation to the MMD distance of \(f_{\mathcal{S}}\) and \(f_{\mathcal{T}}\) is computed as follow: \[\text{MMD}=\frac{1}{B}\left\|\sum_{i=1}^{B}\phi\left(f_{\mathcal{T}}^{i}\right) -\sum_{j=1}^{B}\phi\left(f_{\mathcal{S}}^{j}\right)\right\|_{2}^{2} \tag{7}\] where \(\phi\) is an explicit mapping function. The extension of multi-kernel formulation of MMD can then be defined as: \[\begin{split}\text{MMD}^{2}[K,f_{\mathcal{S}},f_{\mathcal{T}}]=& K\left(f_{\mathcal{S}},f_{\mathcal{S}}\right)-2K\left(f_{\mathcal{T}}^{i},f_{\mathcal{S}}^{j}\right)+\\ & K\left(f_{\mathcal{T}},f_{\mathcal{T}}\right)\end{split} \tag{8}\] \(K\) is defined as the convex combination of \(m\) PSD kernel: \[\mathcal{K}=\left\{K=\sum_{u=1}^{m}\sigma_{u}K_{u}:\sum_{u=1}^{m}\sigma_{u}=1,\sigma_{u}\geq 0,\forall u\right\} \tag{9}\] here \(\mathcal{K}\) denotes the multi-prototypical kernel set. The constraints on coefficients \(\{\sigma_{u}\}\) are imposed to guarantee that the derived multi-kernel \(K\) is characteristic. The process of aligning each teacher and student is equivalent to minimizing the MMD distance between them. This can achieve intra-class cohesion of similar samples. We aggregate all such MMDs between \(N\) pairs of teachers and students, and the overall alignment loss \(\mathcal{L}_{align}\) in the shared MLP can be written as: \[\mathcal{L}_{align}=\sum_{i=1}^{N}\text{MMD}\left(f_{\mathcal{S}},f_{ \mathcal{T}_{t}}\right) \tag{10}\] ### Soft-target Distillation Apart from learning the teacher's features, the student is also expected to produce identical or similar inferences as the teachers do. We thus also take the teachers' predictions by feeding unlabelled input samples to them and then supervise the student's training. As there is no annotation available for each instance \(x\) in the target dataset \(\mathcal{D}_{s}\), the predictions of pre-trained teachers can be constructed as supervision for guiding the student, named as _soft-target distillation_. Specifically, we first feed \(x\) into each \(T_{i}\) to obtain the golden label probability distribution \(\Phi(x;T_{i})\) in the _softmax layer_, and then concatenate them together for training the student by minimizing the KL-divergence between their probability distribution: \[\mathcal{L}_{std}=\sum_{x\in\mathcal{D}_{s}}\text{KL}(\Phi(x,S)\|\Phi(x,T)) \tag{11}\] where \(\Phi(x,S)\) and \(\Phi(x,T)\) denote the _softmax_ probability distribution of the student and that of the concatenated teachers for input \(x\), respectively. 
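The alignment and soft-target terms can likewise be sketched compactly. The RBF bandwidths, the temperature, and the renormalisation of the concatenated teacher outputs below are assumptions made for illustration rather than the paper's exact settings; the student logits are assumed to be defined over the union label set so that their dimension matches the concatenated teacher outputs.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(f_s: torch.Tensor, f_t: torch.Tensor, sigmas=(1.0, 2.0, 4.0)) -> torch.Tensor:
    """Sketch of a multi-kernel (RBF) MMD between student and teacher features
    in the shared MLP space (Eqns. 7-9); bandwidths are illustrative."""
    def kernel(a, b):
        d2 = torch.cdist(a, b, p=2) ** 2
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas) / len(sigmas)
    return kernel(f_s, f_s).mean() - 2.0 * kernel(f_s, f_t).mean() + kernel(f_t, f_t).mean()

def soft_target_loss(student_logits: torch.Tensor, teacher_logits_list, T: float = 1.0) -> torch.Tensor:
    """Sketch of the soft-target distillation of Eqn. (11): teachers' softmax
    outputs are concatenated, renormalised, and matched with a KL divergence."""
    teacher_probs = torch.cat([F.softmax(t / T, dim=1) for t in teacher_logits_list], dim=1)
    teacher_probs = teacher_probs / teacher_probs.sum(dim=1, keepdim=True)
    student_probs = F.softmax(student_logits / T, dim=1)
    # Explicit KL(student || concatenated teachers), as written in Eqn. (11).
    kl = (student_probs * (torch.log(student_probs + 1e-8) - torch.log(teacher_probs + 1e-8))).sum(dim=1)
    return kl.mean()
```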
Considering the weighted sum of contrastive losses (including \(\mathcal{L}_{intra}\), and \(\mathcal{L}_{inter}\)), alignment loss \(\mathcal{L}_{align}\) and soft-target distillation loss \(\mathcal{L}_{std}\) together, the total training objective of our CKA can be described as: \[\mathcal{L}=\lambda_{intra}\mathcal{L}_{intra}+\lambda_{inter}\mathcal{L}_{ inter}+\lambda_{a}\mathcal{L}_{align}+\lambda_{d}\mathcal{L}_{std} \tag{12}\] ## 5 Experiments In this section, we evaluate the proposed method on standard benchmarks and compare the results with the recent state of the arts. We also conduct ablation studies to validate the effect of the major components. ### Experiments Setup DatasetsWe evaluate our proposed CKA on four widely used benchmarks, i.e., CUB-200-2011 [15], Stanford Cars [10], Stanford Dogs [9], and FGVC-Aircraft [12]. The detailed statistics are summarized in Table 1. Implementation DetailsWe adopt the _resnet_ family [5] including _resnet_-18, _resnet_-34, and _resnet_-50, as our model samples. Besides, all the teachers are first pre-trained as [10] and fine-tuned to heterogeneous tasks. To construct heterogeneous tasks on the given datasets, we split all the categories into non-overlapping parts of equal size to train the teachers. The trained teacher model weights are frozen during the student training process. In student training phrase, data augmentation is performed via Random ResizedCrop, Random ColorJitter, Random HorizontalFlip, and Random GaussianBlur while in testing, Center Crop is used. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Dataset** & **Images** & **Categories** & **Train/Test** \\ \hline CUB-200-2011 & 11,788 & 200 & 5,994/5,794 \\ \hline Stanford Dogs & 20,580 & 120 & 12,000/8,580 \\ \hline Stanford Cars & 16,185 & 196 & 8,144/8,041 \\ \hline FGVC-Aircraft & 102,000 & 102 & 6,667/3,333 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of datasets used in this paper. During training, the learning rate is set to 0.0005, and the cosine decay is used; the weight decay is set to 0.0005; Adam is used as the optimizer, and the batch size is set to 64; a total of 100 epochs are trained. All experiments are completed with GPUs of RTX 2080 Ti 11GB and CPUs of Intel. There are several hyper-parameters involved in our method, including \(\alpha\) in Eqn. 3, set to 0.4, for alleviating the dominance of negative sample pairs; \(\lambda_{intra}\), \(\lambda_{inter}\), \(\lambda_{a}\) and \(\lambda_{d}\) for the final CKA loss in Eqn. 12, are set to \(\lambda_{intra}\) = \(\lambda_{inter}\) = \(\lambda_{d}\) = 1 and \(\lambda_{a}\) = 10. Compared MethodsWe implement various baselines to evaluate the effectiveness of our proposal, which are categorized as: (1) _Original Teacher_: The teacher models are used independently for prediction. We set the probabilities of classes out of the teacher specialty to zeros. (2) _Ensemble_: The output logits of teachers are directly concatenated for predictions over the union label set. (3) Vanilla KD [6]: The student is trained to mimic the soft targets produced by logits combination of all teacher models, via minimizing the vanilla KL-divergence objective. (4) CFL [11]: CFL first maps the hidden representations of the student and the teachers into a common feature space. The student is trained by aligning the mapped features to that of the teachers, with supplemental supervision from the logits combination. 
We also include a supervised learning method, which trains the student with labeled data, for a better understanding of the performance. We compare the average accuracy of each method over three random experiments. ### Quantitative Analysis We compare our proposed method CKA with SOTA on the above-mentioned classification datasets. The experiment results and corresponding model sizes are listed in Table 2. \begin{table} \begin{tabular}{l|c|c c c c|c} \hline \hline **Method** & **Size** & **Dogs** & **Cars** & **CUB** & **Aircraft** & **Average** \\ \hline Supervised & 163M & 83.62 \(\pm\) 0.00 & 89.64 \(\pm\) 0.00 & 72.68 \(\pm\) 0.00 & 82.78 \(\pm\) 0.00 & 82.14 \\ \hline Teacher1 & 130M & 66.64 \(\pm\) 0.00 & 70.33 \(\pm\) 0.00 & 65.37 \(\pm\) 0.00 & 63.01 \(\pm\) 0.00 & 66.80 \\ Teacher2 & 240M & 72.03 \(\pm\) 0.00 & 87.85 \(\pm\) 0.00 & 66.12 \(\pm\) 0.00 & 81.12 \(\pm\) 0.00 & 76.60 \\ Ensemble & 370M & 73.90 \(\pm\) 0.22 & 77.08 \(\pm\) 0.64 & 68.25 \(\pm\) 0.00 & 75.76 \(\pm\) 0.00 & 73.38 \\ \hline Vanilla KD & 240M & 76.16 \(\pm\) 0.60 & 80.39 \(\pm\) 0.31 & 69.94 \(\pm\) 0.79 & 78.00 \(\pm\) 0.01 & 76.06 \\ CFL & 240M & 76.23 \(\pm\) 0.26 & 81.12 \(\pm\) 0.21 & 70.67 \(\pm\) 0.97 & 79.98 \(\pm\) 0.22 & 76.86 \\ \hline CKA-Intra & 240M & 78.89 \(\pm\) 0.59 & 82.33 \(\pm\) 0.31 & 71.07 \(\pm\) 0.04 & 79.02 \(\pm\) 0.21 & 77.71 \\ CKA-Inter & 240M & 79.72 \(\pm\) 0.60 & **82.95 \(\pm\) 1.20** & **71.49 \(\pm\) 0.25** & 80.45 \(\pm\) 0.51 & 78.46 \\ CKA & 240M & **79.76 \(\pm\) 0.09** & 82.88 \(\pm\) 0.21 & 71.32 \(\pm\) 0.55 & **80.78 \(\pm\) 0.08** & 78.45 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of different methods on comprehensive classification tasks. Best results are shown in bold. Our findings are: (1) Simple baselines can be seriously affected by incomplete datasets and annotations, showing that it is necessary to conduct amalgamation. (2) CFL cannot achieve consistent improvements on comprehensive tasks, demonstrating the instability of supervision based on simple feature alignments. (3) Our proposed CKA and its variants outperform the previous baseline models on all the datasets, and the average accuracy of CKA-Inter achieves a 1.60-point gain over the best-performing baseline model. On the FGVC-Aircraft dataset, the knowledge consolidation accuracy of CKA reached 80.78% without label information, approaching that of supervised learning methods. We attribute this success to the fact that CKA provides the student with natural semantic relevance estimated on the sample set based on contrastive losses, and the intra-class cohesion and inter-class separation methods effectively transfer feature-level knowledge. Furthermore, supervisory contradictions from incomplete teachers are avoided by soft labels at the task-level amalgamation. These promising results indicate that our CKA framework produces better supervision for training the student model and shows great potential for model reuse. ### Ablation Study We conduct ablation studies to investigate the contribution of the contrastive losses and the soft-target distillation loss described in our proposed approach. For the margin-based intra-model contrastive loss, we compare the performance with it turned on and off. For the inter-model loss between teacher-student pairs, on the other hand, we define three different distances in Eqn. 4, including Euclidean, cosine, and MMD distances. 
We summarize the comparative results in Table 3, where we observe that CKA-Inter with the MMD distance yields better performance than the others. Moreover, CKA and its variants also improve by a large margin over KD and CFL, validating the complementarity of the contrastive losses and the flexibility of the soft-target loss. \begin{table} \begin{tabular}{c|c c c|c|c c c|c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{Vanilla KD} & \multirow{2}{*}{CFL} & \multirow{2}{*}{CKA} & W/O Inter-model & \multicolumn{3}{c|}{W Inter-model loss} & W/O Intra-model \\ & & & & loss & Euclidean & Cosine & MMD & loss \\ \hline **Cars** & 80.22 & 81.12 & 82.88 & 80.04 & 82.33 & 82.95 & 83.21 & 82.33 \\ \hline **Aircraft** & 78.00 & 79.98 & 80.78 & 77.97 & 80.21 & 80.45 & 81.42 & 79.02 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation analysis of CKA. Removing modules leads to deteriorated performance. ### Results in Challenging Settings #### CKA with Heterogeneous Teachers We further consider merging knowledge from heterogeneous teachers with different structures. Specifically, we randomly select two different _resnet_ architectures as the teachers. The results are listed in Table 4. We find that a larger student tends to perform better, indicating that the wider and larger the model, the more complete the knowledge that can be learned. Our CKA achieves the best results on Stanford Dogs, showing its effectiveness for heterogeneous teachers. \begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \hline \multicolumn{2}{c|}{**Teachers**} & \multicolumn{3}{c|}{\(\mathcal{T}_{1}\): _resnet-18_, \(\mathcal{T}_{2}\): _resnet-34_} & \multicolumn{3}{c}{\(\mathcal{T}_{1}\): _resnet-50_, \(\mathcal{T}_{2}\): _resnet-34_} \\ \hline \multicolumn{2}{c|}{**Method**} & Vanilla KD & CFL & CKA & Vanilla KD & CFL & CKA \\ \hline \multirow{2}{*}{**Student**} & _resnet-34_ & 80.67 & 81.09 & 82.54 & 80.54 & 81.23 & 82.08 \\ & _resnet-50_ & 82.04 & 82.25 & 83.18 & 82.62 & 84.55 & 85.21 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of merging heterogeneous teachers with different architectures, demonstrated on the Stanford Dogs dataset. #### CKA with Heterogeneous Teachers for Cross-Dataset Specifically, we pretrain distinct-task teacher models on different datasets separately and then train a student to perform classification over the union label set of both datasets. The results of merging knowledge from the two combined datasets, Stanford Cars and FGVC-Aircraft, are listed in Table 5. _resnet-34_ is adopted for training the student in the cross-dataset setting. Our CKA still outperforms previous baseline models in this setting. Interestingly, we find that the performance of CKA is superior to all baselines and even to the results of supervision. We speculate that the reason is that the correlation between classes in different datasets is weak and the classification categories are complex, which is prone to confusion under label supervision alone. In contrast, our CKA uses contrastive losses to compute the distance between samples, which is more robust and discriminative. ## 6 Conclusion In this paper, we explore knowledge amalgamation for unsupervised classification tasks for promoting better model reuse. 
We present a principled framework CKA, in which contrastive losses and alignment loss are designed to enlarge the distance between feature representations of samples from different categories and decrease that of samples from the same categories, as a self-supervised way to guide the student to learn discriminative features. Besides, we present a soft-target distillation loss to efficiently and flexibly transfer the dark knowledge in the task-level amalgamation. Experiments on several benchmarks demonstrate our CKA can substantially outperform strong baselines. More extensive investigations show that CKA is generalizable for challenging settings, including merging knowledge from heterogeneous teachers, or even cross-dataset teachers. \begin{table} \begin{tabular}{l|c c|c} \hline \hline **Method** & \(\mathcal{T}_{1}\)**: Stanford Cars** & \(\mathcal{T}_{2}\)**: FGVC-Aircraft** & **Merge** \\ \hline Supervised & 89.64 \(\pm\) 0.00 & 82.78 \(\pm\) 0.00 & 86.90 \(\pm\) 0.00 \\ \hline Teacher1 & 89.64 \(\pm\) 0.00 & — & — \\ Teacher2 & — & 78.00 \(\pm\) 0.00 & — \\ Ensemble & — & — & 82.08 \(\pm\) 0.54 \\ \hline Vanilla KD & 85.26 \(\pm\) 0.25 & 80.31 \(\pm\) 0.85 & 83.76 \(\pm\) 0.59 \\ CFL & 87.99 \(\pm\) 0.73 & 84.22 \(\pm\) 0.48 & 86.76 \(\pm\) 0.46 \\ \hline CKA-Intra & 88.95 \(\pm\) 0.00 & 84.93 \(\pm\) 0.58 & 87.75 \(\pm\) 0.60 \\ CKA-Inter & **89.48 \(\pm\) 0.31** & 84.91 \(\pm\) 0.59 & 88.11 \(\pm\) 0.79 \\ CKA & 89.28 \(\pm\) 0.75 & **85.78 \(\pm\) 0.07** & **88.21 \(\pm\) 0.50** \\ \hline \hline \end{tabular} \end{table} Table 5: Results of merging from teacher models with different knowledge domains and in a cross-dataset scenario of Stanford Cars and FGVC-Aircraft.
2301.07315
Face Recognition in the age of CLIP & Billion image datasets
CLIP (Contrastive Language-Image Pre-training) models developed by OpenAI have achieved outstanding results on various image recognition and retrieval tasks, displaying strong zero-shot performance. This means that they are able to perform effectively on tasks for which they have not been explicitly trained. Inspired by the success of OpenAI CLIP, a new publicly available dataset called LAION-5B was collected which resulted in the development of open ViT-H/14, ViT-G/14 models that outperform the OpenAI L/14 model. The LAION-5B dataset also released an approximate nearest neighbor index, with a web interface for search & subset creation. In this paper, we evaluate the performance of various CLIP models as zero-shot face recognizers. Our findings show that CLIP models perform well on face recognition tasks, but increasing the size of the CLIP model does not necessarily lead to improved accuracy. Additionally, we investigate the robustness of CLIP models against data poisoning attacks by testing their performance on poisoned data. Through this analysis, we aim to understand the potential consequences and misuse of search engines built using CLIP models, which could potentially function as unintentional face recognition engines.
Aaditya Bhat, Shrey Jain
2023-01-18T05:34:57Z
http://arxiv.org/abs/2301.07315v1
# Face Recognition in the age of CLIP & Billion image datasets ###### Abstract CLIP (Contrastive Language-Image Pre-training) models developed by OpenAI have achieved outstanding results on various image recognition and retrieval tasks, displaying strong zero shot performance. This means that they are able to perform effectively on tasks for which they have not been explicitly trained. Inspired by the success of OpenAI CLIP, a new publicly available dataset called LAION-5B was collected which resulted in the development of open ViT-H/14, ViT-G/14 models that outperform the OpenAI L/14 model. The LAION-5B dataset also released an approximate nearest neighbor index, with a web interface for search and subset creation. In this paper, we evaluate the performance of various CLIP models as zero shot face recognizers. Our findings show that CLIP models perform well on face recognition tasks, but increasing the size of the CLIP model does not necessarily lead to improved accuracy. Additionally, we investigate the robustness of CLIP models against data poisoning attacks by testing their performance on poisoned data. Through this analysis, we aim to understand the potential consequences and misuse of search engines built using CLIP models, which could potentially function as unintentional face recognition engines. ## 1 Introduction CLIP (Contrastive Language-Image Pre-training) is a type of deep learning model developed by OpenAI that has achieved impressive results on a variety of image recognition and retrieval tasks. The CLIP model was introduced in "Learning Transferable Visual Models From Natural Language Supervision" by Radford et. al. ([https://arxiv.org/abs/2103.00020](https://arxiv.org/abs/2103.00020)), which describes the training of the model on a proprietary dataset of 400 million images collected from the web. The models were publicly released in 2021 and can be found on the OpenAI GitHub page. CLIP is trained by combining a large amount of text and image data and using an objective function that encourages the model to map images and text to a shared latent space. This enables CLIP to perform well on tasks such as image classification and retrieval, even when it has not been explicitly trained on those tasks. One of the key characteristics of CLIP is its ability to perform zero shot learning, meaning it can recognize and classify objects or concepts that it has not seen before. This is achieved through the use of a shared latent space for text and images, which allows the model to generalize from the text data to the image data. For example, if CLIP has been trained on a large dataset of images and their associated textual descriptions, it can use its understanding of the words in the description to make educated guesses about the contents of a new image.Overall, the development of CLIP has significantly advanced the field of image recognition and retrieval, and it has the potential to be applied to a wide range of tasks and industries. The CLIP model was trained on a dataset of 400 million (image, text) pairs that were collected from various publicly available sources on the internet. To ensure a diverse range of visual concepts was included in the dataset, the creators of CLIP searched for (image, text) pairs using a set of 500,000 queries and included up to 20,000 (image, text) pairs per query. The resulting dataset was called WIT for WebImageText. This dataset is proprietary and wasn't released with the models. 
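As a concrete illustration of the zero-shot behaviour described above, a minimal sketch using the open-source open_clip package is given below; the model tag, image path, and text prompts are illustrative and not taken from the paper.

```python
import torch
import open_clip
from PIL import Image

# Load an open CLIP model; the pretrained tag is an illustrative example.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
texts = tokenizer(["a photo of a cat", "a photo of a dog", "a photo of a car"])

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(texts)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_emb @ txt_emb.T).softmax(dim=-1)  # zero-shot class scores

print(probs)
```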
LAION-5B dataset was created to address the lack of publicly available datasets with billions of image-text pairs, which are necessary for training powerful multimodal learning models like CLIP, DALL-E, etc. The LAION-5B dataset provides researchers with a publicly available resource for training and testing these types of models. LAION-5B is a large dataset of image-text pairs for use in language-vision research. The dataset consists of 5.85 billion image-text pairs, with 2.32 billion containing English language. LAION-5B can be used to replicate and fine-tune foundational models and to perform additional experiments. In addition to the image-text pairs, the LAION-5B dataset also includes CLIP ViT-L/14 embeddings, kNN-indices, tools for NSFW and watermark detection, and a web interface for exploration and subset creation. LAION-5B was collected by parsing files in the Common Crawl dataset to find image tags with alt-text values. The corresponding images were downloaded and filtered using CLIP to keep only those images whose content resembled their alt-text description. The LAION-5B dataset offers a valuable resource for research on multi-modal language-vision models and is available to the broader research community. CLIP models have demonstrated strong transfer learning and out-of-distribution generalization capabilities, making them potential candidates for use as facial recognizers. Even though CLIP was not specifically trained for facial recognition, it has been shown to be able to extract rich facial features (Goh et al., 2021) and is highly resistant to image perturbations (Radford et al., 2021). Fine-tuned CLIP models have also been found to be robust against existing data poisoning attacks when used as facial recognizers (Radiya-Dixit et al., 2022). A. Mart'l and V. Rodriguez-Fernandez used the CLIP model to construct a natural language-based version of the game "Guess who?" in which players engage with the game using language prompts and CLIP determines whether or not an image meets the prompt. The performance of this technique is evaluated using various question prompts, and the limitations of its zero-shot capabilities are demonstrated. In this paper, we evaluate the performance of various CLIP models as zero shot face recognizers, examining how well they are able to accurately identify and classify different faces. We also investigate the robustness of CLIP models against data poisoning attacks, testing their performance on poisoned data to understand the potential consequences and misuse of these models as face recognizers. Through this analysis, we aim to understand the potential of CLIP models as tools for face recognition and the potential consequences of using these models in real-world applications. ## 2 Experiments In this experiment, we evaluate the performance of various CLIP models as zero-shot face recognizers using the CelebFaces Attributes (CelebA) Dataset as our primary dataset. This dataset contains 202,599 face images of various celebrities, with 10,177 unique identities, and is diverse in facial features, expressions, and demographics. We calculate top-1 and top-5 face recognition accuracies of ViT-B/32, ViT-L/14, ViT-H/14, ViT-G/14 models on this dataset and compare them to the top-1 and top-5 face recognition accuracy of the face-recognition python library. 
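A minimal sketch of how such a top-1/top-5 identity evaluation could be assembled, assuming the CLIP embeddings for all CelebA images have already been computed and using an exact (flat) L2 index from the faiss library, as described in the experimental setup that follows; array names and shapes are illustrative.

```python
import numpy as np
import faiss

def top_k_identity_accuracy(embeddings: np.ndarray, identities: np.ndarray, k: int = 5):
    """Sketch of the top-1/top-5 evaluation: for each image, retrieve its nearest
    neighbours (excluding itself) and check whether any shares the query identity.
    embeddings: (N, d) array (converted to float32 for faiss), identities: (N,) ints."""
    embeddings = np.ascontiguousarray(embeddings, dtype=np.float32)
    index = faiss.IndexFlatL2(embeddings.shape[1])    # exact L2 search
    index.add(embeddings)
    _, nbrs = index.search(embeddings, k + 1)         # +1: the query itself is returned

    top1 = top5 = 0
    for i, row in enumerate(nbrs):
        retrieved = [j for j in row if j != i][:k]    # drop the query image
        if identities[retrieved[0]] == identities[i]:
            top1 += 1
        if any(identities[j] == identities[i] for j in retrieved):
            top5 += 1
    n = len(embeddings)
    return top1 / n, top5 / n
```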
Further, we conduct data poisoning attacks on a subset of 1000 images from the CelebA dataset using the Fawkes and LowKey tools, and use the unperturbed, Fawkes-cloaked, and LowKey-attacked versions of the images as query images to search the LAION-5B KNN index. We analyze the images retrieved for the original, Fawkes, and LowKey versions to investigate the robustness of the CLIP models against data poisoning. ### CLIP Accuracy Experiment We compute the ViT-B/32, ViT-L/14, ViT-H/14, ViT-G/14, and face-recognition embeddings for the 202k images in the dataset. We build a flat L2 index over the embeddings for each model. For each image, we measure the L2 distance between the query image embedding and all the embeddings loaded into the index. We retrieve the top 6 images with the minimum L2 distance from the index. As the query image will always be present in the index, the image with the second minimum distance is the top-1 image, and images [2, 6] of the top 6 are the top-5 images. Our evaluation metrics are top-1 accuracy and top-5 accuracy. ### Face Recognition Capabilities of Image Retrieval Engines We take a subset of 1000 images from the CelebA dataset. For each image \(x_{n}\) in the subset, we compute the distribution of L2 distances between the query image \(x_{n}\) and all other images of the same identity (\(x_{n-i}\)) from the CelebA dataset. We take the 95th percentile of this distribution as the max distance threshold \(t_{x_{n}}\). Further, for each image \(x_{n}\) in the subset, we create the Fawkes perturbed version \(x_{n}\text{-}f\) and the LowKey perturbed version \(x_{n}\text{-}lk\) of it. The Fawkes versions of the images are created using the CLI provided at [https://github.com/Shawn-Shan/fawkes](https://github.com/Shawn-Shan/fawkes) with the protection mode set to mid. The LowKey versions of the images are created using the Python code provided with that paper. For each version, original, Fawkes, and LowKey, we search the LAION-5B KNN index to retrieve up to 50 images (\(R_{1}^{x_{n}}\), \(R_{2}^{x_{n}}\), \(\dots\), \(R_{n}^{x_{n}}\); \(R_{1}^{x_{n}\text{-}f}\), \(R_{2}^{x_{n}\text{-}f}\), \(\dots\), \(R_{n}^{x_{n}\text{-}f}\); \(R_{1}^{x_{n}\text{-}lk}\), \(R_{2}^{x_{n}\text{-}lk}\), \(\dots\), \(R_{n}^{x_{n}\text{-}lk}\)). The various search parameters are set as follows: url = "[https://knn5.laion.ai//knn-service](https://knn5.laion.ai//knn-service)", indice_name = "laion5B", use_mclip = False, aesthetic_score = 9, aesthetic_weight = 0.5, modality = Modality.IMAGE, num_images = 50, deduplicate = True, use_safety_model = False, use_violence_detector = False. Figure 1: CLIP-retrieval web interface. We then filter out the results where \(||g(x_{n})-g(\text{result})||^{2}>t_{x_{n}}\). This limits the results to valid results, i.e. result images with the same identity as the query image. We further limit the subset to images with 3 or more valid results for the original version of the image. Among these images, we look at how many valid results the Fawkes and LowKey versions returned. ## 3 Results From the CLIP accuracy experiment, we see that CLIP models perform with varying degrees of success at face recognition. 
For top-1 prediction, the ViT-L/14 model has the highest accuracy among CLIP models at 80.95%, which is still significantly worse than the face-recognition python package at 87.61% accuracy. For top-5 prediction, the ViT-H/14 model has the highest accuracy among CLIP models at 89.88%, which is comparable to the 92.27% accuracy of the face-recognition python package. The face recognition accuracy of CLIP models does not always improve with bigger models. For both top-1 and top-5 predictions, we see that the accuracy initially improves and then deteriorates as models get bigger. For the Face Recognition Capabilities of Image Retrieval Engines experiment, we see that out of the 1000 images from the subset, 728 images return 3 or more valid results when queried with the original version of the image on the LAION-5B KNN index. Among these 728 images, 612 (84.07%) images return one or more valid results when queried with the Fawkes perturbed version of the image, and 566 (77.75%) images return one or more valid results when queried with the LowKey perturbed version of the image. Among the 728 images, 543 (74.59%) images return one or more valid results for both Fawkes and LowKey perturbed versions. Figure 2: Face Recognition Capabilities of Image Retrieval Engines experiment setup. ## 4 Discussion The Biometric Information Privacy Act (BIPA) passed by Illinois, and similar laws passed by Texas and Washington, regulate the collection, use, and handling of biometric identifiers and information by private entities. BIPA defines biometric identifiers as a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. This definition is broad, and one can argue that CLIP vectors constitute biometric identifiers, as CLIP vectors can be used for facial recognition. If CLIP vectors are considered biometric identifiers, using them for facial recognition would require compliance with BIPA and similar laws passed by other states. Additionally, the potential for misuse of CLIP search engines as inadvertent face recognition engines is a serious concern. Even if the intended use of the search engine is not facial recognition, it could still be used for that purpose. This raises important questions about the responsibility of the creators of these models to ensure that their use does not violate privacy and civil rights. Given the potential for misuse, it would be valuable to explore the possibility of training new CLIP models to intentionally have worse performance on face recognition tasks. However, this approach has its own limitations and challenges: it will not solve the problem of unintended use or abuse of the technology, and it might also be difficult to ensure that the models are not used for nefarious purposes despite the intended use. ## 5 Conclusion CLIP models show good zero-shot face recognition capabilities and are robust against data poisoning attacks. Increasing the size of the CLIP model does not necessarily lead to improved accuracy in face recognition tasks. While CLIP models have demonstrated impressive performance on image recognition and retrieval tasks, it is essential to consider the potential consequences and misuse of search engines built using these models, which could inadvertently function as face recognition engines. The use of CLIP models for face recognition raises a number of important legal and ethical questions that need to be addressed.
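For reference, the search parameters listed in Section 2 correspond to the query interface of the open-source clip-retrieval client; a hedged sketch of such a query (assuming the clip_retrieval package is installed and the knn5.laion.ai endpoint is reachable) is given below.

```python
from clip_retrieval.clip_client import ClipClient, Modality

# Parameter values mirror those listed in the experimental setup above.
client = ClipClient(
    url="https://knn5.laion.ai/knn-service",
    indice_name="laion5B",
    use_mclip=False,
    aesthetic_score=9,
    aesthetic_weight=0.5,
    modality=Modality.IMAGE,
    num_images=50,
    deduplicate=True,
    use_safety_model=False,
    use_violence_detector=False,
)

# Query with a local image (original, Fawkes-cloaked, or LowKey-attacked version).
results = client.query(image="celeba_subset/000001.jpg")  # hypothetical path
for r in results[:5]:
    print(r.get("url"), r.get("similarity"))
```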
2310.01025
Initial perturbations dependence of non-equilibrium continuous and discontinuous pattern transition
A phase separation in a spatially heterogeneous environment is closely related to intracellular science and material science. For the phase separation, initial heterogeneous perturbations play an important role in pattern formations. In this study, a pattern transition from a lamellar pattern to a columnar pattern is investigated in the presence of a slit pattern as the initial perturbations. Here it is found that the transition behavior depends on the initial slit width. When the initial slit width is close to the width of the columnar pattern at the steady state, the pattern transition is the second-order-like (continuous) transition. Meanwhile, the pattern transition becomes the first-order-like (discontinuous) transition if the width of the initial slit is much larger than that at the steady state. Then those transition behaviors can be explained by the dynamical path during the pattern formation. This finding will advance understanding of the initial perturbation dependence of nonequilibrium phenomena.
Rikuya Ishikawa, Rei Kurita
2023-10-02T09:18:02Z
http://arxiv.org/abs/2310.01025v1
# Initial perturbations dependence of non-equilibrium continuous and discontinuous pattern transition ###### Abstract A phase separation in a spatially heterogeneous environment is closely related to intracellular science and material science. For the phase separation, initial heterogeneous perturbations play an important role in pattern formations. In this study, a pattern transition from a lamellar pattern to a columnar pattern is investigated in the presence of a slit pattern as the initial perturbations. Here it is found that the transition behavior depends on the initial slit width. When the initial slit width is close to the width of the columnar pattern at the steady state, the pattern transition is the second-order-like (continuous) transition. Meanwhile, the pattern transition becomes the first-order-like (discontinuous) transition if the width of the initial slit is much larger than that at the steady state. Then those transition behaviors can be explained by the dynamical path during the pattern formation. This finding will advance understanding of the initial perturbation dependence of nonequilibrium phenomena. ## I Introduction A phase separation (PS) in which two phases are separated from a mixture is important in a wide range of fields such as material performance and biological activities [1; 2; 3; 4; 5]. For example, it is known that SnPb alloy used as solder exhibits phase separation and the formation of phase-separated domains may facilitate the formation of cracks at phase boundaries, resulting mechanical failure [6]. Moreover, it has been reported that protein droplets formed by the liquid-liquid phase separation (LLPS) in cells play important roles in transcription [7], signal transduction [8], and the pathogenesis of neurodegenerative diseases [9; 10]. Phase separation in binary mixtures has long been studied as a classical problem, and the dynamics of phase separation appears to be well understood both experimentally and theoretically [1; 4]. However, the universality of the dynamics holds for spatially uniform systems, but not for systems with inhomogeneous temperatures or concentrations. For example, in the case of an inhomogeneous system with large initial concentration fluctuations, a concentric pattern is formed around high concentration areas [11]. In the case of non-stationary temperature fields such as directional quenching (DQ) [12; 13; 14; 15], a random droplet, lamellar and columnar pattern are formed depending on a migration speed \(V\) of a quenching front. There are many other examples for showing specific pattern formations in systems such as PS with radial quenching [16; 17; 18], PS with temperature gradient [19; 20], PS with lamination [21], PS with double quenching [22], PS with containing particles [23; 24; 25; 26], particles creation [27], self-propelled systems [28], and non-reciprocal interaction systems [29]. Understanding of the phase separation phenomena under such inhomogeneous conditions is an urgent issue since the pattern formation are related with development of new functional materials and LLPS in cells occurs in inhomogeneous concentration and temperature. Although the importance of initial perturbations has been suggested for the pattern formation, there are few quantitative studies on the effects of the initial perturbations on a transition point and a transition behavior. 
Understanding the effects of the initial perturbations that cause transitions will not only facilitate the understanding of nonequilibrium systems, but also control the transitions which will be useful for applications in various fields. Therefore, we investigated the effect of the initial perturbations for the transition from columnar pattern to lamellar pattern in DQ. In this study, we prepared a slit pattern into the lamellar pattern as initial perturbations and studied a process of the transition from the columnar pattern to the lamellar pattern. It was found that the transition behavior depends on the slits width \(h\) and it means that the condition of the initial perturbation plays a critical role for the pattern formation. ## II Methods In this study, we used modified-Cahn-Hilliard equation for understanding a dynamics of two dimensional phase separation under inhomogeneous temperature fields [1; 30]. The normalized equation of the modified-Cahn-Hilliard equation is given as \[\frac{\partial\phi}{\partial t}=\nabla^{2}[\epsilon(x,t)\phi+\phi^{3}-\nabla^ {2}\phi] \tag{1}\] where \(\phi,t\) and \(\epsilon\) are the normalized concentration, the time normalized by the diffusion time and the temperature normalized by the quench depth, respectively. The length is normalized by the correlation length. We note that the effect of temperature gradient (Ludwig-Soret effect) was neglected. When \(\epsilon\geq 0\), the mixed state is stable; when \(\epsilon<0\), the mixture is separated into two phases. In this study, we define \(\phi=1\) and \(\phi=-1\) which are the concentrations after phase separation as A (white region) and B phase (black region), respectively. To compute eq. 1, we used the Euler method and set a grid size to \(\Delta x=1\) and a time increment to \(\Delta t=0.01\). We used a free surface condition in the \(x\) directions and a periodic boundary condition in the \(y\) directions. The system size is \(L_{x}:L_{y}=1000:1000\). We initially put A phase in \(x<40\) and then we put one or two slits of B phase in A phase as the initial perturbations (see Fig. 2). The width of one slit or the period of two slits is \(h\). The region \(x\geq 40\) is a symmetric composition \(\phi=0\). Then, the quenching front was set to \(x\) = 40 and annealed for \(-10<t<0\) to smooth the slit boundary of \(\phi\). Even after the annealing, the width of the slit was confirmed to be \(h\). At \(t\) = 0, the quenching front moved in the \(x\) direction with a constant velocity \(V\). Therefore, the temperature \(\epsilon(x,t)\) is as follows. \[\epsilon(x,t)=\left\{\begin{array}{ll}1&x\geq Vt+40\\ -1&x<Vt+40\end{array}\right.,\ \ \ \ x\in[0,L_{x}-1] \tag{2}\] ## III Results Firstly, we describe the pattern formation when the quenching front is moved at \(V\) in the slit systems. We defined "columnar pattern" when columns percolate to the right boundary of the simulation box, while "lamellar pattern" is stable when the columns change into the lamellae during the pattern formation. We set the transition velocity \(V_{t}\) at the median of the lowest velocity for the lamellar pattern and the highest velocity for the columnar pattern. Figure 1 shows the \(h\) dependence of \(V_{t}\) in one or two slits system. There are two types of the transitions; one is a discontinuous transition (DT) and the other is a continuous transition (CT). We will describe the difference between DT and CT in the next paragraph. 
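Before turning to the transition behaviour in detail, a minimal NumPy sketch of the explicit Euler integration of Eqs. (1)-(2) from the Methods section is given below; the grid size, time step, and quench geometry follow the text, while the finite-difference stencil, the small noise added ahead of the front, the omission of the annealing stage, and the short run length are illustrative choices.

```python
import numpy as np

Lx, Ly = 1000, 1000      # system size (a smaller grid works for quick tests)
dx, dt = 1.0, 0.01       # grid spacing and time increment from the text
V = 0.66                 # quenching-front velocity (illustrative value)
h = 8                    # initial slit width

def laplacian(f):
    """5-point Laplacian: periodic in y (axis 0), free-surface (zero-flux) in x (axis 1)."""
    fp = np.pad(f, ((0, 0), (1, 1)), mode="edge")
    lap_x = (fp[:, 2:] - 2.0 * f + fp[:, :-2]) / dx**2
    lap_y = (np.roll(f, 1, axis=0) - 2.0 * f + np.roll(f, -1, axis=0)) / dx**2
    return lap_x + lap_y

def epsilon(t):
    """Directional quenching, Eq. (2): epsilon = -1 behind the front x = V t + 40, +1 ahead."""
    return np.where(np.arange(Lx) < V * t + 40.0, -1.0, 1.0)  # broadcasts over y

# Initial condition: A phase (phi = 1) for x < 40 with one slit of B phase of width h,
# and a symmetric composition (phi = 0 plus small illustrative noise) for x >= 40.
phi = np.zeros((Ly, Lx))
phi[:, :40] = 1.0
phi[Ly // 2 - h // 2: Ly // 2 + h // 2, :40] = -1.0
phi[:, 40:] += 1e-3 * (np.random.rand(Ly, Lx - 40) - 0.5)

t = 0.0
for step in range(1000):                                   # short illustrative run
    mu = epsilon(t) * phi + phi**3 - laplacian(phi)        # chemical potential
    phi += dt * laplacian(mu)                              # Euler update of Eq. (1)
    t += dt
```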
## III Results

Firstly, we describe the pattern formation when the quenching front is moved at velocity \(V\) in the slit systems. We define a "columnar pattern" when the columns percolate to the right boundary of the simulation box, and a "lamellar pattern" when the columns change into lamellae during the pattern formation. We set the transition velocity \(V_{t}\) at the median of the lowest velocity for the lamellar pattern and the highest velocity for the columnar pattern. Figure 1 shows the \(h\) dependence of \(V_{t}\) in the one- and two-slit systems. There are two types of transitions: a discontinuous transition (DT) and a continuous transition (CT); the difference between them is described in the next paragraph. Circles, triangles, and diamonds correspond to \(V_{t}\) with DT for one slit, with CT for two slits, and with DT for two slits, respectively. For the one-slit system, the \(V_{t}\) line is smooth with a peak at \(h=9\) and no CT is observed. Meanwhile, for the two-slit system, \(V_{t}\) is larger for \(h\leq 9\) (triangles) than in the one-slit system. For \(h\geq 10\) (diamonds), \(V_{t}\) with DT is almost the same as \(V_{t}\) with DT in the one-slit system. In the following, we reveal the reason for the large jump in \(V_{t}\) around \(h=9\) in the two-slit case and discuss the difference from the one-slit case.

Next, we describe the \(h\) dependence of the pattern formation in the two-slit case. Figure 2 shows the pattern formation for different \(h\). For \(h=8\) and \(V=0.665\), just above \(V_{t}\), the number of slits increases as columns, but the columns break off in the middle and lamellae are formed; the column formation has a finite lifetime (or a finite distance). Meanwhile, for \(h=8\) and \(V=0.66\), just below \(V_{t}\), the columnar pattern percolates to the right boundary and the number of columns gradually increases. For the slit width \(h=20\) and \(V=0.595\), just above \(V_{t}\), no column is generated from the slits and only lamellae are formed immediately after the DQ starts. Meanwhile, for \(h=20\) and \(V=0.59\), just below \(V_{t}\), a thin column is formed at \(t=200\), the number of thin columns then increases, and the columnar pattern percolates to the right boundary. Here we define the continuous transition (CT) as the case in which columns and lamellae coexist over some length, as in the system with \(h=8\), and the discontinuous transition (DT) as the case in which lamellae are formed immediately.

In addition, to clarify the mode of the transition, we investigated the growth behavior of the columnar pattern just below \(V_{t}\). When the quenching front migrates in the \(x\) direction, lamellae are basically formed, but the columnar pattern also grows inside the lamellae (see Fig. 2). We define a lamella and a column as one pair of A and B phases, and denote the numbers of lamellae and columns by \(n_{l}\) and \(n_{c}\), respectively. Figures 3(a) and (b) show the \(n_{c}\) dependence on \(n_{l}\) for \(h=8\) and 20, respectively. For \(h=8\), \(n_{c}\) increases linearly at any \(V\). When \(V\) is small, the slope \(a\) is about 2, which indicates that two columns are formed each time one lamella is formed. As \(V\) approaches \(V_{t}\), the slope decreases. Meanwhile, for \(h=20\), the slope changes when the initial thick slit changes into thin columns near \(V=V_{t}\); the slopes in the later period are almost constant. We investigate the dependence of \(a\) on the velocity difference from \(V_{t}\), \(\delta(=V_{t}-V)\), shown in Fig. 3(c). Circles and squares correspond to \(a\) for \(h=8\) and 20, respectively, and error bars reflect the errors of the linear fitting. For \(h=8\), \(a\) obeys \(\delta^{\alpha}\) with the power \(\alpha\approx 0.40\). The physical meaning of \(\alpha\) is unclear, and the theoretical derivation of this relationship is left for future work. For \(h=20\), \(a\) retains a non-zero value even as \(\delta\to 0\). Thus, the growth behaviors of the columnar pattern depend on \(\delta\), and there is a clear difference between CT (\(h=8\)) and DT (\(h=20\)).

We also show the pattern formation behaviors above \(V_{t}\). To clarify the dynamics in CT, we investigated the coexistence length of the columns and the lamellae for \(h=8\). We define \(\xi\) as the distance from the initial position of the slit to the vanishing point of the columns (see Fig. 4(a)) and investigate the dependence of \(\xi\) on \(|\delta|(=V-V_{t})\), shown in Fig. 4(b). \(\xi\) diverges as \(\xi\sim|\delta|^{\beta}\) with decreasing \(|\delta|\), where \(\beta\) is about -0.50. This behavior also suggests that the pattern transition at \(V=V_{t}\) should be continuous at \(h=8\). Meanwhile, \(\xi=0\) at \(h=20\), as noted above (see Fig. 2(b)).

Figure 1: \(h\) dependence of \(V_{t}\). Circles, triangles, and diamonds correspond to \(V_{t}\) with a discontinuous transition (DT) for one slit, with a continuous transition (CT) for two slits, and with a DT for two slits, respectively. An error bar spans the lowest velocity for the lamellar pattern and the highest velocity for the columnar pattern.

Figure 2: Time evolution of the patterns in the two-slit system for (a) \(h=8\) and (b) \(h=20\), at velocities just above and just below \(V_{t}\). The yellow dotted lines mark the quenching front.
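The exponents quoted above (\(\alpha\approx 0.40\) for the slope \(a\) and \(\beta\approx-0.50\) for the coexistence length \(\xi\)) follow from linear fits in log-log space; a minimal sketch of such an estimate is shown below. The data arrays are placeholders chosen only to be roughly consistent with \(a\sim\delta^{0.4}\), not the simulation output.

```python
import numpy as np

def power_law_exponent(delta, y):
    """Fit y ~ delta**p and return the exponent p with its formal fit uncertainty."""
    log_d, log_y = np.log(np.asarray(delta)), np.log(np.asarray(y))
    coeffs, cov = np.polyfit(log_d, log_y, 1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# placeholder values only, roughly consistent with a ~ delta**0.4
delta = np.array([0.005, 0.01, 0.02, 0.04])
a = np.array([0.55, 0.70, 0.95, 1.25])
print(power_law_exponent(delta, a))
```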
Here we investigated the time evolution of the concentration at a position ahead of the quenching front of the column, in order to reveal the formation dynamics in the continuous transition. Figure 5(a) shows the time evolution of \(\phi(x_{f}+3,y)\) at \(h=8\) and \(V=0.663\), where \(x_{f}\) is the position of the quenching front at time \(t\). We also define the \(y\)-coordinate of the center of the column as \(y_{c}\) (indicated by the yellow dotted line), and \(\phi(x_{f}+3,y_{c})\) is denoted by \(\phi_{fc}\) in the following. Since the concentration is conserved and the region ahead of a B-phase column is rich in the A component, \(\phi_{fc}\) is positive (A-rich phase), while the column itself is composed of the B phase. As time passes, \(\phi_{fc}\) changes periodically, as shown in Fig. 5(b). Circles, squares, and triangles correspond to \(V=0.66\), 0.663, and 0.665, respectively; \(V_{t}=0.6625\) at \(h=8\). When \(V\) is smaller than \(V_{t}\) (\(V=0.66\)), the amplitude of the \(\phi_{fc}\) oscillation is quite small at any \(t\). Meanwhile, when \(V\) is larger than \(V_{t}\) (\(V=0.663\) and 0.665), the oscillations of \(\phi_{fc}\) grow with time and the column finally transforms into lamellae.

Here, we consider the periodic fluctuations and the divergence of \(\phi_{fc}\). Owing to the conserved dynamics, the A component accumulates in front of the B-phase columns (\(\phi_{fc}>0\) in front of the B column). When the B-phase columns grow, the accumulated A component diffuses in the lateral direction. When a lamella of the A phase is formed next to a B-phase column, a layer with \(\phi<0\) is formed in front of the quenching front, and the A component accumulated in front of the B-phase columns then tends to diffuse laterally, since the difference in \(\phi\) is large. Thus, \(\phi_{fc}\) oscillates slightly with the same period as that of the lamellae. Next, when \(V\geq V_{t}\), the motion of the quenching front is faster than the characteristic time of diffusion, and the A component gradually accumulates in front of the B-phase columns over time. The amplitude of the oscillation gradually increases, and the onset time at which the amplitude of the \(\phi_{fc}\) oscillation becomes large decreases with increasing \(V\). These observations are consistent with the accumulation of the A component being faster than its diffusion. Therefore, the dynamics in CT is determined by the competition between the accumulation and the diffusion in front of the column.

When the width of the initial slit \(h\) is large, thinner columns are re-formed during DQ. This means that the stable width of the column is mismatched with the width of the initial slit. Thus, we compute the stable width of the column here. According to the previous report in Ref. [21], the stable thickness of the lamella \(\xi^{\prime}\) can be expressed as \(\xi^{\prime}\sim V^{-1/2}\), since the time scales of the diffusion and of the migration of the free surface are balanced. Similarly, the stable width of the column is expected to be determined by the competition between the velocity of the quenching front and the speed of the diffusion. We investigated the \(V\) dependence of the stable width of the column \(\lambda\) just after the entire system was quenched. Figure 6 shows the \(V\) dependence of \(\lambda\). Circles and triangles correspond to \(h=8\) and 20, respectively; symbols and error bars are the mean and the standard deviation of \(\lambda\) over all columns. It is found that \(\lambda\) is independent of \(h\) and \(\lambda\propto V^{-0.60}\). In the lamellar pattern, only the diffusion perpendicular to the quenching front needs to be considered, while in the columnar pattern the diffusion in both the perpendicular and lateral directions should be relevant. We therefore consider that the exponent for the columnar pattern differs from that for the lamellar pattern. When \(h\) is large and \(V<V_{t}\), the wide initial slit branches into columns of the steady width \(\lambda\). Meanwhile, when \(V\geq V_{t}\), the diffusion is slower than the migration of the quenching front; a layer of the A component is formed in front of the B-phase column, and this layer then becomes the stable lamellar pattern.

Figure 3: \(n_{c}\) dependence on \(n_{l}\) for (a) \(h=8\) and (b) \(h=20\); each symbol represents \(n_{c}\) at a different \(V\). For \(h=8\), \(n_{c}\) increases linearly at any \(V\) and the slope decreases with increasing \(V\). For \(h=20\), the slope changes when the thick slits change into thin columns as \(V\) approaches \(V_{t}\), and the slope in the later period is almost constant near \(V_{t}\). (c) \(\delta(=V_{t}-V)\) dependence of the slope \(a\) on a log-log scale, for \(h=8\) (circles) and \(h=20\) (squares). For \(h=8\), \(a\) obeys \(\delta^{\alpha}\) with \(\alpha\approx 0.40\); for \(h=20\), \(a\) has a finite value even as \(\delta\to 0\). The dependence of \(a\) on \(\delta\) also indicates the difference between the CT and DT modes.

Figure 4: (a) Definition of \(\xi\) as the distance from the initial position of the slit to the vanishing point of the columns at \(h=8\). (b) \(|\delta|\) dependence of \(\xi\). \(\xi\) diverges as \(\xi\sim|\delta|^{\beta}\) with decreasing \(|\delta|\), with \(\beta\) about -0.50, again suggesting that the pattern transition at \(V=V_{t}\) is continuous at \(h=8\).

Figure 5: (a) Time evolution of \(\phi(x_{f}+3,y)\) at \(h=8\) and \(V=0.663\), where \(x_{f}\) and \(y_{c}\) are the \(x\)-coordinate of the quenching front and the \(y\)-coordinate of the column center (yellow dotted line), respectively. (b) Time evolution of \(\phi_{fc}\) at \(h=8\) for \(V=0.66\) (circles), 0.663 (squares), and 0.665 (triangles); \(V_{t}=0.6625\) at \(h=8\).

Figure 6: \(V\) dependence of \(\lambda\) just after the entire system is quenched, for \(h=8\) (circles) and \(h=20\) (triangles). Symbols and error bars are the mean and standard deviation of \(\lambda\) over all columns. \(\lambda\) is independent of \(h\) and \(\lambda\propto V^{-0.60}\).

Finally, we explain the conditions under which CT occurs. Figure 7 is a superimposed graph of Fig. 1 and Fig. 6 with \(V\) on the vertical axis and \(h\) on the horizontal axis. Filled circles and filled squares are the stable column widths \(\lambda\) at the initial slit widths \(h=8\) and 20, respectively. There is a critical-like point \((h,V)=(h_{c},V_{c})\), where \(\lambda(V_{c})=h_{c}\) and \(V_{t}(h_{c})=V_{c}\). Figure 7 is a state diagram, and the transition modes between CT and DT can be explained with this diagram. As an example, we consider the case \(h=15\) and \(V=0.5\) (black circle). In this case, the misfit parameter between the initial slit and the stable column, \(\Delta h=h-\lambda\), is large. When \(V<V_{t}\), the slits with \(h=15\) branch into columns with \(\lambda=8\) to eliminate this misfit. In the post-bifurcation state, \(\delta=V_{t}-V\) is also large, and the ratio of column formation to lamella formation, \(a\), is also large. Meanwhile, the slits immediately become lamellae if \(V>V_{t}\). Therefore, owing to the bifurcation, the condition cannot approach the critical-like point, and the transition is discontinuous. Meanwhile, \(\lambda\sim h_{c}\) holds at \(h=8\) and \(V=V_{c}\). In this case, the condition of DQ is close to the critical-like point and the transition becomes continuous. We note here that if \(h\) is only slightly larger than \(\lambda\), the slits cannot branch; therefore, we consider that the transition remains continuous up to \(h=9\).

In addition, we describe the difference in the transition mode between one slit and two slits. One slit is more likely to turn into lamellae than two slits because the upper and lower lamellae are closer together; therefore, \(V_{t}\) for one slit is smaller than for two slits. If \(V<V_{t}\), two or more columns are formed from one slit. When two or more columns are formed, the stability increases and \(V_{t}\) becomes larger, so that the condition moves farther from the critical-like point. We note that the state diagram for three or more slits is similar to that for two slits; therefore, the stability of the pattern is unchanged once two or more columns are formed. These results suggest that the stability of the system changes with time-dependent conditions such as the number of slits and the width of the slits or columns. The discontinuous transition then occurs because the re-established transition point moves farther away from the simulated state. Conversely, if the initial inhomogeneous perturbations are close to the stable state, the stability of the system is unchanged and the continuous transition occurs near the critical point. It can be concluded that the difference between the initial inhomogeneous perturbations (the slit width \(h\) in this study) and the steady state (the column width \(\lambda\) in this study) significantly affects the mode of the pattern formation.

## IV Discussion

Firstly, we discuss the relevance of this study to experimental studies of directional pattern formation during eutectic growth and the gelation process. In metallic alloys, eutectic patterns have been observed to self-assemble perpendicular to the crystal-growth direction (like the columnar pattern) when the alloy solidifies in a certain direction [31, 32, 33]. However, a horizontal pattern (like the lamellar pattern) is not formed with respect to the crystal-growth direction. Similarly, in the gelation of collagen induced by directional neutralization, a pattern perpendicular to the gelation plane (like the columnar pattern) is formed [34], but a horizontal pattern (like the lamellar pattern) is not. In this study, \(V_{t}\) with the initial perturbations is about one order of magnitude larger than \(V_{t}\) in homogeneous DQ [14], so the \(V\) regime of the lamellar pattern becomes narrow. Therefore, uncontrollable initial perturbations exist in nonequilibrium systems, and it consequently seems difficult to observe lamellar patterns in experiments.

Next, we discuss the relationship between CT in DQ and general critical phenomena. The continuous transition in DQ obtained here is deterministic, as thermal fluctuations are not taken into account. On the other hand, the critical phenomena of phase transitions [4] and the laminar-turbulent transition [35] are stochastic with thermal fluctuations. In fact, the columns do not disappear at the center of the columnar pattern, which is different from a critical phenomenon. Therefore, at this stage, the CT in DQ is different from these critical phenomena, and it is more appropriate to regard the pattern transition as interfacial dynamics. Finally, transitions of pattern formation under inhomogeneous conditions are not limited to phase separation, but also occur in other nonequilibrium systems such as active matter [36, 37, 38], non-reciprocal phase transitions [39], and reversible-irreversible transitions in jamming systems [40, 41]. It is interesting that nonequilibrium continuous transitions are observed in many systems, such as the laminar-turbulent transition in fluids [35], the transition between two turbulent states in liquid crystals [42, 43], and reversible-irreversible transitions of fibers [44], vortices [45, 46], and granular materials [47]. Thus, it is expected that the difference between the condition at the steady state and the initial perturbations plays an important role in those nonequilibrium transitions.
## V Summary

Phase separation is important in a wide range of fields such as materials science and biology. Phase separation often occurs in inhomogeneous concentrations or non-stationary temperature fields. Understanding phase-separation phenomena under such inhomogeneous conditions is an urgent issue, since LLPS in cells occurs under inhomogeneous concentration and temperature and the resulting pattern formation is related to the development of new functional materials. Here, we investigated the columnar-lamellar pattern transition due to directional quenching (DQ) in the presence of an initial slit pattern, to study the effect of initial perturbations. In this study, we prepared one slit or two slits, which are the nuclei of the columnar pattern, as initial perturbations. We then studied how the process of the transition from the columnar pattern to the lamellar pattern depends on the initial perturbations. It is found that the transition behavior depends on the slit width \(h\) and the number of slits. The transition from the column to the lamella is continuous when \(h\) is comparable to the column width \(\lambda\) and two slits are present; the growth of the columnar pattern is then determined by the balance between the diffusion and the accumulation of the concentration. In contrast, the transition is discontinuous when \(h\) is larger than the column width \(\lambda\). The stability of the system changes with time-dependent conditions such as the number of slits and the width of the slits or columns, and the discontinuous transition occurs because the re-established transition point moves farther away from the simulated state. This study not only provides the empirical knowledge that inhomogeneous initial perturbations can significantly change the transition point with respect to a homogeneous system; it also shows that, in nonequilibrium systems, the transition behavior is determined by the difference between the initial perturbations and the condition at the steady state. Therefore, it is important to design initial perturbations in comparison with the condition at the steady state in order to control pattern formation and nonequilibrium transitions in experiments.

Figure 7: A superimposed graph of Fig. 1 and Fig. 6 in the \((h,V)\) plane (the vertical and horizontal axes of Fig. 6 are swapped). Filled circles and filled squares are the column widths \(\lambda\) at \(h=8\) and 20, respectively. There exists a critical-like point \((h,V)=(h_{c},V_{c})\) where the \(\lambda\) and \(V_{t}\) curves cross. When the condition is close to the critical-like point, the transition is continuous. In the case \(h=15\) and \(V=0.5\) (black circle), the misfit parameter between the initial slit and the stable column, \(\Delta h=h-\lambda\), is large; when \(V<V_{t}\), the slits with \(h=15\) branch into columns with \(\lambda=8\) to eliminate this misfit, \(\delta=V_{t}-V\) in the post-bifurcation state is also large, and the transition is discontinuous.

## Acknowledgements

R. I. was supported by JST SPRING, Grant Number JPMJSP2156. R. K. was supported by JSPS KAKENHI Grant Number 20H01874.

## Authors contributions

R. I. and R. K. conceived the project. R. I. performed the numerical simulations and analyzed the data. R. I. and R. K. wrote the manuscript.

## Competing interests statement

The authors declare that they have no competing interests.

## Correspondence

Correspondence and requests for materials should be addressed to R. I. ([email protected]) and R. K. ([email protected]).
## Availability of data and materials All data generated or analyzed during this study are included in this published article and its supplementary information files.
2303.08887
Seismological Studies of Pulsating DA White Dwarfs Observed with the Kepler Space Telescope and K2 Campaigns 1-8
All single stars that are born with masses up to 8.5 - 10 $M_\odot$ will end their lives as a white dwarf (WD) star. In this evolutionary stage, WDs enter the cooling sequence, where the stars radiate away their thermal energy, and are basically cooling. As these stars cool, they reach temperatures and conditions that cause the stars to pulsate. Using differential photometry to produce light curves, we can determine the observed periods of pulsation from the WD. We used the White Dwarf Evolution Code (WDEC) to calculate a grid of over one million models with various temperature, stellar mass and mass of helium and hydrogen layers, and calculated their theoretical pulsation periods. In this paper, we describe our approach to WD asteroseismology using WDEC models and we present seismological studies for 29 observed DAVs in the Kepler and K2 datasets, 25 of which have never been analyzed using these observations, and 19 of which have never been seismically analyzed in any capacity before. Learning about the internal structure of WDs place important constraints on the WD cooling sequence and our overall understanding of stellar evolution for low mass stars.
Weston Hall, Barbara G. Castanheira, Agnès Bischoff-Kim
2023-03-15T19:14:47Z
http://arxiv.org/abs/2303.08887v1
Seismological Studies of Pulsating DA White Dwarfs Observed with the _Kepler_ Space Telescope and _K2_ Campaigns 1-8 ###### Abstract All single stars that are born with masses up to 8.5 - 10 \(M_{\odot}\) will end their lives as a white dwarf (WD) star. In this evolutionary stage, WDs enter the cooling sequence, where the stars radiate away their thermal energy, and are basically cooling. As these stars cool, they reach temperatures and conditions that cause the stars to pulsate. Using differential photometry to produce light curves, we can determine the observed periods of pulsation from the WD. We used the White Dwarf Evolution Code (WDEC) to calculate a grid of over one million models with various temperature, stellar mass and mass of helium and hydrogen layers, and calculated their theoretical pulsation periods. In this paper, we describe our approach to WD asteroseismology using WDEC models and we present seismological studies for 29 observed DAVs in the _Kepler_ and _K2_ datasets, 25 of which have never been analyzed using these observations, and 19 of which have never been seismically analyzed in any capacity before. Learning about the internal structure of WDs place important constraints on the WD cooling sequence and our overall understanding of stellar evolution for low mass stars. White dwarf stars(1799), Asteroseismology(73), ZZ Ceti stars(1847), Pulsating variable stars(1307) 0000-0002-4880-2880]Weston Hall 0000-0002-4882-0880]Barbara G. Castanheira 0000-0002-4882-0880]Agnes Bischoff-Kim ## 1 Introduction Variability of WD stars was discovered by chance in observations of standard stars (Landolt, 1968). The advent of high-speed photometry allowed astronomers to identify periods in WDs on the order of 100 - 1200 s, which are now known to be caused by non-radial \(g\)-mode pulsations (Fontaine & Brassard, 2008; Winget & Kepler, 2008; Althaus et al., 2010; Corsico et al., 2019) These stars are of vast importance to the field of stellar evolution, because they contain information about their previous evolutionary phases, and over 95% of single stars will evolve into a WD (Iben, 1982; Garcia-Berro et al., 1999). This means that information about the interior structure of these stars can not only place constraints on the WD stars themselves, but also on the evolutionary path and structure of their progenitors. Pulsating WDs are excellent targets for asteroseismology, the only technique that can probe the interior of a star using the light coming from its photosphere, due to their simpler structure. The accuracy and reliability of seismological studies rely extensively on detailed and updated models of WDs, along with precision and quality of their photometric measurements. Starting in the late 1980s, astronomers established the Whole Earth Telescope (WET), in order to combat day-night aliasing (Nather et al., 1990). This became the premier way to take uninterrupted photometric measurements of stars, and created some of the best pictures of stellar structure for stars other than the Sun. However, despite the vast efforts of astronomers around the world, the requirements to observe faint objects like WDs ensured that only a few dozen WDs were studied by this collaboration. To date there are over 400 known pulsating white dwarfs with hydrogen-rich atmosphere, called DAVs or ZZ Ceti stars (Romero et al., 2022). 
The identification of pulsating WDs would eventually see a revolution along with the rest of the study of stellar interiors with the launch of photometric space telescopes, namely _CoRoT_ and _Kepler_. WDs did not immediately reap these benefits at launch, as only a few were observed in the original _Kepler_ field. Hermes et al. (2017) overcame that shortfall by targeting WDs to be observed in short cadence, after _Kepler_ was redesigned as _K2_ and became able to point at different fields. This survey drastically increased the number of known WDs, and provided unprecedented, high-quality measurements of pulsation periods through Fourier analysis. This study also did follow-up ground-based spectroscopic observations, as well as determination of rotation rates via asteroseismology. The quality of the observed data makes these targets suitable for an equally high-quality seismological analysis into their internal structure. In this work, we refine and expand ensemble asteroseismology work of the type done by Castanheira & Kepler (2008, 2009). In those studies, core compositions were fixed to a 50:50 homogeneous mix of carbon and oxygen as an approximation to results from stellar evolution. A major difficulty of attempting to fit a large number of DAV's at once is that not all observed pulsations spectrum sample the stars the same way. Some pulsations spectra are sensitive to core structure and in that case, it is desirable to vary core parameters, or determine them through fully evolutionary models. The latter is the approach taken in works such as Romero et al. (2012, 2013, 2017). The trade off from such an approach is coarser grids, as computing fully evolutionary sequences is computationally intensive. Also, such models rest on the assumption that the physical processes that shape the cores of white dwarfs (nuclear reaction rates, convection, core overshooting, element diffusion,...) are well understood. A philosophically different approach is to assume that stellar evolution gives us only a broad stroke model of white dwarf interiors and that pulsations can help us reverse engineer the interior structure. Such an approach was pioneered by the works of Brassard et al. (1992) and Bradley et al. (1993) and gave rise to a body of work mainly focused on individual stars. One challenge of this sort of asteroseismic fitting is the choice one must make in terms of parameterization, as it is not possible to vary all of the parameters involved and we most often do not have enough observed periods to match the number of independent parameters. One choice is to vary the parameters that dictate chemical structure in the outer layers of the model (e.g. helium and hydrogen) or in the inner parts (carbon and oxygen). It is difficult to judge a priori whether a particular period spectrum is sensitive to the carbon/oxygen chemical profiles (Bischoff-Kim, 2017), but a preliminary study of a dozen DAV period spectra showed us that for most DAV's the C/O core structure affects the fits less than the hydrogen and helium layer masses. To move forward with a unified parameterization for our pipeline fitting, we fixed the oxygen chemical profiles to that of a 0.6 solar mass fiducial model, chosen to reproduce the core structure predicted by fully evolutionary models (Althaus et al., 2010). This is explored more for our study in Section 4.4. 
We describe here our new asteroseismic technique to analyze these stars, along with justifications for our choices in solution fitting, and we present asteroseismic results for the 29 observed DAVs in Hermes et al. (2017), including values for effective temperature, total mass, and hydrogen and helium mass layers. Only 10 of the 29 have been previously seismologically analyzed in some capacity, and only 4 of those used _K2_ observation data. In this paper, we present 19 brand new analyses of DAV stars, using high-precision photometric data from the _Kepler_ and _K2_ missions. Other asteroseismology approaches include the Montreal group, which creates parametrized static models, searching through large parameter spaces to optimize their fits (Giammichele et al., 2016, 2018, 2022). These static models differ from other approaches as they are not derived from evolution. Similarly, the Texas group (of whom much of the work presented here was inspired by, including the use of the WDEC software), creates hot, polytropic models and cools them to specified parameters, with the ability to specify a large number of core parameter shapes and values (Bischoff-Kim et al., 2008; Bischoff-Kim & Montgomery, 2018). These approaches allow for extensive fine grids of models and a wide amount of parameters to vary. In general, this allows for high-precision fits with highly-customizable core structure, structures which are not necessarily predicted through other modelling. On the other side of WD seismology, the La Plata group specializes in using fully-evolutionary models. The benefits of this are the physical significance of the model results, however computation time is sacrificed, and they are forced to create coarser grids, and limit their parameter search (Romero et al., 2012, 2013, 2017). There also exist uncertainties in this evolution, whose impacts on asteroseismological models can be clearly established on the uncertainties in effective temperature, stellar mass, and hydrogen envelope mass (De Geronimo et al., 2017). The key difference between other DAV seismological studies and our own comes from the scope of the sample of the observed stars. When studying just one or a couple WDs, one can sample models to determine the most important parameters for the stellar interior for each star individually. Using the WDEC, this means creating extremely dense, tailored grids for each individual star, selecting the most influential parameters and trimming parameter spaces to specific ranges that best model the observed stars. In this paper, we attempt to create a uniform, structured pipeline that can be repeated and used on ensembles of DAVs, such as those presented in Hermes et al. (2017), in order to constrain important quantities about the star, like the mass of hydrogen and helium in the envelope. By using a standardized model grid, without customizing our calculations for each star, we must lower the amount of free parameters we examine. This is discussed more in-depth in Section 2, with a core sensitivity study done in Section 4.4. This means that the stars examined here will probably still require more extensive, individual seismological studies. However, our goal with using this method is to prepare the future of DAV seismology to characterize seismic fits themselves along with their observed WD counterparts. ## 2 Seismological Models The White Dwarf Evolution Code (WDEC) (Lamb & Van Horn, 1975; Wood, 1990) has a long, rich history of development by astronomers. 
Originally written by Martin Schwarzchild, the code has been modified, updated, streamlined, and made available on modern machines by many astronomers. ### Input Parameters Bischoff-Kim & Montgomery (2018) contains the most recent code description, and it and its subsequent sources contain discussion to great depth on the input parameters available to change with the WDEC. These parameters affect the models in different ways and different amounts. For our study, we varied four main parameters of a WD: the total mass (\(M\)), the effective temperature (\(T_{\mbox{eff}}\)), the mass of hydrogen in the envelope (\(M_{\mbox{H}}\)) and the mass of helium in the envelope (\(M_{\mbox{He}}\)). As such, there are a number of parameters we chose to hold constant, because most WDs pulsate in only a few modes, which indicates a small number of observables, as suggested by Castanheira & Kepler (2008) and Bischoff-Kim (2017). The goal in choosing our constant parameters was to create a model grid that could be used to analyze a large ensemble of WDs, that are not necessarily structured the same. As mentioned in Section 1, with our limited choices for free parameters, we must keep the specific core parameters constant. This requires the use of a standard core profile that is more general and can represent the average WD and WDs that are close to average very well. Since nonradial pulsations are much more dependent on envelope than core, we felt it was acceptable to standardize our cores in this way, allowing for the envelope helium and hydrogen masses to change the chemical profile for each model rather than core abundances. We fixed the parameters that dictate the shape of the core C/O profiles to best reproduce the composition profiles of Salaris et al. (1997), derived from stellar evolution models that evolve stars from the ZAMS and include time dependent diffusion of the elements. This profile has physical significance due to its derivation from evolutionary models, and provides a general baseline for the models we calculate. Composition profiles from two specific models calculated by WDEC for the grid, showing the transitions between different elements, specifically the C/O core and the He atmosphere, are shown in Figure 1. We also examined the sensitivity of fits to core parameters quantitatively after creating our fitting pipeline, in Section 4.4. The core profile shape and composition can be parametrized with variables: \(w_{1}\), \(w_{2}\), \(w_{3}\), and \(w_{4}\), as well as \(h_{1}\), \(h_{2}\), and \(h_{3}\), as described in Bischoff-Kim (2018), where \(w_{3}\) is constrained to be the difference in the size of the core and the sum of the other three free \(w_{n}\) parameters. We also chose the helium abundance in the C/He/H region (0.60), the diffusion coefficient for He at the base of the envelope (6.0) and at the base of the pure He (9.0). These parameters are all constant through every model in the grid. We held the core parameters fixed to the values listed in Table 1. ## 3 The Model Grid ### Grid Model Variable Parameters We show the range and step sizes for the four parameters we varied in our asteroseismic fitting in Table 2. The temperature range was chosen to match that of the observed instability strip. 
The total mass bounds are guided by the observed mass distribution of white dwarfs (Kepler et al., 2017), and the envelope mass values were chosen to represent a wide range of envelope masses, but with small enough steps to obtain more precise values in the solutions. For each combination of parameters, all possible periods between 100 s and 1500 s for all \(\ell=1\) and \(\ell=2\) modes were calculated. This would result in more than enough periods for an observed WD. Any mode higher than \(\ell=2\) can usually not be observed in WD stars due to geometric cancellation in light curves (Kepler et al., 2000). When referencing a specific period, we use \(k\) to denote the period number, so \(k=1\) is the first (shortest) overtone, \(k=2\) is the next shortest, and so on. Not all models converged and led to the successful computation of a list of periods, though the vast majority did. Figure 2 shows the success rate of converging models in the grid per \(T_{\mbox{eff}}\) and total mass combination. A total of 1,109,911 models were attempted, and 1,082,209 parameter combinations converged to form the final model grid, or 97.5% of attempted models. A few patterns emerge in this heat map: there are certain total masses (0.970 \(M_{\odot}\), for example, indicating an issue with the starter model) that systematically have trouble converging, and there is an interesting band of failed models between 0.75 and 0.90 \(M_{\odot}\). Unfortunately, the reasons for the failure of these models remain uncertain, and a future study will need to be done to reveal the mechanisms of the code that cause a model to fail. We have certain suspicions for causes of systematic failure, such as imperfect starter models, or improper memory management by the Fortran executable when run in sequence. Since the majority of the patterns of model failures lie above the usual masses for WDs in this dataset, and therefore would not normally factor into fitting anyway, we are confident that these failed models did not affect seismology. Between 0.5 and 0.75 \(M_{\odot}\), less than 1% of models failed. We could have chosen to remove our high-mass models but decided to leave them in, in order to convey the patterns we observed for future studies.

\begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \(w_{1}\) & 0.5 \\ \(w_{2}\) & 0.1 \\ \(w_{4}\) & 0.2 \\ \(h_{1}\) & 0.9 \\ \(h_{2}\) & 40\% \(h_{1}\) \\ \(h_{3}\) & 20\% \(h_{1}\) \\ \hline \end{tabular} \end{table} Table 1: Values chosen for the core parameters.

\begin{table} \begin{tabular}{c c c c} \hline \hline Parameter & Minimum & Maximum & Steps of \\ \hline Temperature (K) & 10600 & 12600 & 50 \\ Total Mass (\(M_{\odot}\)) & 0.470 & 1.000 & 0.005 \\ He envelope mass [\(\log\left(M_{\mbox{He}}/M_{*}\right)\)] & -1.50 & -4.00 & 0.25 \\ H envelope Mass [\(\log\left(M_{\mbox{H}}/M_{*}\right)\)] & -4.00 & -9.50 & 0.25 \\ \hline \hline \end{tabular} \end{table} Table 2: The parameters varied in our grid. The envelope masses are given as the logarithmic fraction of the star's total mass, while the star's total mass is given in solar masses (\(M_{\odot}\)). The total envelope mass is fixed at \(10^{-2}\)\(M_{*}\), in order to always be 2 orders of magnitude greater than the hydrogen layer.

Figure 1: Chemical profiles for models with \(T_{\mbox{eff}}=12000\) K, \(M=0.6M_{\odot}\), \(\log(M_{\mbox{H}}/M_{*})=\) -7, with \(\log(M_{\mbox{He}}/M_{*})=\) -2 (left) and -4 (right), respectively.
The left panel shows larger He mass layer to illustrate how the He and H mass layer values affect the shape of the profiles. These different profile shapes cause different mixing in C/He transition, as shown by the flattened sections between \(-\log(1-M_{r}/M_{*})=2\) to 4. This will cause different modes to be trapped in the main chemical zones, as well as trapping within this transition zone. ### Effect of the Parameters on the Periods The blue edge ZZ Ceti stars only pulsate with a few short periods, referred to as the first overtones. These are generally the \(k=1,2,3,4\) overtones in our models. By plotting the overtone periods vs. changing parameters of the grid, we can see how the parameters on a white dwarf model effect the calculated periods, and evaluate the physical description of what is happening in the core of ZZ Ceti stars. Figure 3 shows the change in \(\ell=1\) periods for models as the varying parameters in the grid change. Decreasing the hydrogen fraction in the envelope increases the length of the periods. This shows the presence of avoided crossings in the models, which occurs when a pulsation mode takes on the properties of the next immediate \(k\) mode (Castanheira & Kepler, 2008). This causes a variable period spacing (\(\Delta P\)), depending on the thickness of the hydrogen layer. Increasing the mass of the WD significantly decreases the values of its periods, and decreases \(\Delta P\). The latter means that higher mass stars have longer lists of calculated periods, making it easier for them to fit well. We must account for that effect and we describe our approach in section 4. There is a slight increase in pulsation period as \(T_{\mbox{eff}}\) is decreased. This means that as WDs cool, their modes get longer. We can physically measure the core cooling, which is the dominant cooling method for the ZZ Ceti phase. Helium mass layer has the most effect on periods at larger amounts, with little change at smaller masses. There is also a decrease in \(\Delta P\) with increasing temperature. The combination of that effect with the dependence of \(\Delta P\) on the total mass leads to the ubiquitous diagonal patterns in contour plots of best fit models (e.g. Fig. 5). The physical reason for this effect is that the period spacing depends on the average density of the model. The higher density of a high mass model can be compensated by the lower density of a hotter envelope and lead that model fitting the average spacing of an observed period spectrum, as well as a lower mass, cooler model. ## 4 Solution Fitting The general pipeline for fitting solutions takes inspiration from the analysis of Castanheira & Kepler (2008), Castanheira & Kepler (2009), and Bischoff-Kim et al. (2008), with some key statistical differences later on. In order to match models to stars, we need a measure of goodness of fit. In asteroseismology, we use \(S\) as our goodness of fit, where we take the sum of the squared difference in the observed vs. calculated periods, similar to a \(\chi^{2}\) goodness of fit, as described by Castanheira & Kepler (2008), and modified for our technique in Eq. 1. Figure 2: Heat map of the fraction of attempted models to converging models for each temperature-mass pair, and each He and H mass pair in the grid. The lighter grey areas show the finer features where models would not converge, while the redder areas show combinations where very few to no models converged, creating “holes” in the grid where no model exists. 
97.5% of all attempted models converged, and the “holes” are small enough and at favorable locations to have a very minimal effect on seismology. \[S=\sqrt{\sum_{i=1}^{n}\frac{[P_{obs}(i)-P_{model}]^{2}\times w_{P}(i)}{\sum_{i=1} ^{n}w_{P}(i)}}\,\frac{N_{m}}{100} \tag{1}\] where \(n\) is the number of observed modes, \(N_{m}\) is the total number of \(\ell=1\) and 2 modes calculated, and \(w_{P}\) is the weight given to each mode. The weights are determined as proportional to the inverse of the observed period's uncertainty in order to give more weight to more precise modes, as such: \[w_{P}\propto\frac{1}{\sigma_{P}} \tag{2}\] where \(\sigma_{P}\) is the calculated uncertainty of an observed period in seconds, as given by Hermes et al. (2017), and then these values for each WD are normalized between \(0<w_{P}\leq 1\). The notable change in calculating \(S\) from Castanheira & Kepler (2008) is the inclusion of \(N_{m}/100\). This was done to weight strength of fits based on how many periods were able to be calculated for a give set of parameters. A model with fewer calculated periods that closely fit an observed WD should be a higher probability solution than a model with many calculated periods that fits a larger number of stars; the inclusion of this term accounts for that. During preliminary analyses, we noted that seismology was biased towards models with low \(T_{\mbox{eff}}\) and high mass. This was found to be due to cool, massive WDs having more possible periods to pulsate. By including the number of model periods in the fit, we can partially correct this bias. The factor of 1/100 was included to normalize the \(S\)-values to reasonable numbers to compare with later, since the total number of periods for a WDEC model is generally between 80-120. To fit observed periods to theoretical ones, we start by matching all observed periods to the \(\ell=1\) calculated modes, which is supported by Robinson et al. (1995), Kotak et al. (2004), and Castanheira et al. (2007), since the highest amplitude modes of a star are usually \(\ell=1\). However, if no solution is found for all \(\ell=1\) modes, higher \(\ell\) modes are tried as described. We used the mode identification done by Hermes et al. (2017), or through systematic incremental changes of matching individual periods to \(\ell=2\). Each observed mode must only match one calculated mode in the model; this is called a one-to-one match. If multiple observed periods best fit one calculated model period, then the model is not one-to-one with the observed WD, and can be eliminated from our analysis. Once all one-to-one solutions are identified, we apply an \(S\)-cut, removing all models with an \(S\) above a certain value. The \(S\)-cut is usually the number of observed periods for the WD, however if no solution has an \(S\) below the number of observed periods, the \(S\)-cut is raised. This is why the factor of 1/100 is included in Eq. 1, in order to keep \(S\) reasonably comparable to the number of observed periods. As an example of of this technique, consider _KIC 7594781_. This star was first discovered in the original _Kepler_ dataset by Greiss et al. (2016). _KIC 7594781_ should be on the blue edge, and has several observed periods, with some at \(\ell=1\) and \(\ell=2\). Blue-edge ZZ Ceti generally have tighter constraints on their periods, and therefore their seismological fits. 
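Before walking through this example, the fit statistic of Eqs. (1)-(2) and the one-to-one matching rule can be summarized in a short sketch. This is a simplified illustration, not the pipeline's actual implementation: it matches every observed period against a single list of model periods and ignores the \(\ell\) identifications.

```python
import numpy as np

def fit_statistic(obs_periods, obs_sigmas, model_periods):
    """Sketch of the goodness-of-fit S of Eq. (1) with the weights of Eq. (2).

    Returns S, or None if the match is not one-to-one
    (two observed modes matched to the same model mode).
    """
    obs = np.asarray(obs_periods, dtype=float)
    w = 1.0 / np.asarray(obs_sigmas, dtype=float)   # Eq. (2): w_P proportional to 1/sigma_P
    w = w / w.max()                                 # normalized so that 0 < w_P <= 1
    model = np.asarray(model_periods, dtype=float)

    # match each observed period to its closest calculated period
    idx = np.array([np.argmin(np.abs(model - p)) for p in obs])
    if len(set(idx.tolist())) < len(idx):
        return None                                  # not one-to-one; model is discarded

    S = np.sqrt(np.sum((obs - model[idx]) ** 2 * w) / np.sum(w)) * len(model) / 100.0
    return S
```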
By selecting periods as \(\ell=1\) and \(\ell=2\), calculating an \(S\) for each model, eliminating those that are not one-to-one, and then applying an \(S\)-cut of 11 (the number of periods listed in Table 3), we are left with a long list of solutions that can be visualized in the temperature-mass plane in Figure 4. The global minimum is at 12100 K and 0.495 \(M_{\odot}\), however it is apparent that there are other local minima, with \(S\)-values close to the global minima. This will be addressed in Section 4.2. ### Solution Uncertainty The uncertainty equation for asteroseismic measurements is derived in Zhang et al. (1986): \[\sigma^{2}=\frac{d^{2}}{S-S_{0}}, \tag{3}\] where \(d\) is the difference between the two measurements in whatever parameter uncertainty you are calculating, \(S\) is the value calculated for the model (Eq. 1), and \(S_{0}\) is the \(S\)-value calculated for the comparison model. The choice in selecting the two models to calculate \(\sigma\) for describes the uncertainty as either internal or external for a solution. The internal uncertainty of a solution is the uncertainty between a model and its next nearest neighbor, and the external uncertainty is calculated between the minima of families of solutions. \begin{table} \begin{tabular}{c|c|c|c} \hline Period (s) & Uncertainty \(\sigma\) (s) & \(w_{P}\) & \(\ell\) \\ \hline 206.814 & 0.00055 & 0.4545 & 1 \\ 261.213 & 0.0019 & 0.1315 & 1 \\ 279.647 & 0.0005 & 0.5000 & 2 \\ 281.314 & 0.00080 & 0.312 & 1 \\ 295.983 & 0.0011 & 0.2273 & 1 \\ 328.037 & 0.00025 & 1.0000 & 2 \\ 350.322 & 0.0010 & 0.25 & 2 \\ 356.86 & 0.0024 & 0.1042 & 1 \\ 396.146 & 0.0017 & 0.1471 & 1 \\ 480.335 & 0.0052 & 0.0481 & 1 \\ 683.934 & 0.014 & 0.0179 & 1 \\ \hline \end{tabular} \end{table} Table 3: Periods of _KIC 7594781_ selected from Hermes et al. (2017). Weights follow Eq. 2, and \(\ell\)’s follow those listed in Hermes et al. (2017), with some periods raised to \(\ell=2\) when unknown in Hermes et al. (2017). ### Families of Solutions and Significant Membership Once this list of models with a sufficiently low \(S\)-value are acquired, we can begin to classify solutions in this list into families, which become apparent when visualizing individual "slices" of Figure 4 at specific hydrogen/helium masses, such as Figure 5. We begin by assigning solutions to "families", where initially each family is composed of all solving models with the same, unique combination of hydrogen and helium masses. This process can identify up to 220 families for our grid, although usually much less actually remain after the \(S\)-cut. For _KIC 7594781_, Figure 5 shows the family distributions of models between unique hydrogen and helium masses at \(\log(M_{\mbox{H}}/M_{\ast})=-7.5\). It can be seen that oftentimes stars will have several distributions of families on top of each other at hydrogen-helium masses combinations. For _KIC 7594781_, there are about 3 or 4 possible distributions, centered at \(\sim\!0.5\,M_{\odot}\), \(\sim\!0.65\,M_{\odot}\), \(\sim\!0.80\,M_{\odot}\), and a possible one around \(0.9\,M_{\odot}\). This splitting of hydrogen and helium masses combinations generally becomes even more prevalent the more periods a star has. This can be seen with the star _EPIC 201719578_, whose periods are listed in Table 4, and whose seismological solution distribution at \(\log(M_{\mbox{H}}/M_{\ast})=-6.0\) can be seen in Figure 6, which has very distinct splitting. 
This presents an interesting challenge: by considering families as just unique combinations of hydrogen and helium mass values, all of these distinct distributions are treated as one family, and possible well-fitting models are ignored. In order to maintain the integrity of families, we have to further split hydrogen-helium combinations into these respective distributions. A naive approach would be to section off solutions into a temperature-mass grid and assign family membership based on which area they reside in, but this would be subjective to the person cordoning off the grid and would vary between observed stars in the dataset. Upon close inspection, we find that solutions within a specific distribution are related via the model periods which best match the star, i.e. the \(k\) numbers associated with the periods of a solution. By considering only the \(k\)'s of each model, we can cluster solutions at specific hydrogen-helium masses into independent families. There are several grouping algorithms at the modern statistician's disposal, and choosing one to use for this scenario depends on a few considerations. Firstly, because of the highly distinctive and disparate behaviors of these fitting distributions from star to star, it would not be to our benefit to use a supervised learning method for "classification". Classification requires training on a small dataset to apply to a larger one. A supervised learning algorithm (such as \(k\)-nearest neighbors) would more than likely overfit to one model and not be reusable between stars, requiring more time and human analysis to assign training sets per star.

Figure 4: All solutions for _KIC 7594781_ following the outline in Section 4, with the minimum \(S\)-value model shown for each temperature and mass combination. Red lines mark the global minimum solution, with internal uncertainty calculated using Eq. 3.

Instead of classification, we decided upon "clustering" methods. Cluster analysis does not refer to one specific algorithm, but rather to the general task, usually involving calculating distances between members through an iterative process of knowledge discovery. These algorithms are generally unsupervised, so no training datasets are required. One of the most common, and the one chosen here, is \(k\)-means clustering (MacQueen, 1967), which partitions \(n\) observations into a specified number \(k\) of clusters (\(k\) in this case is not to be confused with the label \(k\) for period numbers), where each observation belongs to the cluster with the closest mean. The drawback for our use is the dependency on selecting the number \(k\) of clusters. By eye, one could infer 7-8 clusters from Figure 6, although other hydrogen-helium combinations could differ in number. The number of clusters for each \(k\)-means model was determined via the elbow method, which carries its own metrics, with an in-depth discussion in Ketchen & Shook (1996). This created a fast, efficient, and standardized way to cluster families for every star in the dataset, with relatively high accuracy. In practice, for each pair of hydrogen and helium mass layers, all solving models following the aforementioned process are selected, and only their period numbers \(k\) are clustered via \(k\)-means. For _EPIC 201719578_, a visualization can be seen in Figure 7, for \(\log(M_{\mbox{H}}/M_{*})=-6.0\), where the distribution of solutions has been clustered with relatively good accuracy. From here, to further narrow the solution selection, we can consider family membership for the identified solutions.
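A sketch of this clustering step is shown below. It assumes each solving model is represented by the vector of period numbers \(k\) assigned to the observed modes, and it uses scikit-learn's KMeans together with a simple inertia-drop heuristic as a stand-in for the elbow criterion; neither of these implementation choices is prescribed by the pipeline itself.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_families(k_vectors, max_clusters=10):
    """Split the solutions sharing one H/He layer-mass pair into families.

    k_vectors : (n_solutions, n_observed_periods) array of the radial orders k
                that each solving model assigns to the observed periods.
    Returns an array of family labels, one per solution.
    """
    X = np.asarray(k_vectors, dtype=float)
    n = len(X)
    if n < 3:
        return np.zeros(n, dtype=int)          # too few solutions to split further

    # crude elbow criterion: add clusters while the within-cluster scatter
    # (inertia) still drops by a sizeable relative amount
    inertias, fits = [], []
    for n_clusters in range(1, min(max_clusters, n) + 1):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        inertias.append(km.inertia_)
        fits.append(km)
    best = 1
    for i in range(1, len(inertias)):
        if inertias[i - 1] <= 0 or (inertias[i - 1] - inertias[i]) / inertias[i - 1] < 0.2:
            break
        best = i + 1
    return fits[best - 1].labels_
```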
Certain families may contain only a handful of models, while others contain dozens, or even hundreds. These low-membership families can be considered as outliers, and their corresponding solution models can be eliminated from the solution list. This is done by only taking solutions which have "significant membership", where the number of models in the family must be greater than one standard deviation (\(1\,\sigma\)) less than the arithmetic mean number of family members for the star. This was done to strengthen confidence in the solution families that arise, and therefore the solution models derived from the families. ### Solution Selection Methods Once a list of significant family solutions with the minimum-\(S\) have been acquired for a star, generally dozens to hundreds of families could fit with our method. To narrow down this list, we are required to make some choices, and we want those choices to best reflect the strength of the fitted model and the accuracy of the seismological temperature and mass to other solution types, such as spectroscopy. Table 7 contains the absolute minimum \(S\)-value family solution for each star in the _Kepler_ and _K2_ dataset, however we have no inclination to prefer the global minimum family to other solution families with similarly low \(S\)-values, because of the massively degenerate nature of these solutions. The key is in selecting "similarly low" \(S\)-values to discriminate between viable seismological solutions and solutions that are invalidated by stronger ones. In order to trim the list of solutions down to just viable ones, we used a standardized method for each star. Taking the list of solutions obtained using hydrogen-helium combinations and machine learning clusters, we can standardize all \(S\)-values between 0 and 1, and then only keep those below a certain cutoff. We considered it to be better to eliminate more solutions than less, so we eliminated all solutions with a normalized \(S\) above 0.05. For _KIC 7594781_ and _EPIC 201719578_, the distribution of chosen solutions can be seen in Figure 8. By choosing those below 0.05, we can have confidence in the seismological strength of our remaining solutions to be similarly viable. Once only viable solutions remain, we are left with our final list of solutions. Each observed star from Hermes et al. (2017) had a varying number of solutions in their final list, ranging from one to several dozen because of the degeneracy of their observed periods. Since we are confident that the final solution list are near equivalent in seismological strength, we can use other factors to guide which solution we choose. One possible approach is turning to spectroscopy. Hermes et al. (2017) goes into great detail of spectroscopic solutions for temperature and mass of these stars, and is the secondary method we use to choose solutions, after the minimum \(S\) (Figure 7). We can normalize all spectroscopic and seismological values for \(T_{\mbox{eff}}\) and \(M\) to be between 0 and 1, where 0 and 1 are the minimum and maximum values of the grid (0.47 \(M_{\odot}\) - 1.00 \(M_{\odot}\), 10600 K - 12600 K). From there, we can simply chose the single viable seismological solution with the lowest geometric distance to spectroscopy in the \(T_{\mbox{eff}}\) and \(M\) plane. Creating a spectroscopy dependence in seismological solutions is not favorable for the practice of seismology, and other selection criteria exist that would not introduce as much variability in the final solutions. 
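The selection just described can be sketched as follows. The field names, and the assumption that each entry carries its family's minimum \(S\) together with the corresponding \(T_{\mbox{eff}}\) and \(M\), are illustrative rather than the pipeline's actual data structures.

```python
import numpy as np

def select_solution(solutions, spec_teff, spec_mass,
                    teff_range=(10600.0, 12600.0), mass_range=(0.47, 1.00)):
    """Keep families with normalized S <= 0.05, then pick the one closest to the
    spectroscopic (Teff, M) point in the plane normalized to the grid bounds."""
    s = np.array([sol["S"] for sol in solutions], dtype=float)
    spread = s.max() - s.min()
    s_norm = (s - s.min()) / spread if spread > 0 else np.zeros_like(s)
    viable = [sol for sol, sn in zip(solutions, s_norm) if sn <= 0.05]

    def norm(value, lo, hi):
        return (value - lo) / (hi - lo)

    t0, m0 = norm(spec_teff, *teff_range), norm(spec_mass, *mass_range)
    return min(viable, key=lambda sol: np.hypot(norm(sol["teff"], *teff_range) - t0,
                                                norm(sol["mass"], *mass_range) - m0))
```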
We plan to explore these other criteria in later studies. Another way to analyze these final valid solution lists are to approach them each as their own dataset. The benefits here are the ability to monitor constraints and precision on our parameters, like hydrogen and helium mass. The drawback is in the low amount of valid solutions that are usually retained in these lists, meaning a inherently large standard deviation for temperature and total mass, as extreme temperature and extreme mass solutions are averaged together. This is explored in our results in Section 6.1. ### Core Sensitivity Study Figure 7: The identified clusters, and therefore families, for _EPIC 201719578_ with \(\log(M_{\mbox{H}}/M_{*})\) = -6.0, and \(\log(M_{\mbox{He}}/M_{*})\) = -1.5 and -2.5 as in Figure 6. Color/shape of point does not correlate between the plots, and the two \(k\)-means models were calculated independently from each other, using only the period numbers (\(k\)’s) of the models. Because we performed asteroseismic fits of a large number of objects, we chose to focus on bulk parameters such as mass and effective temperature and envelope parameters such as the thickness of the hydrogen and helium layer mass. While these are parameters that consistently influence the best asteroseismic fits of DAVs, numerous studies have shown that g-mode pulsations in white dwarfs were sensitive to the core as well (e.g. Bischoff-Kim et al., 2014; Giammichele et al., 2018). The WDEC is especially well suited for those types of study. Details on the parameterization of the core oxygen profile are presented in Figure 1 of Bischoff-Kim (2018). We use that parameterization in the study below. In order to quantitatively show the sensitivity of the fits to core shape, we utilized a separate grid with a coarser size (mass steps of 0.05 \(M_{\odot}\), temperature steps of 500K, layer thickness steps of 0.5) and varying core as a supplement to our grid. We chose both a high and low value for these two parameters to create 4 combinations of grid parameters to examine. The numbers were picked in order to most accurately represent the total population of WDs in nature. The parameters chosen are listed in Table 5. For the central oxygen abundance, we did not go below 0.50, consistent with the predictions of stellar evolution (Metcalfe et al., 2002; Althaus and Corsico, 2022, e.g.). We then compared fits for _KIC 7594781_ and _EPIC 201719578_ with each of these core combinations. These two are rich pulsators with very precise fits to asteroseismology. We have used them as test stars and they are good choices for practical application of these test grids as well. We followed the same pipeline as described with the fine grid above and found model fits for each star. It is apparent that, consistent with expectations, \(w_{1}\) is the more significant parameter to individual fit. We show the quality of fit map for _KIC 7595781_ in Figure 9. Although the contours are not as refined as the main grid, we can still see that the forbidden solutions in the high-mass, hot corner of the plot are generally the same and the solution family shapes are still in the same general location. The various core solutions (along with our concluding solution from Figure 9, for comparison) are listed in Table 6. Several patterns remain consistent. Except for the high mass solution of _KIC 7594781_ that would readily be discarded, its mass is consistent. We also recover the hydrogen and helium layer masses for the better fitting models. 
For _EPIC 201719578_, the core has a larger effect. We do still find higher mass solutions. But the layer masses are less consistent. It would be beneficial to refine this method for objects like _EPIC 201719578_, where core structure is more significant. However, with pipeline fitting, parameters have to be selected by importance, and we selected the ones that matter most for the normal mass DAVs in the dataset. ## 5 The Kepler Space Telescope The _Kepler_ space telescope mission, and later _K2_, was designed to continuously monitor stars, for four years. The main science goal of the _Kepler_ mission was to determine the frequency of Earth-like planets around Solar-like stars. Another scientific goal was to characterize the stars in the field. Most of the observations were in long-cadence \begin{table} \begin{tabular}{l l l l l l} \hline Core (h1, w1) & \(T_{\rm eff}\) (K) & \(M(M_{\odot})\) & -log(\(M_{\rm H}/M_{*}\)) & -log(\(M_{\rm He}/M_{*}\)) & \(S\) (s) \\ \hline Average solution & **11625 \(\pm\) 475** & **0.510 \(\pm\) 0.0200** & **7.38 \(\pm\) 0.12** & **1.50 \(\pm\) 0.00** & \\ \hline (0.5, 0.1) & 12000 & 0.950 & 6.5 & 1.5 & 2.875 \\ (0.5, 0.5) & 11000 & 0.550 & 7.0 & 1.5 & 2.92 \\ (0.9, 0.1) & 12500 & 0.650 & 6.0 & 3.0 & 5.102 \\ (0.9, 0.5) & 12000 & 0.550 & 6.5 & 3.0 & 6.257 \\ \hline Average solution & **11900 \(\pm\) 514** & **0.870 \(\pm\) 0.0126** & **8.20 \(\pm\) 0.58** & **2.65 \(\pm\) 0.73** & \\ \hline (0.5, 0.1) & 11000 & 0.650 & 5.5 & 1.5 & 0.59 \\ _EPIC 201719578_ & (0.5, 0.5) & 11500 & 0.650 & 5.5 & 3.5 & 0.714 \\ (0.9, 0.1) & 10000 & 0.700 & 5.5 & 3.5 & 0.857 \\ (0.9, 0.5) & 11500 & 1.000 & 4.0 & 1.5 & 0.971 \\ \hline \end{tabular} \end{table} Table 6: Solutions with varying core parameters, with _KIC 7594781_ on top and _EPIC 201719578_ on bottom of the table. Solutions from Table 9 are bolded at the top for reference. \begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \(w_{1}\) & 0.1 and 0.5 \\ \(w_{2}\) & 0.1 \\ \(w_{4}\) & 0.2 \\ \(h_{1}\) & 0.5 and 0.9 \\ \(h_{2}\) & 40\% \(h_{1}\) \\ \(h_{3}\) & 20\% \(h_{1}\) \\ \hline \end{tabular} \end{table} Table 5: Core parameters that we varied to compare with each other in the supplemental grid. (30 minutes), but some targets had short-cadence (2 min) data. After a second reaction wheel on the telescope failed, the spacecraft was no longer able to hold its observing field at a fixed position in the sky without drifting. The mission was redesigned then as _K2_, to scan multiple areas throughout the sky. While this would mean lower precision, the telescope was pointed to different fields, for 19 cycles, until it ran out of fuel. The short cadence observations from _K2_ provided a reliable way to identify DAVs in white dwarf populations. Data of stellar objects can be collected from the Mikulski Archive for Space Telescopes for NASA's _Kepler_ & _K2_ Space Telescope. ### Observed DAVs From 2015 - 2017, observations of identified white dwarfs in the _K2_ campaigns poured in, with correlating analysis for variability. Hermes et al. (2017) compiled all relevant information on WDs in the _K2_ dataset, including non-variable and variable WDs, as well as pertaining spectroscopic values for the stars in the dataset. In total, there were 29 observed pulsating WDs, all of them DAVs. The periods listed in Hermes et al. (2017) were calculated in different manners, namely the Linear Least-Squares (LLS) and Lorentzian (Lor) methods. 
Hotter DAVs have narrow pulsation peaks that can be modelled with the LLS method, while colder DAVs have longer peaks and pulsations can only be modelled through the Lor methods (Hermes et al., 2017). For this study, we used all LLS periods when available for a star, and if not available, we used Lor modes. Lor modes have much larger uncertainties than LLS modes, sometimes by several order of magnitudes. For stars with more LLS modes than Lor, the Lor modes can be ignored when fitting solutions due to their extremely low weighting when calculating \(S\). ## 6 Results Table 7 contains the absolute minima \(S\) solutions for all the stars in Hermes et al. (2017) according to our asteroseismological study. Figure 10 shows the residual difference between \(T_{\textrm{eff}}\) and total mass for the seismic results here and the spectroscopic results collected in Hermes et al. (2017). The average seismological temperature for the WDs in this dataset is 11384 K and the average WD mass is 0.696 \(M_{\odot}\). The minimum \(S\) solutions tend to fit above the average mass for WDs (0.624 \(M_{\odot}\)) determined via spectroscopic modeling (Kepler et al., 2017). Using the selection methods detailed in Section 4.3, we choose the solutions listed in Table 8, with spectroscopic residual differences in Figure 11. These selections have an average seismological temperature of 11741 K and mass of 0.667 \(M_{\odot}\). Figure 9: Contour plots for model fits to _KIC 7594781_ with the four combinations of core parameters in the supplemental grid. From left to right, (\(h_{1}\), \(w_{1}\)): (0.5, 0.1), (0.5, 0.5), (0.9, 0.1), (0.9, 0.5). Individual contour plots are similar to and based on Figure 4. Figure 11: Residuals between spectroscopic solutions listed in Hermes et al. (2017), and the selected seismological solutions listed in Table 8 Figure 10: Residuals between spectroscopic solutions listed in Hermes et al. (2017), and seismological solutions listed in Table 7. ### Constraining Values with Averages and Uncertainties In order to see on a more general scale what the solution fit for an observed star is, we can average the valid seismological solutions determined via Section 4.3, and calculate a precision using standard deviation. Since this standard deviation calculation potentially uses multiple families, it can be understood like an external uncertainty, but not using Eq. 3. It will give us a sense of how the best seismological solutions tend to be clustered. This way, we can also gain insight into how well the hydrogen and helium layer masses are constrained. These averages and uncertainties are shown in Table 9, and plotted in Figure 12. A very common theme was for an observed star to have only a few valid solutions, with half at a higher temperature and half at a lower temperature, with varying masses between them. This means that the standard deviations for these star's solutions can become very large. This is illustrated in Figure 12 where the solutions near the center of the temperature-mass plane are averages from an equal number of extreme solutions. Since we are performing seismology, a more useful and important factor is the precision of the hydrogen and helium thickness. Using this technique on the stars in the dataset, we can calculate a precise value for these masses, and tightly constrain them. Every star in the study is held within 2 orders of magnitude for hydrogen, with several below or around 1 order. Helium mass is held within one order of magnitude for all stars. 
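As an illustration of the averaging just described, the following sketch computes the mean and standard deviation of a star's valid solutions; the rows are placeholder solutions, not values taken from Table 9.

```python
import numpy as np

# Valid, selected solutions for one star (placeholder values).
# Columns: T_eff [K], M [M_sun], -log(M_H/M_*), -log(M_He/M_*).
valid = np.array([
    [11400., 0.520, 7.5, 1.5],
    [12100., 0.500, 7.3, 1.5],
    [11375., 0.510, 7.4, 1.5],
])

mean = valid.mean(axis=0)
std = valid.std(axis=0)   # quoted as the precision of the averaged solution

for name, m, s in zip(["T_eff", "M", "-log(M_H/M_*)", "-log(M_He/M_*)"], mean, std):
    print(f"{name}: {m:.3f} +/- {s:.3f}")
```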
These higher-precision hydrogen and helium masses are mostly independent of the precision for temperature and mass, which instills a lot of confidence to WD asteroseismology, since it is the only technique able to probe these values. Figure 12: The average values for seismological temperature and total mass of stars in the _Kepler_/_K2_ dataset using valid, selected solutions, with error bars demonstrating precision using standard deviation. The example stars _EPIC 201719578_ and _KIC 7594781_ are marked at their high- and low-mass solutions respectively. ## 7 Conclusions We performed a systematic grid search for best fit models for the WDs listed in Hermes et al. (2017), between 10600 and 12600 K and 0.47 to 1.00 \(M_{\odot}\), using fixed core parameters. We then described and utilized a standardized solution identification procedure to select the highest-confidence and most viable seismological model. These models provide a seismological effective temperature, total WD mass, fractional hydrogen mass, and fractional helium mass for each star in the dataset. We have also instilled confidence in our method of seismological analysis that we will apply to all other known WDs. Our aim is to analyze all known DAVs, and compare this technique to previous and current studies, as well as apply it to subsequent identified WDs. We are currently drafting a post factum search for both variable and non-variable WDs in the later _K2_ campaigns (8-19), and plan to use the same pipeline from this study for any currently unidentified variable WDs in that data. We plan to also experiment with other solution selection methods, including using parallaxes with a mass-radius relationship to narrow down the valid solution list. This would be much more empirical than spectroscopic modelling, and have less dependencies on parameters and characteristics outside of seismology's control. It should also be reiterated, as stated in Section 4.4, that there are DAVs in which core structure is very influential on asteroseismic fits. The method outlined in this paper remains ineffective in determining which stars this is true for. 
The pipeline described above would benefit from being refined to better analyze stars like _EPIC 201719578_ in which \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Name & \(T_{\text{eff}}\)(K) & Mass (\(M_{\odot}\)) & \(-\log(M_{\text{H}}/M_{*})\) & \(-\log(M_{\text{He}}/M_{*})\) \\ \hline KIC4357037 & 11850 \(\pm\) 150 & 0.675 \(\pm\) 0.1950 & 5.88 \(\pm\) 1.62 & 1.75 \(\pm\) 0.25 \\ KIC4552982 & 11246 \(\pm\) 559 & 0.701 \(\pm\) 0.0572 & 5.50 \(\pm\) 1.07 & 2.36 \(\pm\) 0.81 \\ KIC7594781 & 11625 \(\pm\) 475 & 0.510 \(\pm\) 0.0200 & 7.38 \(\pm\) 0.12 & 1.50 \(\pm\) 0.00 \\ KIC10132702 & 11581 \(\pm\) 655 & 0.705 \(\pm\) 0.0510 & 7.38 \(\pm\) 1.12 & 2.62 \(\pm\) 0.65 \\ KIC11911480 & 11065 \(\pm\) 692 & 0.729 \(\pm\) 0.1113 & 7.38 \(\pm\) 1.02 & 3.20 \(\pm\) 0.71 \\ EPIC60017836 & 11182 \(\pm\) 392 & 0.737 \(\pm\) 0.1059 & 6.68 \(\pm\) 1.14 & 2.56 \(\pm\) 0.73 \\ EPIC201355934 & 12583 \(\pm\) 24 & 0.720 \(\pm\) 0.0000 & 6.00 \(\pm\) 0.00 & 3.75 \(\pm\) 0.20 \\ EPIC201719578 & 11900 \(\pm\) 514 & 0.870 \(\pm\) 0.0126 & 8.20 \(\pm\) 0.58 & 2.65 \(\pm\) 0.73 \\ EPIC201730811 & 11750 \(\pm\) 736 & 0.710 \(\pm\) 0.0510 & 6.00 \(\pm\) 0.20 & 2.08 \(\pm\) 0.12 \\ EPIC201802933 & 10675 \(\pm\) 63 & 0.858 \(\pm\) 0.0121 & 9.00 \(\pm\) 0.20 & 3.38 \(\pm\) 0.24 \\ EPIC201806008 & 11492 \(\pm\) 610 & 0.664 \(\pm\) 0.1187 & 6.89 \(\pm\) 1.61 & 2.75 \(\pm\) 0.78 \\ EPIC206212611 & 11609 \(\pm\) 632 & 0.753 \(\pm\) 0.1461 & 6.85 \(\pm\) 1.60 & 2.72 \(\pm\) 0.79 \\ EPIC210397465 & 11334 \(\pm\) 555 & 0.599 \(\pm\) 0.1157 & 5.33 \(\pm\) 0.97 & 2.95 \(\pm\) 0.85 \\ EPIC211596649* & 11100 & 0.510 & 6.50 & 1.50 \\ EPIC211629697 & 11512 \(\pm\) 594 & 0.749 \(\pm\) 0.1278 & 7.10 \(\pm\) 1.41 & 2.59 \(\pm\) 0.75 \\ EPIC211914185* & 12300 & 0.630 & 6.00 & 1.50 \\ EPIC211916160 & 11631 \(\pm\) 608 & 0.663 \(\pm\) 0.1156 & 6.96 \(\pm\) 1.53 & 2.68 \(\pm\) 0.78 \\ EPIC211926430 & 10817 \(\pm\) 155 & 0.853 \(\pm\) 0.0094 & 8.92 \(\pm\) 0.12 & 3.50 \(\pm\) 0.54 \\ EPIC228682478 & 11729 \(\pm\) 624 & 0.662 \(\pm\) 0.1037 & 6.96 \(\pm\) 1.57 & 2.68 \(\pm\) 0.87 \\ EPIC229227292 & 11814 \(\pm\) 693 & 0.681 \(\pm\) 0.1516 & 7.68 \(\pm\) 1.70 & 2.61 \(\pm\) 0.97 \\ EPIC229228364 & 11622 \(\pm\) 524 & 0.772 \(\pm\) 0.1083 & 6.03 \(\pm\) 1.52 & 2.76 \(\pm\) 0.81 \\ EPIC220204626 & 11358 \(\pm\) 568 & 0.672 \(\pm\) 0.1425 & 8.29 \(\pm\) 1.34 & 2.56 \(\pm\) 0.65 \\ EPIC220258806 & 11750 \(\pm\) 450 & 0.815 \(\pm\) 0.0450 & 8.50 \(\pm\) 0.50 & 1.50 \(\pm\) 0.00 \\ EPIC220347759* & 10600 & 0.840 & 7.00 & 1.50 \\ EPIC220453225 & 12200 \(\pm\) 628 & 0.620 \(\pm\) 0.1307 & 6.45 \(\pm\) 0.43 & 3.30 \(\pm\) 0.19 \\ EPIC229228478 & 11694 \(\pm\) 624 & 0.710 \(\pm\) 0.0952 & 7.18 \(\pm\) 1.13 & 2.23 \(\pm\) 0.90 \\ EPIC229228480 & 11393 \(\pm\) 528 & 0.688 \(\pm\) 0.1131 & 6.14 \(\pm\) 1.54 & 2.83 \(\pm\) 0.81 \\ EPIC210377280* & 10950 & 0.570 & 8.75 & 1.50 \\ EPIC220274129 & 11482 \(\pm\) 517 & 0.679 \(\pm\) 0.1146 & 6.74 \(\pm\) 1.62 & 2.74 \(\pm\) 0.77 \\ \hline \end{tabular} Note. –* denotes selection process eliminated all but one solution, therefore no standard deviations exist. \end{table} Table 9: The average valid seismological solutions. core structure is significant. Expanding parameter space in pipeline fitting is computationally expensive, but such an analysis would be good to include in future studies. In total, we analyzed 29 DAVs, using data collected from the _Kepler_ and _K2_ space telescope. Of the 29 stars, we presented 19 brand new analyses, and an additional 6 analyses of known WDs using new data. 
The results presented here, with emphasis on the hydrogen and helium layer masses, provide important constraints on the internal structure of WDs. Asteroseismology is the only technique able to probe the interior of these stars, and seismological results like those from this study contribute directly to the study of white dwarf structure and stellar evolution.
2308.03478
Network Security in the Industrial Control System: A Survey
Along with the development of intelligent manufacturing, especially with the high connectivity of the industrial control system (ICS), the network security of ICS becomes more important. And in recent years, there has been much research on the security of the ICS network. However, in practical usage, there are many types of protocols, which means a high vulnerability in protocols. Therefore, in this paper, we give a complete review of the protocols that are usually used in ICS. Then, we give a comprehensive review on network security in terms of Defence in Depth (DiD), including data encryption, access control policy, intrusion detection system, software-defined network, etc. Through these works, we try to provide a new perspective on the exciting new developments in this field.
Yang Li, Shihao Wu, Quan Pan
2023-08-07T11:19:24Z
http://arxiv.org/abs/2308.03478v1
# Network Security in the Industrial Control System: A Survey ###### Abstract Along with the development of intelligent manufacturing, especially with the high connectivity of the industrial control system (ICS), the network security of ICS becomes more important. And in recent years, there has been much research on the security of the ICS network. However, in practical usage, there are many types of protocols, which means a high vulnerability in protocols. Therefore, in this paper, we give a complete review of the protocols that are usually used in ICS. Then, we give a comprehensive review on network security in terms of Defence in Depth (DiD), including data encryption, access control policy, intrusion detection system, software-defined network, etc. Through these works, we try to provide a new perspective on the exciting new developments in this field. Industrial Control System, Security, Network Security. ## I Introduction The new generation of information technology, especially the new generation of intelligent manufacturing, is developing rapidly and accelerating its integration with the internet, which brings new opportunities for the transformation and upgrading of the global manufacturing industry. However, the high connectivity of industrial control systems (ICS) makes their security an important issue. In particular, the diversity of industrial control network protocols increases the vulnerability. More than 70% of the vulnerabilities disclosed in ICS in the first half of 2020 were remotely exploited by cyber attack carriers, according to an industrial cybersecurity firm 1. Footnote 1: [https://www.securityweek.com/over-70-ics-vulnerabilities-disclosed-first-half-2020-remotely-exploitable](https://www.securityweek.com/over-70-ics-vulnerabilities-disclosed-first-half-2020-remotely-exploitable) In the view of cybersecurity, network security is confidentiality and non-repudiation of the communication in ICS, which includes protocol security, network structure security, etc. To ensure network security for ICS network designs, different organizations come up with different standards. For example, the National Institute of Standards and Technology (NIST) proposed the guidelines for ICS Security [1] since 2011, the International Electrotechnical Commission (IEC) proposed the ISA/IEC 62443-4-1 [2] in 2018 to ensure lifecycle security in ICS, etc. Different from traditional networks, industrial devices can be divided into different sectors or zones according to their functions and positions in ICS. Therefore, defense-in-depth (DiD) is an important way to ensure network security for the entire industrial network system. A DiD, usually includes data encryption, access control policies, intrusion detection system, etc. Data encryption is to ensure the confidentiality of data transmission, usage, and storage. The access control policy is a direct way to protect the ICS from hostile detection. And the intrusion detection system is a mostly used way to monitor malicious activity or policy violations in the ICS. Therefore, in this survey, we will elaborate on how these strategies work and how they typically behave in an ICS system. Plenty of research has been done on ICS network security. However, as far as we know, there are very few systematic reviews that well shape this area and current progress. Although some works have given the spotlight on the survey of ICS security, e.g., Knowles et al. 
[3] surveyed the methodologies and research published before 2015 from the viewpoint of ICS security measurement and risk management. Xu et al. [4] reviewed the works from the viewpoint of the protocols used in ICS. Recently, You et al. [5] provided a brief introduction to ICS security in terms of security control tendencies, ICS operation, the network layer, etc. Although these works have explored ICS security from different views, a systematic survey of the current progress of network security in ICS is still lacking. Furthermore, different from previous studies, our focus is more on defense in depth, which is more practical. The structure of this paper is as follows: In Section II, we provide a discussion of the ICS network. In Section III, we discuss defense in depth, covering data encryption, access control policy, intrusion detection, software-defined networks, etc. Finally, future research directions are given in Section IV.

## II ICS Networks

With the development of connectivity and openness, communication security has become a major threat to ICS. Depending on the scenario, the protocols, network structure, etc., may differ in an ICS. Meanwhile, one of the features of ICS is the diversity of protocols, and there is no unified standard for its design. Therefore, it is a challenging task to guarantee network security in ICS. In this section, we describe the protocols and network structure in ICS.

### _Network Structure_

An ICS network is composed of: the Programmable Logic Controller (PLC), an industrial digital computer used to control manufacturing processes; the Remote Terminal Unit (RTU), a microprocessor-controlled electronic device that interfaces objects in the physical world to the SCADA system; the Intelligent Electronic Device (IED), an integrated microprocessor-based controller usually used in power systems; and the Human Machine Interface (HMI), a control panel that operates PLCs and RTUs. Based on the different requirements of ICS application scenarios, different network topologies are needed. For example, the devices are usually divided into _groups_ or _zones_ based on their location, usage, function, or network security [6], as also recommended by NIST [1]; details will be introduced in Section III-B2. As recommended in the standard ISA-62443 2, the topology can be designed as an enterprise zone and plant zones in terms of network security and functions, as illustrated in Figure 1. Footnote 2: [https://www.isa.org/training-and-certification/isa-certification/isa99iec-62443/isa99iec-62443-cybersecurity-certificate-programs](https://www.isa.org/training-and-certification/isa-certification/isa99iec-62443/isa99iec-62443-cybersecurity-certificate-programs) To ensure security, there is no connection among zones. Another commonly used topology, shown in Figure 2, is a three-level ICS network: the management level, the supervision control level, and the device control level. The management level can be seen as a traditional IT structure whose security is guaranteed by traditional defense methods. In the supervision control level and device control level, different kinds of protocols are applied, e.g., ModBus 3, ProfiNet 4, DNP3 5, etc. However, security modules are seldom applied at those two levels in practice due to the incompatibility of manufacturing devices and protocol diversity. 
Footnote 3: [https://modbus.org](https://modbus.org) Footnote 4: [https://www.profibus.com/technology/profinet/](https://www.profibus.com/technology/profinet/) Footnote 5: [https://www.dnp.org/About/Overview-of-DNP3-Protocol](https://www.dnp.org/About/Overview-of-DNP3-Protocol) Previously, cable distance, traffic distribution, traffic balance, network delays, etc., are the main considerations in the ICS network designing [7], especially in the real-time system. However, with the interconnectivity of ICS, network security has become a serious issue. Also, the researchers start to focus on safe neural network designing in ICS. But it is hard to reconcile the traditional requirement and the security. The general problems that an ICS network needs to concern with include real-time data transfer, geographical position limitation, strong determinism, etc., which are difficult to harmonize, and the security requirement makes it even harder [8]. ### _Protocols_ The protocol is the basic element for communication in ICS, and plenty of works focused on the security of a specific protocol. And the truth is that massive protocols exist in ICS and most of them are non-public. More sadly, most of the protocols do not consider the security mechanism when they are designed and applied. In this subsection, the commonly used protocols in ICS will be analyzed in authorization, encryption, availability, integrity, and confidentiality those aspects respectively[4]. The details are listed in Table I. In the table, **Authentication** indicates the mechanisms that judge an identity, if there is no authorization in a protocol, the privileges can be easily gained with the forge protocol packets. **Authorization** is a security mechanism to determine access levels based on the user's identity. If there is a lack of authorization, malicious users can send any information or resource to others without permission. **Encryption** denotes whether there is cryptography in the protocol, if there is no encryption, the communication data can be captured easily by malicious attackers. **Availability** is the status of the ICS device or service, if it is a lack of availability, ICS may lose control which may cause an industrial accident. **Integrity** indicates whether the data is transmitted completely, if there is a lack of integrity, communication data can be rendered useless by package missing or corruption. **Confidentiality** refers to ICS's protection against unauthorized access and misuse. Without confidentiality, unauthorized users can exploit vulnerabilities to achieve illegal purposes. Real-time Ethernet (RTE) is a set of protocols that support real-time operation, and all of those protocols are designed based on the IEC 61784-1. All protocols mentioned below belong to RTE. #### Ii-B1 Profinet Profinet is usually used in data communication between controllers (e.g., PLCs, DCSs, or PACs, etc.) and devices (e.g., I/O blocks, RFID readers, proxies, etc.). There are four layers in Profinet which are Ethernet (physical and data link layers), IP (network layer), TCP&UDP (transport layer), and other protocols (application layer). In some specific Profinet versions, to make Profinet faster, it may skip the IP, and TCP&UDP layers, e.g., Real-Time Profinet. Profinet Fig. 1: An example of the zone in an ICS network (ISA-62443 zone), cited from [6]. Fig. 2: Another example of the ICS topology. 
has two versions which are Profinet CBA and Profinet IO, Profinet CBA is suitable for component-based communication via TCP/IP, while Profinet IO is used in systems requiring real-time communication. As there is no authentication mechanism in the original version. Therefore, it suffers from the network security problems, such as man-in-the-middle, including packet flooding, packet sniffing, etc. To tackle this problem, Oh et al. [23] added an Integrity Check Value (ICV) module at the base of Profinet/DCP (with DCP as configuring protocols) to the authentication data field. Guilherme et al. [24] proposed an anomaly detection method that can identify four different events that happened in Profinet. In summarizing, there is a high authorization, encryption, integrity, availability w.r.t., network security [9, 10]. #### Ii-B2 DNP3 DNP3 is designed based on the IEC TC-57 standard, and it is a three layers protocol. Even with the large scale of the application, DNP3 is still weak in security guaranteed. Firstly, there is no authentication protection in DNP3, and it is easy for an attacker to disrupt the control process by creating a normal conversation. Secondly, it is also lacking authorization protection, and any user can run any functions over it. Finally, there is no encryption protection, and the messages it transmits are in plain text. To ensure the DNP3's security, Jeong-Han et al. [25] detected the intrusion by producing a burst-based whitelist model. Bai et al. [26] proposed an automation protection framework to detect the attacks that aim at the DNP3. Hao et al. [27] designed a set of snort rules on the DNP3 networks for intrusion detection. Sahebroa et al. [28] enhanced the DNP3's security by modifying the internal protocol structure and encrypting its packages with Blowfish [29], and Ye et al.[30] proposed an improved version of DNP3-BAE by using the hashing chain. #### Ii-B3 Modbus Modbus is one of the most popular and oldest protocols in the ICS communication module, it was designed to connect with PLCs in 1979, and now it is broadly applied in industries with two kinds of implementation. One is serial Modbus, which applies the high-level data link control (HDLC) standard6. And the other one is the Modbus-TCP, which adopts the TCP/IP protocol stack [31]. Same to DNP3, lots of works are proposed to enhance this protocol. Emil et al. [32] proposed an authentication method for the device when connecting with Modbus TCP. Fei et al. [33] designed a new format of Modbus named SecModbus, it did not increase the communication procedure and ensured confidentiality, integrity, and authorization at the same time. Apart from the improvement over the Modbus protocol, Yusheng et al. [34] designed the stereo depth intrusion detection system (SD-IDS) to inspect the Modbus TCP traffic. Footnote 6: [http://en.Wikipedia.org/wiki/High-Level_Data_Link_Control](http://en.Wikipedia.org/wiki/High-Level_Data_Link_Control) #### Ii-B4 IEC 60870-5104 IEC 60870-5104 (also known as IEC 870-5-104) is an international standard, released in 2000 by the IEC (International Electrotechnical Commission). Several protocols are designed based on this standard, e.g., ip4Cloud/SEC3PB which is to capture Profibus data by eavesdropping and transmitting it to SCADA services, and ip4Cloud/SEC3IO which is to switch and monitors digital IO status to transmit them to SCADA services. The protocol under this standard can be used for remote control tasks between the control center and the substation. 
Generally, there is no authentication and encryption mechanism in IEC 60870-5104, and all the messages are transmitted with plain text. Due to such problems, protocols under this standard are vulnerable: Maynard et al. [17] analyzed the attack behaviors to the IEC 60870-5104 protocols and deployed the man-in-the-middle attack over the SCADA system successfully and suggested that rule-based methods can be applied in the security insurance. Recently, Qassim et al. [35] showed that a successful control command injection attack can be implemented by exploiting the previously identified vulnerabilities in their designed SCADA testbed. To ensure security, with the help of the open-source Snort tool7, Yang et al. [16] used a rule-based method to detect the intrusion that deployed on this protocol. To protect the substation automation network based on IEC 60780-5104 protocol, Hodo et al. [36] proposed a machine learning-based framework to do intrusion detection. Footnote 7: [https://www.snort.org/](https://www.snort.org/) #### Ii-B5 IEC 61850 IEC 61850 is a standard developed by IEC Technical Committee no. 57 Working Group 10 and IEEE for Ethernet (IEEE 802.3) based communications in substations. It is an international standard defining communication protocols for intelligent electronic devices at electrical substations. As with many other communication standards, IEC 61850 was developed without extensive consideration of critical security measures. It always suffers from plenty of attacks including eavesdropping, data spoofing, DOS, password cracking, etc [37]. To tackle the security problems in IEC 61850, IEC 62351 (Part 3, 4, and 6) is developed by the same committee in 2007. For details please refer to Youssef et al.'s work [18]. #### Ii-B6 IEC 61400-25 IEC61400-25 is a standard developed by the IEC TC88 Technical Committee. It is specifically aimed at the monitoring system communication of wind power plants. This standard aims to realize free communication between equipment of different suppliers in wind power plants [38]. The security of network communication between wind power plants is required in IEC 61400-25-3. But the method of implementation is not specified, and it is left to the protocol itself [39]. With the development of the Internet, wind power plant communication based on Web services has become a common way. However, there are significant potential security risks, and while access can be restricted by user name/password, they do not provide additional protection for confidentiality and integrity. To ensure the security of Web-based service, Liu et al. [21] extended the simple object access protocol message, and designed the security agent with a security message processing algorithm to achieve confidentiality integrity and authentication across the communication. #### Ii-A7 Ieee C37.118 IEEE C37.118 defines the data transmission format and synchronization requirements between the communication of substations. This protocol defines four different types of messages, including data, header, configuration, and command. Like most other protocols, it lacks predefined security mechanisms, which makes it vulnerable to network attacks. Attacks like reconnaissance, man-in-the-middle, DOS, etc., severely impact the synchrophasor application, sometimes, it may even cause physical damage to the equipment [40]. To ensure the security of this protocol, Stewart et al. [41] validated the security by using the firewall and Virtual Private Network between the substation and control center. 
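A recurring weakness across the protocols above is that the frame format itself carries no authentication, authorization, or integrity fields, so forged frames are indistinguishable from legitimate ones at the protocol level. As a concrete illustration, the minimal sketch below parses a Modbus/TCP request; the example frame is illustrative only.

```python
import struct

# Minimal sketch: parse a Modbus/TCP request. The 7-byte MBAP header carries
# only a transaction id, protocol id, length, and unit id -- there is no field
# for authentication, authorization, or integrity.
frame = bytes.fromhex("000100000006010300000002")  # example "read holding registers" request

tid, pid, length, unit = struct.unpack(">HHHB", frame[:7])
function_code = frame[7]
payload = frame[8:]

print(f"transaction={tid} protocol={pid} length={length} unit={unit} "
      f"function=0x{function_code:02x} payload={payload.hex()}")
```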
Basumallik et al. [42] recommended encryption algorithms (e.g., AES), IPSEC, and DTLS to ensure the confidentiality and authentication of the data without affecting the instantaneity.

## III Defense in Depth

Defense in depth (DiD) is a widely known concept in IT security. With the growing connectivity of ICS, DiD is also necessary to build a redundant security defense system in ICS. It is best to apply DiD across all parts of ICS, including proactive protections, such as data encryption and access control, as well as reactive protections, such as intrusion detection, intrusion protection, network range, etc. All of these topics are introduced in this section.

### _Data Encryption_

Encryption, which transforms plaintext data into cipher data using a cryptographic function, is an important way to ensure the confidentiality and integrity of the data. As we know, encryption is the fundamental way to guarantee the security of data, and it has become a necessary method to ensure the security of ICS. Cryptographic methods can be divided into three categories. The first is symmetrical cryptography, which uses the same key for encryption and decryption. The second is asymmetrical cryptography (also named public-key encryption), which uses a public key and a private key separately for encryption and decryption; the public key can be open, while the private key must be kept secret. The third is hash-based cryptography, which is mainly applied to integrity verification. The structure of these cryptographic methods is shown in Figure 3. In cryptography, there are two different kinds of ciphers: a block cipher encrypts a block of text at a time, rather than bit by bit as a stream cipher does. A block cipher is safer than a stream cipher, but it costs more time during encryption. Methods like DES [43], 3DES [44], AES [45], SM1, SM4, etc., are all block ciphers, which are usually deployed in the communication systems of ICS. A stream cipher, in contrast, is fast but easier to crack. The most popular stream cipher is RC4, which was used in Transport Layer Security (TLS) before 2015 [46]. As to asymmetrical cryptography, RSA and SM2 belong to this category, while the SHA series, SM3, and MD5 are hash-based cryptography. Based on the application, different cryptographic methods can be deployed. In ICS, before deploying cryptographic algorithms, the following should be taken into consideration: high safety, low cost, and high performance [47]. However, due to the special needs of ICS, there are dissenting voices on applying encryption in ICS. Fauri et al. [48] made a critical discussion about the application of encryption in ICS and concluded that the encryption process will not help increase the overall safety of the ICS; sometimes the encryption process even has negative consequences for security. Therefore, different cryptographic algorithms or standards are applied along with different applications. For example, in the power systems infrastructure, IEC 62351 recommends the end-to-end protocol TLS and the point-to-point protocol IPsec, whereas the end-to-end protocol WS-Security is used in industrial automation systems.

### _Access Control Policy_

Access control is a proactive protection method for ICS security, which includes firewalls, network address translation (NAT), etc. Making an access control policy is a direct way to protect the ICS from hostile detection. 
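As a minimal illustration of the authenticated symmetric (block-cipher) encryption discussed in the data encryption subsection above, the sketch below protects a telemetry payload with AES-GCM via the third-party `cryptography` package; the key handling and payload are simplified placeholders, not a deployment recipe.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Minimal sketch: authenticated symmetric encryption (AES-GCM) of an ICS
# telemetry payload. Key distribution/storage is omitted; in practice the key
# would come from a key-management system rather than being generated ad hoc.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

payload = b"sensor_07;pressure=4.2bar;valve=open"   # illustrative payload
nonce = os.urandom(12)                              # must never repeat for the same key
associated = b"plc-12->scada"                       # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, payload, associated)
recovered = aesgcm.decrypt(nonce, ciphertext, associated)
assert recovered == payload
```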
#### Iii-B1 Firewall A common method is adopting the white list in the firewall to prevent unknown access [49]. Byoung-Koo et al. [50] designed the Industrial Cyber Attack Prevention-Gate system for the ICS, and aimed to prevent unauthorized access fundamentally by applying the firewall in all of the levels mentioned before, the drawback was decreasing the latency of the ICS in return. To reduce the latency of the firewall matching in the white list, the hash-based rule lookup was proposed by Pabilona et al. [51] Fig. 3: The structure of the cryptography. There are two ways to white list building, one is the static list, and the other one is the generated-dynamic list. The static list is time costing, inflexible, and limited in expression, but it is precise in prevention. And it is the opposing situation in the generated-dynamic list. There are lots of works in exploring static list building. The distributed firewall was proposed by Peng et al. [52], and each device was configured with different policies by using the static white list. Woo-Suk et al. [53] applied the PrefixSpan algorithm to generate the structured static white list, which improves the flexibility of the static list. Besides the static white list, plenty of works are proposed for dynamic list generation. Barbosa et al. [54] learned the white list via the flow by using the dynamic port allocation. Choi et al. [55] generated the white list automatically based on the flow's locality. There also are some works learning the white list with dynamic packet inspection. Jeyasingam et al. [56] extended the Linux-based firewall for the DNP3 protocol in the power grid, and the u32 feature-match mechanism lets the firewall extract any parts of the package, which made the white list more dynamic. #### Iii-B2 Security Zone Based on the standard proposed by NIST [1], the ICS network usually partitions into several different zones (such as trust zone, demilitarized zone, etc.), especially for some sensitive companies which have high requirements for security. However, lots of problems need to deal with. Bela et al. [8] treated the network designing in ICS as the integer linear programming (ILP) problem, and the security are added as the constraints to fulfill security requirement, finally, the results are assessed by the cyberattack impact assessment (CAIA). Jun et al. [57] proposed an automated zone partition method based on the physical system causal model to do anomaly detection. This method adopts the zones crucial states as the input, which means more data should be packaged back into the system to make the zone partition decision. ### _Intrusion Detection System_ #### Iii-C1 Vulnerability Detection Vulnerabilities are a common problem in traditional systems, as well as in ICS. The number of vulnerabilities in ICS has increased dramatically since they were first published in 1997 [58], especially in the recent seven years which can be seen in Figure 4(a). Before 2015, the number is cited from [58], and after 2015, the numbers are counted from the website of CNV 8)9, the number of vulnerabilities in 2018 is counted before 10th September. From Figure 4(b), we can see that, among the vulnerabilities published in 2018, most of them are at high-level risk for the ICS. Only 2.36% of them are in low-level risk, which has the same trend that reported by Oxana et al. [58]. 
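A static white list with hash-based rule lookup, as mentioned for the firewall in Section III-B1 above, can be sketched as follows; the addresses, ports, and device roles are illustrative assumptions only.

```python
# Sketch of a static white list with hash-based lookup: allowed flows are keyed
# by (source, destination, destination port, protocol), so each packet check is
# a single set lookup instead of a linear scan over rules.
ALLOWED_FLOWS = {
    ("192.168.10.5", "192.168.20.7", 502, "tcp"),    # HMI -> PLC, Modbus/TCP
    ("192.168.10.5", "192.168.20.8", 44818, "tcp"),  # HMI -> PLC, EtherNet/IP
    ("192.168.20.7", "192.168.10.9", 102, "tcp"),    # PLC -> historian, ISO-TSAP
}

def permit(src: str, dst: str, dport: int, proto: str) -> bool:
    """Return True only if the flow appears on the white list."""
    return (src, dst, dport, proto) in ALLOWED_FLOWS

print(permit("192.168.10.5", "192.168.20.7", 502, "tcp"))   # True
print(permit("10.0.0.99", "192.168.20.7", 502, "tcp"))      # False: unknown source
```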
Footnote 8: [http://fcs.cnvd.org.cn/](http://fcs.cnvd.org.cn/) Footnote 9: [http://fcs.cnvd.org.cn/](http://fcs.cnvd.org.cn/) Footnote 10: [http://fcs.cnvd.org.cn/](http://fcs.cnvd.org.cn/) Vulnerabilities can be exploited in any place of ICS, not only in its software or firmware but also in tools that are associated with monitoring and auditing processes. And most of the vulnerabilities are unknown. The components of vulnerability, it can be divided into four types: system platform vulnerability, communication module vulnerability, application software vulnerability, and hardware vulnerability. The system platform is to provide the base service for the ICS, which always refers to SCADA in ICS. The communication module is the communication system in ICS, which includes the communication protocols, bus, and I/O system, etc. Application software denotes the software that associates with the SCADA function which includes the management, production, storage, and operation. Hardware is the basic component that supports the ICS, which is the device that includes the circuits, RTU, etc. According to the function of vulnerability, vulnerability can also be divided into buffer overflow, authentication bypass, cross-site vulnerability, and sensitive information sniffing four categories. A buffer overflow is an overflow of the legal boundary of a buffer caused by a programming error. This type of vulnerability has been discovered in all of the four components mentioned before. Authentication bypass is the vulnerability that allows an attacker to gain access to information without authentication and is always present in the system platform, communication module, and application software. The cross-site vulnerability that allows an attacker to inject insecure scripts into a server is always present on a system platform, e.g., SCADA. Sensitive information sniffing is the vulnerability that allows an attacker to obtain or remove sensitive information from ICS. This vulnerability is always present in the communication module. These four categories are not independent, and sometimes these categories of vulnerabilities form a so-called _chain_ vulnerability, giving an attacker more opportunities to attack ICS [59]. Vulnerability modeling is an important work in vulnerability detection. One simple detection way is to use a vulnerability database. A number of vulnerability databases are available online, such as Common Vulnerabilities and Exposures (CVE)11, Open Source Vulnerability Database (OSVDB)12, National Vulnerability Database (NVD)13, China National Vulnerability Database (CNVD)14, Security Focus's vulnerability database15, and Public Cooperative Vulnerability Database (PCVD)16, etc. However, detection with a vulnerability database has many limitations, including description shortage in vulnerabilities' presence, exploit-ability, and effect, unreadable to machine etc. [60] Faced with these problems, Fig. 4: The vulnerabilities distribution in ICS. Sufatrio et al. [61] proposed the newly designed vulnerability database Movtraq, which is machine-readable and can be directly applied to automated detection systems. But this database still has some drawbacks: First of all, Movtraq relies on Unix systems (RedHat and FreeBSD) to run and is not portable; Second, the information it focuses on is unitary, which makes it difficult to make a further application. Therefore, we urgently need a language that can describe vulnerabilities completely. 
Open Vulnerability and Assessment Language (OVAL)16 is a machine-readable XML-based language defined by MITRE17. There are three parts in OVAL: system information representation, machine state expression (e.g., vulnerability, configuration, patch state, etc.), and assessment results [62]. The vulnerabilities described by OVAL can be entered directly into a scanner, but they carry no information about their exploitability and need to be checked manually. Based on OVAL, a new vulnerability modeling language, DEpendability and Security by Enhanced REConfigurability (DESEREC)18 [63], was designed. This language can effectively describe the exploitation of vulnerabilities and has been successfully applied to automatic vulnerability detection systems [64]. Footnote 16: [http://oval.mitre.org/index.html](http://oval.mitre.org/index.html) Footnote 17: [https://www.mitre.org/](https://www.mitre.org/) Footnote 18: [http://www.deserec.eu/](http://www.deserec.eu/) Footnote 19: [http://www-arc.com/sara/](http://www-arc.com/sara/) Footnote 20: [http://www.saintoroperation.com/](http://www.saintoroperation.com/) Footnote 21: [http://www.pessus.org](http://www.pessus.org) Footnote 22: [http://www.google.com](http://www.google.com) As we know, there are many vulnerability scanning tools, such as SARA19, SAINT20, and Nessus21. In addition to these tools, search engines such as Google22 and Shodan23 can also be applied to vulnerability detection [65]. But their limitation is that they provide few clues, especially when faced with exploiting serial vulnerabilities on multiple hosts [60]. To deal with such a situation, Manuel et al. [64] extended OVAL by introducing two new elements, _preconditions_ and _postconditions_. Recently, Chemind et al. [66] have applied high-level security policies to the modeling to make vulnerability detection more effective. Generally, the vulnerabilities that these systems deal with are mainly in the system platform, application software, and communication module. Footnote 23: [https://www.sindon.io/](https://www.sindon.io/) In addition to vulnerability modeling, there are other methods for vulnerability detection, such as virtualization technology, fuzzing, etc. Ashlesha et al. [67] designed a system, IntroVirt, based on vulnerability-specific predicates in virtual-machine introspection to detect vulnerabilities. IntroVirt can detect or respond to past and present vulnerabilities. Xiong et al. [68] used fuzzing to detect vulnerabilities in Modbus-TCP. Kim et al. [69] proposed a test case generation technique for fuzzing that can be used for vulnerability detection in industrial control system protocols. Luo et al. [70] proposed a function code-aware fuzzing framework, Polar, which can automatically extract semantic information from the ICS protocol and use this information to accelerate vulnerability detection.

#### Iii-C2 Malware Detection

Unlike the passive threat posed by vulnerabilities, the malware threat is active and can be extremely disruptive to the system, as in the case of Stuxnet [71]. Malware has existed since the invention of computers, and the openness of ICS makes it more vulnerable to traditional malware infections. Andrea et al. [72] have shown that ICS can be infected even without customized malware. Malware refers to software or code intended to read, write, or change the normal state of the ICS. 
It includes viruses, worms, Trojans, malicious code, etc. Malware usually exploits vulnerabilities in ICS. When doing malware detection in ICS, "real-time" is an important factor that needs to concern, especially when ICS is in a production state. Some works divide malware detection into two categories, one is anomaly-based detection and the other one is signature-based detection [73, 74]. Anomaly-based detection should know the normal and the abnormal behaviors that malware may act on [75]. This approach requires the detection tool to know all the normal behavior of the software in ICS. Signature-based detection, however, usually tries to find common features of malware. This detection method can usually be implemented by machine learning, rule-based systems, checksums, scanning strings, etc. [76, 77]. Machida et al. [78] applied rule-based methods to lure this malware into our sensors by continuously embedding sensor information into the host list in the ICS network, and it successfully detected malware like WannaCry and Conficker. In addition to these methods, formal languages can also be used in malware detection. Saman et al. [79] detects the malware automatically in the PLC through code analytics with formal language. Generally, this method belongs to signature-based detection but with specific-defined malware states informal language. Anti-virus simulators were usually applied in the past, and they detected the malware through the simulation of the behavior, such as Hirst's Virus Simulation Suite [80], Virlab [81], Nepenthes [82] etc. But all of those projects are out of date and stopped updating ten years ago. Luis et al. [83] designed the HARVEY system embedded in the PLC to detect the malicious command that malware sends to PLC. #### Iii-C3 Anomaly Traffic Detection Another problem the ICS always faces is anomaly traffic, which is a kind of intrusion over industrial networks. Because of the complexity of the ICS, it is more difficult to do intrusion detection in the ICS than that on the traditional Internet. Intrusion Detection is a communication security technique that can detect, identify, and respond to unauthorized operations, such as insert, delete, query, and modify. There are three main principles for intrusion detection: misuse-based, anomaly-based, and hybrid(combine the formers) [84]. The misuse-based method can be used to detect known attacks by the signatures of these attacks. This technique can detect the known type of attacks effectively, but it needs to manually update the database frequently, and can't detect novel attacks. The anomaly-based method can identify anomalies from normal behavior by modeling the normal network and system behavior. This method can detect unknown attacks, but it has high false alarm rates because the unseen behaviors may be identified as anomalies. The hybrid method combines the misuse-based method and the anomaly-based method and inherited their respective advantages. This method can decrease the false positive(FP) rate for unknown attacks, and raise the detection rates of known attacks. The emergence of machine learning approaches makes Intrusion Detection face a new opportunity, these approaches learn from the available data and mine the unknown characteristics in the data. 
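As a minimal sketch of the anomaly-based principle just described, the example below fits a model to flow records assumed to be normal and flags records that deviate. The features and data are synthetic placeholders, and the choice of an Isolation Forest is only one of many possible learners.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn "normal" traffic from flow features (packets per second, mean packet
# size, distinct destination ports contacted) and flag records that deviate.
rng = np.random.default_rng(0)
normal_flows = np.column_stack([
    rng.normal(20, 3, 500),     # packets/s during regular polling
    rng.normal(120, 10, 500),   # bytes per packet
    rng.integers(1, 3, 500),    # destination ports contacted
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

suspect = np.array([[900.0, 60.0, 45.0]])   # burst of packets to many ports
print(model.predict(suspect))               # -1 => flagged as anomalous
```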
At present, The following machine learning and data mining methods can be used in Intrusion Detection: Artificial Neural Networks, Association Rules and Fuzzy Association Rules, Bayesian Networks, Clustering, Decision Trees, Ensemble Learning, Evolutionary Computation, Hidden Markov Models, Inductive Learning, Naive Bayes, Sequential Pattern Mining and Support Vector Machine. Using machine learning and data mining methods can extract the features of network data effectively, so compared with the traditional network analysis methods, they can obtain more satisfactory detection results. However, these methods require a large amount of data when training the model. The larger the amount of data, the better the classification effect. The network data currently used can be obtained in the following ways. **Packet-Level Data:** There are many protocols in the network, such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Internet Control Message Protocol (ICMP), Internet Gateway Management Protocol (IGMP), etc. Users running these protocols generate the packet network traffic of the network. The network packets can be captured by a specific application programming interface (API) called pcap. Libpcap and WinPCap are the capture software libraries of many network tools, including protocol analyzers, packet snifferers, network monitors, network IDSs, and traffic generators. **NetFlow Data:** Ciscos NetFlow version 5 defines a network flow as a unidirectional sequence of packets with seven attributes: ingress interface, source IP address, destination IP address, IP protocol, source port, destination port, and IP type of service. Currently, there are 10 versions of NetFlow. Versions 1 to 8 are similar, but version 9 and version 10 have an important difference. **Public Data Sets:** Some public data sets are commonly recognized and widely used in intrusion detection research. The Defense Advanced Research Projects Agency (DARPA) collected the datasets with the Massachusetts Institute of Technology Lincoln Laboratory (MIT/LL) in 199824. The later DARPA 1999 datasets25 and KDD 99 datasets [85] were generated on the DARPA 1998 datasets. These datasets lay the foundation for the application of machine learning and data mining methods in Intrusion Detection. Footnote 24: [https://www.ll.mit.edu/r-d/datasets/1998-darpa-intrusion-detection-evaluation-dataset](https://www.ll.mit.edu/r-d/datasets/1998-darpa-intrusion-detection-evaluation-dataset) ### _Software Defined Networks (SDN)_ The advantages of breaching the physical boundary, being programmable, and easily deploying make software-defined networks (SDN) become another promising solution for ICS network security. There are two types of flow control units in SDN, which are OpenFlow and optimalFlow respectively, which are named software controllers. Different ways to get ensure the security of the networks, some works think the security matters from the network structure of SDN. Manuel et al. [86] designed an active SDN switch architecture that is acting as a virtual firewall. It accommodates different protocols and makes the management of the ICS network efficient. Bela et al. [87] redesigned the ILP problem over the SDN and built a hierarchical control plane over the ICS, and make the ICS networks more secure and flexible. Graur [88] adopted the SDN developing a controller which can be applied in reconfiguring the ICS network. Some works adopted the IDS or IPS system besides the SDN to ensure security. Dong et al. 
[89] applied an IDS together with the SDN to defend against attacks on the smart grid.

## IV Conclusions

In this paper, we gave a comprehensive review of recent research on ICS network security. We first listed the network protocols most commonly used in ICS networks and analyzed their probable vulnerabilities and the corresponding defense methods. Then, we gave a brief review of defense in depth in ICS in terms of data encryption, access control policy, and intrusion detection systems. Finally, we ended with software-defined networking, which is one of the promising research directions.
2301.01764
UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical Simplification?
Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an ``ensemble'' of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subset, we achieve state-of-the-art results with only minor modification to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online at https://github.com/dennlinger/TSAR-2022-Shared-Task
Dennis Aumiller, Michael Gertz
2023-01-04T18:59:20Z
http://arxiv.org/abs/2301.01764v2
# UniHD at TSAR-2022 Shared Task: ###### Abstract Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an "ensemble" of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subset, we achieve state-of-the-art results with only minor modification to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online.1 Footnote 1: [https://github.com/dennlinger/TSAR-2022-Shared-Task](https://github.com/dennlinger/TSAR-2022-Shared-Task) ## 1 Introduction With recent advancements in Machine Learning (ML) research coming largely from increasing compute budgets, Richard Sutton coined the idea of a "bitter lesson", wherein more computational power will ultimately supersede a hand-crafted solution (Sutton, 2019). More recently, increasing compute power on a general purpose architecture has also shown to be wildly successful in the Natural Language Processing (NLP) community (Vaswani et al., 2017; Wei et al., 2022). In particular, emergent capabilities in very large language models (vLLMs) have made it possible to approach a variety of tasks wherein only few (if any) samples are labeled, and no further fine-tuning on task-specific data is required at all. In stark contrast to the complex pipelines in modern lexical simplification systems (Ferres et al., 2017; Qiang et al., 2020; Stajner et al., 2022), we present a simplistic approach utilizing few-shot prompts based on a vLLM with basic instructions on simplification, which returns frustratingly good results considering the overall complexity of the approach, which utilizes a grand total of four hand-labeled instances. We present our results on the TSAR-2022 shared task (Saggion et al., 2022), which evaluates lexical simplification systems in three available languages (English, Spanish and Portuguese), with ten labeled instances and around 350 unlabeled test samples provided per language. For the English subset, official results rank our model as the best-performing submission, indicating that this approach may be another instance of the bitter lesson. While the initial findings are indeed promising, we want to carefully evaluate erroneous instances on the test set to analyze potential pitfalls, and further detail some of our experiences in hand-crafting prompts. We also acknowledge the technical challenges in reproducing (and deploying) systems based on vLLMs, especially given that suitable models exceed traditional computing budgets. ## 2 Prompt-based Lexical Simplification With the public release of the GPT-3 language model (Brown et al., 2020), OpenAI has started the run on a series of now-available vLLMs for general-purpose text generation (Thoppilan et al., 2022; BigScience, 2022; Zhang et al., 2022). 
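For illustration, a single prompt template of this kind might be assembled and sent to a completion-style endpoint as sketched below (using the legacy, pre-v1 interface of the `openai` Python package). The template wording, model name, decoding parameters, and post-processing are assumptions made for the sketch and are not necessarily those used in the submitted system, which relied on an ensemble of six templates with varying context levels.

```python
import openai  # legacy (pre-v1) interface; assumes OPENAI_API_KEY is set in the environment

def build_prompt(sentence: str, complex_word: str) -> str:
    # Illustrative template only; the exact wording, context level, and few-shot
    # examples of the submitted prompts will differ.
    return (
        "Context: {sent}\n"
        "Question: Given the above context, list ten easier alternatives "
        "for \"{word}\".\n"
        "Answer:"
    ).format(sent=sentence, word=complex_word)

def simplify(sentence: str, complex_word: str):
    response = openai.Completion.create(
        model="text-davinci-003",          # assumed model name, not necessarily the one used
        prompt=build_prompt(sentence, complex_word),
        max_tokens=128,
        temperature=0.8,
    )
    text = response["choices"][0]["text"]
    # naive post-processing: split on newlines/commas, strip numbering and punctuation
    candidates = [c.strip(" .0123456789)") for c in text.replace(",", "\n").split("\n")]
    return [c for c in candidates if c]

# Example call (illustrative sentence):
# simplify("The cat perched on the mantelpiece.", "perched")
```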
Across these models, a general trend in scaling beyond a particular parameter size can be observed, while keeping the underlying architectural design close to existing smaller models. Through exhibiting zero-shot transfer capabilities, such models have also become more attractive for lower-resourced tasks; oftentimes, models are able to answer questions formulated in natural language with somewhat sensible results.
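As a concrete illustration of the prompt-ensemble idea described in this work, the sketch below builds several templates with varying context levels and aggregates the returned candidate substitutions by frequency. It is a minimal, assumed reconstruction: the template wording, the `query_llm` stub, and the weighting scheme are illustrative and are not the six prompts or the code used in the actual submission.

```python
from collections import Counter
from typing import Callable, List

# Illustrative templates with increasing amounts of context; the real
# submission used six templates, which are not reproduced here.
TEMPLATES = [
    "Give me ten simplified synonyms for the following word: {word}",
    ("Context: {sentence}\n"
     "Question: Given the above context, list ten alternative words "
     "for \"{word}\" that are easier to understand."),
    ("Context: {sentence}\n"
     "Question: Find ten easier words for \"{word}\" in this context."),
]

def simplify(sentence: str, word: str, query_llm: Callable[[str], List[str]],
             top_k: int = 10) -> List[str]:
    """Query one completion per template and aggregate candidates by weighted votes."""
    votes = Counter()
    for template in TEMPLATES:
        prompt = template.format(sentence=sentence, word=word)
        for rank, candidate in enumerate(query_llm(prompt)):
            candidate = candidate.strip().lower()
            if candidate and candidate != word.lower():
                # Earlier-ranked candidates get slightly more weight.
                votes[candidate] += 1.0 / (rank + 1)
    return [c for c, _ in votes.most_common(top_k)]

# `query_llm` is a stand-in for whichever vLLM endpoint is available; it must
# return a ranked list of candidate substitutions parsed from the completion.
```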
2310.06992
Zero-Shot Open-Vocabulary Tracking with Large Pre-Trained Models
Object tracking is central to robot perception and scene understanding. Tracking-by-detection has long been a dominant paradigm for object tracking of specific object categories. Recently, large-scale pre-trained models have shown promising advances in detecting and segmenting objects and parts in 2D static images in the wild. This begs the question: can we re-purpose these large-scale pre-trained static image models for open-vocabulary video tracking? In this paper, we re-purpose an open-vocabulary detector, segmenter, and dense optical flow estimator, into a model that tracks and segments objects of any category in 2D videos. Our method predicts object and part tracks with associated language descriptions in monocular videos, rebuilding the pipeline of Tractor with modern large pre-trained models for static image detection and segmentation: we detect open-vocabulary object instances and propagate their boxes from frame to frame using a flow-based motion model, refine the propagated boxes with the box regression module of the visual detector, and prompt an open-world segmenter with the refined box to segment the objects. We decide the termination of an object track based on the objectness score of the propagated boxes, as well as forward-backward optical flow consistency. We re-identify objects across occlusions using deep feature matching. We show that our model achieves strong performance on multiple established video object segmentation and tracking benchmarks, and can produce reasonable tracks in manipulation data. In particular, our model outperforms previous state-of-the-art in UVO and BURST, benchmarks for open-world object tracking and segmentation, despite never being explicitly trained for tracking. We hope that our approach can serve as a simple and extensible framework for future research.
Wen-Hsuan Chu, Adam W. Harley, Pavel Tokmakov, Achal Dave, Leonidas Guibas, Katerina Fragkiadaki
2023-10-10T20:25:30Z
http://arxiv.org/abs/2310.06992v2
# Zero-Shot Open-Vocabulary Tracking with Large Pre-Trained Models ###### Abstract Object tracking is central to robot perception and scene understanding. Tracking-by-detection has long been a dominant paradigm for object tracking of specific object categories [1, 2]. Recently, large-scale pre-trained models have shown promising advances in detecting and segmenting objects and parts in 2D static images in the wild. This begs the question: can we re-purpose these large-scale pre-trained static image models for open-vocabulary video tracking? In this paper, we re-purpose an open-vocabulary detector [3], segmenter [4], and dense optical flow estimator [5], into a model that tracks and segments objects of any category in 2D videos. Our method predicts object and part tracks with associated language descriptions in monocular videos, rebuilding the pipeline of Tracktor [6] with modern large pre-trained models for static image detection and segmentation: we detect open-vocabulary object instances and propagate their boxes from frame to frame using a flow-based motion model, refine the propagated boxes with the box regression module of the visual detector, and prompt an open-world segmenter with the refined box to segment the objects. We decide the termination of an object track based on the objectness score of the propagated boxes, as well as forward-backward optical flow consistency. We re-identify objects across occlusions using deep feature matching. We show that our model achieves strong performance on multiple established video object segmentation and tracking benchmarks [7, 8, 9, 10], and can produce reasonable tracks in manipulation data [11]. In particular, our model outperforms previous state-of-the-art in UVO and BURST, benchmarks for open-world object tracking and segmentation, despite never being explicitly trained for tracking. We hope that our approach can serve as a simple and extensible framework for future research. Our code will be made publicly available. ## I Introduction We are interested in the problem of tracking arbitrary objects in video. A reasonable strategy for this task, which has dominated the area for multiple years, is "tracking by detection" [12]. Tracking by detection splits the task into two independent problems: (1) detect objects frame-by-frame, and (2) associate detection responses across frames. Because of its two-stage split, tracking-by-detection is mainly propelled forward by advances in detection. Notably, Tracktor [6] used person detections from a Faster R-CNN [13], propagated boxes with a simple motion model, and refined these boxes using the Faster-RCNN's box regression head. This yielded a tracker composed entirely of static image neural modules, far simpler than its contemporary methods while matching or exceeding their accuracy. Recent methods for visual tracking build upon transformer architectures, where feature vectors represent tracked objects, and these are re-contextualized in each frame by attending to the pixel features and are used to predict per-frame bounding boxes [14, 15, 16]. These methods are trained on annotated video data and do not capitalize on pre-trained static image detectors beyond the pre-training of their feature backbones [14]. Meanwhile, 2D image object detection has been recently revolutionized with open-world image detectors, jointly trained for referential grounding and category grounding of thousands of object categories [3, 17, 18] across millions of images.
Can we capitalize on this and make practical progress for tracking-by-detection, by re-visiting the Tracktor paradigm [6] with these updated components? In other words, can we re-purpose large pre-trained image models into a _zero-shot_ open-vocabulary tracker, without ever fine-tuning on video data? We propose a simple and extensible framework for exploring this question, which does not introduce significant advancements or innovative approaches. We use an open-vocabulary detector to find objects as they appear [3], obtain their masks using an off-the-shelf segmenter [4], propagate the boxes to the next frame using a motion transformation computed from optical flow [5], refine the boxes using the detector's bounding box regression module, and segment the box interiors using an off-the-shelf segmenter [4]. We handle ambiguity in per-frame segmentations by selecting the segmentations with the highest temporal consistency. Finally, we revise the bounding boxes using the segmentation results. We test our method on multiple established video object segmentation and tracking benchmarks: UVO [9] and BURST [10], as well as traditional VOS benchmarks like DAVIS [7] and YoutubeVOS [8]. In open-world datasets like UVO and BURST, our method outperforms the previous state-of-the-art and achieves competitive performance in DAVIS and YoutubeVOS when evaluating VOS baselines using detected first-frame masks. Our tracker can also provide reasonable object tracks in RoboTAP [11], a manipulation-based dataset. Our method also provides a natural-language interface for tracking, where a user may describe the tracking target in words, and the model delivers the corresponding frame-by-frame segmentation. Given that our approach can improve as new pre-trained models are swapped in, we hope that our approach will serve as a simple yet extensible framework for future work. ## II Related Work **Tracking by detection.** Many modern object tracking approaches rely heavily on accurate per-frame _detectors_[19, 6, 20, 21, 22, 23]. These approaches show that simple post-processing of per-frame detection can lead to strong tracking approaches for a closed (usually small) set of objects. In particular, CenterTrack [19] also showed that a tracker can be obtained simply from training on (augmented) static images, by modeling humans and vehicles as points. However, in open-world settings, this is less feasible, as objects may overlap and have different sizes (e.g. the upper half body of a person and their shirt), making points an ambiguous descriptor for open-world tracking. Our work is most closely related to Tracktor [6], which directly uses a Faster R-CNN [13] person detector to build an accurate person tracker. Our work builds on this method, extending it to _any_ category, using a strong open-vocabulary detector [3] as the backbone. **Open vocabulary detection.** Recent advances in open-vocabulary classification [24, 25] have significantly improved open-vocabulary _detectors_. Open-vocabulary, or zero-shot, detectors largely operate by using language models to generalize to unseen object classes. Early approaches relied on using text embeddings from pre-trained language models, such as from BERT [26] or GLOVE [27], as classifiers for object proposals [28, 29, 30]. More recent work leverages text embeddings which are pre-trained to be aligned with vision embeddings [24, 25], leading to significant improvements in accuracy [3, 31, 32]. 
We show that this recent class of approaches can be directly generalized to open-vocabulary tracking, using Detic [3] as a representative model. **Open world tracking.** Object tracking has traditionally focused on a few categories, such as people and vehicles. Very recently, the community has seen renewed efforts to generalize tracking to _arbitrary_ objects. Traditional approaches focused on _motion_-based segmentation [33, 34, 35, 36, 37], leveraging motion as a cue to segment never-before-seen objects. More recent approaches use open-world object proposal methods to detect objects per frame and link them together using a combination of temporal consistency, appearance, and motion cues [38, 39, 40]. Our work extends this latter class of approaches to _open-vocabulary_ detectors. ## III Method Our method builds upon existing open-vocabulary detectors [3], promptable general-purpose segmenters [4], and dense optical flow estimators [5]. We call our model Open-Vocabulary Multi-Object Tracker, or OVTracktor for short. Figure 1 shows an illustration of our model architecture. Our model does not require any tracking-specific training. In this section, we first introduce the modules that we rely on, then discuss how we combine them into OVTracktor. ### _Building Blocks_ Open-vocabulary object detectorWe use Detic [3] with a Swin-B [41] backbone as our open vocabulary object detector. Detic is a two-stage detector. In the first stage, it generates a large number of candidate boxes with a Region Proposal Network (RPN), similar to Faster-RCNN [13]. In the second stage, it spatially refines each box using a regression module and predicts an objectness score and category label for each. In addition, a category-agnostic mask prediction head is trained to segment the object in each predicted bounding box. Our model exploits Detic's ability to detect and label object boxes and also re-uses its bounding box refinement module during tracking. Promptable general-purpose segmenterFor segmenting masks from object boxes, we rely on SAM [4], a recent interactive general-purpose segmenter, which produces a segmentation given box or point prompts that indicate the object of interest. SAM is a transformer-based [42] model with a large and high-resolution image encoder and a lightweight prompt-conditioned mask head. For each user prompt, SAM predicts multiple segmentation hypotheses. Optical flow estimationWe estimate the motion transformation of an object box to propagate it from frame to frame. We use GMFlow [5], which is a state-of-the-art optical flow method. GMFlow takes two consecutive frames as input, and produces a 2D pixel displacement map as output, using an architecture that computes a spatial argmax of feature correlations for each pixel, trained on large synthetic datasets. We use this flow map to estimate the motion of detected boxes and also rely on optical forward-backward flow cycle-consistency [43] to estimate occlusion, in which case we terminate the track. ### _OVTracktor_ Given an RGB video as input, the goal of OVTracktor is to estimate mask trajectories for all objects in the video and estimate category labels for those objects. A mask trajectory for object \(i\) is a sequence of image binary masks \(M_{i}=\{m_{i}^{t}\,|\,t\in[0,T]\}\), where \(m_{i}^{t}\in\mathbf{R}^{W\times H}\), where \(W\times H\) denote the width and height of the image frame, and \(t\) is the frame index in time. Each object is associated with a category label, which we denote with \(\ell_{i}\). 
We denote the set of all binary instance masks in frame \(t\) as \(M^{t}=\{m_{0}^{t},\,m_{1}^{t},\,\ldots\}\). DetectionWe run the detector on every frame. Let \(D^{t}\) denote the object detections and segmentations supplied by the detector at frame \(t\). At \(t=0\), our tracker initializes object masks \(M^{0}\) from the set of Detic object detections \(D^{0}\), thresholded at a confidence threshold \(\lambda_{c}=0.5\). Motion-driven box propagationWe propagate object boxes across consecutive frames using a 4-parameter box motion transformation that includes a box translation \((dx,dy)\) and width and height scaling \((s_{x},s_{y})\) using motion information obtained from an optical flow field of [5]. We filter the pixel displacement vectors that are forward-backward consistent [44]. A lenient criterion is used: we simply check if the forward-backward flows have segmentation consistency, in the sense that tracking forward and backward leads back to the original instance mask, instead of thresholding the forward-backward displacement from the origin. We use the filtered pixels to compute a box motion transformation using least squares and use this to propagate the box forward. After this motion warp, the box is still axes-aligned, we do not consider object rotation or anisotropic scaling. The category label and instance ID of the box are maintained. Object track terminationWe determine if an object track should be terminated due to occlusions by checking if the ratio of forward-backward flow consistent pixels is lower than a fixed ratio \(\lambda_{flow}\) or if the object-ness score of the box is too low. Object box refinementWe refine the propagated (non-terminated) boxes at frame \(t+1\) using Detic's bounding box regression module. This adjusts the bounding boxes according to objectness cues on that frame and gives us higher-quality box estimates in frame \(t+1\). Temporally consistent object segmentationThe bounding box estimates at frame \(t+1\) are used to prompt SAM to segment the object's interior. SAM produces multiple segmentation mask candidates per box, to handle ambiguity regarding what to segment from the box's interior. Overall, we found a box prompt often unambiguously determines the object to segment (so all resulting masks will be identical), in contrast to a center object point prompt, which does not have information regarding the object's extent. To handle the cases where this ambiguity exists, we implement a form of temporal cycle consistency at the mask level. SAM segments an object via iterative attention between an object query vector and pixel features, and a final inner product between the contextualized query vector and pixel feature vectors. The three segmentation hypotheses consider different object query initialization. For each box \(i\), we use the updated (contextualized) object query vector at frame \(t+1\) to segment the object at frame \(t\) via inner product with the pixel features from frame \(t\); this results in a temporally corresponding mask \(\hat{m}_{i}^{t}\). We select the SAM segmentation hypothesis at frame \(t+1\) whose updated query vector-driven segmentation \(\hat{m}_{i}^{t}\) has the highest Intersection over Union (IoU) with \(m_{i}^{t}\). We then update the object boxes to tightly contain the resulting segmentation mask. Spawning new object tracksAt each frame, we need to take into account new objects that enter the scene or reappear after an occlusion. 
For each detection in \(D^{t+1}\), we compute its IoU with all the masks in \(M^{t}\). A new track is spawned if the IoU between the detection and all masks in \(M^{t}\) is below some specified threshold \(\lambda_{spawn}\). Track re-identificationWe use appearance feature matching to determine whether to merge a new track with an existing but terminated track. We store a small window of features before a track's termination and compare them with the features of the newly spawned tracks. Newly spawned tracks are considered for Re-ID until \(T_{reid}\) time-steps have passed. We used the box features from Detic (before the box regression module) and normalized them along the channel dimension to obtain a small set of features that represent each instance. We then compute the inner product between normalized appearance features for any two tracks and merge them if their value is above a threshold \(\lambda_{reid}\). None of OVTracktor's described components require any additional training. As detectors and segmenters improve, the components can be easily swapped out for potential improvements to the tracker. **Implementation details.** During inference, we apply test time augmentation to video frames when running the detector by scaling and horizontally flipping the individual frames to obtain better object proposals, which we found to help improve the recall of detections, especially in the harder open-world datasets. OVTracktor has the following hyperparameters: \(\lambda_{c}=0.5\) for thresholding the detector's confidence, \(\lambda_{flow}\) for deciding track termination due to occlusion, \(\lambda_{spawn}\) for instantiating new objects non-overlapping with existing object tracks, and \(\lambda_{reid}\) for merging temporally non-overlapping tracks during re-identification. We have found the model robust to the choice of these hyper-parameters, due to the nature of videos: it suffices for an object to be detected confidently only very sparsely in time, and our propagation method will propagate it forward. In evaluations, we used some videos in the training set to select a good set of hyperparameters for the individual datasets. As the memory consumption scales with the number of objects being tracked, we also put a hard limit \(K\) on the number of tracks that can co-exist at the same time. Fig. 1: **Architecture of OVTracktor**. An open-vocabulary detector detects objects and an open-world segmenter segments their masks. We propagate boxes to the next frame or decide their termination using an optical flow-based motion model. The propagated boxes are refined with the detector's box regression module. Refined boxes are used to prompt the segmenter in the next frame. The detections and their associated appearance features from the next frame are used to determine whether new tracks should be spawned or merged with previously terminated tracks. ## IV Experiments We test OVTracktor in multi-object tracking and video object segmentation benchmarks of BURST [10], UVO [9], DAVIS [7], and YoutubeVOS [8], along with some ablation studies and analysis on run-time speed. Qualitative tracking results for the benchmark datasets, as well as a manipulation dataset, RoboTAP [11], can be found in Figure 2. We further show qualitative results on language-guided tracking, where our model tracks objects of specific object categories that the user specifies in Figure 3.
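Before turning to the benchmark results, a minimal NumPy sketch of the flow-based motion model of Section III may help: forward-backward-consistent pixels inside a box vote, via least squares, for a 4-parameter translation-plus-scale warp of that box. Function names, the consistency test, and thresholds are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def fb_consistent(flow_fw, flow_bw, thresh=1.5):
    """Mask of pixels whose forward flow, chased back by the backward flow,
    returns within `thresh` pixels of where it started. Flows are (H, W, 2)."""
    H, W = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    x2 = np.clip(xs + flow_fw[..., 0], 0, W - 1)
    y2 = np.clip(ys + flow_fw[..., 1], 0, H - 1)
    bw = flow_bw[y2.round().astype(int), x2.round().astype(int)]
    err = np.hypot(x2 + bw[..., 0] - xs, y2 + bw[..., 1] - ys)
    return err < thresh

def propagate_box(box, flow_fw, consistent, min_pixels=10):
    """Warp an axis-aligned box (x1, y1, x2, y2) into the next frame with a
    translation + per-axis scale transform fit by least squares over reliable pixels."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    yy, xx = np.nonzero(consistent[y1:y2, x1:x2])
    if len(xx) < min_pixels:      # too little reliable support: candidate for termination
        return None
    xx, yy = xx + x1, yy + y1
    tx = xx + flow_fw[yy, xx, 0]  # where those pixels land in frame t+1
    ty = yy + flow_fw[yy, xx, 1]
    cx, cy = xx.mean(), yy.mean()
    # Solve (target - center) = s * (pixel - center) + d independently per axis.
    Ax = np.stack([xx - cx, np.ones(len(xx))], axis=1)
    sx, dx = np.linalg.lstsq(Ax, tx - cx, rcond=None)[0]
    Ay = np.stack([yy - cy, np.ones(len(yy))], axis=1)
    sy, dy = np.linalg.lstsq(Ay, ty - cy, rcond=None)[0]
    new_x1, new_x2 = sx * (np.array([x1, x2]) - cx) + cx + dx
    new_y1, new_y2 = sy * (np.array([y1, y2]) - cy) + cy + dy
    return np.array([new_x1, new_y1, new_x2, new_y2])
```

The propagated box would then be refined by the detector's box regression head and used to prompt the segmenter, as described above.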
### _Multi-Object Tracking and Video Object Segmentation_ In each benchmark, we compare against multiple SOTA methods to see where our method stands compared to the specialized models proposed for each task. #### Iv-A1 **BURST benchmark** BURST [10] extends TAO [45] to mask annotations. Mask annotations in BURST come from a large set of 482 classes, with the videos covering multiple types of scenes. These classes can be further divided into two subsets, the "common" set, which contains the 78 object classes from COCO [46], and the "uncommon" set, which contains the remaining infrequently occurring classes from LVIS [47]. In particular, we are interested in the "long-tail class-guided" evaluation task, which requires us to detect and track objects corresponding to all classes, as well as predict the correct class label corresponding to each of the object tracks. This allows us to evaluate the "open-vocabularyness" of OVTracktor, which is in stark contrast to other class-agnostic (i.e. does not require predicting object labels) existing benchmarks. We conduct our evaluations on the validation split of BURST, which contains 993 videos. We compare the 2 baselines proposed in the BURST paper that also follows the tracking-by-detection paradigm: (1) a box tracker, which simply links per-frame detections using IoU scores followed by Hungarian matching, and (2) an STCN tracker, which uses STCN [48], a SOTA object tracking method in the VOS literature, to propagate masks from frame \(t\) to \(t+1\), then uses the IoU scores of the propagated mask and the next frame detections to link object tracks. We show quantitative results in Table I, with results reportedly separately for all classes, the common classes, and the uncommon classes. Interestingly, despite STCN being a significantly more advanced method, the tracking quality falls behind a simple box tracker. This is due to STCN and STCN-like methods assuming ground truth object masks in the initial frame as input, and when STCN receives noisy detections, it tends to propagate the masks in an erroneous error, with the error compounding as the model tracks into the future. This effect is more noticeable on BURST as the detection task is significantly more difficult, often with parts of the object being mis-detected. OVTracktor achieves higher performance compared to the baselines, with a significant improvement when it comes to the tracking quality of the "uncommon" classes. to track the objects throughout the entire video. The videos are downsampled to 480p quality as per common practice in VOS literature. To reduce the amount of computation, we filter the detections in the first frame using the ground truth to reduce the amount of object tracks. In subsequent frames, we reduce the number of detections by only considering detections that have the same class labels as the detections selected in the first frame. We compare using the standard metrics: region similarity \(\mathcal{J}\), contour accuracy \(\mathcal{F}\), and their average \(\mathcal{J}\&\mathcal{F}\). For YoutubeVOS, we also report results separately for "seen" or "unseen" categories, as per conventional practice, even if OVTracktor does not explicitly distinguish between the two. 
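For reference, region similarity \(\mathcal{J}\) is simply the intersection-over-union between predicted and ground-truth masks, averaged over objects and frames; a minimal helper is sketched below (contour accuracy \(\mathcal{F}\) requires boundary matching and is omitted here). This is a generic illustration, not the benchmarks' official evaluation code.

```python
import numpy as np

def region_similarity(pred_masks, gt_masks):
    """Mean IoU (the J measure) over paired binary (H, W) masks."""
    scores = []
    for pred, gt in zip(pred_masks, gt_masks):
        pred, gt = pred.astype(bool), gt.astype(bool)
        union = np.logical_or(pred, gt).sum()
        inter = np.logical_and(pred, gt).sum()
        scores.append(1.0 if union == 0 else inter / union)
    return float(np.mean(scores))
```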
We consider two groups of VOS approaches as baselines: memory-based approaches such as STM [54] and STCN [48], which use a growing history of frames and segmentation masks to propagate the segmentation in the new frame, as well as non-memory-based approaches such as SiamMask [55], UNICORN [56], Siam R-CNN [57], and UNINEXT [58], that do not use a growing history to represent object templates. All our baselines assume **ground truth** object masks in the first frame as input, which is the standard evaluation setup of these benchmarks, while OVTracktor does not use any ground-truth information. Hence, we conduct additional experiments where we run the same detector (Detic [3]) in the initial frame to serve as input to the best-performing baseline, STCN [48], for a more apples-to-apples comparison. We show quantitative results for our model and baselines on YoutubeVOS and DAVIS in Table III. We can see that the performance drops considerably when we switch from GT mask inputs to detection masks for STCN. This is due to multiple reasons: (1) ambiguities in "what" to track, as seen in Figure 4, where the tracker is missing the handheld bags of the ladies, and (2) existing VOS-based approaches start to propagate the errors from detection predictions to future frames, which we also observed in BURST. This difference is further exaggerated in YoutubeVOS, where the detection problem is harder: if the detector fails to detect the object instance on the annotated frame, it's not possible to associate the object track with the correct instance ID in the GT, even if the detections are perfect in all subsequent frames. Since annotations are usually supplied in the first frame an object appears, there are many cases where the annotated frame is not an easy frame to detect the object, as the entities might have just started entering the scene. ### _Promptable Open-Vocabulary Tracking_ The object tracks our method delivers are associated with corresponding category labels. A user can specify a category to track, and then our method can deliver object tracks only of that specific category. We show such an example in Figure 3. When the human prompts the tracker for a specific object category, we use a lower confidence threshold \(\lambda_{c}\) for the detector only for instances of the mentioned category, as the human provides a powerful cue that the mentioned object is in the video. We show more examples in the supplementary file, as well as tracking objects based on open-world referential utterances by using a referential detector [17]. ### _Ablations_ We ablate design choices of our method in Table V in the DAVIS benchmark. Not using segmentation to adapt the object boxes gives a much worse performance, which shows that segmentation helps to track by providing higher quality boxes, as opposed to computing it simply as a post-processing step. Not using mask cycle consistency hurts by a small amount. Skipping the bounding box refinement also hurts by a great amount, which indicates that an optical flow-based motion model alone is not sufficient to obtain good box prompts for the segmenter. Skipping the flow-based motion propagation also unsurprisingly hurts performance, since the bounding box refinement alone cannot recover the correct object box, especially for frames where the motion is large. Fig. 2: **Qualitative object tracking results** in DAVIS’17, YoutubeVOS, UVO, and RoboTAP, with frames selected uniformly from the start (left) to the end (right) of a video. OVTracktor can detect and track objects consistently through time and can distinguish between similar-looking instances. Fig. 3: **Language-guided tracking.** _Top two rows_: The user prompts with “saddle”, and the model can recover the missing tracks (colored pink). We also draw a bounding box for clearer visibility. _Bottom two rows_: The same interface can be easily applied if the user only wishes to track specific objects in a video like “coffee makers” (colored pink and green). ### _Running Time Analysis_ We analyze the running time of the individual components in OVTracktor in Table IV. The results are reported over the average of all the frames in the videos. The model runs at 0.41 FPS on an Nvidia V100 and costs around 18GB of VRAM to run on 480p videos, without caching anything to disk. We can see that most of the running time is spent in the Detic detector, with the SAM segmenter coming in second. Test time augmentations (TTA) incur a big overhead for Detic, and for scenes where detection is easy, noticeable speedups can be achieved by turning off TTA. ## V Limitations and Future Work A limitation of OVTracktor is the use of pre-trained features for feature matching for re-identifying object tracks. Empirically we observed that these features are not necessarily temporally consistent when used directly out-of-the-box, leading to many mistakes in the Re-ID process. We will explore the possibility of training more general-purpose re-id networks as an extension in our future work. Augmenting SAM with extra modules [59] for tighter spatial-temporal reasoning, where mask query tokens attend to previous frames to be better conditioned on a track's history, is another interesting avenue of future work. ## VI Conclusion We present OVTracktor, a zero-shot framework for open-vocabulary visual tracking, that re-purposes modules of large-scale pre-trained models for object tracking in videos without any training or finetuning. Our model can be applied equally well for video object segmentation and multi-object tracking across various benchmarks and helps unify the video tracking and segmentation literature, which has been over-fragmented using different evaluation protocols and ground-truth information at test time. Instead of specifying what to track with ground-truth object masks, which are hard to annotate and provide, our tracking framework offers a language interface over what to focus on and track in videos. Thanks to its simplicity, we hope that our model can serve as an upgradeable and extendable baseline for future work in open-world and open-vocabulary video tracking literature. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{DAVIS’17 Val} \\ \cline{2-5} & \(\mathcal{J}\&\mathcal{F}\) & \(\mathcal{J}\) & \(\mathcal{F}\) \\ \hline OVTracktor & 74.8 & 73.2 & 76.4 \\ \hline \(-\) Box Adaptation after Segmentation & -3.1 & -2.9 & -3.3 \\ \(-\) Motion Based Propagation & -4.3 & -3.9 & -4.5 \\ \(-\) Box Refinement & -3.7 & -3.4 & -3.9 \\ \(-\) Mask Cycle Consistency & -2.0 & -1.4 & -2.7 \\ \hline \hline \end{tabular} \end{table} TABLE V: **Ablative analysis in DAVIS’17**. We show the relative changes w.r.t. the complete model (top row). \begin{table} \begin{tabular}{l l c} \hline \hline Component & Avg. Run Time (ms) \\ \hline Detic (with TTA) & 1350 \\ Optical Flow & 275 \\ Box Warping & 150 \\ SAM & 550 \\ Spawning/ReID & 100 \\ \hline \hline \end{tabular} \end{table} TABLE IV: **Average running time** in milliseconds for the individual components in OVTracktor. Fig.
4: **Failure cases of OVTracktor**. _Top row_: The definition of classes is often ambiguous. In this case, the handbags are not considered a part of a human by our tracker. _Middle row_: Even in cases without labels as in open-world settings, there can be multiple definitions of what an “object” is. _Bottom row_: Re-ID failures. In this example, we fail to match the upper half of a person (after reappearing) to an entire person (before occlusion). \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{\multirow{2}{*}{\begin{tabular}{c} Method \\ \end{tabular} } & \multicolumn{4}{c}{\begin{tabular}{c} YoutubeVOS 2018 Val \\ \end{tabular} } & \multicolumn{4}{c}{DAVIS’17 Val} \\ \cline{3-10} & & \(\mathcal{G}\) & \(\mathcal{J}_{s}\) & \(\mathcal{F}_{s}\) & \(\mathcal{J}_{u}\) & \(\mathcal{F}_{u}\) & \(\mathcal{J}\&\mathcal{F}\) & \(\mathcal{J}\) & \(\mathcal{F}\) \\ \hline \multirow{4}{*}{\begin{tabular}{c} **Ground-Truth masks at \(t=0\)** \\ \end{tabular} } & SiamMask [55] & 52.8 & 60.2 & 58.2 & 45.1 & 47.7 & 56.4 & 54.3 & 58.5 \\ & Unicom [56] & - & - & - & - & - & 69.2 & 65.2 & 73.2 \\ & Siam R-CNN [57] & 73.2 & 73.5 & - & 66.2 & - & 70.6 & 66.1 & 75.0 \\ & UNINEXT [58] & 78.6 & 79.9 & 84.9 & 70.6 & 79.2 & 81.8 & 77.7 & 85.8 \\ & STM [54] & 79.4 & 79.7 & 84.2 & 72.8 & 80.9 & 81.8 & 79.2 & 84.3 \\ & STCN [48] & 83.0 & 81.9 & 86.5 & 77.9 & 85.7 & 85.4 & 82.2 & 88.6 \\ \hline \multirow{2}{*}{ \begin{tabular}{c} **Detected masks** \\ \end{tabular} } & Detic + STCN & 58.8 & 68.1 & 71.7 & 45.3 & 50.3 & 76.5 & 74.5 & 78.5 \\ & OVTracktor & 62.2 & 65.9 & 69.4 & 53.7 & 59.8 & 74.8 & 73.2 & 76.4 \\ \hline \hline \end{tabular} \end{table} TABLE III: **Performance on YoutubeVOS 2018 [8] and DAVIS’17 [7]. Higher is better.**
2310.18574
Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning
Machine Unlearning (MU) algorithms have become increasingly critical due to the imperative adherence to data privacy regulations. The primary objective of MU is to erase the influence of specific data samples on a given model without the need to retrain it from scratch. Accordingly, existing methods focus on maximizing user privacy protection. However, there are different degrees of privacy regulations for each real-world web-based application. Exploring the full spectrum of trade-offs between privacy, model utility, and runtime efficiency is critical for practical unlearning scenarios. Furthermore, designing the MU algorithm with simple control of the aforementioned trade-off is desirable but challenging due to the inherent complex interaction. To address the challenges, we present Controllable Machine Unlearning (ConMU), a novel framework designed to facilitate the calibration of MU. The ConMU framework contains three integral modules: an important data selection module that reconciles the runtime efficiency and model generalization, a progressive Gaussian mechanism module that balances privacy and model generalization, and an unlearning proxy that controls the trade-offs between privacy and runtime efficiency. Comprehensive experiments on various benchmark datasets have demonstrated the robust adaptability of our control mechanism and its superiority over established unlearning methods. ConMU explores the full spectrum of the Privacy-Utility-Efficiency trade-off and allows practitioners to account for different real-world regulations. Source code available at: https://github.com/guangyaodou/ConMU.
Zheyuan Liu, Guangyao Dou, Yijun Tian, Chunhui Zhang, Eli Chien, Ziwei Zhu
2023-10-28T03:24:54Z
http://arxiv.org/abs/2310.18574v2
# Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning ###### Abstract. Machine Unlearning (MU) algorithms have become increasingly critical due to the imperative adherence to data privacy regulations. The primary objective of MU is to erase the influence of specific data samples on a given model without the need to retrain it from scratch. Accordingly, existing methods focus on maximizing user privacy protection. However, there are different degrees of privacy regulations for each real-world web-based application. Exploring the full spectrum of trade-offs between privacy, model utility, and runtime efficiency is critical for practical unlearning scenarios. Furthermore, designing the MU algorithm with simple control of the aforementioned trade-off is desirable but challenging due to the inherent complex interaction. To address the challenges, we present **Controllable Machine Unlearning** (ConMU), a novel framework designed to facilitate the calibration of MU. The ConMU framework contains three integral modules: an important data selection module that reconciles the runtime efficiency and model generalization, a progressive Gaussian mechanism module that balances privacy and model generalization, and an unlearning proxy that controls the trade-offs between privacy and runtime efficiency. Comprehensive experiments on various benchmark datasets have demonstrated the robust adaptability of our control mechanism and its superiority over established unlearning methods. ConMU explores the full spectrum of the Privacy-Utility-Efficiency trade-off and allows practitioners to account for different real-world regulations. Source code available at: [https://github.com/guangyaodou/ConMU](https://github.com/guangyaodou/ConMU). Machine Unlearning, Data Privacy, Trustworthy ML, Deep Learning + Footnote †: journal: Computer Vision and Pattern Recognition Beyond privacy, utility and efficiency are also important aspects of machine unlearning problems. For instance, sacrificing utility by naively returning constant or purely random output ensures privacy but results in a useless model. On the other hand, retraining from scratch without data subject to removal guarantees privacy and utility yet is prohibitively expensive. Designing a method that simultaneously maximizes the privacy, utility, and efficiency aspects of machine unlearning is critical and needed. Unfortunately, theoretical machine unlearning research provides evidence that there is an inevitable privacy-utility-efficiency trade-off even for convex problems (K that are trained on different data samples. (Zhang et al., 2017) focuses on deep regression unlearning tasks using a partially trained blindspot model to minimize the distribution difference with the original model. Lastly, (Zhang et al., 2017) showed that applying unlearning algorithms on pruned models gives better performance. Though important data selections were largely used in deep learning (Wang et al., 2018), their implementation in the MU field is still unexplored. We discovered that the important data selection is able to offer strong control over the utility-efficiency trade-off. Similarly, Gaussian Noise had been largely adopted in the field of differential privacy (Beng et al., 2015; Chen et al., 2015; Chen et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019), while its usage in machine unlearning is not yet fully investigated. 
In addition, the concept of utilizing partially competent teachers for a privacy-efficiency trade-off has not been previously examined. Moreover, many of the existing works focused solely on privacy, overlooking the relationships between accuracy, privacy, and runtime efficiency. Unlike other machine-unlearning algorithms, our method gives users exceptional flexibility and control over the trade-offs among these three factors. In addition, our method imposes no restrictions on optimization methodologies or model architecture. ## 3. Preliminaries Removing certain training data samples can impact a model's accuracy, potentially improving, maintaining, or diminishing it (Zhang et al., 2017). As noted by (Li et al., 2019), significant discrepancies between unlearned and retrained models can lead to the Streisand effect, inadvertently revealing information about forgotten samples through unusual model behavior. Therefore, the goal of Machine Unlearning is to erase the influence of the set of samples we want to forget so that the unlearned model approximates the retraining one. Let \(D_{o}=\{x_{i}\}_{i=1}^{N}\) be the complete dataset before unlearning requests, in which \(x_{i}\) is the \(i^{th}\) sample. Let \(D_{f}\) be the set of samples we want to forget as forgetting dataset, and the complement of \(D_{f}\), which we denote as \(D_{f}\), is the set of samples retained in the training samples, i.e. \(D_{f}\cup D_{r}=D_{o}\) and \(D_{f}\cap D_{r}=\emptyset\). In the setting of random forgetting, \(D_{f}\) may contain samples from different classes of \(D_{o}\). In class-wise forgetting, \(D_{f}\) is a set of examples that have the same class labels. We denote \(\theta_{o}\) as the parameters of the original model, which was trained on \(D_{o}\), denote the parameters of unlearned models as \(\theta_{u}\), and denote the parameters of retrained model \(\theta_{r}\), which is the model completely retrained from scratch using only \(D_{r}\). Lastly, let \(\theta_{I}\) denote the parameters of the unlearning proxy, which has the same model architecture as \(\theta_{o}\), but was only partially trained on \(D_{r}\) for a few epochs. \(\theta_{r}\) is the gold standard in our MU problem. The goal of machine unlearning is to approximate our \(\theta_{u}\) to \(\theta_{r}\), with less computational overhead. However, for machine unlearning on deep neural networks, achieving a balance between utility, privacy, and efficiency has always been a difficult task. ## 4. Methods To address such a trilemma in machine unlearning, we introduce the _ConMU_ (Figure 2), a novel framework that consists of an important data selection, a progressive Gaussian mechanism, and an unlearning proxy that modulate relationships among accuracy, privacy, and runtime efficiency. First, the important data selection module (Figure 2 (a)) selectively discards unimportant retaining and forgetting data samples that will not be utilized by subsequent modules. Discarding more samples improves training time while degrading the model's accuracy. Next, the Progressive Gaussian Mechanism module (Figure 2 (b)) injects Gaussian noise into the remaining forgetting dataset. The amount of noise can control the balance between privacy and accuracy. Subsequently, an unlearning proxy model (Figure 2 (c)) is trained on the retained dataset for a select number of epochs. Through knowledge transfer, the training epoch of the proxy can balance the runtime and privacy. 
Finally, by fine-tuning the original model using the concatenated retained and noised forgetting datasets, it is transformed into an unlearned version. As a result, by controlling the data volume, the Gaussian noise level, and the proxy training duration, we are able to account for different privacy-utility-efficiency requirements. Subsequent sections delve deeper into each module's capabilities and their influence on the trilemma. ### Important Data Selection Unlearning acceleration is crucial in MU. Since our method uses both the remaining noised \(D_{f}\) and the remaining \(D_{r}\) to perform fine-tuning, the amounts of \(D_{f}\) and \(D_{r}\) play significant roles in the runtime of our proposed method. However, large quantities of \(D_{f}\) and \(D_{r}\) will likely result in an inefficient MU algorithm with a long runtime. Therefore, to facilitate this process, we introduce a novel filtering method using EL2N scores to determine which samples are important for unlearning scenarios. Suppose that \(f(\theta,x)\) is the output of the neural network \(\theta\) with given data \(x\), and denote \(y\) as the true class label of \(x\). We calculate the mean and the standard deviation of the \(\ell_{2}\)-normed loss: \[\mu_{\theta}(x)=\mathbb{E}_{x}\|f(\theta,x)-y\|_{2}, \tag{1}\] \[\sigma_{\theta}(x)=\sqrt{\mathbb{V}_{x}\|f(\theta,x)-y\|_{2}}. \tag{2}\] A higher \(\mu_{\theta}\) means that \(x\) is hard to learn; such samples tend to be outliers in the dataset. A lower \(\mu_{\theta}\) means that \(\theta\) can fit \(x\) well. Therefore, we can keep the data samples that are important for model generalization by retaining samples whose \(\mu_{\theta}\) is neither very high nor very low. In our method, we introduce two controllable hyperparameters \(z_{1}\) and \(z_{2}\) and calculate a bound: \[[\mu_{\theta}(x)-z_{1}\times\sigma_{\theta}(x),\mu_{\theta}(x)+z_{2}\times\sigma_{\theta}(x)]. \tag{3}\] This bound gives users control over how many important data points are included by tuning \(z_{1}\) and \(z_{2}\). If we include more data, our accuracy increases, but the runtime also increases. As a result, the ConMU can have a greater speed-up while maximally preserving accuracy by utilizing important data samples. ### Progressive Gaussian Mechanism MU algorithms aim to erase the information about \(D_{f}\) from the original model. In order to forget \(D_{f}\), we can continue training the original model using an obfuscated version of \(D_{f}\), prompting catastrophic forgetting of \(D_{f}\). Within this context, we propose the progressive Gaussian mechanism, which leverages Gaussian noise to obscure the selected \(D_{f}\). Moreover, one of the standout features of this approach is that the magnitude and the shape of the Gaussian noise applied to the dataset serve as tunable hyperparameters, granting a remarkable degree of control over the process. More formally, after selecting a subset of important samples: \[D_{f}^{\prime}\in[\mu_{\theta_{u}}(D_{f})-z_{1}\times\sigma_{\theta_{u}}(D_{f}),\mu_{\theta_{u}}(D_{f})+z_{2}\times\sigma_{\theta_{u}}(D_{f})], \tag{4}\] the ConMU adds Gaussian noise to data samples to balance privacy and accuracy.
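A minimal PyTorch-style sketch of this selection step (Eqs. (1)-(4)) is given below before the noise model is specified; the model and loader names, the use of softmax outputs against one-hot targets, and the per-set application are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def el2n_scores(model, loader, num_classes, device="cuda"):
    """Per-sample EL2N-style score: L2 norm of (softmax output - one-hot label)."""
    model.eval()
    scores = []
    for x, y in loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        onehot = F.one_hot(y.to(device), num_classes).float()
        scores.append(torch.norm(probs - onehot, p=2, dim=1).cpu())
    return torch.cat(scores)

def select_important(scores, z1, z2):
    """Keep indices whose score falls inside [mu - z1*sigma, mu + z2*sigma] (Eq. 3)."""
    mu, sigma = scores.mean(), scores.std()
    keep = (scores >= mu - z1 * sigma) & (scores <= mu + z2 * sigma)
    return keep.nonzero(as_tuple=True)[0]

# Applied separately to the forget and retain sets to obtain D_f' and D_r'.
```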
More specifically, for each data sample in \(D_{f}^{\prime}\), we add Gaussian noise and obtain: \[D_{f}^{\prime\prime}=D_{f}^{\prime}+\alpha\times N,\ N\sim\mathcal{N}(\mu,\sigma^{2}\mathbf{I}), \tag{5}\] where \(\alpha\), \(\mu\), and \(\sigma^{2}\) are controllable hyperparameters: \(\mu\) and \(\sigma^{2}\) represent the mean and variance of the Gaussian distribution, and \(\alpha\) represents the number of times the noise is added to the sample. With more noise added to the data samples, we obtain higher privacy but lower model accuracy. Therefore, the progressive Gaussian mechanism controls how much information is scrubbed away and how much is preserved to maintain the accuracy of the model. In Section 5.3, we empirically demonstrate that with larger \(\alpha\), the accuracy decreases and the privacy increases, and vice versa. ### Fine-tuning with Unlearning Proxy The objective of machine unlearning is to align the output distribution of the unlearned model closely with that of the retrained model -- a model never exposed to the forgotten data samples. To achieve this, we can utilize an unlearning proxy model, which is a model that has the same architecture as the original model and is partially trained on the retained dataset for a few epochs. By transferring the knowledge of the behavior of the unlearning proxy, we can obtain an unlearned model that contains less information about the forgetting datasets. More formally, the unlearning proxy model \(\theta_{I}\) is partially trained on the retained dataset \(D_{r}\) for \(\delta\) epochs, in which \(\delta\) is a hyperparameter. Next, we compute the KL divergence between the probability distribution of \(\theta_{I}\)'s output on the input data \(x\) and that of \(\theta_{u}\) as: \[D_{KL}(\theta_{I}(x)\parallel\theta_{u}(x))=\sum_{i}\theta_{I}(x)(i)\log\left(\frac{\theta_{I}(x)(i)}{\theta_{u}(x)(i)}\right), \tag{6}\] where \(i\) corresponds to the data class. We want to minimize this KL divergence, aiming to make the output distribution of the unlearned model \(\theta_{u}\) as close as possible to that of a model that has never seen \(D_{f}\), which is the unlearning proxy. In Section 5.3, we demonstrate that if \(\delta\) increases, \(\theta_{u}\) becomes more similar to \(\theta_{r}\), but with increasing runtime. ### Controlling Machine Unlearning After discussing the individual modules for important data selection, the progressive Gaussian mechanism, and the unlearning proxy, we now focus on how these parts come together. First, we obtain \(D_{new}=D_{f}^{\prime\prime}\cup D_{r}^{\prime}\), in which: \[D_{r}^{\prime}\in[\mu_{\theta_{u}}(D_{r})-z_{1}^{\prime}\times\sigma_{\theta_{u}}(D_{r}),\mu_{\theta_{u}}(D_{r})+z_{2}^{\prime}\times\sigma_{\theta_{u}}(D_{r})], \tag{7}\] and \(z_{1}^{\prime}\) and \(z_{2}^{\prime}\) are two hyperparameters for filtering the retained dataset, as discussed in Section 4.1. With \(D_{new}\), we use the cross-entropy (CE) loss to further train \(\theta_{u}\) on \(D_{new}\), combined with the KL loss from Section 4.3.
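The noising step of Eq. (5) and the proxy distillation term of Eq. (6) can be sketched as follows, feeding into the combined objective spelled out in Eq. (8) below. This is an illustrative sketch with assumed names, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def progressive_noise(x, alpha, mu=0.0, sigma=1.0):
    """Eq. (5): add alpha independent draws of N(mu, sigma^2) noise to forget samples."""
    for _ in range(int(alpha)):
        x = x + (mu + sigma * torch.randn_like(x))
    return x

def conmu_loss(unlearned_model, proxy_model, x, y, gamma):
    """Cross-entropy on D_new plus a gamma-weighted KL term to the partially trained proxy."""
    logits_u = unlearned_model(x)
    ce = F.cross_entropy(logits_u, y)
    with torch.no_grad():
        proxy_probs = F.softmax(proxy_model(x), dim=1)
    # KL(proxy || unlearned), matching the direction of Eq. (6); averaged over the batch.
    kl = F.kl_div(F.log_softmax(logits_u, dim=1), proxy_probs, reduction="batchmean")
    return ce + gamma * kl
```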
Figure 2. The overall framework of the proposed method ConMU, which is placed after the forgetting request. In (a), an important data selection module is implemented to select data samples that are important to the model. A customized upper/lower bound is attached to this module to facilitate the selection process. Then, the selected forgetting data \(D_{f}^{\prime}\) is passed to (b), the progressive Gaussian mechanism, to gradually inject Gaussian noise. More noise in the image leads to higher privacy. Afterward, the processed forgetting data \(D_{f}^{\prime\prime}\) is concatenated with the selected retaining data \(D_{r}^{\prime}\), which is used for fine-tuning the original model. The unlearning proxy (c) is partially trained on the retaining data \(D_{r}\), and knowledge is transferred to the original model via KL divergence. The loss to train the unlearned model \(\theta_{u}\) is defined as: \[\mathcal{L}=CE(D_{new})+\gamma D_{KL}(\theta_{I}(D_{new})\parallel\theta_{u}(D_{new})). \tag{8}\] The \(\gamma\) in Equation (8) ensures that these two losses are on the same scale. In summary, the ConMU uses Equation (8) to fine-tune the original model \(\theta_{o}\) into \(\theta_{u}\), with far fewer epochs than complete retraining requires, allowing the calibration of the amount of data to preserve, the amount of noise added to the filtered forget data samples, and the number of epochs used to train the unlearning proxy. With these three modules, the ConMU allows controllable trade-offs between accuracy, privacy, and runtime. ### Forget-Retain-MIA Score There are many evaluation metrics to determine the privacy of unlearning algorithms. For example, much of the literature uses Retain Accuracy (RA) and Forget Accuracy (FA) (Ferrone et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), which measure the generalization ability of the unlearned model on \(D_{r}\) and \(D_{f}\), respectively. Moreover, many previous works have used Membership Inference Attacks (MIA) (Li et al., 2017; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), which determine whether a particular training sample was present in the training data for a model. Given this landscape of varied metrics, it becomes imperative to consolidate them to yield a more comprehensive evaluation. As stated in Section 3, our goal for the evaluation of privacy is to ensure minimal disparity between our model's outcomes and the retrained model, which is the gold standard of unlearning tasks. Therefore, we introduce a new evaluation metric called the _Forget-Retain-MIA_ (**FRM**) score, which considers the differences between the unlearned model and the retrained model on the trifecta of FA, RA, and MIA, and which is inspired by the NeurIPS 2023 machine unlearning challenge1. Suppose we denote \(FA_{r}\), \(RA_{r}\), and \(MIA_{r}\) as the FA, RA, and MIA performance of the retrained model, and denote \(FA_{u}\), \(RA_{u}\), and \(MIA_{u}\) as the FA, RA, and MIA performance of the unlearning model; we calculate the FRM score as: Footnote 1: [https://unlearning-challenge.github.io/](https://unlearning-challenge.github.io/) \[FRM=\exp\left(-\left(\frac{|FA_{u}-FA_{r}|}{FA_{r}}+\frac{|RA_{u}-RA_{r}|}{RA_{r}}+\frac{|MIA_{u}-MIA_{r}|}{MIA_{r}}\right)\right). \tag{9}\]
We use the FRM score to evaluate the ConMU and other baseline models' performance on privacy in the subsequent experiment sections. ## 5. Experiments In this section, we conduct extensive experiments to validate the effectiveness of the ConMU. In particular, through the experiments, we aim to answer the following research questions: (1) Can ConMU find the best balance point given the trilemma? (2) Can each module effectively control a specific aspect of the trilemma? (3) Can the naive fine-tune method possess the same control ability as the ConMU? ### Experiment setups #### 5.1.1. **Datasets and models** Our experiments mainly focus on image classification for CIFAR-10 (Cheng et al., 2018) on ResNet-18 (He et al., 2018) under two unlearning scenarios: random data forgetting and class-wise data forgetting. Besides, additional experiments are conducted on CIFAR100 (Cheng et al., 2018), and SVHN (Krizhevsky et al., 2015) datasets using vgg-16 (Vinyals et al., 2017). #### 5.1.2. **Baseline Models** For baselines, we compare with Fine-Tuning (FT) (Cheng et al., 2018; Li et al., 2019; Li et al., 2019), Gradient Ascent (GA) (Cheng et al., 2018; Li et al., 2019), and Influence Unlearning (IU) (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). In particular, FT directly utilizes retained dataset \(D_{r}\) to fine-tune the original model \(\theta_{o}\). The GA method attempts to add the gradient updates on \(D_{f}\) during the training process back to the \(\theta_{o}\). Lastly, IU leverages influence functions to remove the influence of the target data sample in \(\theta_{o}\). Besides, Li et al. (2019) has shown that pruning first before applying unlearning algorithms will increase performance. Therefore, we apply the OMP (one-shot-magnitude pruning) (Li et al., 2019; Li et al., 2019; Li et al., 2019) to each baseline model as well as the ConMU. The details of each baseline model are elaborated in Appendix B.1. #### 5.1.3. **Evaluation Metrics** We aim to evaluate MU methods from five perspectives: test accuracy _forget accuracy (TA)_, _forget accuracy (FA)_, _retain accuracy (RA)_, _membership inference attack (MIA)_(Cheng et al., 2018), _runtime efficiency (RTE)_, and _FRM privacy score_. Specifically, TA measures the accuracy of \(\theta_{u}\) on the testing datasets and evaluates the generalization ability of MU methods. FA and RA measure the accuracy of the unlearned model on forgetting dataset \(D_{f}\) and retaining dataset \(D_{r}\), respectively. MIA verifies if a particular training sample existed in the training data for the original model. Lastly, we use the _FRM privacy score_ metric to comprehensively evaluate the privacy level of an MU method. Additional details of evaluation metrics are illustrated in Appendix A. #### 5.1.4. **Implementation Details** We report the mean and the standard deviation in the form of \(a_{\pm b}\) of ten independent runs with different data splits and random seeds. For random forgetting, we randomly selected 20% of the training samples as forgetting datasets. For class-wise forgetting, we randomly selected 50% of a particular class for different datasets as the forgetting samples. The details will be in appendix B.2. ### Experiment Results To answer the first question: **Can ConMU find the best tradeoff points given three important factors?** We conduct random forgetting and class-wise forgetting to comprehensively evaluate the effectiveness of a MU method. The performance is reported in Table 1. 
Note that a better performance of a MU method contains a smaller performance gap with the retrained model (except TA and RTE), which is the gold standard for MU tasks. According to the table, we can find that IU (influence unlearning) and GA (gradient ascent) with OMP pruning achieve satisfactory results under unlearning privacy (FA, RA, MIA, and FRM) and efficiency (RTE) metrics with relatively shorter runtime. According to the table, IU is usually the fastest baseline, while GA is the runner-up. However, this outstanding unlearning efficiency comes at a high cost to the model utility, rendering them the worst baseline models in terms of test accuracy. Alternatively, FT (fine-tuning) performs well across all metrics with the exception of unlearning efficiency. As shown in Table 1, FT is the runner-up in the majority of cases and achieves a high FRM across all benchmarks. However, this comes with a high sacrifice on runtime efficiency, making it the slowest baseline method. Finally, we observe that the ConMU can outperform other baselines by remarkable margins and achieve a good balance on all privacy metrics and competitive accuracy across CIFAR-10, CIFAR-100, and SVHN, respectively. Additionally, the ConMU has the highest FRM score among all baseline models \begin{table} \begin{tabular}{l|c c c c c|c} \hline \hline MU Methods & \(TA(\%)\uparrow\) & \(FA(\%)\downarrow\) & \(RA(\%)\downarrow\) & \(MIA(\%)\downarrow\) & \(RTE(s)\) & \(FRM Privacy\uparrow\) \\ \hline \hline \multicolumn{8}{c}{Resnet-18 Random data forgetting (CIFAR-10)} \\ \hline retrain & 79.99 & 80.46 (0.00) & 91.47 (0.00) & 19.62 (0.00) & 933.51 & 1 \\ IU + Pruning & 41.63\({}_{\pm 0.14}\) & 41.62\({}_{\pm 0.11}\) (38.84) & 41.21\({}_{\pm 0.07}\) (50.26) & 57.61\({}_{\pm 0.11}\) (37.99) & 33.69 & 0.051 \\ GA + Pruning & 64.61\({}_{\pm 0.14}\) & 66.74\({}_{\pm 0.24}\) (13.72) & 66.15\({}_{\pm 0.13}\) (15.23) & 34.15\({}_{\pm 0.24}\) (44.53) & **28.15** & 0.305 \\ FT + Pruning & **84.71\({}_{\pm 0.14}\)** & **84.13\({}_{\pm 0.06}\) (3.67)** & **90.96\({}_{\pm 0.05}\)** (0.51) & 15.42\({}_{\pm 0.06}\) (4.20) & 475.99 & 0.767 \\ ConMU & **78.83\({}_{\pm 0.57}\)** & **81.22\({}_{\pm 0.53}\) (0.76)** & 81.75\({}_{\pm 0.60}\) (0.72) & **18.93\({}_{\pm 0.10}\)** (0.69) & 59.59 & **0.855** \\ \hline \hline \multicolumn{8}{c}{Resnet-18 Class-Wise forgetting (CIFAR-10)} \\ \hline retrain & 82.55 & 68.22 (0.00) & 89.67 (0.00) & 33.92 (0.00) & 1241.92 & 1 \\ IU + Pruning & 20.39\({}_{\pm 2.25}\) & 0.01\({}_{\pm 0.00}\) (68.21) & 20.96\({}_{\pm 2.33}\) (68.71) & 100.00\({}_{\pm 0.00}\) (66.08) & 40.15 & 0.024 \\ GA + Pruning & 52.22\({}_{\pm 0.35}\) & 15.22\({}_{\pm 0.41}\) (53) & 53.88\({}_{\pm 0.38}\) (35.79) & 83.23\({}_{\pm 0.41}\) (49.31) & **25.24** & 0.072 \\ FT + Pruning & **85.75\({}_{\pm 0.11}\)** & 69.89\({}_{\pm 0.72}\) (1.67) & **92.10\({}_{\pm 0.03}\)** (2.43) & 27.81\({}_{\pm 0.72}\) (6.11) & 565.47 & 0.793 \\ ConMU & 83.61\({}_{\pm 1.97}\) & **67.23\({}_{\pm 2.27}\)** (0.99) & 86.68\({}_{\pm 2.50}\) (5.75) & **32.92\({}_{\pm 2.21}\)** (1.00) & 89.4 & **0.925** \\ \hline \hline \multicolumn{8}{c}{VGG Random data forgetting (CIFAR-10)} \\ \hline retrain & 81.10 & 81.49 (0.00) & 92.09 (0.00) & 19.54 (0.00) & 881.57 & 1 \\ IU + Pruning & 59.74\({}_{\pm 0.08}\) & 57.97\({}_{\pm 0.10}\) (23.52) & 57.52\({}_{\pm 0.09}\) (34.57) & 39.32\({}_{\pm 0.09}\) (19.78) & 38.36 & 0.186 \\ GA + Pruning & 69.43\({}_{\pm 0.14}\) & 69.97\({}_{\pm 0.04}\) (11.52) & 69.79\({}_{\pm 0.09}\) (22.30) & 29.20\({}_{\pm 0.04}\) (9.66) & 47.17 & 
0.414 \\ FT + Pruning & **83.88\({}_{\pm 0.71}\)** & 58.98\({}_{\pm 0.60}\) (22.51) & **90.64\({}_{\pm 0.77}\)** (1.45) & 38.41\({}_{\pm 0.60}\) (18.87) & 378.02 & 0.283 \\ ConMU & 79.09\({}_{\pm 2.19}\) & **82.52\({}_{\pm 2.41}\)** (1.03) & 84.00\({}_{\pm 2.40}\) (8.09) & **17.53\({}_{\pm 2.43}\)** (2.01) & **32.42** & **0.816** \\ \hline \hline \multicolumn{8}{c}{VGG Class-Wise forgetting (CIFAR-10)} \\ \hline retrain & 82.41 & 69.02 (0.00) & 92.90 (0.00) & 33.44 (0.00) & 1034.40 & 1 \\ IU + Pruning & 53.06\({}_{\pm 17.55}\) & 27.16\({}_{\pm 28.87}\) (41.86) & 53.08\({}_{\pm 20.04}\) (38.82) & 65.50\({}_{\pm 14.07}\) (32.06) & 46.70 & 0.136 \\ GA + Pruning & 53.18\({}_{\pm 0.25}\) & 11.96\({}_{\pm 0.28}\) (57.06) & 54.51\({}_{\pm 0.29}\) (38.39) & 86.42\({}_{\pm 0.28}\) (52.98) & **30.74** & 0.059 \\ FT + Pruning & **83.88\({}_{\pm 0.91}\)** & 58.98\({}_{\pm 8.94}\) (10.04) & **96.40\({}_{\pm 0.82}\)** (2.26) & 38.31\({}_{\pm 0.84}\) (4.87) & 353.97 & 0.729 \\ ConMU & 81.12\({}_{\pm 3.27}\) & **63.75\({}_{\pm 3.33}\)** (5.27) & 87.10\({}_{\pm 3.79}\) (5.80) & **36.20\({}_{\pm 3.30}\)** (2.76) & 148 & **0.800** \\ \hline \hline \multicolumn{8}{c}{VGG Random data forgetting (CIFAR-100)} \\ \hline retrain & 60.65 & 60.54 (0.00) & 92.49 (0.00) & 40.60 (0.00) & 823.15 & 1 \\ IU + Pruning & 7.15\({}_{\pm 0.01}\) & 5.97\({}_{\pm 0.02}\) (54.57) & 5.83\({}_{\pm 0.01}\) (86.66) & 6.65\({}_{\pm 0.02}\) (33.95) & **38.74** & 0.069 \\ GA + Pruning & 14.71\({}_{\pm 0.07}\) & 13.96\({}_{\pm 0.10}\) (46.58) & 14.24\({}_{\pm 0.09}\) (78.25) & 85.53\({}_{\pm 48.73}\) (44.93) & 47.42 & 0.066 \\ FT + Pruning & 49.78\({}_{\pm 0.57}\) & 47.67\({}_{\pm 0.81}\) (12.87) & 60.70\({}_{\pm 0.79}\) (31.79) & 50.95\({}_{\pm 0.81}\) (10.35) & 215.22 & 0.444 \\ ConMU & **55.22\ with an acceptable runtime efficiency relative to other baselines. More experiment results of the different datasets are presented in Appendix C. ### Unlearning Trilemma Analysis Given our proposed ConMU is to narrow the performance gap with the gold-retrained model and to better control the trade-off between different metrics, we conduct further experiments to validate the effectiveness of each module. The central question addressed is: **Can each module effectively govern a specific facet of the aforementioned trilemma?** The associated results are shown in Figure 3. To better answer this question, we first represent the unlearning trilemma as a triangle (figure 1), wherein each side corresponds to a distinct aspect of the trilemma. An effective control module should identify a balance point anywhere along the side, rather than being confined to the two endpoints. Since the ConMU contains three modules, to better observe the compatibility and flexibility of each module in influencing different metrics, we systematically adjust the input values of each module with random forgetting requests on the CIFAR-10 dataset, showcasing the ability to control trade-offs at various levels. #### 5.3.1. **Utility vs Efficiency** In our proposed method, the important data selection module is specifically designed to curate the samples that are later utilized in the fine-tuning process of the pruned model. The rationale behind this is twofold: (1) extracting the samples that contribute significantly to model generalization process, and (2) expediting the runtime of the unlearning process. 
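A minimal sketch of this kind of percentile-bounded selection is given below. The percentile-window mechanics follow the description above; the use of the original model's per-sample loss as the importance score, as well as the function and variable names, are assumptions made for illustration rather than the paper's exact criterion.

```python
import numpy as np


def select_important_data(scores: np.ndarray,
                          lower_pct: float,
                          upper_pct: float) -> np.ndarray:
    """Return indices of retained samples whose importance score falls between
    the lower and upper percentile bounds.

    `scores` holds one importance value per sample of the retained set D_r;
    per-sample loss under the original model is only one possible choice.
    Widening the window keeps more data (better utility, longer fine-tuning);
    narrowing it speeds up unlearning at some cost in utility.
    """
    lo = np.percentile(scores, lower_pct)
    hi = np.percentile(scores, upper_pct)
    mask = (scores >= lo) & (scores <= hi)
    return np.flatnonzero(mask)


# Example: keep roughly 25% of D_r (window placement is arbitrary here).
rng = np.random.default_rng(0)
toy_scores = rng.random(1000)  # stand-in for per-sample importance scores
subset_idx = select_important_data(toy_scores, 37.5, 62.5)
print(len(subset_idx), "of", len(toy_scores), "samples selected for fine-tuning")
```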
To further investigate the trade-off between model utility and runtime efficiency, we carefully adjust the upper and lower bounds of the important data selection to incorporate different percentiles of data. In Figure 3 (a), we present the control ability of the proposed important data selection module by selecting different portions of data. We start with the inclusion of 5 % of the data and gradually progress to 90 %. From Figure 3 (a), we first observe that a higher percentile of selected data not only prolongs the runtime but also enhances the utility performance of the ConMU. For instance, increasing the data percentile from 5 % to 25 % results in a 7.89 % increase in model accuracy, from 75.09 to 81.02. However, this improvement comes at the cost of a 68.17 % increase in runtime, escalating from 38.86 seconds to 65.35 seconds. Furthermore, we observe diminishing returns as the included data percentage increases. Taking the last two points as an example, including 10 % more data leads to a mere 1.39 % increase in model utility but incurs a substantial 29.7 % increase in runtime, escalating from 135.43 seconds to 175.71 seconds. This phenomenon suggests that beyond a certain threshold of included data, sacrificing runtime yields only marginal improvements in model utility. #### 5.3.2. **Utility vs Privacy** In the intricate landscape of the trilemma, another crucial facet involves the delicate equilibrium between utility and privacy. As mentioned in Section 4.2, the purpose of this module is to disrupt the forgetting information in samples, where a higher noise level means that the sample carries more chaotic information, which corresponds to better privacy. To validate this hypothesis, we modify the mechanism's noise level to demonstrate the relationship between model utility and privacy. Figure 3 (b) illustrates the performance of the proposed progressive Gaussian mechanism module under varying noise levels. We begin with a noise level of 0, which uses the selected data from the previous module unchanged, and increase it to a noise level of 10. As shown by the increasing FRM score, a higher noise level results in a privacy level closer to that of the retrained model (a higher FRM score). For example, increasing the noise level from 0 to 2 increases the FRM by 4.2 %, from 0.707 to 0.737. This enhancement, however, comes with a 0.72 % decrease in model utility, from 81.21% to 80.62%. This trend is consistent as the intensity of noise increases. As the noise level increases from 8 to 10, the model test accuracy decreases from 79.59 % to 78.22%, a decrease of 1.75 %, while the FRM increases by 3.2 %. This phenomenon demonstrates the viability of the compromise between model utility and privacy.
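As a minimal sketch of a noise-injection step of this kind: the code below perturbs a batch of forgetting samples with zero-mean Gaussian noise whose scale grows with the noise level. The linear scaling of the standard deviation and all names are assumptions for illustration; the paper's progressive schedule (Section 4.2) may differ.

```python
import numpy as np


def add_gaussian_noise(x: np.ndarray, noise_level: float,
                       rng: np.random.Generator) -> np.ndarray:
    """Perturb a batch of forgetting samples with zero-mean Gaussian noise.

    `noise_level = 0` returns the samples unchanged, matching the 0-to-10
    sweep described above; larger levels inject more chaotic information.
    """
    if noise_level == 0:
        return x.copy()
    sigma = noise_level * x.std()  # scale the noise to the data range (assumption)
    return x + rng.normal(0.0, sigma, size=x.shape)


rng = np.random.default_rng(0)
forget_batch = rng.random((32, 3, 32, 32)).astype(np.float32)  # toy CIFAR-like batch
noisier = add_gaussian_noise(forget_batch, noise_level=2.0, rng=rng)
print(float(np.abs(noisier - forget_batch).mean()))  # grows with the noise level
```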
#### 5.3.3. **Privacy vs Efficiency** Lastly, there is a discernible trade-off between model privacy and runtime performance. As mentioned in Section 4.3, we introduce an unlearning proxy to strike a balance between these two crucial factors. This module's purpose is to reduce the privacy disparity between the retrained model and the unlearned model by means of an unlearning proxy. To validate this effect, we progressively increase the number of training epochs of the unlearning proxy from 0 to 8, bringing it closer to the retrained model. As shown in Figure 3 (c), an increase in the number of training epochs of the unlearning proxy results in improved privacy performance, as indicated by a higher FRM score. Raising the proxy training epochs from 2 to 3 increases runtime by 23.13 %, from 67.61 seconds to 83.25 seconds. This results in a 26.02 % increase in FRM score. However, this progress diminishes when FRM exceeds 0.75. Consider the last three data points as an illustration: a 66.1 % increase in duration, from 100.47 seconds to 166.89 seconds, results in a 4.64 % increase from 0.776 to 0.812. Similarly to the trade-off between utility and runtime, there exists a threshold between privacy and runtime where sacrificing one does not result in a substantial improvement in the other. Figure 3. Ablation study results of each module on CIFAR-10 with ResNet-18. For every module, we fix the other two novel modules while adjusting the module's own controllable parameters. Since each proposed module is designed to control one side of the trilemma, we present the results for each module in a chart with \(x\) and \(y\) axes representing their respective controlled factors. ### ConMU vs. Naive Fine-Tune Method The overall performance of the rudimentary fine-tune (FT) baseline method, as shown in Table 1, is comparable to that of the ConMU. Consequently, an intriguing question may be posed: **Can the naive FT method have the same control ability over the trilemma as the ConMU by merely adjusting its hyperparameters?** In order to answer this question, we compare the control ability of the ConMU and naive fine-tuning. To demonstrate this distinction in a holistic manner, we evaluate the performance of the two methods based on three crucial factors: privacy (FRM), utility (TA), and efficiency (runtime). We primarily demonstrate the control ability of the naive fine-tuning method by varying two parameters: the learning rate and the number of fine-tuning epochs. For the ConMU, we alter the three proposed modules. Figure 4 (a) demonstrates the trade-off between the runtime and the utility of each sampled configuration. In addition, Figure 4 (b) illustrates the trade-off between utility and privacy, in which greater \(x\) values indicate a higher FRM score, which corresponds to a privacy level closer to the retrained model. As demonstrated in Section 5.3, an expected trade-off would emerge as \(x\) values increase from left to right. Figure 4 (c) demonstrates the relationship between privacy and runtime efficiency. Ideally, a sample that resolves the trilemma should be placed in the top left corner of (a), the top right corner of (b), and the top left corner of (c). Given a similar level of test accuracy, ConMU can achieve a higher FRM score with a shorter runtime. When the test accuracy for FT is 77.99 % and for the ConMU is 78.08 %, for example, the FRM is 0.71 and 0.87, respectively. Meanwhile, the runtime is 480.14 seconds and 52.12 seconds, respectively, which is 9x faster. Furthermore, as test accuracy improves, the performance of the ConMU remains relatively stable and consistent. For instance, when the test accuracy for FT increases from 84.22 % to 85 %, the FT's FRM falls from 0.77 to 0.74. In contrast, the FRM for ConMU increases by 0.3 %, from 0.799 to 0.801, when the test accuracy changes from 84.32 % to 85.09 %. In terms of the runtime, FT increases from 600.22 seconds to 720.19 seconds, whereas the ConMU increases by only 65.55 seconds, from 207.13 to 272.68 seconds. Throughout the resulting charts, ConMU displays its superiority not only in the stability with which it controls the trilemma but also in a significant margin over the overall performance.
## 6. Conclusion In this paper, we identify the trilemma between model privacy, utility, and efficiency that exists in machine unlearning for deep neural networks. To address this issue and gain greater control over this trilemma, we present ConMU, a novel MU calibration framework. Specifically, ConMU introduces three control modules: the important data selection, the progressive Gaussian mechanism, and the unlearning proxy, each of which seeks to calibrate one portion of the MU trilemma. Extensive experiments and in-depth studies demonstrate the superiority of the ConMU across multiple benchmark datasets and a variety of unlearning metrics. Future work could focus on extending our control mechanism to other fields of study, such as the NLP and graph domains.
2301.11577
Defective acyclic colorings of planar graphs
This paper studies two variants of defective acyclic coloring of planar graphs. For a graph $G$ and a coloring $\varphi$ of $G$, a 2CC transversal is a subset $E'$ of $E(G)$ that intersects every 2-colored cycle. Let $k$ be a positive integer. We denote by $m_k(G)$ the minimum integer $m$ such that $G$ has a proper $k$-coloring which has a 2CC transversal of size $m$, and by $m'_k(G)$ the minimum size of a subset $E'$ of $E(G)$ such that $G-E'$ is acyclically $k$-colorable. We prove that for any $n$-vertex $3$-colorable planar graph $G$, $m_3(G) \le n - 3$ and for any planar graph $G$, $m_4(G) \le n - 5$ provided that $n \ge 5$. We show that these upper bounds are sharp: there are infinitely many planar graphs attaining these upper bounds. Moreover, the minimum 2CC transversal $E'$ can be chosen in such a way that $E'$ induces a forest. We also prove that for any planar graph $G$, $m'_3(G) \le (13n - 42) / 10$ and $m'_4(G) \le (3n - 12) / 5$.
On-Hei Solomon Lo, Ben Seamone, Xuding Zhu
2023-01-27T07:52:53Z
http://arxiv.org/abs/2301.11577v1
# Defective acyclic colorings of planar graphs ###### Abstract This paper studies two variants of defective acyclic coloring of planar graphs. For a graph \(G\) and a coloring \(\varphi\) of \(G\), a 2CC transversal is a subset \(E^{\prime}\) of \(E(G)\) that intersects every 2-colored cycle. Let \(k\) be a positive integer. We denote by \(m_{k}(G)\) the minimum integer \(m\) such that \(G\) has a proper \(k\)-coloring which has a 2CC transversal of size \(m\), and by \(m_{k}^{\prime}(G)\) the minimum size of a subset \(E^{\prime}\) of \(E(G)\) such that \(G-E^{\prime}\) is acyclic \(k\)-colorable. We prove that for any \(n\)-vertex 3-colorable planar graph \(G\), \(m_{3}(G)\leq n-3\) and for any planar graph \(G\), \(m_{4}(G)\leq n-5\) provided that \(n\geq 5\). We show that these upper bounds are sharp: there are infinitely many planar graphs attaining these upper bounds. Moreover, the minimum 2CC transversal \(E^{\prime}\) can be chosen in such a way that \(E^{\prime}\) induces a forest. We also prove that for any planar graph \(G\), \(m_{3}^{\prime}(G)\leq(13n-42)/10\) and \(m_{4}^{\prime}(G)\leq(3n-12)/5\). ## 1 Introduction An _acyclic \(k\)-coloring_ of a graph \(G\) is a proper \(k\)-coloring of \(G\) with no 2-colored cycles. Confirming a conjecture of Grunbaum [3], Borodin [1] proved that every planar graph has an acyclic 5-coloring. This celebrated result is best possible as there are planar graphs that are not acyclic 4-colorable (e.g. the octahedron). Acyclic coloring has been studied extensively for several decades and applied to solve other problems on graph coloring and partitioning. We refer to [2] for a comprehensive survey on this subject. This paper studies defective acyclic \(k\)-coloring of planar graphs mainly for \(k=3,4\). In other words, we study \(k\)-colorings of planar graphs for which the condition of being an acyclic coloring is not completely satisfied, however, we want to limit the violation of the acyclicity rules. We consider two variants of defective acyclic coloring. **Definition 1**.: Given a graph \(G\) and a proper coloring \(\varphi\) of \(G\), a \(2\)_-colored cycle transversal_ (2CC transversal) with respect to \(\varphi\) is a subset \(E^{\prime}\) of \(E(G)\) that intersects all 2-colored cycles. In other words, \(G-E^{\prime}\) contains no 2-colored cycles. **Definition 2**.: Let \(G\) be a graph and \(k\) be a positive integer. We define two parameters \(m_{k}(G)\) and \(m_{k}^{\prime}(G)\) as follows: * \(m_{k}(G):=\min_{E^{\prime}\subseteq E(G)}\{|E^{\prime}|:E^{\prime}\text{ is a 2CC transversal with respect to a proper $k$-coloring}\}\). * \(m_{k}^{\prime}(G):=\min_{E^{\prime}\subseteq E(G)}\{|E^{\prime}|:G-E^{\prime} \text{ has an acyclic $k$-coloring}\}\). Note that \(m_{k}(G)=m^{\prime}_{k}(G)=0\) if and only if \(G\) is acyclic \(k\)-colorable. If \(G\) has no proper \(k\)-coloring, then \(m_{k}(G)\) is not defined. In this case, we let \(m_{k}(G):=\infty\). It follows from the definition that for any graph \(G\) and integer \(k\), \(m_{k}(G)\geq m^{\prime}_{k}(G)\). We are interested in the case that \(G\) is a planar graph and \(k=3,4\) as Borodin's theorem asserts that \(m_{5}(G)=0\). To obtain an upper bound for \(m_{k}(G)\), we need to construct a proper \(k\)-coloring \(\varphi\) of \(G\) and find a 2CC transerval \(E^{\prime}\). One immediate difficulty is that, for \(k=4\), the existence of a proper 4-coloring of a planar graph follows from the Four Color Theorem. 
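As a concrete illustration of these two parameters (a worked example added here for orientation; it is consistent with the results proved below), consider the octahedron \(O=K_{2,2,2}\), which has \(n=6\) vertices and a proper \(3\)-coloring \(\varphi\) assigning one color to each antipodal pair. For any two distinct colors \(i,j\), the subgraph \(O_{ij}\) induced by the two color classes is a \(4\)-cycle, and these three \(2\)-colored cycles are pairwise edge-disjoint, so any 2CC transversal with respect to \(\varphi\) must contain at least one edge from each of them; conversely, deleting one edge from each cycle destroys all \(2\)-colored cycles. Hence \[m_{3}(O)=3=n-3,\] which matches, and attains, the bound \(m_{3}(G)\leq n-3\) established below for \(3\)-colorable planar graphs. Since \(m^{\prime}_{3}(O)\leq m_{3}(O)\) by definition, the same three edges also witness \(m^{\prime}_{3}(O)\leq 3\).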
For \(k=3\), it is NP-complete to decide whether a planar graph \(G\) is 3-colorable, and hence there is no easy way to construct a proper 3-coloring of \(G\). Fortunately, it turns out that tight upper bounds for \(m_{4}(G)\) and \(m_{3}(G)\) for the whole family of planar graphs and the whole family of 3-colorable planar graphs do not depend on a particular proper coloring of \(G\). For any proper coloring \(\varphi\) of a graph \(G\), define \[m(G,\varphi):=\min_{E^{\prime}\subseteq E(G)}\{|E^{\prime}|:E^{\prime}\text{ is a 2CC transerval with respect to }\varphi\}.\] We prove in Section 3 that for any planar graph \(G\) on \(n\) vertices and any proper coloring \(\varphi\) of \(G\), \(m(G,\varphi)\leq n-|\varphi(V(G))|\), where \(|\varphi(V(G))|\) denotes the number of colors used in \(\varphi\). To this end, we study the case when \(G\) is a plane triangulation in Section 2. Moreover, we show that if \(n\geq 5\), then there is a 4-coloring \(\varphi\) of \(G\) with \(m(G,\varphi)\leq n-5\). We apply these results to prove that for every planar graph \(G\), \(m_{4}(G)\leq n-5\) provided that \(n\geq 5\), and \(m_{3}(G)\leq n-3\) provided that \(G\) is 3-colorable. These two bounds are tight as there are infinitely many 3-colorable planar graphs \(G\) with \(m_{3}(G)=n-3\) and infinitely many planar graphs \(G\) with \(m_{4}(G)=n-5\). Besides, we show in Section 3 that for any proper coloring \(\varphi\) of a planar graph \(G\), we can find a 2CC transerval \(E^{\prime}\) with \(|E^{\prime}|=m(G,\varphi)\) that induces a forest. In Section 4 we study the parameter \(m^{\prime}_{k}(G)\). We show that \(m^{\prime}_{3}(G)\leq(13n-42)/10\) and \(m^{\prime}_{4}(G)\leq(3n-12)/5\). We shall mention an application of our results on acyclic colorings of subdivisions. For a graph \(G\) and a positive integer \(k\), define \(m^{\prime\prime}_{k}(G)\) to be the minimum size of an edge set \(E^{\prime}\subseteq E(G)\) such that the graph obtained from \(G\) by subdividing each edge in \(E^{\prime}\) by one vertex is acyclically \(k\)-colorable. It is easy to observe that \(m_{k}(G)\geq m^{\prime\prime}_{k}(G)\geq m^{\prime}_{k}(G)\). It was shown in [4] that for any \(n\)-vertex planar graph \(G\), \(m^{\prime\prime}_{4}(G)\leq n-3\). Our upper bound for \(m_{4}(G)\) immediately improves it to \(m^{\prime\prime}_{4}(G)\leq n-5\) for \(n\geq 5\). All graphs considered in this paper are finite and simple. We denote by \(V(G)\) and \(E(G)\) the vertex set and the edge set of \(G\), respectively. For \(v\in V(G)\), denote by \(N_{G}(v)\) the set of vertices adjacent to \(v\) and by \(d_{G}(v)\) the degree of \(v\). For a positive integer \(k\), denote \([k]:=\{1,\ldots,k\}\). A _\(k\)-coloring_\(\varphi\) of \(G\) is a function which assigns a color \(\varphi(v)\in[k]\) to each vertex \(v\in V(G)\). We say a coloring \(\varphi\) is _proper_ if \(\varphi(u)\neq\varphi(v)\) for any \(uv\in E(G)\). In fact, we always consider proper colorings unless specified otherwise. Given a \(k\)-coloring \(\varphi\) of \(G\), we define the color classes by \(\varphi^{-1}(i):=\{v\in V(G):\varphi(v)=i\}\) for any \(i\in[k]\). For any distinct \(i,j\in[k]\), define \(G_{ij}\) to be the subgraph of \(G\) induced by \(\varphi^{-1}(i)\cup\varphi^{-1}(j)\). ## 2 Upper bounds for \(m(G,\varphi)\) In this section we prove upper bounds on the parameter \(m(G,\varphi)\) for planar graphs. We first present several lemmas for plane triangulations. **Definition 3**.: Let \(G\) be a plane triangulation on at least 4 vertices. 
Denote by \(\mathcal{E}_{G}\) the set of separating triangles of \(G\), and by \(\mathcal{V}_{G}\) the set of maximal connected subgraphs of \(G\) without separating triangles. The graph \(\mathcal{T}_{G}\) is defined to be the graph on \(\mathcal{V}_{G}\) with edge set \(\mathcal{E}_{G}\) such that \(G_{1},G_{2}\in\mathcal{V}_{G}\) are joined by \(T\in\mathcal{E}_{G}\) if and only if both \(G_{1}\) and \(G_{2}\) contain \(T\). It is easy to see that \(\mathcal{V}_{G}\) is a family of \(4\)-connected plane triangulations and \(\mathcal{T}_{G}\) is a tree. Let \(\mathcal{V}_{G}:=\{G_{1},\dots,G_{t}\}\) and \(\mathcal{E}_{G}:=\{T_{1},\dots,T_{t-1}\}\). The graph \(G\) can be retrieved from the vertex-disjoint union of \(G_{1},\dots,G_{t}\) by identifying the copies of triangle \(T\) in \(G_{i},G_{j}\) for each \(T=G_{i}G_{j}\in\mathcal{E}_{G}\). Hence \(\sum_{i\in[t]}|V(G_{i})|=|V(G)|+3(t-1)\). **Lemma 4**.: _Let \(G\) be a graph and \(\varphi\) be a proper coloring of \(G\). If \(A\) is an edge set of \(G\) such that \(A\cap E(G_{ij})\) is an acyclic edge set for any distinct \(i,j\in[k]\), then there exists \(E^{\prime}\subseteq E(G)\setminus A\) satisfying that \(|E^{\prime}|=m(G,\varphi)\) and \(\varphi\) is an acyclic coloring of \(G-E^{\prime}\)._ Proof.: Let \(E^{\prime}\subseteq E(G)\) be such that \(|E^{\prime}|=m(G,\varphi)\), \(\varphi\) is an acyclic coloring of \(G-E^{\prime}\) and, subject to this, \(|E^{\prime}\cap A|\) is minimum. Suppose there exists \(uv\in E^{\prime}\cap A\). There is precisely one cycle \(C\) in \(G_{\varphi(u)\varphi(v)}-(E^{\prime}-uv)\). As \(A\cap E(G_{\varphi(u)\varphi(v)})\) is acyclic, there exists \(e^{\prime}\in E(C)\setminus A\). Then \(G_{\varphi(u)\varphi(v)}-(E^{\prime}-uv+e^{\prime})\) is acyclic, \(|E^{\prime}-uv+e^{\prime}|=|E^{\prime}|=m(G,\varphi)\) and \(|(E^{\prime}-uv+e^{\prime})\cap A|<|E^{\prime}\cap A|\), contradicting our choice of \(E^{\prime}\). Hence \(E^{\prime}\subseteq E(G)\setminus A\) as desired. **Lemma 5**.: _Let \(G\) be a plane graph, \(T\) be a separating triangle of \(G\) and \(\varphi\) be a proper coloring of \(G\). Let \(A_{1}\) and \(A_{2}\) be the components of \(G-T\), and for \(i\in[2]\), \(G^{i}\) be the subgraph of \(G\) induced by \(V(A_{i})\cup V(T)\). Then \(m(G,\varphi)=m(G^{1},\varphi^{1})+m(G^{2},\varphi^{2})\), where \(\varphi^{i}\) denotes the restriction of \(\varphi\) on \(V(G^{i})\)._ Proof.: Without loss of generality, we let \(V(T)=\{v_{1},v_{2},v_{3}\}\) with \(\varphi(v_{i})=i\) for \(i\in[3]\). By Lemma 4, there exists \(E^{\prime}\subseteq E(G)\setminus E(T)\) such that \(|E^{\prime}|=m(G,\varphi)\) and \(\varphi\) is an acyclic coloring of \(G-E^{\prime}\). As \(G^{i}-(E^{\prime}\cap E(G^{i}))\) is acyclically colored by \(\varphi^{i}\) (\(i\in[2]\)), we have \(m(G,\varphi)=|E^{\prime}|=|E^{\prime}\cap E(G^{1})|+|E^{\prime}\cap E(G^{2})| \geq m(G^{1},\varphi^{1})+m(G^{2},\varphi^{2})\). Similarly, by Lemma 4, let \(E^{\prime}_{i}\subseteq E(G^{i})\setminus E(T)\) be such that \(|E^{\prime}_{i}|=m(G^{i},\varphi^{i})\) and \(G^{i}-E^{\prime}_{i}\) is acyclically colored by \(\varphi_{i}\). Let \(E^{\prime}:=E^{\prime}_{1}\cup E^{\prime}_{2}\). Observe that if there is a cycle \(C\) which is colored by only two colors in \(G-E^{\prime}\), then \(C\) must contain two vertices of \(T\), say \(v_{1},v_{2}\), and \(C+v_{1}v_{2}\) contains some cycle in \(G^{1}-E^{\prime}_{1}\) or \(G^{2}-E^{\prime}_{2}\) which uses only two colors as well, a contradiction. 
Hence \(G-E^{\prime}\) is acyclically colored and \(m(G,\varphi)\leq|E^{\prime}|=|E^{\prime}_{1}|+|E^{\prime}_{2}|=m(G^{1}, \varphi^{1})+m(G^{2},\varphi^{2})\). **Lemma 6**.: _Let \(G\) be a plane triangulaion on at least \(4\) vertices and \(\varphi\) be a proper coloring of \(G\). Let \(\mathcal{V}_{G}:=\{G_{1},\dots,G_{t}\}\). We have \(m(G,\varphi)=\sum_{i\in[t]}m(G_{i},\varphi_{i})\), where \(\varphi_{i}\) denotes the restriction of \(\varphi\) on \(V(G_{i})\)._ Proof.: We prove by induction on \(|\mathcal{V}_{G}|\). It trivially holds when \(|\mathcal{V}_{G}|=1\). Suppose \(|\mathcal{V}_{G}|>1\). Let \(T\in\mathcal{E}_{G}\), \(A_{1}\) and \(A_{2}\) be the components of \(G-T\), and for \(i\in[2]\), \(G^{i}\) be the subgraph of \(G\) induced by \(V(A_{i})\cup V(T)\). We may assume \(G_{1},\dots,G_{t^{\prime}}\subseteq G^{1}\) and \(G_{t^{\prime}+1},\dots,G_{t}\subseteq G^{2}\) for some \(1\leq t^{\prime}<t\). Then, by Lemma 5 and the induction hypothesis, \(m(G,\varphi)=m(G^{1},\varphi^{1})+m(G^{2},\varphi^{2})=\sum_{i\in[t^{\prime} ]}m(G_{i},\varphi_{i})+\sum_{i\in[t]\setminus[t^{\prime}]}m(G_{i},\varphi_{i})= \sum_{i\in[t]}m(G_{i},\varphi_{i})\). **Lemma 7**.: _Let \(G\) be a \(3\)-colorable plane triangulation on \(n\) vertices and \(\varphi\) be the unique proper \(3\)-coloring of \(G\). For any distinct \(i,j\in[3]\), \(G_{ij}\) is connected. Moreover, if \(n>3\), \(G_{ij}\) is \(2\)-connected._ Proof.: We prove by induction on \(n\). The triangulations of order at most \(6\) are listed in Figure 1. Among these graphs, only the triangle and the octahedron are \(3\)-colorable. It is not hard to verify that the claims hold for these two graphs. From now on we assume that \(n>6\). As \(G\) is a \(3\)-colorable triangulation, every vertex of \(G\) has an even degree, and hence there exists \(v\in V(G)\) with \(d_{G}(v)=4\). Let \(v_{1}v_{2}v_{3}v_{4}v_{1}\) be the cycle induced by \(N_{G}(v)\). We have \(\varphi(v_{i})=\varphi(v_{i+2})\) for each \(i\in[2]\). Suppose there exists \(i\in[2]\) such that \(v_{i}\) and \(v_{i+2}\) have no common neighbor other than \(v,v_{i+1},v_{i+3}\), where \(v_{5}:=v_{1}\). We contract \(v_{i}vv_{i+2}\) to obtain \(G^{\prime}\) and call the new vertex \(v^{\prime}\). Let \(\varphi^{\prime}:V(G^{\prime})\to[3]\) be such that \(\varphi^{\prime}(v^{\prime})=\varphi(v_{i})\) and \(\varphi^{\prime}(u)=\varphi(u)\) for \(u\in V(G^{\prime})\setminus\{v^{\prime}\}\). It is clear that \(\varphi^{\prime}\) is the unique proper \(3\)-coloring of the triangulation \(G^{\prime}\). By the induction hypothesis, \(G^{\prime}_{ij}\) is \(2\)-connected for any distinct \(i,j\in[3]\). Then, one can easily prove by the construction that \(G_{ij}\) is \(2\)-connected for any distinct \(i,j\in[3]\). Suppose for every \(i\in[2]\), \(v_{i}\) and \(v_{i+2}\) have some common neighbor other than \(v,v_{i+1},v_{i+3}\). Since \(G\) is not the octahedron, it has some separating triangle \(T\). Let \(A_{1},A_{2}\) be the components of \(G-T\). We consider the subgraphs \(G^{i}\) of \(G\) induced by \(V(A_{i})\cup V(T)\) (\(i\in[2]\)). Let \(\varphi_{i}\) be restriction of \(\varphi\) on \(V(G^{i})\). As \(|V(G^{i})|>3\), it follows from the induction hypothesis that \(G^{i}_{jk}\) is \(2\)-connected for any distinct \(j,k\in[3]\) (\(i\in[2]\)), from which it immediately follows that \(G_{jk}\) is \(2\)-connected for any distinct \(j,k\in[3]\). Let \(G\) be a graph with a proper \(k\)-coloring \(\varphi\). Denote by \(c_{ij}\) the number of connected components of \(G_{ij}\). 
The number of edges we need to remove from \(G_{ij}\) to make \(\varphi\) acyclic is \(|E(G_{ij})|-|V(G_{ij})|+c_{ij}\). As \(E(G_{ij})\) are edge-disjoint for distinct \(i,j\), and each vertex \(v\) of \(G\) is contained in \(k-1\) subgraphs \(G_{ij}\), we know that \[m(G,\phi)=\sum_{1\leq i<j\leq k}(|E(G_{ij})|-|V(G_{ij})|+c_{ij})=|E(G)|-(k-1)| V(G)|+\sum_{1\leq i<j\leq k}c_{ij}.\] We obtain the following result by this observation. **Theorem 8**.: _Assume \(G\) is a \(3\)-colorable plane triangulation on \(n\) vertices and \(\varphi\) is the unique proper \(3\)-coloring of \(G\). Then \(m(G,\varphi)=n-3\). For \(v\in V(G)\), let \(\varphi_{v}\) be the \(4\)-coloring of \(G\) defined as \(\varphi_{v}(v)=4\) and \(\varphi_{v}(u)=\varphi(u)\) for all \(u\in V(G)\setminus\{v\}\). If \(n>3\), we have \(m(G,\varphi_{v})\leq n-5\)._ Proof.: By Lemma 7, \(G_{ij}\) is connected for any distinct \(i,j\in[3]\). Hence \[m(G,\varphi)=\sum_{1\leq i<j\leq 3}(|E(G_{ij})|-|V(G_{ij})|+1)=|E(G)|-2|V(G)|+3= n-3.\] For the second statement, we fix \(v\in V(G)\) and focus on the coloring \(\varphi_{v}\). Without loss of generality, assume \(\varphi(v)=3\). By Lemma 7, \(G_{12}\) (with respect to the coloring \(\varphi_{v}\)) is \(2\)-connected. Moreover, for \(i\in[2]\), the subgraph induced by \(\varphi_{v}^{-1}(i)\cup\varphi_{v}^{-1}(3)\cup\{v\}=\varphi^{-1}(i)\cup \varphi^{-1}(3)\) is \(2\)-connected and hence \(G_{i3}\) (with respect to the coloring \(\varphi_{v}\)) is connected. It is also obvious that \(G_{i4}\) is a forest for every \(i\in[3]\). As \(d_{G}(v)\geq 4\), we have that \[m(G,\varphi)=\sum_{1\leq i<j\leq 3}(|E(G_{ij})|-|V(G_{ij})|+1)=(|E(G)|-d_{G}(v ))-2(|V(G)|-1)+3\leq n-5.\qed\] We are now ready to prove the main result of this section. **Theorem 9**.: _Assume \(G\) is a plane triangulation on \(n\) vertices and \(\varphi\) is a proper coloring of \(G\). Let \(k:=|\varphi(V(G))|\). Then \(m(G,\varphi)\leq n-k\). If, in addition, \(k=4\), \(n\geq 5\) and \(G\) is \(4\)-connected, then \(m(G,\varphi)\leq n-5\)._ Figure 1: The triangulations of order at most \(6\). Proof.: We prove both statements by induction on \(n\). It is easy to check that they hold for \(n\leq\max\{6,k\}\), thus we assume \(n>\max\{6,k\}\). We first consider, for the first statement, that \(G\) is not \(4\)-connected, i.e. \(G\) has some separating triangle \(T\). Let \(A_{1},A_{2}\) be the components of \(G-T\). Let \(G_{i}\) be the subgraphs of \(G\) induced by \(V(A_{i})\cup V(T)\) (\(i\in[2]\)). Denote by \(\varphi_{i}\) the restriction of \(\varphi\) on \(V(G_{i})\). Write \(n_{i}:=|V(G_{i})|\) and \(k_{i}:=|\varphi_{i}(G_{i})|\). Note that \(n_{1}+n_{2}=n+3\) and \(k_{1}+k_{2}\geq k+3\). By the induction hypothesis and Lemma 4, for each \(i\in[2]\), there exists \(E^{\prime}_{i}\subseteq E(G_{i})\setminus E(T)\) such that \(|E^{\prime}_{i}|\leq n_{i}-k_{i}\) and \(G_{i}-E^{\prime}_{i}\) is acyclically colored by \(\varphi_{i}\). Let \(E^{\prime}:=E^{\prime}_{1}\cup E^{\prime}_{2}\). It is easy to prove that \(G-E^{\prime}\) is acyclically colored by \(\varphi\) and \(|E^{\prime}|=|E^{\prime}_{1}|+|E^{\prime}_{2}|\leq(n_{1}-k_{1})+(n_{2}-k_{2}) \leq n-k\). Henceforth, we assume that \(G\) has no separating triangle and thus \(\delta(G)=4,5\). Fix \(v\in V(G)\) such that \(d_{G}(v)=\delta(G)\). Depending on the value of \(\delta(G)\), we consider two cases. **Case 1:**\(d_{G}(v)=\delta(G)=4\). Let \(v_{1}v_{2}v_{3}v_{4}v_{1}\) be the cycle induced by \(N_{G}(v)\). 
Since \(n>6\) and \(G\) has no separating triangle, we can assume that \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2},v_{4}\). If \(\varphi(v_{1})\neq\varphi(v_{3})\), we obtain \(G^{\prime}\) from \(G\) by deleting \(v\) and adding the edge \(v_{1}v_{3}\). Let \(\varphi^{\prime}\) be the restriction of \(\varphi\) on \(V(G^{\prime})\). Denote \(n^{\prime}:=|V(G^{\prime})|\) and \(k^{\prime}:=|\varphi^{\prime}(V(G^{\prime}))|\). Note that \(G^{\prime}\) is \(4\)-connected, \(n^{\prime}=n-1\geq 6\) and \(k^{\prime}=k\) or \(k-1\). Moreover, if \(k^{\prime}=k-1\), then \(v\) is the only vertex that is colored by \(\varphi(v)\) and hence no \(2\)-colored cycle in \(G\) contains \(v\). By the induction hypothesis, there exists \(E^{\prime\prime}\subseteq E(G^{\prime})\) such that \(G^{\prime}-E^{\prime\prime}\) is acyclically colored by \(\varphi^{\prime}\) and \(|E^{\prime\prime}|=m(G^{\prime},\varphi^{\prime})\leq n^{\prime}-k^{\prime}\). Define \(S:=\{vv_{2}\}\) if \(k^{\prime}=k\), and \(S:=\emptyset\) if \(k^{\prime}=k-1\). Set \(E^{\prime}:=(E^{\prime\prime}\setminus\{v_{1}v_{3}\})\cup S\). One can readily show that \(G-E^{\prime}\) is acyclically colored by \(\varphi\) and \(|E^{\prime}|\leq n-k\). If \(k=k^{\prime}=4\), we additionally require from the induction hypothesis that \(|E^{\prime\prime}|\leq n^{\prime}-5\), which yields in this case that \(|E^{\prime}|\leq n-5\). If \(k=4\) and \(k^{\prime}=k-1\), then, suppose \(\varphi(V(G))=[4]\) and \(\varphi(v)=4\), one can deduce from Lemma 7 that \(G_{ij}\) are connected for all distinct \(i,j\in[3]\) and hence prove in a similar way as in the proof of Theorem 8 that \(m(G,\varphi)=n-5\). Assume \(\varphi(v_{1})=\varphi(v_{3})\). First we prove that \(m(G,\varphi)\leq n-|\varphi(V(G))|\). Let \(G^{\prime}\) be from \(G\) by contracting \(v_{1}vv_{3}\) to a new vertex \(v^{\prime}\) and denote the coloring induced from \(\varphi\) by \(\varphi^{\prime}\) so that \(\varphi(v^{\prime})=\varphi(v_{1})\). Denote \(n^{\prime}:=|V(G^{\prime})|\) and \(k^{\prime}:=|\varphi^{\prime}(V(G^{\prime}))|\). We have \(n^{\prime}=n-2\geq 5\) and \(k^{\prime}=k\) or \(k-1\). By the induction hypothesis and Lemma 4, there exists \(E^{\prime\prime}\subseteq E(G^{\prime})\setminus\{v^{\prime}v_{2},v^{\prime}v_{ 4}\}\) such that \(G^{\prime}-E^{\prime\prime}\) is acyclically colored by \(\varphi^{\prime}\) and \(|E^{\prime\prime}|=m(G^{\prime},\varphi^{\prime})\leq n^{\prime}-k^{\prime}\). Note that any path joining \(v_{1},v_{3}\) in \(G-\{v,v_{2},v_{4}\}\) corresponds to a cycle containing \(v^{\prime}\) in \(G^{\prime}\) as \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2},v_{4}\). Define \(S:=\{vv_{2}\}\) if \(k^{\prime}=k\), and \(S:=\emptyset\) if \(k^{\prime}=k-1\). Let \(E^{\prime}:=E^{\prime\prime}\cup\{v_{1}v_{2}\}\cup S\). It is clear that \(|E^{\prime}|\leq n-k\) and \(G-E^{\prime}\) is acyclically colored by \(\varphi\) as \(v_{1}v_{2}v_{3}v_{4}v_{1}\) is the only cycle that is possibly \(2\)-colored in \(G-E^{\prime\prime}-v\). It remains to show that if \(k=4\), then \(m(G,\varphi)\leq n-5\). If \(\varphi(v_{2})\neq\varphi(v_{4})\), we take \(E^{\prime}:=E^{\prime\prime}\) with \(|E^{\prime}|\leq n^{\prime}-4=n-6\) and it is easy to show that \(G-E^{\prime}\) is acyclically colored by \(\varphi\). So we assume that \(\varphi(v_{2})=\varphi(v_{4})\). If \(k^{\prime}=3\), then it follows from Theorem 8 that \(m(G,\varphi)\leq n-5\). So we assume \(k^{\prime}=4\); in particular, \(|E^{\prime\prime}|\leq n^{\prime}-4\). 
If \(|E^{\prime\prime}|=m(G^{\prime},\varphi^{\prime})\leq n^{\prime}-5\), we take \(E^{\prime}:=E^{\prime\prime}\cup\{vv_{2},v_{1}v_{2}\}\), so \(|E^{\prime}|=|E^{\prime\prime}|+2\leq n-5\) and \(G-E^{\prime}\) is acyclically colored by \(\varphi\). This yields that \(m(G,\varphi)\leq|E^{\prime}|\leq n-5\). Assume \(m(G^{\prime},\varphi^{\prime})=|V(G^{\prime})|-4\). As \(|V(G^{\prime})|>4\), by the induction hypothesis, \(G^{\prime}\) is not \(4\)-connected, and hence contains separating triangles. As \(G\) is \(4\)-connected, it follows that each separating triangle of \(G^{\prime}\) contains \(v^{\prime}\) and separates \(v_{2}\) and \(v_{4}\); an example is given in Figure 2. This implies that \(\mathcal{T}_{G^{\prime}}\) is a path \(G^{\prime}_{1}\ldots G^{\prime}_{t}\) (\(t\geq 2\)), with end-vertex \(G^{\prime}_{1}\) containing \(v_{2}\), and the other end-vertex \(G^{\prime}_{t}\) containing \(v_{4}\). Denote by \(\varphi^{\prime}_{i}\) the restriction of \(\varphi^{\prime}\) on \(V(G^{\prime}_{i})\). By Lemma 6 and Theorem 8, precisely one graph \(G^{\prime}_{i}\) from \(\mathcal{V}_{G^{\prime}}\) has \(|\varphi^{\prime}_{i}(V(G^{\prime}_{i}))|=4\), \(m(G^{\prime}_{i},\varphi_{i})=|V(G^{\prime}_{i})|-4\) and \(|\varphi^{\prime}_{j}(V(G_{j}))|=3\) for all \(j\in[t]\setminus\{i\}\). By the induction hypothesis, we know that \(|V(G^{\prime}_{i})|\leq 4\) and hence \(G^{\prime}_{i}\) is isomorphic to \(K_{4}\). Note that \(G^{\prime}_{i}\) is not a leaf of \(\mathcal{T}_{G^{\prime}}\), for otherwise, say \(i=1\), then \(|\varphi^{\prime}(V(G^{\prime})\setminus\{v_{2}\}|=3\). This implies that \(\varphi(v_{2})\neq\varphi(v_{4})\), contradicting the above assumption. Thus \(|\varphi^{\prime}(V(G^{\prime}_{j}))|=3\) and \(\{\varphi^{\prime}(v^{\prime}),\varphi^{\prime}(v_{2})\}\subset\varphi^{ \prime}(V(G^{\prime}_{j}))\) for \(j\in\{1,t\}\). As \(G^{\prime}_{i}\) is an internal vertex of \(\mathcal{T}_{G^{\prime}}\), we have \(\varphi^{\prime}(V(G^{\prime}_{1}))\neq\varphi^{\prime}(V(G^{\prime}_{t}))\). Without loss of generality, we may assume that \(\varphi^{\prime}(V(G^{\prime}_{1}))=[4]\setminus\varphi(v)\) and \(\varphi^{\prime}(V(G^{\prime}_{t}))=\{\varphi(v),\varphi^{\prime}(v^{\prime}),\varphi^{\prime}(v_{2})\}\). Let \(T\) be the separating triangle of \(G^{\prime}\) that is contained in \(G^{\prime}_{1}\). Write \(V(T):=\{v^{\prime},u,w\}\) such that \(\varphi^{\prime}(u)=\varphi^{\prime}(v_{2})\). Note that \(\varphi^{\prime}(w)\neq\varphi(v)\). Let \(C\) be the cycle induced by the neighbors of \(u\) in \(G^{\prime}_{1}\) (see Figure 2(b) for an example) and \(e_{C}\) be an arbitrary edge of \(C\). By Lemma 4, we may require \(E^{\prime\prime}\subseteq E(G^{\prime})\setminus(\{v^{\prime}v_{2},v^{\prime }v_{4}\}\cup(E(C)\setminus\{e_{C}\}))\) as \(\varphi^{\prime}(v_{2})=\varphi^{\prime}(v_{4})=\varphi^{\prime}(u)\notin \varphi^{\prime}(V(C))\), and hence \(e_{C}\in E^{\prime\prime}\). Let \(E^{\prime}:=(E^{\prime\prime}\setminus\{e_{C}\})\cup\{vv_{2},v_{1}v_{2}\}\). We have \(|E^{\prime}|=|E^{\prime\prime}|+1\leq n-5\). It remains to show that \(G-E^{\prime}\) is acyclically colored by \(\varphi\). Again, it is easy to show that \(G-E^{\prime}-e_{C}\) is acyclically colored by \(\varphi\). Hence, any cycle \(K\) which uses only two colors in \(G-E^{\prime}\) contains \(e_{C}\) and the two colors used in \(K\) are \(\varphi^{\prime}(w),\varphi^{\prime}(v^{\prime})\). So \(K\) does not contain \(v,v_{2},v_{4}\). 
If \(\{v_{1},v_{3}\}\subset V(K)\), then after contracting the path \(v_{1}vv_{3}\), \(K\) becomes the union of two edge-disjoint cycles in \((G^{\prime}_{\varphi^{\prime}(v^{\prime})\varphi^{\prime}(w)}-E^{\prime\prime })+e_{C}\) (as \(v_{1},v_{3}\) have no other common neighbors than \(v,v_{2},v_{4}\)), a contradiction. If \(|\{v_{1},v_{3}\}\cap V(K)|\leq 1\), then \(K\) corresponds to \(C\). Since \(C\) is a cycle separating \(v_{2}\) and \(v_{4}\) in \(G^{\prime}\), \(K\) is a cycle separating \(v_{2}\) and \(v_{4}\) in \(G\), which is however impossible since \(v_{2}vv_{4}\) is a path in \(G\) not intersecting \(K\). **Case 2:**\(d_{G}(v)=\delta(G)=5\). Let \(v_{1}v_{2}v_{3}v_{4}v_{5}v_{1}\) be the induced cycle on \(N_{G}(v)\). If \(|\varphi(N_{G}(v))|=3\), we may assume that \(\varphi(v_{1})=\varphi(v_{3})\) and \(\varphi(v_{2})=\varphi(v_{4})\). As \(G\) is \(4\)-connected and \(\delta(G)=5\), we may assume that \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2}\). Let \(G^{\prime}\) be obtained from \(G\) by contracting \(v_{1}vv_{3}\) to a new vertex \(v^{\prime}\). We do not distinguish edges from \(E(G^{\prime})\setminus\{v^{\prime}v_{2}\}\) from their corresponding edges in \(G\). Set \(\varphi^{\prime}(v^{\prime}):=\varphi(v_{1})\) and \(\varphi^{\prime}(u):=\varphi(u)\) for all \(u\in V(G^{\prime})\setminus\{v^{\prime}\}\). Denote \(n^{\prime}:=|V(G^{\prime})|\) and \(k^{\prime}:=|\varphi^{\prime}(V(G^{\prime}))|\). We have \(n^{\prime}=n-2\) and \(k^{\prime}=k\) or \(k-1\). By Lemma 4 and the induction hypothesis, there exists \(E^{\prime\prime}\subseteq E(G^{\prime})\setminus\{v^{\prime}v_{2}\}\) such that \(\varphi^{\prime}\) is an acyclic coloring of \(G^{\prime}-E^{\prime\prime}\) and \(|E^{\prime\prime}|=m(G^{\prime},\varphi^{\prime})\leq n^{\prime}-k^{\prime}\). Set \(S:=\{vv_{2}\}\) if \(k^{\prime}=k\) and \(S:=\emptyset\) if \(k^{\prime}=k-1\). Define \(E^{\prime}:=E^{\prime\prime}\cup S\). It is easy to show that \(|E^{\prime}|\leq n-k-1\) and \(\varphi\) is an acyclic coloring of \(G-E^{\prime}\). If \(|\varphi(N_{G}(v))|\geq 4\), we may assume that \(\varphi(v_{i})=i\) for each \(i\in[4]\). Obtain \(G^{\prime}\) from \(G\) by deleting \(v\) and adding edges \(v_{1}v_{3},v_{1}v_{4}\). Let \(\varphi^{\prime}\) be the restriction of \(\varphi\) on \(V(G)\setminus\{v\}\). Denote \(n^{\prime}:=|V(G^{\prime})|\) and \(k^{\prime}:=|\varphi^{\prime}(V(G^{\prime}))|\). We have \(n^{\prime}=n-1\) and \(k^{\prime}=k\) or \(k-1\). By Lemma 4 and the induction hypothesis, there exists \(E^{\prime\prime}\subseteq E(G^{\prime})\setminus\{v^{\prime}v_{3},v^{\prime}v_{4}\}\) such that \(\varphi^{\prime}\) is an acyclic coloring of \(G^{\prime}-E^{\prime\prime}\) and \(|E^{\prime\prime}|=m(G^{\prime},\varphi^{\prime})\leq n^{\prime}-k^{\prime}\). Set \(S:=\{vv_{5}\}\) if \(k^{\prime}=k\) and \(S:=\emptyset\) if \(k^{\prime}=k-1\). Define \(E^{\prime}:=E^{\prime\prime}\cup S\). It is easy to show that \(|E^{\prime}|\leq n-k\) and \(\varphi\) is an acyclic coloring of \(G-E^{\prime}\). We remark that in this case we have \(k>4\), thus we do not need to consider the second statement. The following corollary characterizes plane triangulations \(G\) and colorings \(\varphi\) that satisfy the eqaulities \(m(G,\varphi)=n-3\) and \(m(G,\varphi)=n-4\), respectively. **Corollary 10**.: _Let \(G\) be a plane triangulation on \(n\) vertices and \(\varphi\) be a coloring of \(G\). 
Let \(\mathcal{V}_{G}:=\{G_{1},\ldots,G_{t}\}\) and \(\varphi_{i}\) be the restriction of \(\varphi\) on \(V(G_{i})\) for \(i\in[t]\). We have that \(m(G,\varphi)=n-3\) if and only if \(|\varphi(V(G))|=3\); and \(m(G,\varphi)=n-4\) if and only if there exists \(i\in[t]\) such that \(G_{i}\) is isomorphic to \(K_{4}\) and \(|\varphi_{j}(V(G_{j}))|=3\) for all \(j\in[t]\setminus\{i\}\)._ ## 3 Acyclic 2CC transerval and upper bounds for \(m_{k}(g)\) Let \(G\) be a graph and \(\varphi\) a coloring of \(G\). We have shown upper bounds on \(m(G,\varphi)\) when \(G\) is a plane triangulation. In this section, we show that we can choose the 2CC transversal \(E^{\prime}\) so that it induces a forest as well as extend the results to general planar graphs. **Definition 11**.: Let \(G\) be a graph and \(U\subseteq V(G)\). An edge set \(E^{\prime}\subseteq E(G)\) is \(U\)_-acyclic_ if the graph induced by \(E^{\prime}\) is a forest and contains no path joining two distinct vertices of \(U\). With abuse of notation, we say an edge set is \(H\)-acyclic instead of \(V(H)\)-acyclic for any subgraph \(H\) of \(G\), and if \(H\) is a graph induced by a single edge \(e\), we write \(e\)-acyclic instead of \(H\)-acyclic. **Proposition 12**.: _Let \(G\) be a plane triangulation and \(\varphi\) be a proper coloring of \(G\). For any facial cycle \(F\) of \(G\), there exists an \(F\)-acyclic \(2\)CC transversal \(E_{F}\) with respect to \(\varphi\)._ Proof.: We prove by induction on \(|V(G)|\). We shall assume \(|V(G)|>\max\{6,|\varphi(V(G))|\}\) as the small cases can be readily verified. Suppose \(G\) has some separating triangle \(T\). Let \(A_{1}\) and \(A_{2}\) be the components of \(G-T\), and for \(i\in[2]\), \(G_{i}\) be the subgraph of \(G\) induced by \(V(A_{i})\cup V(T)\). Without loss of generality, assume that \(F\) is a facial cycle of \(G_{1}\). By the induction hypothesis, we have an \(F\)-acyclic \(2\)CC transversal \(E_{F}^{1}\subseteq E(G_{1})\) of \(G_{1}\) and a \(T\)-acyclic \(2\)CC transversal \(E_{T}^{2}\subseteq E(G_{2})\) of \(G_{2}\). It is easy to see that the edge set \(E_{F}:=E_{F}^{1}\cup E_{T}^{2}\) is an \(F\)-acyclic \(2\)CC transversal of \(G\). Henceforth, we assume that \(G\) has no separating triangle and thus \(\delta(G)\geq 4\). Fix \(v\in V(G)\setminus V(F)\) such that \(d_{G}(v)=\delta(G)\leq 5\). We consider two cases, depending on \(d_{G}(v)=4\) or \(5\). **Case 1:**\(d_{G}(v)=4\). Let \(v_{1}v_{2}v_{3}v_{4}v_{1}\) be the cycle induced by \(N_{G}(v)\). Since \(|V(G)|>6\) and \(G\) has no separating triangle, we can assume that \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2},v_{4}\). If \(\varphi(v_{1})\neq\varphi(v_{3})\), we obtain \(G^{\prime}\) from \(G\) by deleting \(v\) and adding the edge \(v_{1}v_{3}\), and color it with the coloring \(\varphi^{\prime}\) induced from \(\varphi\). Clearly, \(F\) remains a facial cycle of \(G^{\prime}\). By the induction hypothesis, there exists an \(F\)-acyclic \(2\)CC transversal \(E_{F}^{\prime}\subseteq E(G^{\prime})\) of \(G^{\prime}\). Set \(E_{F}:=(E_{F}^{\prime}\setminus\{v_{1}v_{3}\})\cup\{vv_{2}\}\). One can readily check that \(E_{F}\) is an \(F\)-acyclic \(2\)CC transversal of \(G\). If \(\varphi(v_{1})=\varphi(v_{3})\), obtain \(G^{\prime}\) from \(G\) by contracting \(v_{1}vv_{3}\) to a new vertex \(v^{\prime}\) and denote the coloring induced from \(\varphi\) by \(\varphi^{\prime}\) so that \(\varphi(v^{\prime})=\varphi(v_{1})\). 
Let \(E_{F}^{\prime}\subseteq E(G^{\prime})\) be an \(F\)-acyclic \(2\)CC transversal of \(G^{\prime}\). Recall that \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2},v_{4}\), and hence any path joining \(v_{1},v_{3}\) in \(G-\{v,v_{2},v_{4}\}\) corresponds to a cycle containing \(v^{\prime}\) in \(G^{\prime}\). We construct \(E_{F}\) as follows. * If \(E_{F}^{\prime}\cap\{v^{\prime}v_{2},v^{\prime}v_{4}\}=\emptyset\), then \(v_{1}v_{2}v_{3}v_{4}v_{1}\) is the only cycle in \(G-(E_{F}^{\prime}\cup\{vv_{2}\})\) that possibly uses only two colors. We claim that there exists \(j\in\{1,3\}\) such that \(E_{F}:=E_{F}^{\prime}\cup\{vv_{2},v_{j}v_{2}\}\) Figure 2: (a) A \(4\)-colored plane triangulation \(G\). (b) The plane triangulation \(G^{\prime}\) obtained from \(G\) by contracting the path \(v_{1}vv_{3}\). The cycle \(C\) consists of the thick edges. induces a forest not connecting any distinct vertices from \(V(F)\). Suppose it does not hold, then for each \(j\in\{1,3\}\), the graph induced by \(E^{\prime}_{F}\) in \(G\) contains some path joining \(v_{j}\) and \(v_{2}\), or contains two disjoint paths each joining one vertex from \(V(F)\) and one vertex of \(v_{j},v_{2}\). In any case, the graph induced by \(E^{\prime}_{F}\) in \(G^{\prime}\) contains some path joining two vertices from \(V(F)\) or some cycle, a contradiction. As \(G-E_{F}\) is acyclically colored by \(\varphi\), \(E_{F}\) is the desired edge set. * If \(E^{\prime}_{F}\cap\{v^{\prime}v_{2},v^{\prime}v_{4}\}=\{v^{\prime}v_{i}\}\) for some \(i\in\{2,4\}\), set \(E_{F}:=(E^{\prime}_{F}\setminus\{v^{\prime}v_{i}\})\cup\{vv_{2},v_{1}v_{i},v_ {3}v_{i}\}\). Similarly to the previous case, it can be shown that \(G-E_{F}\) is acyclically colored by \(\varphi\) and the subgraph induced by \(E_{F}\) has no cycle and no path joining distinct vertices from \(V(F)\). * If \(\{v^{\prime}v_{2},v^{\prime}v_{4}\}\subseteq E^{\prime}_{F}\), then there is a unique path \(P\) in \(G^{\prime}-E^{\prime}_{F}\) joining \(v^{\prime}\) and \(v_{2}\) using only colors \(\varphi(v_{1})\) and \(\varphi(v_{2})\). Therefore \(P\) can be viewed as a path in \(G-((E^{\prime}_{F}\setminus\{v^{\prime}v_{2},v^{\prime}v_{4}\})\cup E(v_{1}v_ {2}v_{3}v_{4}v_{1}))\) connecting \(v_{2}\) and \(v_{j}\) for some \(j\in\{1,3\}\). Since \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2},v_{4}\) and the neighbor of \(v^{\prime}\) in \(P\) is not \(v_{4}\), the index \(j\) is unique. Set \(E_{F}:=(E^{\prime}_{F}\setminus\{v^{\prime}v_{2},v^{\prime}v_{4}\})\cup\{vv_{ 2},v_{j}v_{2},v_{1}v_{4},v_{3}v_{4}\}\). Similarly to the previous cases, it is easy to show that \(E_{F}\) is \(F\)-acyclic. It is left to show that \(\varphi\) is an acyclic coloring of \(G-E_{F}\). Suppose to the contrary that there is some \(2\)-colored cycle \(C\) in \(G-E^{\prime}\). It is not hard to see that \(C\) contains \(v_{4-j}v_{2}\) but not \(v_{j}\). Then \(C-v_{4-j}v_{2}\) is a path in \(G^{\prime}-E^{\prime}_{F}\) connecting \(v^{\prime}\) and \(v_{2}\) yet different from \(P\), a contradiction. **Case 2: \(d_{G}(v)=5\)**. Let \(v_{1}v_{2}v_{3}v_{4}v_{5}v_{1}\) be the induced cycle on \(N_{G}(v)\). If \(|\varphi(N_{G}(v))|=3\), we may assume that \(\varphi(v_{1})=\varphi(v_{3})\) and \(\varphi(v_{2})=\varphi(v_{4})\). Suppose \(v_{1},v_{3}\) have a common neighbor \(u\) other than \(v,v_{2}\) and \(v_{2},v_{4}\) have a common neighbor \(u^{\prime}\) other than \(v,v_{3}\). Since \(G\) has no separating triangle, \(u=u^{\prime}\) and \(d_{G}(v_{2})=d_{G}(v_{3})=4\). 
If \(v_{2}\) or \(v_{3}\) is not incident to \(F\), we may revise our choice of \(v\) so that \(d_{G}(v)=4\). Otherwise, \(F\) is the cycle \(uv_{2}v_{3}u\) and since \(d_{G}(v)=5\), there exists some vertex \(w\in V(G)\setminus\{v,v_{1},v_{2},v_{3},v_{4},u\}\) such that \(d_{G}(w)\leq 5\); we may replace \(v\) by \(w\). Therefore, without loss of generality, we may assume that \(v_{1},v_{3}\) have no common neighbor other than \(v,v_{2}\). Obtain \(G^{\prime}\) from \(G\) by contracting \(v_{1}vv_{3}\) to a new vertex \(v^{\prime}\) and denote the coloring induced from \(\varphi\) by \(\varphi^{\prime}\) so that \(\varphi(v^{\prime})=\varphi(v_{1})\). It is clear that \(F\) remains a facial cycle of \(G^{\prime}\). Let \(E^{\prime}_{F}\subseteq E(G^{\prime})\) be an \(F\)-acyclic 2CC transversal of \(G^{\prime}\). We construct \(E_{F}\) as follows. * If \(v^{\prime}v_{2}\in E^{\prime}_{F}\), set \(E_{F}:=(E^{\prime}_{F}\setminus\{v^{\prime}v_{2}\})\cup\{vv_{2},v_{1}v_{2},v_ {2}v_{3}\}\). * If \(v^{\prime}v_{2}\notin E^{\prime}_{F}\), set \(E_{F}:=E^{\prime}_{F}\cup\{vv_{2}\}\). In both cases it is easy to show that \(E_{F}\) is an \(F\)-acyclic 2CC transversal of \(G\). If \(|\varphi(N_{G}(v))|>3\), we may assume that \(\varphi(v_{i})=i\) for each \(i\in[4]\). Let \(G^{\prime}\) be the graph obtained from \(G\) by deleting \(v\) and adding edges \(v_{1}v_{3},v_{1}v_{4}\). Let \(\varphi^{\prime}\) be the restriction of \(\varphi\) on \(V(G)\setminus\{v\}\). Let \(E^{\prime}_{F}\) be an \(F\)-acyclic 2CC transversal of \(G\). One can easily show that \(E_{F}:=(E^{\prime}_{F}\setminus\{v_{1}v_{3},v_{1}v_{4}\})\cup\{vv_{5}\}\) is an \(F\)-acyclic 2CC transversal of \(G\). We remark that the \(F\)-acyclic 2CC transversal \(E_{F}\) found in Proposition 12 induces a forest of at least \(|V(F)|=3\) components and hence has size at most \(|V(G)|-3\). In fact, an \(F\)-acyclic 2CC transversal of the optimal size \(m(G,\varphi)\) does exist due to the following observation. Note that for any edge set \(E^{\prime}\subseteq E(G)\), \(G-E^{\prime}\) is acyclically colored by a proper \(k\)-coloring \(\varphi\) of \(G\) if and only if \(E(G)\setminus E^{\prime}\) is an independent set of the direct sum of the graphic matroids of \(G_{ij}\) (\(i,j\in[k]\)). This yields the following corollary. **Corollary 13**.: _Let \(G\) be a plane triangulation, \(\varphi\) be a proper coloring of \(G\) and \(F\) be a facial cycle of \(G\). There exists an \(F\)-acyclic \(2\)CC transversal \(E^{\prime}\subseteq E(G)\) with \(|E^{\prime}|=m(G,\varphi)\)._ Next, we generalize the results to planar graphs. **Theorem 14**.: _Assume \(G\) is a planar graph on \(n\) vertices and \(\varphi\) is a proper coloring of \(G\) with \(|\varphi(V(G))|=k\). Let \(U\subseteq V(G)\) that induces a clique of size \(|U|\leq 3\). There exists a \(U\)-acyclic \(2\)CC transversal \(E_{U}\subseteq E(G)\) with \(|E_{U}|=m(G,\varphi)\leq n-k\)._ Proof.: We prove by induction on \(n\). It clearly holds when \(n\leq k\). From now on we consider \(n>k\). If \(G\) has some separator \(W\subset V(G)\) such that \(|W|\leq 3\) and \(W\) induces a clique, let \(A_{1}\) be a component of \(G-W\) and \(A_{2}\) the union of all other components. Denote by \(G_{i}\) the subgraph of \(G\) induced by \(V(A_{i})\cup W\) and by \(\varphi_{i}\) the restriction of \(\varphi\) on \(V(G_{i})\) (\(i\in[2]\)). Write \(n_{i}:=|V(G_{i})|\) and \(k_{i}:=|\varphi_{i}(V(G_{i})|\). We have \(n_{1}+n_{2}=n-|W|\) and \(k_{1}+k_{2}\geq k-|W|\). 
Without loss of generality, we require that \(U\subseteq V(G_{1})\). By the induction hypothesis, there exist a \(U\)-acyclic \(2\)CC transversal \(E^{\prime}_{U}\) of \(G_{1}\) with \(|E^{\prime}_{U}|\leq n_{1}-k_{1}\) and a \(W\)-acyclic \(2\)CC transversal \(E^{\prime}_{W}\) of \(G_{2}\) with \(|E^{\prime}_{W}|\leq n_{2}-k_{2}\). It is easy to show that \(E_{U}:=E^{\prime}_{U}\cup E^{\prime}_{W}\) is a \(U\)-acyclic \(2\)CC transversal with \(|E_{U}|\leq n-k\). We assume that \(G\) has no separator \(W\subset V(G)\) such that \(|W|\leq 3\) and \(W\) induces a clique. In particular, \(G\) is \(2\)-connected and every facial boundary of \(G\) is a cycle. We add to \(G\) as many edges as possible such that \(\varphi\) remains as a proper coloring and \(G\) remains as a plane graph. With abuse of notation, we call the new graph \(G\). It suffices to prove the statement for the new graph \(G\). If \(G\) is a triangulation, we apply Theorem 9 and Corollary 13 to conclude that \(G\) has some \(U\)-acyclic \(2\)CC transversal \(E_{U}\) with \(|E_{U}|=m(G,\varphi)\leq n-k\). If any facial cycle of \(G\) has a chord, then the end-vertices of the chord form a separator of \(G\), contradicting our assumption. Assume \(G\) is not a plane triangulation. As each facial cycle is an induced cycle, and any two non-adjacent vertices of a face are colored by the same color, there exists a facial cycle \(v_{1}v_{2}v_{3}v_{4}v_{1}\) in \(G\) such that \(\varphi(v_{1})=\varphi(v_{3})\) and \(\varphi(v_{2})=\varphi(v_{4})\). If \(v_{1},v_{3}\) have \(3\) common neighbors and \(v_{2},v_{4}\) have \(3\) common neighbors, then \(G\) must be isomorphic to the plane graph obtained from the octahedron by deleting one vertex since we assume that \(G\) has no separating triangle. One can easily verify that the statement holds for this graph. Thus, without loss of generality, we assume that \(v_{1},v_{3}\) have no common neighbor other than \(v_{2},v_{4}\). Let \(G^{\prime}\) be obtained from \(G\) by identifying \(v_{1}\) and \(v_{3}\) as a new vertex \(v^{\prime}\) and \(\varphi^{\prime}\) be the coloring of \(G^{\prime}\) induced from \(\varphi\). Denote \(n^{\prime}:=|V(G^{\prime})|\) and \(k^{\prime}:=|\varphi^{\prime}(V(G^{\prime}))|\). We have \(n^{\prime}=n-1\) and \(k^{\prime}=k\). Moreover, we can view \(U\) as a vertex set of \(G^{\prime}\) since \(U\) contains at most one of \(v_{1},v_{3}\). By the induction hypothesis, we have a \(U\)-acyclic \(2\)CC transversal \(E^{\prime}_{U}\) of \(G^{\prime}\) with \(|E^{\prime}_{U}|=m(G^{\prime},\varphi^{\prime})\leq n^{\prime}-k^{\prime}\). We construct \(E_{U}\) as follows. Since the approach is similar to that in the proof of Proposition 12, some details will be omitted. * If \(E^{\prime}_{U}\cap\{v^{\prime}v_{2},v^{\prime}v_{4}\}=\emptyset\), then there exists \(j\in\{1,3\}\) such that \(E_{U}:=E^{\prime}_{U}\cup\{v_{j}v_{2}\}\) is \(U\)-acyclic. * If \(E^{\prime}_{U}\cap\{v^{\prime}v_{2},v^{\prime}v_{4}\}=\{v^{\prime}v_{i}\}\) for some \(i\in\{2,4\}\), set \(E_{U}:=(E^{\prime}_{U}\setminus\{v^{\prime}v_{i}\})\cup\{v_{1}v_{i},v_{3}v_{i}\}\). * If \(\{v^{\prime}v_{2},v^{\prime}v_{4}\}\subseteq E^{\prime}_{U}\), then there is a unique path \(P\) in \(G^{\prime}-E^{\prime}_{U}\) joining \(v^{\prime}\) and \(v_{2}\) using only colors \(\varphi(v_{1})\) and \(\varphi(v_{2})\). 
We can view \(P\) as a path in \(G-((E^{\prime}_{F}\setminus\{v^{\prime}v_{2},v^{\prime}v_{4}\})\cup E(v_{1}v_{2 }v_{3}v_{4}v_{1}))\) connecting \(v_{2}\) and \(v_{j}\) for some unique \(j\in\{1,3\}\). Set \(E_{U}:=(E^{\prime}_{U}\setminus\{v^{\prime}v_{2},v^{\prime}v_{4}\})\cup\{v_{j}v_ {2},v_{1}v_{4},v_{3}v_{4}\}\). It is not hard to verify that the edge set \(E_{U}\) constructed above is a \(U\)-acyclic \(2\)CC transversal with \(|E_{U}|\leq n-k\). This completes the proof. **Corollary 15**.: _Let \(G\) be a planar graph on \(n\) vertices. If \(n\geq 5\), then \(m_{4}(G)\leq n-5\). If \(G\) is \(3\)-colorable, then \(m_{3}(G)\leq n-3\)._ **Theorem 16**.: _There are infinitely many \(4\)-connected planar graphs \(G\) with \(m_{4}(G)=|V(G)|-5\), and infinitely many \(3\)-colorable planar graphs with \(m_{3}(G)=|V(G)|-3\)._ Proof.: It follows from Corollary 10 that for any \(3\)-colorable plane triangulation \(G\), \(m_{3}(G)=|V(G)|-3\). Let \(G\) be the \(4\)-connected plane triangulation obtained by joining two independent vertices \(u,v\) to every vertex of a cycle \(C\) on \(n-2\) vertices with \(n\geq 7\) odd. It is obvious that \(G\) is not \(3\)-colorable. Let \(\varphi\) be any \(4\)-coloring of \(G\). Then, without loss of generality, \(\varphi(V(C))=[3]\) and \(\varphi(u)=\varphi(v)=4\). For any \(i\in[3]\), \(G_{i4}\) is a connected plane graph with \(|\varphi^{-1}(i)|\) faces, and for \(i,j\in[3]\), \(G_{ij}\) is acyclic. Therefore \(m(G,\varphi)=\sum_{i\in[3]}(|\varphi^{-1}(i)|-1)=n-5\). ## 4 Upper bounds for \(m_{k}^{\prime}(G)\) In this section we study the problem of how many edges we need to remove from a planar graph in order to make it acyclic \(k\)-colorable for \(k=3,4\). **Theorem 17**.: _Let \(G\) be a planar graph on \(n\) vertices. We have \(m_{3}(G)\leq(13n-42)/10\) and \(m_{4}(G)\leq(3n-12)/5\)._ Proof.: We first prove that \(m_{4}(G)\leq(3n-12)/5\). As every plane graph is a spanning subgraph of some plane triangulation, we may assume that \(G\) is a plane triangulation on \(n\) vertices. Let \(\varphi:V(G)\to[5]\) be an acyclic \(5\)-coloring of \(G\). Without loss of generality, assume that \[\sum_{v\in\varphi^{-1}(5)}(d_{G}(v)-3)\leq\frac{1}{5}\sum_{v\in V(G)}(d_{G}(v )-3)=\frac{3n-12}{5}.\] Let \(v\) be any vertex in \(\varphi^{-1}(5)\). Since the neighbors of \(v\) span some cycle and \(\varphi\) is acyclic, there exist \(v_{1},v_{2},v_{3}\in N_{G}(v)\) whose colors are pairwise distinct. Define \(E_{v}\) to be the set of edges incident to \(v\) other than \(vv_{1},vv_{2}\) and \(vv_{3}\), and set \(\varphi^{\prime}(v)\) to be the color from \([4]\) other than \(\varphi(v_{1}),\varphi(v_{2}),\varphi(v_{3})\). To complete the construction, we set \(E^{\prime}:=\bigcup_{v\in\varphi^{-1}(5)}E_{v}\) and set \(\varphi^{\prime}(u):=\varphi(u)\) for all \(u\in\bigcup_{i\in[4]}\varphi^{-1}(i)\). It is readily to verify that \(\varphi^{\prime}\) is a proper \(4\)-coloring of \(G^{\prime}:=G-E^{\prime}\) and \(|E^{\prime}|=\sum_{v\in\varphi^{-1}(5)}(d_{G}(v)-3)\leq\frac{3n-12}{5}\). Suppose \(\varphi^{\prime}\) is not an acyclic coloring of \(G^{\prime}\), then there is a cycle \(C\) contained in \(\varphi^{\prime-1}(i)\cup\varphi^{\prime-1}(j)\) for some distinct \(i,j\in[4]\). Note that \(C\) cannot contain any \(v\in\varphi^{-1}(5)\) since \(v\) has precisely three neighbors of three different colors in \(G^{\prime}\). 
Therefore \(C\) is contained in \(G^{\prime}[(\varphi^{\prime-1}(i)\cup\varphi^{\prime-1}(j))\setminus\varphi^{-1}(5)]=G[\varphi^{-1}(i)\cup\varphi^{-1}(j)]\), a contradiction. This approach can be repeated to show that \(m_{3}(G)\leq(13n-42)/10\). More precisely, we may assume that \[\sum_{v\in\varphi^{\prime-1}(4)}(d_{G^{\prime}}(v)-2)\leq\frac{1}{4}\sum_{v\in V(G^{\prime})}(d_{G^{\prime}}(v)-2)=\frac{4n-12-2|E^{\prime}|}{4}.\] It is not hard to see that for any \(v\in V(G^{\prime})\), \(|\varphi^{\prime}(N_{G^{\prime}}(v))|\geq 2\). Let \(v\in\varphi^{\prime-1}(4)\) and \(v_{1},v_{2}\in N_{G^{\prime}}(v)\) be of different colors. Define \(E_{v}^{\prime}\) to be the set of edges incident to \(v\) other than \(vv_{1}\) and \(vv_{2}\), and set \(\varphi^{\prime\prime}(v)\) to be the color from \([3]\) other than \(\varphi(v_{1}),\varphi(v_{2})\). Set \(E^{\prime\prime}:=E^{\prime}\cup\bigcup_{v\in\varphi^{\prime-1}(4)}E_{v}^{\prime}\) and set \(\varphi^{\prime\prime}(u):=\varphi^{\prime}(u)\) for all \(u\in\bigcup_{i\in[3]}\varphi^{\prime-1}(i)\). Again, it is easy to verify that \(\varphi^{\prime\prime}\) is a proper \(3\)-coloring of \(G^{\prime\prime}:=G-E^{\prime\prime}\) and \[|E^{\prime\prime}|=|E^{\prime}|+\sum_{v\in\varphi^{\prime-1}(4)}(d_{G^{\prime}}(v)-2)\leq\frac{13n-42}{10}.\] As before, one can show that \(\varphi^{\prime\prime}\) is an acyclic \(3\)-coloring of \(G^{\prime\prime}\), and hence the result follows. We remark that there exist infinitely many planar graphs \(G\) on \(n\) vertices such that \(G-E^{\prime}\) is not acyclically \(4\)-colorable for any \(E^{\prime}\subseteq E(G)\) with \(|E^{\prime}|<(n-2)/4\). Let \(H\) be a \(2\)-face-colorable triangulation and \(\mathcal{T}\) be a family of \(|E(H)|/3\) edge-disjoint facial triangles of \(H\). Let \(G\) be obtained from \(H\) by replacing each triangle from \(\mathcal{T}\) by an octahedron. Therefore \(E(G)\) is partitioned into \(|E(H)|/3\) octahedra, and \(n=|V(H)|+|E(H)|=4|V(H)|-6\). As the octahedron is not acyclically \(4\)-colorable, any \(E^{\prime}\subseteq E(G)\) such that \(G-E^{\prime}\) is acyclically \(4\)-colorable has size at least \(|E(H)|/3=\frac{n-2}{4}\). ## Acknowledgments The research of On-Hei Solomon Lo was supported by a Postdoctoral Fellowship of Japan Society for the Promotion of Science and by Natural Sciences and Engineering Research Council of Canada. The research of Ben Seamone was supported by Natural Sciences and Engineering Research Council of Canada. The research of Xuding Zhu was supported by National Natural Science Foundation of China grant NSFC 11971438 and U20A2068.
2302.10339
Use of immersive virtual reality-based experiments to study tactical decision-making during emergency evacuation
Humans make their evacuation decisions first at the strategic/tactical levels, deciding their exit and route choice, and then at the operational level, navigating to a way-point while avoiding collisions. What influences individuals at the tactical level matters both to modelers designing high-fidelity simulations and to safety engineers creating efficient designs and codes. Does an unlit exit sign dissuade individuals from using a particular exit/route, and vice versa? What effect do the crowd's choices have on an individual's decision-making? To answer these questions, we studied the effect of exit signage (unlit/lit), of different proportions of crowd movement towards the exits, and of the combined (reinforcing/conflicting) effect of the sign and crowd treatments on the reaction times and exit choices of participants in an immersive virtual reality (VR) evacuation experiment. We found that there is a tolerance for queuing when different sources of information, exit signage and crowd movement, reinforce one another. The effect of unlit exit signage on dissuading individuals from using a particular exit/route was significant. The virtual crowd was ineffective at encouraging utilization of a particular exit/route but had a slight repulsive effect. Additionally, we found some similarities between previous screen-based evacuation experiments and our VR-based experiment.
Laura M. Harris, Subhadeep Chakraborty, Aravinda Ramakrishnan Srinivasan
2023-02-20T22:04:28Z
http://arxiv.org/abs/2302.10339v1
Use of immersive virtual reality-based experiments to study tactical decision-making during emergency evacuation ###### Abstract Humans make their evacuation decisions first at strategic/tactical levels, deciding their exit and route choice and then at operational level, navigating to a way-point, avoiding collisions. What influences an individual's at tactical level is of importance, for modelers to design a high fidelity simulation or for safety engineers to create efficient designs/codes. Does an unlit exit sign dissaudes individual(s) to avoid a particular exit/route and vice versa? What effect does the crowd's choices have on individual's decision making? To answer these questions, we studied the effect of exit signage (unlit/lit), different proportions of crowd movement towards the exits, and the combined (reinforcing/conflicting) effect of the sign and the crowd treatment on reaction times and exit choices of participants in an immersive virtual reality (VR) evacuation experiment. We found that there is tolerance for queuing when different sources of information, exit signage and crowd movement reinforced one another. The effect of unlit exit signage on dissuding individuals from using a particular exit/route was significant. The virtual crowd was ineffective at encouraging utilization of a particular exit/route but had a slight repulsive effect. Additionally, we found some similarities between previous studies based on screen-based evacuation experiments and our VR-based experiment. Emergency evacuation Virtual reality User study Tactical decision making ## 1 Introduction The behavior of individuals and crowds is an important topic of research which has helped in preventing crowd disasters and in improving pedestrian flow and safety [1, 2]. Crowd behavior has been studied from different viewpoints such as animal swarms, judgment formation, and consumer behaviors [3, 4, 5, 6, 7]. Individual's decision-making related to emergency evacuation situations has been studied with screen-based experiments in order to understand different factors influencing individuals during evacuation like scenarios [8, 9, 10]. Researchers have broadly categorized the decision-making mechanism into strategic, tactical and operational level [11, 12, 2]. The strategic level is related to when the individual(s) make the decision to start their evacuation, the tactical decisions refers to individual's exit and route choice, and operational level takes care of local interactions like collision avoidance as defined by Warren et al. [13]. In reality, the tactical decisions of individuals have a profound effect on the overall evacuation process for the entire crowd. For example, if every evacuee from a building with multiple exit choices/routes chose to exit via the shortest path in order to optimize chance of successful egress, it can lead to dangerous overcrowding at that particular exit/route and thus eventually become a non-optimal choice for both the individual and the crowd [14]. ### Social Influence A crowd's behavior evolves over time and the behavioral propagation can occur through interactions between the individuals within the crowd [15] which can lead to interesting crowd dynamics such as lane formation and propensity to choose a certain exit [3]. The social influence of the crowd on individuals can be attractive, neutral, or repulsive as discussed by Warren et al. [13]. The attractive social influence is more generally known as social imitation (or herding) [16, 17]. 
It can sometimes lead to sub-optimal performances due to overcrowding of routes and exits [18]. The overcrowding can lead to increased congestion and exit times [16, 19]. Conversely, individuals have been found to display a repulsive social influence to crowd thus avoiding them. Some studies have found repulsive (avoiding) or neutral influence of the crowd on individuals [8, 20, 21, 22] as well. Additionally, queue lengths along with quickest and shortest path to safety have been found to be an influencing factor on route and exit choice [10]. Lin et al. [23] studied effect of social influence in an emergency evacuation situation. They utilized a virtual train station with simulated fire emergency. The study concluded that in an unevenly split (\(80-20\)) crowd, individuals tend to follow the bigger crowd. Haghani et al. [24] compared the stated and revealed choices collected through survey in a virtual evacuation. They found that participants exhibited similar decision pattern in both type of data collection. They also found that people chose the most crowded exit more often. Nilsson et al. [25] found that social influence is based on distance and plays an important role in the initial evacuation response. They concluded that physically closer people had more influence than people at distance. Additionally, they found that social influence is more important when the evacuation cue is unclear or uninformative. Kinateder et al. [26] verified that a virtual crowd exerted social influence on participants in a cave immersive virtual reality (VR) system. The virtual crowd also affected route choice of participants. Moussaid et al. [27] found that social imitation was more due to a density effect rather than social imitation. Additionally, they concluded that VR experiment elicited similar decision pattern to real-life [27]. Kinateder et al. found that participants in their VR experiment chose to exit through familiar doors and found that this effect was increased when their virtual neighbors also left through the familiar door [28]. Surrounding persons were found to influence evacuee exit decision [29]. Additionally, Zhu et al. found that strangers affected people similarly to non-strangers [29]. A stated choice survey was conducted to elicit the effect of social influence and distance to exits by Lovreglio et al. [30]. Subsequently, the fitted a discrete choice models to better understand exit choices of individuals. Thus, it is clear that crowd has an effect on individual's tactical decision during an emergency evacuation. ### Sign Influence Static sources such as signs, perceptual access, architectural differentiation, and plan configurations are also important factors in understanding the tactical decision making of individuals during emergency. Building signage are common static directional information source. Exit signs also provide information about emergency exits from a building. Cognition of signs as well as the effectiveness of the content of signs were studied and applied to exit designs [31, 32, 33, 34]. The size of the content was studied and larger content was found to be more effective [35]. The color of the content for the signs were studied and green or red lettering have been found to increase recognition distance [36]. Sign design and illumination have been found to be important for sign visibility [37]. Signs which update have been found to be more trusted than fully static signs [38]. 
Comprehensive knowledge of the environment was found to be not necessary for reasonable decisions to be made [39]. Fu et al. [40] studied how signs affect people with and without the presence of a crowd. Signs were found to be more effective in the absence of crowd influence. Additionally, the average decision time of those who did not follow signage was not atypically out of normal. Olander et al. [41] found that dissuasive signs, like a red X added on the sign to indicate no entry, to be effective in providing directional information. Tang et al. [42] summarized that exit signage were important for way-finding decisions, but individuals do not always follow them. Importantly, evacuations were slower when no signage were present [42]. Bode et al. [9] utilized an interactive \(2D\) VR experiment and found that signs have a significant influence on exit choice of participants. Kinateder et al. [43] found that green colored signs were most attractive for exit utilization. The influence of visual information by means of exit signs and corridor illumination were studied by Dachner et al. with a small pool of participants and both were found to influence emergency decision-making [44]. Galea et al. [45] found that dynamic exit signage increased the visibility of the exit signs from 38% to 77% compared to static exit signage. Galea et al. [46] found that dissuasive signage with reinforcement from a voice alarm system successfully redirected people towards the optimal exit by 66%. ### Use of VR in Egress Literature Researchers have used methods such as surveys [18, 22] and evacuation drills [47, 48] to study interactive evacuation scenarios [20, 9, 49, 40, 50]. More recently, immersive VR has been a promising avenue of information gathering which minimizes risks and cost associated with evacuation experiments. VR has been used in many fields to study human factors and has been found to be an effective tool to study real world behaviors [51]. Several studies have found that VR elicits similar non-emotional responses as real-world experiments [52, 53, 54, 55, 56, 57] and adequate emotional responses as situation demands [26, 58, 59, 60]. There are also known limitations in ergonomics and technology which can prevent complete reliability of the VR experiment e.g. unrealistic AI, unrealistic movement, unrealistic environments, unrealistic hazards, motion sickness [61], and lack of inherent danger. But, some validation has been performed on VR experiment by comparing results from VR to real-world data. Kobes et al. [51] performed a validation study for use of serious games for behavioral evacuation studies where VR-generated data was compared to real-world data and found to elicit similar behavior. Kinateder et al. [62] found some interesting correlation between real world and VR experiments. Reactions to an alarm were reduced, a comparable response to positive influence from bystanders, and a weaker response to negative influences where observed in VR. Kinateder et al. [63] provided an in-depth comparison of real world and virtual world behaviors and the capabilities of VR. In summary, it is important for modelers and safety engineers to account for all the different factors that can influence an individual's tactical decision. 
What factors can encourage individuals to take a particular route, what factors discourage individuals to take a particular route, does the crowd movement influence the decision making, if yes, whether it is a positive influence or a negative influence, does the exit signage being lit or unlit have opposite effect on the individual's choice? Also, does virtual reality (VR) based experiment elicit different response compared to screen-based experiments? Bode et al. [8, 9, 10] have performed comprehensive analysis of factors affecting the tactical decision making with a screen-based virtual evacuation experiment. We are interested in comparing the results from them with data collected in an immersive, first-person view, VR-based experiment. More particularly, the current study expands on the work reported by Bode et al. [9] on directional information source on exit choice of humans with 2D virtual experiment. Specifically, this study attempts to address how lit and unlit exit signs and crowd configuration affect evacuee exit choice behaviors in an immersive VR emergency evacuation simulation. The primary differences between this work and that of Bode et al. were the use of immersive VR environment, the addition of an uneven crowd configuration and the lit and unlit exit signage. ## 2 Methods ### Experimental Design Unreal Engine 4 was used to develop the immersive VR environments. A head-mounted display, a HTC Vive Pro (with 1440x1600 pixels resolution per eye), was used to provide the participants with a fully immersive experience. Participants navigated their virtual avatar in the simulated environment with a joystick. A simple room layout was preferred to explore the effect of directional information provided by crowd and exit signage on exit choice. This is similar to the simple room layout utilized in literature [8, 9, 10]. The room allowed the participants full visual access to both exits and exit signage. An example room (Figure 1) and a 2D representation of two representative room layouts is provided in Figure 2. The room was 40 meters wide and 44 meters long. The exit signs were 1 meter wide and 0.4 meters in height. The signage were designed to be bigger than the standard dimensions to counter the loss of clarity due to pixelation in the HTC Vive screen. We prioritized being able to read the signs over strictly adhering to sizing standards. The exit doorways were 2 meters wide which conforms with the US Department of Labor's Occupational Figure 1: A screen capture of the immersive environment: Participant perspective of scenario 2: Sign Safety and Health Administration (OSHA) guidelines for Emergency Exit Routes (osha.gov) [64]. The exits were located either against the far wall from the participant starting point or at the end of the side walls. The participants' start location was in the central bottom portion of the room (marked with an "X") as can be seen in 2D representation of the room in Figure 2. Furthermore, a crowd of co-evacuees consisting of programmed non-player characters (NPCs) was positioned around the participant within the participants' virtual field-of-view rather than throughout the room. A total of six scenarios were designed. The scenarios are summarized in Table 1. All scenarios had two exits (the left and the right exit) and each participant started from a position equidistant from the exits as indicated in Figure 2. The appearances of the NPCs in VR experiments have been found to not have any significant effect on decision-making by Llobera et al. 
[65] and Bruneau et al. [66]. Thus, all NPCs in our experiments appeared as males and were identical to each other. The NPCs used simple way-finding to find the closest path to their designated exit. The NPCs and participant moved at the same speed of 1.5 meters per second (3.4 miles per hour) for a faster than normal walking speed for the evacuation [67]. Table 1 details the experimental parameters for each of the 6 Scenarios tested. Scenario 1 was designed to be the control, scenario 2 was the base sign control test, scenario 3 was designed to study the effect of the crowd on the decision-making, scenario 4 was a reinforcing-treatments scenario with crowd and sign in agreement, scenario 5 was the conflicting-treatments scenario and scenario 6 was an uneven crowd treatment scenario with an unevenly split crowd. The NPCs acted as a potential influence on the participants' exit decision. To further elicit the crowding behaviors during an evacuation, the NPCs were programmed to not have collision avoidance with other evacuees to allow crowding at exits. It is to noted that the NPCs can never overlap but can be blocked by each other when we mean they can collide. The NPCs were randomly assigned to an exit according to the scenario description. Additionally, the exits were visible in all scenarios. The participants could not see into the room beyond the exits. One or both the exit \begin{table} \begin{tabular}{|l|l l l l|} \hline \# & **Scenario** & **Lit Signs** & **Crowd: Left** & **Crowd: Right** \\ \hline 1 & Control & Neither & 25 & 25 \\ 2 & Sign & Left Only & 25 & 25 \\ 3 & Crowd & Neither & 0 & 50 \\ 4 & Sign+Crowd & Left Only & 50 & 0 \\ 5 & Sign-Crowd & Right Only & 50 & 0 \\ 6 & Uneven Crowd & Neither & 15 & 35 \\ \hline \end{tabular} \end{table} Table 1: Experimental parameters for each of the 6 scenarios tested. Figure 2: 2D Representation of Experimental Rooms. All participants had their VR avatar start at the equidistant position from the exit as marked by the orange X facing towards the exits as indicated by the arrow mark in all scenarios signs were unit to provide a dissuasive sign effect to the participants according to the scenarios. The dissuasive effect is hypothesized as lit exit signs are expected whereas unit exit sign(s) were present thus biasing the participants not to use that particular exit. We refer to this as an updated sign in this work. A klaxon alarm activated in the virtual environment when the evacuation scenario started and ended when the participants' avatars exited the room. The order that the participants performed the scenarios was randomly chosen and tracked. A within subjects design was used. The number of participants who saw a given scenario as their first scenario, as opposed to later in the experiment, are provided in Table 2. The participants had the opportunity to complete all six scenarios. Participants who performed multiple scenarios may have had a source of self-feedback [68] from prior scenarios, but this was found to not be a significant influence in [23]. The path and the exit choice of the participants were recorded. In order to avoid participants expecting the same scenarios, the room for each scenario was given unique wall, floor and ceiling textures. Additionally, two sets of room layouts were used in an attempt to further reduce habituation as seen in Figure 2. 
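To make the treatment conditions easier to reference in later analysis, the six configurations from Table 1 can be written down as a small data structure. The sketch below is purely illustrative; the class and field names are ours and are not taken from the Unreal Engine implementation used in the experiment:

```python
# Illustrative encoding of the six scenario treatments (Table 1).
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    lit_signs: str    # "left", "right", or "neither"
    crowd_left: int   # NPCs assigned to the left exit
    crowd_right: int  # NPCs assigned to the right exit

SCENARIOS = [
    Scenario("Control",      "neither", 25, 25),
    Scenario("Sign",         "left",    25, 25),
    Scenario("Crowd",        "neither",  0, 50),
    Scenario("Sign+Crowd",   "left",    50,  0),
    Scenario("Sign-Crowd",   "right",   50,  0),
    Scenario("Uneven Crowd", "neither", 15, 35),
]

WALK_SPEED_MPS = 1.5  # both NPCs and the participant move at 1.5 m/s
```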
### Data Analysis Approach The three analyzed summary statistics were 1) the participants' probability to follow directional information provided by the environment in the form of signage and crowds' exit choice, 2) the participant's desire to change their exit choice after initial decision, and 3) the time they took to make their initial decision to choose an exit. To determine the probability that the participants changed their exit decision, P(change decision), the event "change decision" was defined as where the participant started to walk towards an exit and then began to walk to the other exit after moving at least one-fifth the lateral distance in the direction of the former exit. This is consistent with the definition by Bode et al. [9] in literature. The probability that the participant went to the exit with a particular treatment, P(follow treatment), was found from the proportion of the population which chose the exit indicated by the treatment. When two treatments reinforced each other, e.g. (Sign+Crowd), the summary statistics for each treatment was equal to the probability to follow the treatment, P(Follow Sign)=P(Follow Crowd)=P(Follow Treatment). When two treatments conflicted with each other, e.g. (Sign-Crowd), the follow treatment summary statistic for each treatment was found by the probability of the participants to choose the exit of the respective treatment. The decision time was chosen as the time from the start of the simulation until the instant when the participant started to move towards an exit. Since the start position of the participant's avatar was set by the simulation, the start of the movement could be identified by tracking the participant's avatar position in post-processing. The initial exploratory movements such as turning to search for exits or looking at the NPCs were thus not considered to be an actual determined movement towards exit, but rather counted as part of the decision time. The significance level of 95% was chosen for all models and tests performed. Cochran's Q-tests 1 were used to determine if the follow treatment and change decision statistics had significantly different proportions for each relevant scenario. A pairwise comparison between the scenarios and Control, Sign, and Crowd were performed to test desired comparisons ex ante. A Sidak correction was used. A one-way repeated measures ANOVA was performed to determine if there was significance across participants and across scenarios for decision time. After performing the ANOVA, a pairwise comparison between scenarios was performed with a Sidak correction. Footnote 1: Ben Jann, 2004. ‘COCHRAN: Stata module to test for equality of proportions in matched samples (Cochran’s Q),” Statistical Software Components S4444105, Boston College Department of Economics, revised 27 Oct 2004. [https://ideas.repec.org/c/bocode/s444105.html](https://ideas.repec.org/c/bocode/s444105.html) The data was also split into the participants who performed a given scenario first and those who had completed the same scenario after completing another scenario. The data was analyzed for each scenario separately to discover learning effects from observation of the means and standard deviations. 
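As a rough illustration of these operational definitions, the sketch below shows how the decision time, exit choice, and "change decision" event could be recovered from a recorded trajectory. It assumes lateral positions sampled relative to the start point, with the exits at \(\pm\)`exit_offset_m`; the movement threshold `move_eps_m` and the use of the maximum lateral excursion are our own simplifying assumptions, not the authors' post-processing code:

```python
# Sketch of trial summary statistics from a trajectory of (time_s, lateral_m)
# samples; the avatar starts at lateral 0, exits sit at +/- exit_offset_m.
def summarize_trial(trajectory, exit_offset_m, move_eps_m=0.05):
    # Decision time: first instant the avatar starts moving towards an exit.
    decision_time = next((t for t, x in trajectory if abs(x) > move_eps_m), None)

    # Exit choice: side of the final lateral position ("left" is negative here).
    exit_choice = "left" if trajectory[-1][1] < 0 else "right"

    # Change decision: moved at least one fifth of the lateral distance towards
    # one exit, then left through the other one.
    threshold = exit_offset_m / 5.0
    committed_left = max(-x for _, x in trajectory) >= threshold
    committed_right = max(x for _, x in trajectory) >= threshold
    changed = (committed_left and exit_choice == "right") or \
              (committed_right and exit_choice == "left")

    return decision_time, exit_choice, changed
```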
\begin{table} \begin{tabular}{|l|l l l l|} \hline **\#** & **Scenario** & **First** & **Not First** & **Total** \\ \hline 1 & Control & 11 & 46 & 57 \\ 2 & Sign & 10 & 45 & 55 \\ 3 & Crowd & 10 & 44 & 54 \\ 4 & Sign+Crowd & 10 & 44 & 54 \\ 5 & Sign-Crowd & 10 & 43 & 53 \\ 6 & Uneven Crowd & 10 & 47 & 57 \\ \hline \end{tabular} \end{table} Table 2: Summary of scenarios which were seen as the first scenario and those which were not seen as the first. ### Participants Prior to any data collection, approval was sought from the local ethics committee, the Institutional Review Board (IRB), University of Tennessee, Knoxville. This study was approved by the IRB (approval number UTK IRB-17-04159-XM). The data collection was conducted prior to the Covid-19 pandemic. The data collection itself was anonymized. No personally identifiable information were recorded. The data have been stored according to local IRB recommendations. Participants were recruited from the staff and students at the University of Tennessee, Knoxville campus. No incentives were provided for participation. A total of 64 participants were recruited through an open call for volunteers. However, only the data of 61 participants was used. Participants were eligible if they were 18 years or older and had no known sickness to VR-based experiments. The demographics of the participants are listed in Table 3. For analysis purpose, gender was converted to 0 for participants who listed themselves as man and 1 for participants who listed themselves as woman. At least 10 participants were desired to perform each scenario as the first one and at least 30 participants total for each scenario given the precedence set by previous works [44, 54]. ### Procedure Each participant was provided with an information sheet about the study and a consent form, both approved for this specific experiment by the local ethics committee. The participants had opportunity to discuss their participation and potential benefit (to the scientific community) with the researchers before they signed the consent form. After signing the informed consent form, each participant was given a demographic survey. Next, the participants were asked to wear an HTC Vive Pro virtual reality headset and placed in a "VR preparatory setup" which consisted of 2 rooms surrounded by corridors and several exits. The participant started from one of the rooms with an exit and an exit sign above it. Upon leaving this room, the participant made a series of left or right decisions to exit out of the building. The participants then repeated this same scenario except with the addition of NPCs who also were egressing the building. No data from the preparatory scenarios was used in our analysis in this work, but it provided an opportunity for participants to get comfortable with the immersive VR environment and the navigation control with the joystick. After the VR preparatory environment, the participants then completed as many scenarios in the actual experimental setup (described in subsection 2.1) as they were comfortable with. The number of scenarios varied between participants due to their comfort with the VR and time involved. The 1-room simplified environment, reported in this paper, was designed to test fundamental exit choices rather than more complex time-dependent tests for memory, familiarity, etc. 
This was done in an attempt to limit the VR exposure time, which can cause motion sickness, but still provide the time needed to develop the feeling of involvement and engagement in the evacuation process. Moreover, a warning siren sound played through the VR headphones throughout the experiment, to provide a sense of urgency during the experiment. Finally, when the participants completed as many scenarios as possible, they were asked to complete an exit survey. The participants were asked to rate how different factor affected their decision-making during a scenario. The rated factors were 1) the exit sign lit, 2) perceived time-to-exit, 3) follow crowd, 4) avoid crowd, and 5) previous exit choice(s). Lit sign was provided for participants in the rating to inform us about how effectiveness of exit sign lighting versus \begin{table} \begin{tabular}{|l|l l|} \hline **Age Demographics** & & \\ \hline Age (years) & Frequency & Percent (\%) \\ \hline 18-24 & 27 & 44.26 \\ 25-34 & 17 & 27.87 \\ 35 and over & 16 & 26.23 \\ NA & 1 & 1.64 \\ Total & 61 & 100 \\ \hline **Gender Demographics** & & \\ \hline Gender & Frequency & Percent (\%) \\ \hline Man & 42 & 68.85 \\ Woman & 18 & 29.51 \\ Other & 1 & 1.64 \\ Total & 61 & 100 \\ \hline \end{tabular} \end{table} Table 3: Demographics Summary non-lighting in influencing their exit choice. Time-to-exit was provided for participants to describe if the time to leave the room was an important factor in determining their exit decisions. Both follow crowd and avoid crowd were provided to ascertain the effect of the crowd on individuals. The previous exit choice(s) was provided to determine what effect the previous exits decision (when participating in multiple scenarios) had on the participant's exit choice. The rating scale was from 1 (lowest) to 5 (highest). Additionally, they were requested to elaborate their thought process behind the ratings. A screenshot of the exit survey is provided in Figure 3. The experimenter's script for the entire data collection process is provided in Appendix A. ## 3 Results ### Survey Fifty of the sixty-four participants completed the survey. The studied factors were lit sign, time-to-exit, follow crowd, avoid crowd, and previous exit choice(s). The participants were asked to specify how much they thought each factor affected their exit choices. The values ranged from 1 as the lowest and 5 as the highest possible scores. The averages for the survey responses are provided in Figure 4. The lit sign was listed as the most influential followed by avoid crowd, time-to-exit, follow crowd, and previous exit choice(s) as the least influential. ### First vs. Not-First Participant Results The first versus not-first data for the chose left statistics were provided in Table 4. The chose left statistics refers to the choice to take the left exit from the participant's perspective. The primary difference in the first versus not-first results was observed in Control and Crowd scenarios. Small differences occurred in the other scenarios but the behaviors of the participants is similar for the first and not-first scenarios. The probability of the participants to choose an exit was not significantly different from chance when the not-first and first scenario participant data was combined Figure 4: A plot visualizing the mean score for each of the influencing factors along with their standard deviation. 
Figure 3: A screenshot of participant’s exit survey for the Control scenario; however, the participants who saw Control as their first scenario chose the left exit 27.2% of the time and those that did not see the control as their first scenario chose the left exit 54.3% of the time. Eleven participants saw Control as their first scenario compared to forty-six who saw it after their first scenario. The participants who saw Crowd as their first scenario chose to avoid the crowd (left) 20.0% of the time, but those that saw Crowd after their first scenario chose to avoid the crowd 54.5% of the time. Ten participants saw Crowd as their first scenario and forty-four participants saw it after their first scenario. These differences indicate a potential right bias in the case of the Control and a repulsive effect of the crowd when the participants had been exposed to previous scenarios. The first versus not-first data for the decision time were provided in Table 5. Upon visual inspection, the data appeared to be within standard deviation for each scenario as seen in Table 5. This would indicate that a learning effect likely did not apply to the decision time. The proportion of participants who followed the treatment for each scenario is presented in Figure 5. The proportion of those who followed the treatment for Sign-Crowd was given to those who followed the sign rather than the crowd. \begin{table} \begin{tabular}{|l|l l l|l|} \hline \# & **Scenario** & **First (\%)** & **Not-First (\%)** & **Combined (\%)** \\ \hline 1 & Control & 27.3 & 54.3 & 49.1 \\ 2 & Sign & 80.0 & 82.2 & 81.8 \\ 3 & Crowd & 20.0 & 54.5 & 47.3 \\ 4 & Sign+Crowd & 100. & 72.7 & 77.8 \\ 5 & Sign-Crowd (Sign) & 40.0 & 16.3 & 21.8 \\ 6 & Uneven Crowd & 40.0 & 46.8 & 43.9 \\ \hline \end{tabular} \end{table} Table 4: Chose Left Summary Statistics: First vs. Not First Figure 5: Proportion of participants who followed the respective treatment(s) (combined percentage) \begin{table} \begin{tabular}{|l|l l l|l|} \hline \# & **Scenario** & **First (s)** & **Not-First (s)** & **Combined (s)** \\ \hline 1 & Control & \(5.12\pm 2.72\) & \(10.4\pm 7.24\) & \(9.39\pm 6.92\) \\ 2 & Sign & \(5.82\pm 2.86\) & \(6.23\pm 4.13\) & \(6.16\pm 3.91\) \\ 3 & Crowd & \(7.48\pm 5.19\) & \(7.65\pm 5.01\) & \(7.55\pm 5.01\) \\ 4 & Sign+Crowd & \(6.20\pm 2.66\) & \(5.66\pm 4.67\) & \(5.86\pm 4.40\) \\ 5 & Sign-Crowd & \(10.2\pm 5.40\) & \(6.41\pm 6.07\) & \(7.13\pm 6.09\) \\ 6 & Uneven Crowd & \(10.1\pm 7.82\) & \(10.4\pm 7.32\) & \(10.4\pm 7.34\) \\ \hline \end{tabular} \end{table} Table 5: Decision Time Summary Statistics: First vs. Not First ### Decision Time Results A summary of the decision time can be found in Figure 6. A one-way repeated measure ANOVA was used to determine if there were significant differences between the means of the treatment scenarios and across participants for the decision time. The results can be found in Table 6. There was a significant effect of scenario number on decision time, \(F(5,263)=8.09\), \(p=0.0000\), so we reject the null hypothesis that the means of decision time are equal between scenarios. Therefore, a pairwise comparison with a Sidak correction was used. No ex ante relationships were assumed between scenarios and decision time. The significant results were between Sign vs Crowd, Sign-Crowd vs Control, Uneven Crowd vs Sign, Uneven Crowd vs Crowd, Uneven Crowd vs Sign+Crowd, and Uneven Crowd vs Sign-Crowd as summarized in Table 7. The contrasts found agree with the statistics provided in Table 5. 
This seems to indicate that \begin{table} \begin{tabular}{|l|l l l l|} \hline **Variable** & **SS** & **df** & **MS** & **F** \\ \hline Scenario \# & 4778 & 5 & 185.1 & 8.09 \\ Residual & 6013 & 263 & 22.86 & \\ \hline \end{tabular} \end{table} Table 6: ANOVA Summary \begin{table} \begin{tabular}{|l|l l l|} \hline **Scenario** & **Contrast** & **Std. Err.** & **t** \\ \hline Sign vs Control & \(-3.29\) & 0.914 & \(-3.60\) \\ Sign+Crowd vs Control & \(-3.63\) & 0.924 & \(-3.93\) \\ Uneven Crowd vs Sign & 4.28 & 0.915 & 4.68 \\ Uneven Crowd vs Crowd & 3.06 & 0.921 & 3.33 \\ Uneven Crowd vs Sign+Crowd & 4.63 & 0.926 & 5.00 \\ Uneven Crowd vs Sign-Crowd & 3.49 & 0.918 & 3.81 \\ \hline \end{tabular} \end{table} Table 7: Pairwise Decision Time Comparison Summary Figure 6: Average decision time for each scenario the contrasting treatments produced hesitancy compared to the single or reinforcing treatments. Additionally, decision time is the lowest for scenarios with the sign treatment, (Sign, Sign+Crowd, and Sign-Crowd). ### Following Information Results A Cochran's Q-Test was used to determine if the null hypothesis that the proportions of following information between scenarios were equal. The test reported that significantly different proportions existed for follow information between the different scenarios, \(\chi^{2}=30.3,p=0.0000\). It was desired ex ante to compare all relevant scenarios with different baseline scenarios. The baselines scenarios were Control, Sign, and Crowd. Therefore, a pairwise comparison with a series of Cochran's Q-Tests with a Sidak correction was used. As summarized in Table \(8\), [Sign vs Control, Sign+Crowd vs Control, Sign-Crowd (Sign) vs Control, Crowd vs Sign, and Crowd vs Sign-Crowd (Sign)] were significantly different. All of the significant pairwise comparisons included the sign treatment. ## 4 Discussion It is interesting to analyze how each scenario affected the participants' probability to follow the treatment information, to change their decision, and to quickly make their decision. While the Control scenario did not have treatment information, it was expected to have an equal proportion of exits to be chosen. This was not observed as the participants chose the right exit more than the left exit, but only slightly so. The deviation from the expected value was not significant but it may indicate a bias in the participants to choose the right exit. It may be conjectured that this bias is a result of right-handedness of the participants or the layout of the joystick rather than any other factors. This handed bias was found in a work by Veeraswamy et al. [69] that studied way-finding in buildings. When the participants saw the Sign-Crowd scenario as their first one, their preference to choose the exit with sign(right exit) or the crowd(left exit) was approximately evenly distributed. However, a drop in following crowd(left exit) was observed when participants saw the Sign-Crowd as a non-first scenario. This is indicative of a learned repulsive effect of the crowd. This may be related to the effect of the physicality of the crowd on the participant while attempting to exit. Furthermore, the survey responses found that the sign was the most important factor on average and avoiding the crowd was the second most important factor. These responses agree with the observed trends in the collected trajectory/exit choice data. 
Another potential relationship was that the time-to-exit was the third most important factor reported on the surveys, this may indicate that the participant wanted to avoid the crowd to egress faster through the less crowded exit. Participants generally chose to follow the lit exit sign over any other treatment, and did so consistently for all scenarios with the lit sign treatment. This agrees with the findings by Bode et al. [9]. The lit exit sign treatment was always with the left exit except when both the sign and crowd treatments were conflicting (Crowd left and Sign right). Additionally, the participants made their decisions sooner, on average, when they chose to follow the lit exit sign treatment. Furthermore it was observed that as exit signs are expected to be lit, the unit or dissuasive signs discouraged exit utilization and the lit signs encouraged exit utilization in general. The findings from these results agree with existing literature [41, 45, 46]. The effectiveness of the lit signs to attract participants towards that particular exit and a similar effectiveness of the sign treatment when reinforced by the crowd treatment may suggest an increased tolerance for queuing and thus a cause for congestion when individuals trusted the information provided by the environment. \begin{table} \begin{tabular}{|l|c c c|} \hline **Scenario** & \(\chi^{2}\) & **p-value** & **Sidak** \\ \hline Sign vs Control & 10.8 & 0.0014 & 0.0167 \\ Sign+Crowd vs Control & 8.33 & 0.0059 & 0.0462 \\ Sign-Crowd (Sign) vs Control & 9.14 & 0.0037 & 0.0400 \\ Crowd vs Sign & 9 & 0.0041 & 0.0403 \\ Crowd vs Sign-Crowd (Sign) & 8.91 & 0.0043 & 0.0403 \\ \hline \end{tabular} \end{table} Table 8: Pairwise Follow Information Comparison Summary ## 5 Limitations While the data is correct and useful for furthering the understanding of exit choice during an evacuation, it is important to recognize that there are limitations present from the recruitment pool and from the experiment set up itself. The participants were primarily recruited from the graduate student population of the University's College of Engineering which will likely not encompass the full range of the population demographics. Lastly, additional scenarios for the uneven crowd would have been beneficial for better understanding of crowd phenomena, such as the effects of number of people in the crowd. These can be addressed in future studies. Also, there will always be a question about the transferability of VR-based or screen-based study to real world evacuation situation. It is to be noted that VR provides a good compromise in terms of immersive experience and also a safe environment for data collection. ## 6 Conclusions The effect of lit exit signs and crowd movement on the exit choice behaviors of participants in an immersive VR experiment were studied. Specifically, the effect of an updated exit sign, different proportions of crowd movement toward each exit, and the effect of reinforcing and conflicting effects between the sign and crowd treatments were studied. Crowd and lit (updated) sign treatments were found to produce significant effects for following the treatment and time to initiate egress towards an exit. The sign treatment was found to be effective at reducing the decision time and increasing utilization of the exit with the lit sign. The utilization of the exit with the lit sign was not significantly reduced when the crowd treatment was reinforcing the sign treatment. 
Based on the results, the sign was an effective treatment and the crowd had an insignificant repulsive effect but was overall ineffective. These results agree with the literature. Specifically that the crowd can be repulsive [8, 20, 21, 22] and a dissuasive sign can produces the intended effect [41, 45, 46]. Furthermore, a potential learning effect was found, possibly due to the effect of the physicality of crowding more realistically perceived in immersive VR experiments. This may have also shown a difference between immersive VR and non-immersive screen-based experiments [8, 9, 10] because the participants are able to perceive the crowd more realistic in an immersive VR experiment they reacted to the crowd more strongly one way or another. Summarizing the implications of this work for human factors/evacuation planning/building design there is an indication that a queuing tolerance exists when crowds all move towards a trusted source (lit sign). Crowds are ineffective at encouraging utilization of a particular exit and may have a slight repulsive effect if the evacuee becomes uncomfortable with the crowding or feels delayed by the queuing. A sign being updated through lighting/unlighting is an effective solution to influence exit choices. Particularly, when exit sign and crowd reinforced each other, the increased tolerance for queuing may causes increased congestion, thus a dynamic lighting of signage could be used to alleviate the crowding problem at bottlenecks/exits. This idea has potential to help handle a mass evacuation better in terms of average egress time for the crowd. In other words, dynamic signage has potential to help prevent over-crowding of exits which could in turn help with better flow of the crowd during emergency evacuation. This is an interesting direction for future research. ## Acknowledgment The authors would like to thank Mr. Hema Sumanth for his help with the data collection. This research was funded by NSF CPS Program (Award Number 1932505).
2303.10010
Secondary School Students observe Venus with NASA Infrared Telescope Facility (IRTF)
Astronomy and astrophysics are regarded as highly motivating topics for students in primary and secondary schools, and they have been a recurrent and effective resource for inspiring a passion for science. In fact, in recent years we have witnessed a surge in facilities providing small robotic telescopes with which teachers and students can remotely undertake their own observing projects. A step forward is presented here, where we describe the experience of secondary school students attending professional observations of Venus at NASA's Infrared Telescope Facility (IRTF) and, in a second observing run, conducting the observations by themselves. In addition to quickly mastering the basic operation of the control software for the SpeX instrument, the students successfully performed different types of data acquisition, including drift scan imaging.
Javier Peralta, Juan A. Prieto, Pilar Orozco-Sáenz, Jesús González, Gonzalo Trujillo, Lucía Torres, Alberto Sánchez, Manuel Arnedo
2023-03-17T14:35:39Z
http://arxiv.org/abs/2303.10010v1
# Secondary School Students observe Venus with NASA Infrared Telescope Facility (IRTF) ###### Abstract Astronomy and astrophysics are regarded as highly motivating topics for students in primary and secondary schools, and they have been a recurrent and effective resource to inspire passion about science. In fact, during the last years we have witnessed a boost of facilities providing small robotic telescopes for teachers and students to remotely undertake their own observing projects. A step forward is presented here, where we describe the experience of secondary school students attending professional observations of Venus at NASA's Infrared Telescope Facility (IRTF) and, in a second observing run, conducting the observations by themselves. In addition to quickly mastering the basic operation of the control software for the SpeX instrument, the students successfully performed different types of data acquisition, including drift scan imaging. Astronomy education (2165), Observational astronomy (1145), Infrared observatories (791), Astronomical techniques (1684), Direct imaging(387), Drift scan imaging (410), Venus (1763), Planetary atmospheres (1244), Atmospheric clouds (2180) + Footnote †: journal: Javier Peralta ## 1 Introduction Astronomy is one of the oldest sciences known. For millennia, it has captivated most of the cultures in the world, and it yet remains at the forefront of the attention and interest of public (Bailey and Slater, 2003). Many teachers have used astronomy to counter the problem that many students find the science content of middle years of schooling uninteresting (Salimpour et al., 2021). In light of the global push to get students engaged in science and technology, many aspects of astronomy have become popular and introduced in school curricula for decades (Lelliott and Rollnick, 2010), leveraging the many examples in astrophysics with direct links to Physics, Chemistry, Mathematics and even Biology (Salimpour et al., 2021). In this context, there has been an significant growth in the number robotic telescopes with observing time fully or partially devoted to educational purposes and friendly user interface to ease the remote control by school students (Gomez and Fitzgerald, 2017). This has allowed school teachers to easily implement inquiry-based learning approaches and students to have authentic science experiences, start international collaborations and even make discoveries and publish the results (Salimpour et al., 2018; Fitzgerald et al., 2018). In this work, we describe how a team of secondary school students performed professional observations of the planet Venus with the spectrograph and imager SpeX (Rayner et al., 2003) at the National Aeronautics and Space Administration Infrared Telescope Facility (NASA/IRTF) on Mauka Kea (Hawaii). Venus is a captivating target since it exhibits the consequences of a runaway greenhouse effect and it has been recently in the spotlight of the search for life (Greaves et al., 2021). ## 2 Methodology The three participating students (J. Gonzalez, A. Sanchez and G. Trujillo) were aged 15-16 and studied at _Huerta de la Cruz_, a catholic school located in the city of Algeciras (Spain). These students volunteered to carry out a project on Venus (Peralta et al., 2023, see _work_), motivated and coordinated by J. A. Prieto and P. Orozco-Saenz, professors with successful experience engaging students with scientific research (Prieto and Orozco-Saenz, 2020, 2022; Orozco-Saenz, 2021). 
The objectives of this work comprised acquiring a basic knowledge of Venus and its atmosphere, a discussion about its habitability, and knowing the methods for observing Venus (Peralta et al., 2023, see _log file_). With regards to the latter objective, the students were invited to participate in professional observations at NASA/IRTF between January and March of 2022. Two programs for observing Venus were scheduled1: programs 2022A058 (with E. F. Young as Principal Investigator) and 2022A038 (J. Peralta as Principal Investigator). Although with different scientific goals, both programs shared similar techniques of Venus data acquisition with SpeX (Peralta et al., 2023, see _work_): sets of Venus images with several filters using the guide camera (hereafter _guidedog_), hyperspectral data with the slit of the spectrograph (hereafter _bigdog_) scanning from north to south the disk of Venus, guiding on a star to obtain calibration spectra, or inference of flats, darks and bias. The full imagery dataset from these two programs has already been employed in recent Venus research (Peralta et al., 2023), and it will become publicly available at NASA/IPAC Infrared Science Archive. Footnote 1: [http://irtfweb.ifa.hawaii.edu/observing/schedule.php](http://irtfweb.ifa.hawaii.edu/observing/schedule.php) Prior to undertaking observations of Venus with SpeX, the students used the bibliography to became introduced to the clouds of Venus (Peralta et al., 2019), the SpeX instrument (Rayner et al., 2003), and the graphical user interfaces to control _guidedog_ (_Guidedog X-windows User Interface_ or GXUI, and _Guidedog Data Viewer_ or GDV) and the T3 remote TCS widget to adjust the pointing coordinates and the focuser (Rayner et al., 2021). After this literature review, the students benefited from a training session by remotely attending an observing run during 17 of February 2022 (2022A058). Finally, the students joined a last run during 11 of March 2022 (2022A038) to conduct the Venus observations by themselves (see Figure 1). Figure 1: Capture of screen during observing run conducted by the students in 13 of March 2022. ## 3 Experience During the observing run of 11 of March 2022, the students conducted the remote observations of Venus with IRTF/SpeX for about 2 hours, and 1 hour was recorded2. Jesus Gonzalez was selected to make the VNC connection and control GXUI, GDV and the T3 remote TCS widget, supervised by Javier Peralta. In addition to quickly understand most of the explications, J. Gonzalez also performed the following operations with success: Footnote 2: Video available upon reasonable request and for educational purposes only. * _Fast acquisition of images with different filters and integration times._ * _Acquisition of hyperspectral data._ The student used T3 remote to introduce corrections for the position of the slit during the drift scan of the night and day side of Venus. * _Acquisition of images/spectra of the sky._ The student used T3 remote to shift the slit to a location of the sky at a convenient angular separation. * _IR guiding with SpeX guider/slit viewer._ The student properly employed GXUI and GDV to move a target star into the telescope A beam box and start the guiding with the Auto GuideBox Setup (Rayner et al., 2021). 
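For readers unfamiliar with the calibration frames mentioned in the methodology (flats, darks and bias), the sketch below shows the textbook dark-subtraction and flat-fielding step. It is only a minimal illustration under those standard assumptions, not the actual SpeX/IRTF reduction pipeline:

```python
# Minimal sketch of dark subtraction and flat-fielding for a science frame.
import numpy as np

def calibrate(raw, dark, flat, flat_dark):
    """Return a dark-subtracted, flat-fielded frame (all inputs are 2D arrays)."""
    flat_corr = flat - flat_dark                  # remove dark signal from the flat
    flat_norm = flat_corr / np.median(flat_corr)  # normalize to unit median
    return (raw - dark) / flat_norm               # correct pixel-to-pixel response
```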
## 4 Conclusions We have shown in this work that a group of motivated secondary school students have been able to understand basic and more elaborated strategies to observe a planet of the solar system, and also learn to use a complex telescope interface to undertake the observations scheduled for a professional program. Encouraged by the positive experience, we plan to expand this recognised work (Prieto et al., 2022) with new research projects adapted to secondary education which may include an active participation in the design of professional observing proposals and/or leading novel research under the supervision of postdoctoral researchers, an initiative successfully implemented in other projects (Dunn et al., 2018). ## Author Contributions Conceptualization, software, resources, data curation, writing--original draft preparation and funding acquisition, by J.P.; methodology, writing--review and editing, supervision and validation, by J.P., J.A.P. and P.O-S.; investigation, visualization and formal analysis, by J.G., M.A., A.S. and G.T. J.P. thanks funding by the program EMERGIA, Junta de Andalucia (Spain), Grant EMERGIA20_00414. J.A.P. and P.O-S. acknowledge the support from Asociacion Amigos de la Ciencia, Diverciencia. We also thank _E. F. Young_ and _M. A. Bullock_ for allowing the students to attend one of their observing runs. We acknowledge the support from _Callie Matulonis_ (Telescope Operator during the run with students) and the rest of staff at the Infrared Telescope Facility, which is operated by the University of Hawaii under contract 80HQTR19D0030 with the National Aeronautics and Space Administration. IRTF(SpeX)
2310.08767
Modeling Fission Gas Release at the Mesoscale using Multiscale DenseNet Regression with Attention Mechanism and Inception Blocks
Mesoscale simulations of fission gas release (FGR) in nuclear fuel provide a powerful tool for understanding how microstructure evolution impacts FGR, but they are computationally intensive. In this study, we present an alternative, data-driven approach, using deep learning to predict instantaneous FGR flux from 2D nuclear fuel microstructure images. Four convolutional neural network (CNN) architectures with multiscale regression are trained and evaluated on simulated FGR data generated using a hybrid phase field/cluster dynamics model. All four networks show high predictive power, with $R^{2}$ values above 98%. The best performing network combines a Convolutional Block Attention Module (CBAM) and InceptionNet mechanisms to provide superior accuracy (mean absolute percentage error of 4.4%), training stability, and robustness on very low instantaneous FGR flux values.
Peter Toma, Md Ali Muntaha, Joel B. Harley, Michael R. Tonks
2023-10-12T23:26:44Z
http://arxiv.org/abs/2310.08767v2
Modeling Fission Gas Release at the Mesoscale using Multiscale DenseNet Regression with Attention Mechanism and Inception Blocks ###### Abstract Mesoscale simulations of fission gas release (FGR) in nuclear fuel provide a powerful tool for understanding how microstructure evolution impacts FGR, but they are computationally intensive. In this study, we present an alternate, data-driven approach, using deep learning to predict instantaneous FGR flux from 2D nuclear fuel microstructure images. Four convolutional neural network (CNN) architectures with multiscale regression are trained and evaluated on simulated FGR data generated using a hybrid phase field/cluster dynamics model. All four networks show high predictive power, with \(R^{2}\) values above 98%. The best performing network combines a Convolutional Block Attention Module (CBAM) and InceptionNet mechanisms to provide superior accuracy (mean absolute percentage error of 4.4%), training stability, and robustness on very low instantaneous FGR flux values. _Keywords:_ _Fission gas release, Machine learning, Deep learning, Convolutional neural networks, Densenet._ ## 1 Introduction While only comprising 4.3% of global energy production in 2020 [1], nuclear power has gathered renewed interest in recent years for offering significant quantities of reliable, low carbon footprint electricity. The first civilian nuclear reactors for electricity generation debuted in the 1950s, but there remains significant unsolved challenges regarding their long-term operation. One such issue is the problem of fission gas release (FGR), most known in the polycrystalline UO\({}_{2}\) fuel pellets used by modern commercial light water reactors (LWRs) [2]. FGR, occuring when a fission reaction produces noble gases such as xenon and krypton as waste products, is an inevitable byproduct of the operation of a LWR. Fission products do not dissolve into the nuclear fuel microstructure but instead form gas bubbles inside the microstructure. These bubbles reduce the thermal conductivity of the fuel [2], reducing the efficiency with which heat can be converted to electricity. Intergranular fission gas bubbles grow and interconnect, eventually providing a path for FGR when gas escapes the fuel into the cladding. This decreases heat transport through the gap between the fuel and cladding, and increases the cladding pressure. The inability to remove heat from the fuel accelerates the degradation of its microstructure by FGR even further, thus establishing a negative feedback loop. Currently, one third of the nuclear fuel rods in standard LWR reactors must be replaced every 12-24 months [3] - a severe limitation given the difficulty of shutting down and restarting a nuclear reactor. Thus, understanding the process of FGR is critical to increasing the efficiency and safety of LWR fuel. The traditional method of estimating FGR from nuclear fuel microstructures is by reduced order models that approximate the physics underlying FGR [2, 4]. More recently, mesoscale models have been developed that spatially resolve the fission gas bubble evolution and give a more accurate description of the fission gas behavior [2], and the phase-field method has emerged as one of the most popular approaches for these mesoscale simulations [5, 6, 7, 8, 9, 10]. One limitation of the phase field method is that it is not computationally feasible to model larger intergranular bubbles and small intragranular bubbles in the same simulation. 
This has been overcome by a recent hybrid model that couples the phase field method of intergranular bubbles with a spatially resolved cluster dynamics model of intragranular fission gas [9]. Presently, the hybrid phase-field/cluster dynamics approach is capable of yielding highly accurate results, yet it is also computationally intensive. In the last decade, advances in data-driven methods such as deep learning provide an approach to develop surrogate FGR models that are much more computationally efficient than the hybrid model. Neural networks, once trained, can process new inputs without the use of specialized multiphysics tools. This paper introduces a physics-agnostic deep learning method based on convolutional neural networks (CNN) trained on the results from mesoscale simulations using the hybrid model to estimate instantaneous FGR flux from 2D microstructure images - a challenging problem, due to FGR dependency on complex spatial features in a microstructure such as the connectivity of fission gas bubbles to the free surface. CNNs are multilayer neural network architectures that infer features in grid-like input data through the application of convolutional operations, which can be construed as filters. The output of these filters represent the features, and the filter weights are trainable neural network parameters. Supervised learning uses training datasets comprised of input-output pairs which are fed into a gradient descent algorithm that finds the network parameters that minimize a loss function comparing the network output to the given training output when a particular input is applied to the network. CNNs were originally developed in computer vision for image classification. However, CNNs can also be applied to solving regression problems for any gridlike data. For example, [11] demonstrated the use of a CNN to infer cyanobacteria counts from hyperspectral images of up to 80 channels. The hybrid FGR model is solved on a finite element mesh, and therefore its output can be used to train a CNN. However, approximating a highly complex and spatially-dependent phenomenon such as FGR requires a comparably complex CNN architecture. For this task, one such CNN structure presents itself: DenseNet. DenseNet [12] is a class of CNN that preserves features from shallower network layers in deeper layers by appending the outputs of all previous layers onto the output of the current layer. Other CNN architectures such as ResNet [13] have more localized interlayer connections, and therefore are less effective at preserving shallower features. Due to its compactness, improved performance, and reduced gradient vanishing with increased depth during training, DenseNet has become extensively utilized in many applications, including imaging [14], superresolution [15], and remote sensing [16]. To address the problem at hand, this study examines the following specific modifications to baseline DenseNet: intermediate regression layers for multiscale feature processing similar to [17], a combined spatial and channel attention mechanism [18], and hybrid InceptionNet [19] blocks. The objective of this study is to train a DenseNet CNN to estimate the instantaneous FGR flux from 2D microstructure images. The training data is taken from hybrid FGR model simulations. The performance of four DenseNet variants equipped with intermediate regression layers are compared: baseline DenseNet, DenseNet with attention, and two DenseNets with attention and hybrid Inception blocks. 
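To make the contrast in connectivity concrete, a minimal Keras-style sketch (our own illustration with arbitrary feature-map sizes, not code from this study) is:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_step(x, channels):
    # ResNet-style connection: only the previous output is added back in,
    # so the channel count stays fixed.
    y = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    return layers.Add()([x, y])

def dense_step(x, growth_rate):
    # DenseNet-style connection: new features are appended, so every later
    # layer still sees all earlier feature maps.
    y = layers.Conv2D(growth_rate, 3, padding="same", activation="relu")(x)
    return layers.Concatenate()([x, y])

x = tf.keras.Input(shape=(64, 64, 16))
print(residual_step(x, 16).shape)  # (None, 64, 64, 16)
print(dense_step(x, 8).shape)      # (None, 64, 64, 24): channels grow by the growth rate
```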
The CNN is a less computationally expensive surrogate for the hybrid FGR model. This paper is organized as follows. The methodology is discussed in Section 2, including the CNN architecture, the training data, and the training and evaluation details. The results are presented in Section 3 and discussed in Section 4, and Section 5 concludes the paper. ## 2 Methodology ### Neural Network Architecture A high-level schematic of the proposed network architecture is shown in Fig. 1. Inspired by the DenseNet-121 architecture [12], the neural network consists of an initial convolutional layer followed by four dense block groups with associated transition layers (TL1 - TL4). Each dense block group is structured as a concatenation of a number of identical dense blocks, as indicated in Fig. 1. TL1 - TL3 consist of a batch normalization (BN) layer, followed by a leaky rectified linear unit (LeakyReLU) activation function layer and a convolutional layer. TL4 consists of only a BN layer, followed by a LeakyReLU layer. Compared to conventional CNNs, in which the output is obtained only from the final convolutional layer, the proposed architecture introduces a multiscale feature regression approach, where the output of the initial convolution layer and of each subsequent dense block group is passed to a fully connected (FC) intermediate regression block (IRB1 - IRB5). Each IRB consists of a global average pooling layer, which produces a spatial average for each channel of the feature set, resulting in a 1D vector that is then fed into a dense FC layer. The outputs of each intermediate regression block are concatenated and passed to a final regression layer with a rectified linear unit (ReLU) activation function that outputs the predicted instantaneous FGR flux \(\hat{y}\) when the network is presented with an \(m\times m\times 1\) image. In this manner, the CNN can make better predictions by using both fine-scale and coarse-scale features. Intermediate classification layers have been successfully demonstrated on multiscale DenseNet variants for image classification problems [20], such as detecting lung cancer using fine-grained features [17], setting a precedent for their use in CNNs. This study compares the performance of four dense block group structures - baseline DenseNet, DenseNet with attention, and two DenseNets with attention and hybrid Inception blocks - all using the same high-level framework shown in Fig. 1. Baseline Dense Block Group - Basic DenseNet. With reference to Fig. 2, a basic DenseNet block group consists of \(N\) dense blocks, with each block consisting of a BN layer followed by a LeakyReLU layer and a convolutional layer. The output of the convolutional layer has \(k\) channels, where \(k\) is the growth rate of the network - the number of channels added to each layer in the network compared to the previous layer. The output of each dense block is concatenated with its input; this is the mechanism by which features from earlier in the network are passed to deeper layers. Compared to the standard DenseNet-121 architecture, the proposed network does not use bottleneck or compression layers, in order to avoid loss of potentially useful features. Therefore, assuming that the size of the input feature is \(n\times n\times c\), the output of the basic dense block group with \(N\) dense blocks has a size of \(n\times n\times(c+k\cdot N)\).
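Putting these pieces together, a simplified Keras skeleton of the baseline multiscale network (our own sketch, not the released implementation; transition layers are abbreviated and exact spatial sizes differ slightly from Table 1, but the block counts, growth rate, and the 4,288-wide regression input follow the description above) could read:

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block_group(x, n_blocks, k):
    # Baseline group: each block is BN -> LeakyReLU -> 3x3 convolution with k channels,
    # and its output is concatenated with its input (no bottleneck or compression).
    for _ in range(n_blocks):
        y = layers.BatchNormalization()(x)
        y = layers.LeakyReLU(0.2)(y)
        y = layers.Conv2D(k, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])
    return x

def intermediate_regression_block(x):
    # IRB: spatial average per channel, followed by a fully connected layer.
    v = layers.GlobalAveragePooling2D()(x)
    return layers.Dense(v.shape[-1])(v)

k = 32
inputs = tf.keras.Input(shape=(109, 109, 1))          # 101x101 image after 4-pixel zero padding
x = layers.Conv2D(2 * k, 7, strides=2, padding="same")(inputs)
irb_outputs = [intermediate_regression_block(x)]
for i, n_blocks in enumerate((6, 12, 24, 16)):
    x = dense_block_group(x, n_blocks, k)
    if i < 3:   # TL1-TL3, simplified: 1x1 convolution followed by 2x2 average pooling
        x = layers.AveragePooling2D(2, padding="same")(layers.Conv2D(x.shape[-1], 1)(x))
    irb_outputs.append(intermediate_regression_block(x))
# Concatenated IRB width: 64 + 256 + 640 + 1408 + 1920 = 4288 features
y_hat = layers.Dense(1, activation="relu")(layers.Concatenate()(irb_outputs))
model = tf.keras.Model(inputs, y_hat)
```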
Dense Attention Block Group. Instantaneous FGR flux is highly dependent on spatial relationships in a microstructure, such as fission gas bubble connectivity. While baseline CNNs excel at detecting singular features, they struggle to capture these more complex relationships because global pooling of feature maps results in a loss of spatial information. Attention mechanisms have been introduced to address this problem. Initially developed for time-series data [14], attention mechanisms allow a network to dynamically focus on relevant features in each input image. Figure 1: Schematic of the DenseNet architecture applied to predict instantaneous FGR flux from 2D microstructures. Figure 2: Schematic of the Baseline Dense Block Group - Basic DenseNet. There is no bottleneck nor compression. Figure 3: Schematic of the Dense Attention Block Group. The first notable CNN attention mechanism was the squeeze-and-excitation (SE) module [21], which only enabled channel attention. Illustrated in Fig. 3, the Convolutional Block Attention Module (CBAM) [18] combines an SE channel attention module with a spatial attention module. CBAM is attached to the output of a dense block. The CBAM-refined feature map has the same dimensionality as the output feature map of the dense block: \(n\times n\times(c+k\cdot N)\). Dense Inception Attention Block Groups. The multiscale detection abilities of a DenseNet can be further enhanced by replacing the standard single-convolution layer in a DenseNet block with a set of layers based on the InceptionNet architecture, as described in [19] - see Fig. 4 and Fig. 5. Dense Inception blocks are able to both reduce overfitting and improve a network's fine-grained feature detection ability - useful for capturing a microstructure's complex geometry. A Dense Inception block divides the input channel-wise into three branches and performs different combinations of pooling, convolution, and activation function operations on each branch, allowing the block to detect a significantly larger range of features across different scales while reducing the overall number of required trainable parameters. Inspired by [19], we consider two Inception block architectures. Inception-A has a branch with a cascade of two \(3\times 3\) convolution layers and an average pooling branch, while Inception-B replaces the cascade with a branch consisting of a \(3\times 3\) max pooling layer followed by a \(1\times 1\) convolution layer. All branches in Inception-A use the same LeakyReLU activation function, while Inception-B uses three different activation functions: LeakyReLU, a leaky rectified linear unit activation with an upper limit of \(6.0\) (LeakyReLU6), and an exponential linear unit activation function (ELU). Figure 4: Schematic of the Dense Inception-A Attention Block Group. Finally, a CBAM block can be attached to the output of a Dense Inception block group to gain the benefits of attention mechanisms. In this study, we do not use a bottleneck or compression inside a Dense Inception block. Therefore, the output feature map of a Dense Inception Attention block group retains the same dimensionality as that of a Dense Attention block group, assuming both have \(N\) blocks.
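For illustration, the CBAM refinement step can be written compactly as follows (our own sketch; the shared-MLP reduction ratio \(r=8\) and the \(7\times 7\) spatial-attention kernel follow the parameter list in the next subsection):

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, ratio=8):
    """Convolutional Block Attention Module: channel attention (shared MLP over
    average- and max-pooled descriptors) followed by spatial attention (7x7 conv
    over channel-wise mean and max maps). Output shape equals input shape."""
    c = x.shape[-1]
    mlp = tf.keras.Sequential([layers.Dense(c // ratio, activation="relu"),
                               layers.Dense(c)])
    channel = tf.sigmoid(mlp(layers.GlobalAveragePooling2D()(x)) +
                         mlp(layers.GlobalMaxPooling2D()(x)))[:, None, None, :]
    x = x * channel
    spatial = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        tf.concat([tf.reduce_mean(x, -1, keepdims=True),
                   tf.reduce_max(x, -1, keepdims=True)], axis=-1))
    return x * spatial

features = tf.random.normal((1, 55, 55, 256))   # e.g. the output of a dense block group
refined = cbam(features)                         # same shape: (1, 55, 55, 256)
```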
### Network Architecture Parameters The network architecture parameters chosen for the present study are summarized below. 1. **Image input** - \(4\times 4\) zero padding is applied to the input images for the network to better detect the surfaces of the microstructure. 2. **Initial convolution layer** (see Fig. 1) - \(7\times 7\) kernel, stride = 2, the same padding, and \(2k\) channels (where \(k\) is the growth rate). Using the same padding maintains the input's spatial dimensions in the output. 3. **Transition layers** (see Fig. 1) - \(1\times 1\) convolution, stride = 1, the same padding, followed by \(2\times 2\) average pooling with stride = 2 and the same padding. 4. **Convolution layers in baseline DenseNet block groups** (see Fig. 2) - \(3\times 3\) kernel, stride = 1, the same padding, and \(k\) channels. 5. **Convolution layer in spatial attention module from CBAM** (see Fig. 3) - \(7\times 7\) kernel, stride = 1, the same padding, and 1 channel. 6. **Dense Inception-A block configuration** (see Fig. 4) - The first branch uses two \(3\times 3\) convolution layers, stride = 1, and the same padding. The second branch uses a \(3\times 3\) convolution layer, stride = 1 and the same padding. The third branch uses a \(3\times 3\) average pooling layer, stride = 1, the same padding, and a \(1\times 1\) convolution layer with stride = 1 and the same padding. 7. **Dense Inception-B block configuration** (see Fig. 5) - The first branch uses a \(3\times 3\) max pooling layer with stride = 1, and a \(1\times 1\) convolution layer with stride = 1 and the same padding. The second branch uses a \(3\times 3\) convolution layer, stride = 1 and the same padding. The third branch uses a \(3\times 3\) average pooling layer, stride = 1, the same padding, and a \(1\times 1\) convolution layer with stride = 1 and the same padding. Figure 5: Schematic of the Dense Inception-B Attention Block Group. 8. **Activation functions** - Except for the final layer and where otherwise noted, LeakyReLU activation (\(\alpha=0.2\)) is used throughout the network to prevent neuron "death." If a high gradient is applied to a neuron using conventional ReLU activation, the neuron could be set to a very small value and not be able to recover, i.e. "die." LeakyReLU enables the reactivation of "dead" neurons. The max pooling branch inside the Dense Inception-B block (see Fig. 5) uses LeakyReLU6 (\(\alpha=0.2\)), which is simply LeakyReLU with a maximum value of 6. The Inception-B \(3\times 3\) convolution branch uses ELU activation with \(\alpha=0.1\). 9. **Final regression layer** - The final FC layer uses ReLU activation with a single output neuron to output the final instantaneous FGR flux prediction, ensuring that the network does not output physically impossible negative values. 10. **Attention ratio** - \(r=8\) is used for networks with CBAM (see Fig. 3) in the shared MLP as part of the squeeze mechanism specific to the channel attention module. 11. **Growth rate** - \(k=32\) is used for baseline DenseNet and Dense Attention architectures. \(k=33\) is used for the Dense Inception Attention architecture, as \(k\) must be divisible by 3 in this case. Table 1 summarizes the feature map dimensions throughout the networks examined in this study, assuming input image dimensions of \(101\times 101\times 1\) (\(109\times 109\times 1\) after \(4\times 4\) zero padding). Notably, both Dense Inception + CBAM configurations have approximately half the trainable parameters of both baseline DenseNet and DenseNet + CBAM, pointing to a more efficient architecture. The size of the final regression layer's input vector is 4,288 for the baseline DenseNet and DenseNet + CBAM networks, and 4,422 for the Dense Inception + CBAM networks. ### Training and Evaluation Data As FGR and gas bubble growth inside fuel rods occur during the active operation of a nuclear reactor, it is very difficult to obtain in situ data characterizing the evolving bubble structures.
It is more practical to generate synthetic data that includes both the microstructure and FGR in the high volumes required for neural network training. This study uses the hybrid phase field/cluster dynamics model of fission gas behavior developed by Kim et al. [9]. The phase field model is implemented in the mesoscale MARMOT code based on the open-source Multiphysics Object-Oriented Simulation Environment (MOOSE) [22]. The cluster dynamics model is implemented \begin{table} \begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**DenseNet**} & \multicolumn{1}{c|}{**DenseNet**} & \multicolumn{1}{c|}{**Inception A/B**} \\ \multicolumn{1}{c|}{} & **(k=32)** & **+ CBAM (k=32)** & **+ CBAM (k=33)** \\ \hline **Blocks/Layers** & **Output Size** & **Output Size** & **Output Size** \\ \hline Convolution & \(55\times 55\times 64\) & \(55\times 55\times 64\) & \(55\times 55\times 66\) \\ \hline Intermediate Regression Block 1 & 64 & 64 & 66 \\ \hline _Dense Block Group 1_ & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} \\ \hline Basic DenseNet Block \(\times 6\) & \(55\times 55\times 256\) & \(55\times 55\times 256\) & N/A \\ \hline Dense Inception Block \(\times 6\) & N/A & N/A & \(55\times 55\times 264\) \\ \hline CBAM & N/A & \(55\times 55\times 256\) & \(55\times 55\times 264\) \\ \hline Transition Layer 1 & \(27\times 27\times 256\) & \(27\times 27\times 256\) & \(27\times 27\times 264\) \\ \hline Intermediate Regression Block 2 & 256 & 256 & 264 \\ \hline _Dense Block Group 2_ & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Basic DenseNet Block \(\times 12\) & \(27\times 27\times 640\) & \(27\times 27\times 640\) & N/A \\ \hline Dense Inception Block \(\times 12\) & N/A & N/A & \(27\times 27\times 660\) \\ \hline CBAM & N/A & \(27\times 27\times 640\) & \(27\times 27\times 660\) \\ \hline Transition Layer 2 & \(13\times 13\times 640\) & \(13\times 13\times 640\) & \(13\times 13\times 660\) \\ \hline Intermediate Regression Block 3 & 640 & 640 & 660 \\ \hline _Dense Block Group 3_ & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Basic DenseNet Block \(\times 24\) & \(13\times 13\times 1408\) & \(13\times 13\times 1408\) & N/A \\ \hline Dense Inception Block \(\times 24\) & N/A & N/A & \(13\times 13\times 1452\) \\ \hline CBAM & N/A & \(13\times 13\times 1408\) & \(13\times 13\times 1452\) \\ \hline Transition Layer 3 & \(6\times 6\times 1408\) & \(6\times 6\times 1408\) & \(6\times 6\times 1452\) \\ \hline Intermediate Regression Block 4 & 1408 & 1408 & 1452 \\ \hline _Dense Block Group 4_ & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Basic DenseNet Block \(\times 16\) & \(6\times 6\times 1920\) & \(6\times 6\times 1920\) & N/A \\ \hline Dense Inception Block \(\times 16\) & N/A & N/A & \(6\times 6\times 1980\) \\ \hline CBAM & N/A & \(6\times 6\times 1920\) & \(6\times 6\times 1980\) \\ \hline Transition Layer 4 & \(6\times 6\times 1920\) & \(6\times 6\times 1920\) & \(6\times 6\times 1980\) \\ \hline Intermediate Regression Block 5 & 1920 & 1920 & 1980 \\ \hline Dense Linear Layer & 1 & 1 & 1 \\ \hline Number of Trainable Parameters & 25,042,177 & 26,583,331 & 15,055,569 (A) & 13,278,739 (B) \\ \hline \end{tabular} \end{table} Table 1: Summary of feature map dimensions throughout examined networks. in the Xolotl library [23]. The two codes are coupled using the MultiApp capability available in MOOSE [24]. 
For more detail about the fission gas model, see the paper by Kim et al. [9]. The hybrid model has been modified to include fast grain boundary and surface diffusion and a free surface to enable FGR [10]. To generate the training data, we apply the modified hybrid fission gas model [10] to simulate the microstructure evolution and the flux of fission gas from the left surface in 2D 10-grain microstructures for 365 days at 1500 K. The simulations are carried out in a 15 \(\mu\)m by 15 \(\mu\)m domain with 20 initial 480 nm radius fission gas bubbles. 100 elements are used in both the \(x\)- and \(y\)-directions, resulting in a mesh size of 150 nm. The simulation uses an adaptive timestepping scheme with an initial timestep of 10 seconds; each simulation uses up to 700 time steps, and the instantaneous flux of fission gas from the left surface is calculated for the microstructure at each time step. Zero flux boundary conditions are applied on the right boundary and periodic boundary conditions on the top and bottom boundaries. The simulation is repeated for 100 different initial grain boundary and fission gas bubble structures, resulting in over 72,000 microstructure images and corresponding instantaneous FGR flux values. 2D simulations are used here to reduce the computational cost of generating the training data. Each microstructural image consists of a set of floating point values ranging between 0 and 1, corresponding to the phase parameter of the microstructure at each grid point. A value of 0 represents a void, a value of 1 represents UO\({}_{2}\) nuclear fuel, and a value between 0 and 1 represents a grain boundary or void surface. In order to improve computational efficiency, MARMOT uses automatic mesh adaptivity to automatically coarsen or refine different regions of a simulation mesh, simulating low-error regions such as the interiors of grains with less fidelity than high-error regions such as grain boundaries. Because of this, the raw phase parameter image data output by MARMOT does not correspond to a regular 2D grid, and therefore it is necessary to interpolate each image onto a \(101\times 101\times 1\) grid during preprocessing. Fig. 6 shows an example of an input microstructure image. Figure 6: Microstructure image obtained from the hybrid FGR model. Red indicates UO\({}_{2}\), blue a void, and all other colors represent grain boundaries or void surfaces. To minimize floating point errors, the FGR flux values of the original data are scaled up by a factor of 1000 for training, then scaled back to the original range for evaluation. The instantaneous FGR value of a microstructure does not change if it is mirrored across the axis perpendicular to the free surface on the left, allowing for doubling of the dataset to approximately 144,000 images by adding, for each image, its vertically-mirrored version with the same instantaneous FGR flux. ### Training and Evaluation Details The specified network architectures are implemented, trained, and evaluated in Python 3, using the Tensorflow 2.7.0 framework with Keras [25]. The code is executed on the University of Florida's HiPerGator supercomputer [26] on a partition with two Intel Xeon Gold 6142 CPUs @ 2.60 GHz and one NVIDIA A100 GPU. On this configuration, training takes approximately 18-24 hours, depending on the neural network architecture. Mean absolute error (MAE), standard for regression problems, is used as the loss function for both training and testing.
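As a concrete illustration of the scaling and mirroring steps described above, a short NumPy sketch (our own function and array names; the interpolation from the adaptive mesh onto the regular grid is omitted) is:

```python
import numpy as np

def augment(images, fluxes, scale=1000.0):
    """images: (N, 101, 101, 1) phase-parameter maps, fluxes: (N,) FGR flux values.
    Returns the dataset doubled by mirroring each image across the axis
    perpendicular to the free surface (left edge); the FGR flux is unchanged
    under this reflection. Fluxes are scaled up to reduce floating point error."""
    mirrored = images[:, ::-1, :, :]              # flip top-bottom (rows assumed to run along y)
    x = np.concatenate([images, mirrored], axis=0)
    y = np.concatenate([fluxes, fluxes], axis=0) * scale
    return x, y

# e.g. with a dummy batch
imgs = np.random.rand(4, 101, 101, 1).astype(np.float32)
flx = np.random.rand(4).astype(np.float32)
x_train, y_train = augment(imgs, flx)
print(x_train.shape, y_train.shape)   # (8, 101, 101, 1) (8,)
```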
As the absolute instantaneous FGR flux output values are scaled to arbitrary units, mean absolute percentage error (MAPE) is used as the primary evaluation metric. Stochastic Gradient Descent is used as the optimizer, as in the original DenseNet paper [12], with a fixed learning rate of \(\alpha=0.001\). L2 weight regularization with weight decay = \(10^{-4}\) is used in all convolutional layers. All weights are initialized using the He method [27]. Each network is trained for a total of 100 epochs, with a batch size of 32 images. This study uses a 90%/5%/5% training/evaluation/test split of the data set, with performance on the test set reported as final results. The input data is shuffled randomly to prevent spurious sequential effects. Linear regression analysis is performed using the MATLAB [28]_fitlm_ function to calculate the R-squared value of predicted vs. simulated instantaneous FGR flux values for each tested network, with values closer to 1 indicating greater network predictive power. For interpretability, saliency maps are generated using a Tensorflow [25] GradientTape-enabled backpropagation visualization method [29]. These maps provide a visual representation of the regions of an input image that activate the network, indicating which regions of the microstructure have the largest impact on the FGR. ## 3 Results Figure 7 shows a plot of the MAE training loss vs. training epoch for each examined network. Baseline DenseNet underperforms compared to other architectures for all epochs. After 40 epochs, all the attention-enabled networks have reached comparable loss values. The DenseNet Attention (DenseNet + CBAM) network without Inception blocks generates the lowest MAE training loss for all epochs, but this could also be a sign of overfitting, as will be discussed later in this section. Figure 8 compares the validation MAPE vs. training epoch behavior across the four tested networks. DenseNet Inception-A + CBAM appears to exhibit better training stability from epoch to epoch, judging from reduced fluctuations in validation MAPE during the training process. Table 2 summarizes the training and evaluation performance of the examined DenseNet architectures. DenseNet + CBAM without Inception shows the lowest training MAPE (2.88%), yet also shows the highest validation MAPE (6.75%) - an indicator of overfitting to the training set. The Dense Inception + CBAM networks shows comparable training and validation MAPE values, with Dense Inception-A + CBAM providing the lowest validation MAPE (4.13%). This behavior points to greater generalizability on the part of the Inception blocks, which is the intended outcome of these architectures. Dense Inception-A + CBAM also provides the lowest MAPE on the test set (4.40%). To examine the performance of the examined networks for very low instantaneous FGR flux values, we calculate an adjusted test MAPE by filtering out all input images with associated instantaneous FGR flux values of less than 0.001 (unscaled), with only 47 out of 7,227 test data points fitting this criterion. Based on the difference between the test MAPE and the adjusted test MAPE, Dense Inception-A + CBAM appears to be the most robust configuration with respect to very low instantaneous FGR flux values, followed by Dense Inception-B + CBAM. DenseNet + CBAM without Inception appears to be the least robust architecture with respect to this criterion. Figure 9 illustrates the linearity of the predicted vs. simulated instantaneous FGR flux values for each network. 
All networks exhibited excellent \(R^{2}\) values (above 98%). Two networks, DenseNet + CBAM and DenseNet Inception-A + CBAM, have \(R^{2}\) values above 99%. Figure 7: Plot of MAE training loss vs. training epoch for each examined network. The trained neural network surrogate models can rapidly predict instantaneous FGR flux when presented with a novel microstructure image input. The following evaluations provide an indication of the computational acceleration that can be achieved compared to the reference mesoscale multiphysics simulations. Note that the latter generate both 2D microstructure images and FGR flux values, whereas the CNN surrogate models studied generate only instantaneous FGR flux value predictions. The mesoscale hybrid FGR model, with the parameters described in Section 2.3, is executed on the University of Florida's HiPerGator [26] on an 80-CPU (Intel Xeon Gold 6142 CPU @ 2.60 GHz) partition with no GPU acceleration. The mean execution time per time step of the hybrid model is found to be approximately 26.5 seconds. The four neural network models are evaluated using two different HiPerGator configurations. The first configuration is identical to the one used in the reference multiphysics simulations, while the second configuration uses a single Intel Xeon Gold 6142 CPU @ 2.60 GHz and one NVIDIA A100 GPU. Figure 8: Plot of validation MAPE vs. training epoch for each examined network. \begin{table} \begin{tabular}{|l|r|r|r|r|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**Baseline**} & \multicolumn{1}{c|}{**DenseNet**} & \multicolumn{1}{c|}{**Inception-A**} & \multicolumn{1}{c|}{**Inception-B**} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**DenseNet**} & \multicolumn{1}{c|}{**+ CBAM**} & \multicolumn{1}{c|}{**+ CBAM**} & \multicolumn{1}{c|}{**+ CBAM**} \\ \hline \multicolumn{5}{|l|}{_Training and Validation Data Set Performance @ 100 Training Epochs_} \\ \hline **Training Loss (MAE)** & 2.94 & 2.40 & 2.55 & 2.50 \\ \hline **Training MAPE (\%)** & 4.31 & 2.88 & 3.05 & 3.23 \\ \hline **Validation Loss (MAE)** & 4.00 & 3.59 & 3.83 & 3.61 \\ \hline **Validation MAPE (\%)** & 6.72 & 6.75 & 4.13 & 4.63 \\ \hline \multicolumn{5}{|l|}{_Test Data Set Performance_} \\ \hline **Test MAPE (\%)** & 8.56 & 10.85 & 4.40 & 5.14 \\ \hline **Adjusted Test MAPE (\%)** & 5.17 & 3.52 & 3.56 & 4.00 \\ \hline \(R^{2}\) **(\%)** & 98.87 & 99.23 & 99.10 & 98.88 \\ \hline **Slope** & 0.9793 & 0.9884 & 0.9687 & 0.9852 \\ \hline **Intercept** & 0.0047 & -0.0009 & 0.0016 & 0.0027 \\ \hline \end{tabular} \end{table} Table 2: Comparative training and evaluation performance of examined DenseNet architectures. Figure 9: Plot of predicted vs. simulated instantaneous FGR flux for each examined network. Table 3 lists the mean execution time for instantaneous FGR flux prediction per microstructure image for the four neural network models and the two configurations. Using 80 CPUs and no GPUs, the execution time of the four neural network models varies by only around 12%. Dense Inception-A + CBAM is the most computationally efficient. DenseNet+CBAM is the least computationally efficient. These execution times are around three orders of magnitude shorter than for simulations using the hybrid model. Using 1 CPU and 1 GPU, the execution time of the four models varies by around 28%. Baseline DenseNet and DenseNet+CBAM are the most computationally efficient for this configuration, and Dense Inception-A+CBAM is the least computationally efficient.
The times with a GPU are around one order of magnitude shorter than with 80 CPUs and are four orders of magnitude shorter than the hybrid model. \begin{table} \begin{tabular}{|l|l|l|l|l|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{**Mean Execution Time per Image (ms)**} \\ \hline \multirow{2}{*}{**Configuration**} & **Baseline** & **DenseNet** & **Inception-A** & **Inception-B** \\ & **DenseNet** & **+ CBAM** & **+ CBAM** & **+ CBAM** \\ \hline 80 CPUs, No GPU & 25.1 & 28.3 & 24.7 & 26.6 \\ \hline 1 CPU, 1 GPU & 2.6 & 2.6 & 3.4 & 2.9 \\ \hline \end{tabular} \end{table} Table 3: Mean execution times of examined neural network architectures for predicting instantaneous FGR flux from single image inputs. Note that the execution time for one time step of the hybrid model using 80 CPUs and no GPUs is 26.5 s (\(26.5\times 10^{3}\) ms). ## 4 Discussion These results indicate that densely-connected CNNs possess a strong ability to approximate highly complex and nonlinear phenomena such as FGR in 2D microstructures. After initial training, the networks can rapidly make predictions of instantaneous FGR flux without resorting to computationally-intensive simulations using the hybrid FGR model. The network that provides overall the lowest error is Dense Inception-A + CBAM. It is also the most computationally efficient using CPUs. It is the least efficient on a GPU, but all of the times were very short. Furthermore, the studied approach is physics-agnostic and could potentially be used in other regression problems using grid-like input data and a scalar output. Once trained, the networks predict the instantaneous FGR flux for a 2D microstructure image. In addition, the networks can provide information regarding which aspects of the microstructure have the largest impact on the FGR. This is accomplished using saliency maps [29] generated using Tensorflow's [25] GradientTape functions. Figure 10(a) gives an example of such a saliency map. A blue tint on a pixel indicates greater impact of that pixel on the FGR. Notably, pixels on the left side of the image that are closer to the free surface have a larger impact, as do those near voids and grain boundaries. This is consistent with the fast grain boundary and surface diffusion included in the phase field model. It is expected that structures with greater grain boundary connectivity to the free surface will exhibit greater instantaneous FGR flux. Figure 10(b) shows the saliency map for an image with a low instantaneous FGR value. The map shows that only the pixels near the bubbles on the top and bottom surfaces near the left surface have a significant impact on the FGR. The results presented in this study are subject to several limitations. The neural network models are trained on simulated data generated using only 10-grain microstructures, a temperature set at 1500 K, and a free surface on the left side of the microstructure. Further work is needed to increase the network generalizability by expanding the training dataset with simulation runs carried out at different temperatures and with varying free surface configurations and grain counts. The modified network architectures would need to accept temperature and free surface location as additional input parameters. Another natural extension of this work is to process 3D microstructures using 3D, rather than 2D, convolutions throughout the network layers.
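For reference, a minimal GradientTape-based saliency computation of this kind (our own sketch; `model` stands for any of the trained regressors) reads:

```python
import tensorflow as tf

def saliency_map(model, image):
    """image: tensor of shape (1, H, W, 1). Returns |d(prediction)/d(pixel)|,
    normalized to [0, 1], highlighting the pixels with the largest influence
    on the predicted instantaneous FGR flux."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image, training=False)
    grads = tf.abs(tape.gradient(prediction, image))[0, :, :, 0]
    return grads / (tf.reduce_max(grads) + 1e-12)
```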
This work, predicting the instantaneous fission gas release given a microstructure image, is a necessary first step toward our ultimate goal of predicting both the FGR flux and the evolution of the UO\({}_{2}\) microstructure over time. Such a model could be constructed by incorporating a CNN model into a recurrent neural network. The pretrained CNN could function as the encoder in a U-Net [30] architecture with Long Short-Term Memory [31] layers used for the recurrent component. ## 5 Conclusions We tested four DenseNet-inspired neural network architectures modified to implement multiscale regression for predicting instantaneous FGR flux from 2D nuclear fuel microstructure images. These architectures were trained using 2D data from a hybrid phase field/cluster dynamics model of bubble evolution and FGR. Figure 10: Examples of Tensorflow GradientTape saliency maps for the Dense Inception-A + CBAM network prediction for (a) a case with grain boundaries contacting the free surface (simulated FGR flux = 0.13424, predicted FGR flux = 0.13456, absolute percentage error = 0.23%), and (b) a case with no grain boundaries contacting the free surface (simulated FGR flux = 0.00983, predicted FGR flux = 0.00989, absolute percentage error = 0.62%). The images are shaded by the saliency, which indicates the impact of that pixel on the FGR. Voids and grain boundaries are shown in black. We compared a baseline DenseNet configuration with an attention-enabled DenseNet and two densely-connected, attention-enabled Inception models. All four networks exhibited very high predictive power, with two configurations - DenseNet with attention and Dense Inception-A with attention - demonstrating \(R^{2}\) values of above 99%. Dense Inception-A with attention produced the lowest validation and test MAPE out of the four networks, as well as exhibiting the greatest robustness with respect to very low instantaneous FGR flux values and the greatest degree of stability during training. Saliency map visualizations showed that the trained models could also indicate which pixels have the largest impact on the instantaneous FGR. Preliminary computational time evaluations indicated the potential of surrogate neural network models to achieve several orders of magnitude acceleration compared to mesoscale multiphysics simulations. ## 6 CRediT Authorship Contribution Statement **Peter Toma**: Investigation, Methodology, Software, Data Curation, Writing - Original Draft, Visualization. **Md Ali Muntaha**: Supervision, Conceptualization, Methodology, Formal Analysis, Writing - Review & Editing. **Joel B. Harley**: Validation, Methodology, Writing - Review & Editing. **Michael R. Tonks**: Project Administration, Supervision, Conceptualization, Resources, Writing - Review & Editing. ## 7 Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## 8 Acknowledgements We express our gratitude for the high-performance computing resources provided by the University of Florida's HiPerGator clusters, which facilitated the execution of computationally intensive 2D simulations. The time of Tonks and Muntaha for this work was supported by the U. S.
Department of Energy, Office of Nuclear Energy and Office of Science, Office of Advanced Scientific Computing Research through the Scientific Discovery through Advanced Computing (SciDAC) project on Simulation of Fission Gas through the grant DOE DE-SC0018359 at the University of Tennessee. ## 9 Data Availability The MOOSE input files used to generate the simulation results in this paper, the Python source code used to train and evaluate the networks, and the trained networks can be obtained from the authors upon reasonable request.
2310.07332
Dust Coagulation Reconciles Protoplanetary Disk Observations with the Vertical Shear Instability. I. Dust Coagulation and the VSI Dead Zone
Protoplanetary disks exhibit a vertical gradient in angular momentum, rendering them susceptible to the Vertical Shear Instability (VSI). The most important condition for the onset of this mechanism is a short timescale of thermal relaxation ($\lesssim 0.1$ orbital timescales). Simulations of fully VSI-active disks are characterized by turbulent, vertically extended dust layers. This is in contradiction with recent observations of the outer regions of some protoplanetary disks, which appear highly settled. In this work, we demonstrate that the process of dust coagulation can diminish the cooling rate of the gas in the outer disk and quench the VSI activity. Our findings indicate that the turbulence strength is especially susceptible to variations in the fragmentation velocity of the grains. A small fragmentation velocity of $\approx 100 \mathrm{\, cm \,s^{-1}}$ results in a fully turbulent simulation, whereas a value of $\approx 400 \mathrm{\, cm \,s^{-1}}$ results in a laminar outer disk, which is consistent with observations. We show that VSI turbulence remains relatively unaffected by variations in the maximum particle size in the inner disk regions. However, we find that dust coagulation can significantly suppress the occurrence of VSI turbulence at larger distances from the central star.
Thomas Pfeil, Til Birnstiel, Hubert Klahr
2023-10-11T09:24:19Z
http://arxiv.org/abs/2310.07332v2
# Dust Coagulation Reconciles Protoplanetary Disk Observations with the Vertical Shear Instability ###### Abstract Protoplanetary disks exhibit a vertical gradient in angular momentum, rendering them susceptible to the Vertical Shear Instability (VSI). The most important condition for the onset of this mechanism is a short timescale of thermal relaxation (\(\lesssim 0.1\) orbital timescales). Simulations of fully VSI-active disks are characterized by turbulent, vertically extended dust layers. This is in contradiction with recent observations of the outer regions of some protoplanetary disks, which appear highly settled. In this work, we demonstrate that the process of dust coagulation can diminish the cooling rate of the gas in the outer disk and quench the VSI activity. Our findings indicate that the turbulence strength is especially susceptible to variations in the fragmentation velocity of the grains. A small fragmentation velocity of \(\approx\)100 cm s\({}^{-1}\) results in a fully turbulent simulation, whereas a value of \(\approx\)400 cm s\({}^{-1}\) results in a laminar outer disk, which is consistent with observations. We show that VSI turbulence remains relatively unaffected by variations in the maximum particle size in the inner disk regions. However, we find that dust coagulation can significantly suppress the occurrence of VSI turbulence at larger distances from the central star. protoplanetary disks -- dust evolution -- hydrodynamics -- methods: numerical ## 1 Introduction Around 1 % of the mass of protoplanetary disks is initially composed of solids (Lodders, 2003; Magg et al., 2022). Despite its small contribution to the overall mass budget, this dust is the building material for planetesimals and planets and an essential observable for infrared and radio observations. It can have a considerable influence on the gas dynamics within the disk via drag forces (Weidenschilling, 1980; Youdin and Goodman, 2005) and is the main source of opacity. Therefore, cooling and heating are mostly determined by the solids for the bulk of the disk (Semenov et al., 2003; Woitke, 2015; Malygin et al., 2017). Many linear instabilities of the gas flow depend on the local rate of thermal relaxation (Klahr and Bodenheimer, 2003; Petersen et al., 2007a,b; Klahr and Hubbard, 2014; Lin and Youdin, 2015; Marcus et al., 2015; Lyra and Umurhan, 2019) or the ionization state of the gas (Balbus and Hawley, 1991; Blaes and Balbus, 1994), and are therefore sensitive to the assumed dust size distribution (Barranco et al., 2018; Fukuhara et al., 2021; Kawasaki and Machida, 2023). In this work, we are specifically interested in the evolution of the Vertical Shear Instability (VSI, Urpin and Brandenburg, 1998), which requires a short thermal relaxation time of the gas (Lin and Youdin, 2015; Manger et al., 2021; Fukuhara et al., 2021). The VSI has been studied in much detail in isothermal and adiabatic disk models at various rates of \(\beta\) cooling (e.g., Nelson et al., 2013) and in models with radiative transfer (e.g., Stoll and Kley, 2016; Stoll et al., 2017; Flores-Rivera et al., 2020). Due to the numerical obstacles of incorporating dust evolution models in hydrodynamic simulations (Drazkowska et al., 2014; Gonzalez et al., 2017; Drazkowska et al., 2019; Lombart et al., 2022), most previous studies consider a static dust population, perfectly coupled to the gas.
These studies often aim for a detailed analysis of the instability mechanism itself (e.g., Nelson et al., 2013; Manger et al., 2021; Svanberg et al., 2022). They showed the VSI's ability to cause large-scale vortex formation (Richard et al., 2016; Manger and Klahr, 2018; Pfeil and Klahr, 2021) and strong corrugations in the dust layer (Stoll and Kley, 2016; Flores-Rivera et al., 2020). Simulations assuming perfectly coupled dust or isothermal conditions cannot, however, model the conditions in real protoplanetary disks, for which observations show an evolved dust population (Perez et al., 2012; Tazzari et al., 2016; Huang et al., 2018; Ohashi and Kataoka, 2019; Sierra et al., 2021), substructures (ALMA-Partnership et al., 2015; Andrews et al., 2018; Dong et al., 2018), and planets (Keppler et al., 2018). In this work, we intend to go one step further by considering an evolved--yet static--dust population in two-dimensional simulations of smooth protoplanetary disks. Our work is motivated by the results of Dullemond et al. (2022), which show that VSI turbulence in an isothermal disk model is not consistent with observations of thin dust layers in protoplanetary disks. In Pfeil and Klahr (2021), we explored the impact of a more realistic cooling time prescription on the strength of VSI turbulence. For this, we assumed the presence of a static, \(\mu\)m-sized dust population in the inner parts of a protoplanetary disk (at \(\sim 5\,\mathrm{au}\)). For these setups, we found that the collisional decoupling of the gas and dust particles inhibits thermal relaxation in the disk atmosphere and thus reduces VSI turbulence. The respective collisional coupling time scale depends on the size distribution and is, thus, sensitive to the fragmentation velocity and other dust properties. Fukuhara et al. (2021) further studied this effect in models with a more detailed prescription of the dust size distribution. They found that coagulation can indeed inhibit the VSI by depleting the number of small grains that provide radiative cooling. In their most recent study, Fukuhara et al. (2023) attempted to simulate this in a more self-consistent way, by taking into account the effect of the VSI on the diffusivity and the cooling times. Since they could not afford to dynamically evolve the dust population within their hydrodynamic simulations, they relied on analytic prescriptions for the cooling time for a static dust size distribution. In this work, we study the effect of a more realistic steady-state dust distribution for varying coagulation parameters using DustPy (Stammler and Birnstiel, 2022) and PLUTO (Mignone et al., 2007). We deduce thermal relaxation times from dust coagulation models in Section 3, which are then implemented in hydrodynamic simulations, from which we study the VSI activity in Section 4. This makes it possible to study the influence of dust coagulation and the coagulation parameters on VSI turbulence. These steps are schematically displayed in Figure 1. In the next step, we introduce passive dust fluids to our simulations in Section 4.1 to study the effect of the emerging VSI turbulence on the thickness of the dust layer. To make our results comparable to observations, we create synthetic intensity maps with RADMC-3D (Dullemond et al., 2012) in Section 5.
## 2 Theory ### Cooling Requirements for the Vertical Shear Instability Vertical shear, in the geophysical context also known as thermal wind (Holton and Hakim, 2012), is a consequence of the radial temperature gradient in the vertically stratified protoplanetary disks. The temperature gradient itself is maintained by stellar irradiation. Consequently, fluid parcels can be displaced upward into a region of lower specific angular momentum experiencing an outward acceleration. A perturbation along such a trajectory violates Rayleigh's stability criterion and leads to a continued acceleration of the fluid parcel. This mechanism is called the Vertical Shear Instability (Urpin and Brandenburg, 1998) and results in vertically elongated and radially narrow flow patterns. However, as the gas parcels enter the lower-density regions of the disk atmosphere, they are subjected to buoyancy forces, which, in a stably stratified atmosphere, would lead to an oscillation around the disk midplane. The characteristic frequency of this oscillation is the Brunt-Vaisala frequency \[N_{z}^{2}=-\frac{1}{\rho_{\mathrm{g}}C_{\mathrm{P}}}\frac{\partial P}{ \partial z}\frac{\partial S}{\partial z}, \tag{1}\] where \(z\) is the distance from the disk midplane, \(\rho_{\mathrm{g}}\) is the gas density, \(P\) is the pressure \(S\) is the gas entropy, and \(C_{\mathrm{P}}\) is the gas' specific heat capacity at constant pressure. Thermal relaxation counteracts the restoring force of this oscillation by adjusting a gas parcel's specific entropy to the background. In order for the vertical shear to overcome buoyancy and trigger the VSI, thermal relaxation must be fast. Lin and Youdin (2015) have shown that vertically global VSI grows the fastest if the cooling timescale fulfills \[t_{\mathrm{c}}<\frac{H_{\mathrm{g}}}{R}\frac{|\beta_{T}|}{\gamma-1}\Omega_{ \mathrm{K}}^{-1}, \tag{2}\] where \(R\) is the distance to the central star, \(\beta_{T}\) is the power law exponent of the temperature profile, \(H_{\mathrm{g}}\) is the pressure scale height, \(\Omega_{\mathrm{K}}\) is the local Keplerian frequency, and \(\gamma=\nicefrac{{C_{\mathrm{P}}}}{{C_{\mathrm{V}}}}\) is the gas' heat capacity ratio. Equation 2 was derived under the assumption of a vertically constant thermal relaxation time. As we specifically consider the height dependence of thermal relaxation, we will use the local definition of a critical cooling time (Urpin, 2003; Klahr et al., 2023) for local VSI modes \[t_{\mathrm{c}}\lesssim\frac{|r\partial_{z}\Omega|}{N_{z}^{2}}\approx\frac{H_{ \mathrm{g}}}{R}\frac{|\beta_{T}|\gamma}{2(\gamma-1)}\left(\frac{z}{H_{ \mathrm{g}}}\right)^{-1}\Omega_{\mathrm{K}}^{-1}. \tag{3}\] In fact, numerical studies like Manger et al. (2021) investigated the dependency of the VSI turbulence on a vertically constant thermal relaxation time and found VSI not to develop for cooling times beyond the critical value for global modes. This may be due to numerical resolution as Lin and Youdin (2015) show that VSI exists for all cooling times, yet at reduced efficiency. Urpin (2003) derived growth rates in this regime, which show a decay proportional to \(t_{\rm c}^{-1}\). This behavior was recently confirmed in high-resolution1 studies of the VSI and other thermal instabilities in disks by Klahr et al. (2023). It is still subject to investigation how longer growth times will translate into turbulence levels for the non-linear regime, especially in terms of angular momentum transport, diffusion, and gas r.m.s. velocities. 
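To give a sense of scale, Equations (2) and (3) can be evaluated directly; the short Python sketch below uses illustrative disk parameters (\(H_{\mathrm{g}}/R\), \(\beta_{T}\), \(\gamma\)) rather than values from any specific model:

```python
import numpy as np

def t_crit_global(h, beta_T, gamma):
    # Eq. (2): critical cooling time for vertically global VSI modes, in units of 1/Omega_K.
    return h * abs(beta_T) / (gamma - 1.0)

def t_crit_local(h, beta_T, gamma, z_over_H):
    # Eq. (3): height-dependent critical cooling time for local modes, in units of 1/Omega_K.
    return h * abs(beta_T) * gamma / (2.0 * (gamma - 1.0)) / z_over_H

h, beta_T, gamma = 0.1, -0.5, 1.4           # illustrative aspect ratio, temperature slope, heat capacity ratio
print(t_crit_global(h, beta_T, gamma))       # ~0.125 Omega_K^-1
print(t_crit_local(h, beta_T, gamma, 2.0))   # ~0.044 Omega_K^-1 at z = 2 H_g
```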
The saturation behavior of the VSI and other thermal baroclinic instabilities, especially for longer cooling times at sufficient resolution, is still being investigated (Latter and Papaloizou, 2018; Cui and Latter, 2022; Klahr et al., 2023). Footnote 1: PLUTO-4.2 simulation with 256 cells per gas scale height, WENO reconstruction, and RK3 time integration (Klahr et al., 2023). ### Optically Thin Thermal Relaxation Thermal relaxation of the gas in a protoplanetary disk is mostly achieved via thermal coupling with the dust in a two-stage process. At low temperatures, the emission timescale of the gas molecules is long, which means that cooling is only possible via thermal accommodation with the strongly emitting dust particles through collisions. Barranco et al. (2018) derived the thermal relaxation times for the non-LTE case between dust grains and the gas based on the calculation of cooling rates (see Appendix A for a recap of the derivations). For a given dust size distribution \(n(a)\), the Sauter mean radius is an instructive parameter in this context, defined as (Sauter, 1926) \[a_{\rm S}=\frac{\int n(a)a^{3}\,{\rm d}a}{\int n(a)a^{2}\,{\rm d}a} \tag{4}\] where the size integral is executed over the entire size distribution. Corresponding to the Sauter mean, we define a respective number density \(n_{\rm S}=\rho_{\rm d}/\left(4/3\,\pi\,\rho_{\rm m}a_{\rm S}^{3}\right)\) and a collisional cross-section \(\sigma_{\rm S}=\pi a_{\rm S}^{2}\), where \(\rho_{\rm m}=1.67\,{\rm g}\,{\rm cm}^{-3}\) is the interior density of the dust grains. With these definitions, we write the thermal accommodation timescale for the gas molecules and the dust grains (Probstein, 1969; Burke and Hollenbach, 1983) as \[t_{\rm g}^{\rm coll}=\frac{\gamma}{\gamma-1}\frac{1}{n_{\rm S}\sigma_{\rm S}\bar{v}_{\rm g}}, \tag{5}\] where \(\bar{v}_{\rm g}=c_{\rm s}\sqrt{8/\pi}\) is the average gas molecule velocity of a Maxwell-Boltzmann distribution with the isothermal speed of sound \(c_{\rm s}\). Similarly, a timescale for the thermal relaxation of the dust component can be derived, which reads \[t_{\rm d}^{\rm coll}=\left(\frac{\rho_{\rm d}}{\rho_{\rm g}}\right)\left(\frac{C_{\rm d}}{C_{\rm P}}\right)t_{\rm g}^{\rm coll}, \tag{6}\] with the dust-to-gas density ratio \(\nicefrac{{\rho_{\rm d}}}{{\rho_{\rm g}}}=\varepsilon\) and the specific heat capacity of the dust particles \(C_{\rm d}\). As a typical value we pick \(C_{\rm d}=800\,{\rm J\,kg^{-1}\,K^{-1}}\), as used by Barranco et al. (2018) (see Wasson, 1974; Piqueux et al., 2021; Biele et al., 2022). Figure 1: Workflow from our dust coagulation models with DustPy to the hydrodynamic simulations with PLUTO to the radiative transfer modeling with RADMC-3D. Used methods and tools are shown as dark blue boxes. Input parameters and intermediate results are shown in light blue. The results of our work are schematically displayed as orange boxes. The details of our methodology are laid out in Section 3 (DustPy and cooling times), Section 4 (PLUTO simulations and results), and Section 5 (radiative transfer and synthetic observations with RADMC-3D). If the collisional coupling is efficient, i.e., temperature perturbations in the gas are transferred to the dust, the thermal equilibrium of the grains will be restored by the emission of radiation.
This happens on the black body timescale, depending on the dust density distribution \(\rho_{\rm d}(a)\) in units of [g/cm\({}^{4}\)] and the respective Planck mean opacity distribution \(\kappa_{\rm P}(a,T)\), in units of [cm\({}^{2}\)/g] \[t_{\rm d}^{\rm rad}=\frac{\rho_{\rm d}C_{\rm d}}{16\,\sigma_{\rm SB}\,T_{\rm eq }^{3}}\left(\int\rho_{\rm d}(a)\kappa_{\rm P}(a,T_{\rm eq})\,{\rm d}a\right)^{- 1}, \tag{7}\] with the Stefan-Boltzmann constant \(\sigma_{\rm SB}\). The total thermal relaxation time of the dust gas mixture can then be calculated following Equation (19) from Barranco et al. (2018) \[t_{\rm thin}^{\rm NLTE}=2t_{||}\left[1-\sqrt{1-\frac{4t_{||}^{2}}{t_{\rm g}^ {\rm coll}t_{\rm d}^{\rm rad}}}\right]^{-1} \tag{8}\] with \(\nicefrac{{1}}{{t_{||}}}=\nicefrac{{1}}{{t_{\rm d}^{\rm rad}}}+\nicefrac{{1}} {{t_{\rm d}^{\rm coll}}}+\nicefrac{{1}}{{t_{\rm g}^{\rm coll}}}\). In practice, this means the slowest channel of energy transfer acts as a bottleneck and the longest timescale of thermal relaxation determines the cooling time scale of the gas. If the dust's emissivity is low, energy can not be emitted effectively by the grains, and temperature perturbations can not decay, no matter how well the grains and molecules are coupled (\(t_{\rm thin}^{\rm NLTE}\approx t_{\rm d}^{\rm rad}\)). This situation is unlikely to occur in protoplanetary disks because of the large dust opacities. Another case is the collisional decoupling of dust grains and gas molecules. At low densities and in regions where small grains are depleted, heat can not be transferred between the main carriers of thermal energy (the gas molecules) and the emitters (the dust grains). The high emissivity of the grains does not matter in such a case, since temperature perturbations stay locked in the poorly emitting gas (\(t_{\rm thin}^{\rm NLTE}\approx t_{\rm g}^{\rm coll}\)). Muley et al. (2023) introduced a three-temperature radiation transport scheme, which treats dust and gas temperatures separately, yet coupled via collisions. They also find that in most cases the collisional time scale is the most relevant to determine thermal relaxation. In this case, the cooling time is proportional to the square root of the maximum particle size. This can be shown by assuming the size distribution to be a truncated power law with maximum particle size \(a_{\rm max}\), minimum size \(a_{\rm min}\), and power law exponent \(p=-3.5\). Then \(a_{\rm s}=\sqrt{a_{\rm max}a_{\rm min}}\) and thus \(t_{\rm g}^{\rm coll}\propto(n_{\rm S}\sigma_{\rm S})^{-1}\propto\sqrt{a_{\rm max}}\). Sticking collisions between grains typically increase the maximum particle size until a fragmentation-coagulation equilibrium is reached. In this case, \(a_{\rm max}\approx a_{\rm frac}\propto v_{\rm frag}^{2}\) holds (Birnstiel et al., 2012), and we deduce that the collisional timescale is directly proportional to the fragmentation velocity in this case. Laboratory experiments aim to determine the actual value of \(v_{\rm frag}\) which is dependent on the composition and porosity of grains (Blum, 2000; Wurm et al., 2001; Blum et al., 2006; Musiolik and Wurm, 2019). Typical values lie within a range of 100-1000 cm s\({}^{-1}\). An additional uncertainty arises from the unknown relative grain velocities, which depend on the strength of turbulence, differential drift, and settling. Especially the strength of turbulence in protoplanetary disks is highly uncertain and also a subject of this article. 
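The chain of timescales in Equations (4)-(8) condenses into a few lines of code; the sketch below (our own, with illustrative grain-size limits and timescales in arbitrary units) also verifies numerically the relation \(a_{\rm S}=\sqrt{a_{\rm max}a_{\rm min}}\) quoted above for a truncated \(p=-3.5\) power law, and illustrates that the longest timescale acts as the bottleneck:

```python
import numpy as np

def sauter_radius(a, n):
    # Eq. (4): Sauter mean radius of a size distribution n(a).
    return np.trapz(n * a**3, a) / np.trapz(n * a**2, a)

def t_thin_nlte(t_g_coll, t_d_coll, t_d_rad):
    # Eq. (8): combined optically thin, non-LTE relaxation time of the dust-gas mixture.
    t_par = 1.0 / (1.0 / t_d_rad + 1.0 / t_d_coll + 1.0 / t_g_coll)
    return 2.0 * t_par / (1.0 - np.sqrt(1.0 - 4.0 * t_par**2 / (t_g_coll * t_d_rad)))

# Sauter radius of an MRN-like distribution n(a) ~ a^-3.5:
a = np.logspace(-5, 0, 2000)        # grain radii from 0.1 micron to 1 cm (illustrative)
print(sauter_radius(a, a**-3.5), np.sqrt(a[0] * a[-1]))   # both ~ 3.2e-3 cm

# Slow collisional coupling (t_g_coll = 10) dominates the combined relaxation time:
print(t_thin_nlte(10.0, 1.0, 0.1))  # ~ 11, close to the longest timescale
```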
The simplest assumption for the turbulent transfer of energy across length scales is the Kolmogorov cascade. For the resulting energy spectrum, relative grain velocities can be approximated as \(\delta v\approx\sqrt{3\alpha{\rm St}}c_{\rm s}\)(Ormel and Cuzzi, 2007), with the Stokes number St (see Equation 14). This is the underlying assumption for the derivation of \(a_{\rm frag}\). In this turbulence prescription, which is based on the assumption of a mixing length model (Prandtl, 1925), turbulent stresses result in an effective viscosity \[\nu=\alpha c_{\rm s}H, \tag{9}\] where \(c_{\rm s}\) is the local sound speed and \(H\) is the pressure scale height of the disk (Shakura and Sunyaev, 1973). From this, turbulent r.m.s. velocities can be related to \(\alpha\) by assuming a turbulent correlation time of \(\Omega_{\rm K}^{-1}\) via \[\alpha=\frac{\langle v_{\rm turb}^{2}\rangle}{c_{\rm s}^{2}}. \tag{10}\] With this, \(a_{\rm frag}\propto\alpha^{-1}\), implying \(t_{\rm g}^{\rm coll}\propto\alpha^{-\nicefrac{{1}}{{2}}}\). Low \(\alpha\) therefore corresponds to longer cooling times, as a consequence of the presence of larger particles. Additionally, lower levels of turbulence correspond to smaller dust scale heights, leading to a depletion of the upper layers and an additional dampening of the VSI in these regions. Fukuhara et al. (2021) investigated the effect of varying maximum particle sizes throughout a protoplanetary disk and found that the presence of VSI depends on particle sizes via the cooling time dependency. In the following sections, we investigate this effect through the use of more realistic dust coagulation models and subsequent hydrodynamic simulations. We aim to determine the implications for the interpretation of observational data and the respective feedback onto the dust layer by turbulent mixing through the VSI. In the previous sections, we discussed the importance of thermal relaxation for the VSI. We have also highlighted that the cooling times are highly sensitive to the present dust population, most importantly, the maximum particle size. In this section, we present a series of dust coagulation simulations, conducted with DustPy, that further illustrate the impact of dust coagulation on the cooling times. We use the output of these simulations to calculate cooling time distributions for our subsequent hydrodynamic simulations with the PLUTO code. For our disk model we employ the standard Lynden-Bell & Pringle (1974) profile for a solar-mass star and a \(0.05\,\mathrm{M}_{\odot}\) disk with dust-to-gas ratio (metallicity) \(\mathcal{Z}=0.01\) (see Table 1) \[\Sigma_{\mathrm{g}}=\frac{M_{\mathrm{d}}(1+\beta_{\Sigma})}{2\pi R_{\mathrm{c }}^{2}}\left(\frac{R}{R_{\mathrm{c}}}\right)^{\beta_{\Sigma}}\exp\left[-\left( \frac{R}{R_{\mathrm{c}}}\right)^{2+\beta_{\Sigma}}\right]. \tag{11}\] We set the radial column density gradient to \(\beta_{\Sigma}=-0.85\), and the cutoff radius to \(R_{\mathrm{c}}=100\,\mathrm{au}\). Our radial temperature profile is determined by passive stellar irradiation and assumed to be constant in the vertical direction (see Chiang & Goldreich, 1997; D'Alessio et al., 1998; Dullemond et al., 2018) \[T=\left(\frac{\varphi L_{*}}{4\pi R^{2}\sigma_{\mathrm{SB}}}\right)^{\nicefrac{ {1}}{{4}}}, \tag{12}\] where \(L_{*}\) is stellar luminosity, and \(\varphi=0.02\) is the flaring angle. Gas evolution and dust drift alter the dust size distribution in protoplanetary disks. 
The overall effect of these transport phenomena on the shape of the distribution is, however, most relevant in the final stages of disk evolution, when the growth front has reached the outer disk edge and the mass budget is quickly decreasing (i.e., when the dust accretion rate is no longer radially constant, Birnstiel & Andrews, 2014). At what point in time after disk formation this becomes relevant is dependent on the disk's size, its radial structure, the dust-to-gas ratio, the strength of turbulence, the fragmentation velocity, etc. In this study, we are interested in the effect of dust coagulation on the cooling times and, through the cooling times, on the VSI. In the inner parts of the disk, a steady state distribution, determined by fragmentation and coagulation, will be reached and approximately maintained as long as the outer disk edge is not yet moving inward. We have therefore decided to completely disregard any transport effects (except the vertical settling-mixing equilibrium). We are thus calculating a steady state dust distribution for each parameter set that is only determined by fragmentation and coagulation. The output of our models is, therefore, time-independent once the equilibrium size distribution is reached at each radius. In that way, we avoid selecting an arbitrary simulation snapshot. Note that this is still an idealized assumption. In reality, radial drift and gas evolution could slightly alter the radial structure and the size distributions at similar timescales. Typically, drift-limited size distributions are slightly steeper than in the fragmentation limit (Birnstiel et al., 2011). In recent studies, the VSI itself was also shown to alter the radial disk structure (Manger et al., 2021). Our DustPy models are run for \(10^{5}\,\mathrm{yr}\), after which coagulation-fragmentation equilibrium is reached at every radial grid cell. We conduct simulations for three different fragmentation velocities \(v_{\mathrm{frag}}=100\,\mathrm{cm}\,\mathrm{s}^{-1},\ 200\,\mathrm{cm}\, \mathrm{s}^{-1}\) and \(400\,\mathrm{cm}\,\mathrm{s}^{-1}\) and for a turbulence parameter \(\alpha=10^{-3}\). Additionally we probe two different turbulent diffusivities with \(\alpha=10^{-4}\) and \(10^{-2}\), at \(v_{\mathrm{frag}}=100\,\mathrm{cm}\,\mathrm{s}^{-1}\). At this point we do not further specify the origin of the diffusivity \(\alpha\), making it a free parameter for the coagulation models. We show the resulting dust size distribution at \(50\,\mathrm{au}\) and \(100\,\mathrm{au}\) on the left-hand-side of Figure 2 and some key particle properties are shown in Table 1. We can see that the particles grow to larger sizes at smaller distances to the central star, in accordance with analytic estimates of the fragmentation-limited particle size (Birnstiel et al., 2012). The respective size distributions can be approximated with power laws with exponents \(-p\approx 3.6-3.7\). These values lie within the typical range for fragmentation-limited size distributions derived by Birnstiel et al. (2011). ### Thermal Relaxation Times Derived From Dust Coagulation Simulations We derive the vertical structure from these, vertically integrated, DustPy models by assuming vertical hydrostatic equilibrium for the gas and vertical settling mixing equilibrium for the dust. 
Gas densities thus follow \[\rho=\rho_{\mathrm{mid}}\exp\left[\left(\frac{H_{\mathrm{g}}}{R}\right)^{-2} \left(\frac{R}{\sqrt{R^{2}+z^{2}}}-1\right)\right], \tag{13}\] with gas scale height \(H_{\mathrm{g}}\) and \(\rho_{\mathrm{mid}}=\nicefrac{{\Sigma(R)}}{{\sqrt{2\pi H_{\mathrm{g}}^{2}}}}\). We assume an ideal equation of state \(P=\rho c_{\mathrm{s}}^{2}\). The vertical dust distribution is determined by the diffusion parameter \(\delta\) and the Stokes number of the individual size bins on the size distribution, which is defined as \[\mathrm{St}=\frac{\pi}{2}\frac{a\rho_{\mathrm{m}}}{\Sigma_{\mathrm{g}}}. \tag{14}\] Volume dust densities for each size are then derived by calculating the dust scale height \[H_{\mathrm{d}} =H_{\mathrm{g}}\sqrt{\frac{\delta}{\delta+\mathrm{St}}} \tag{15}\] \[\rho_{\mathrm{d}} =\rho_{\mathrm{d,mid}}\exp\Bigg{[}\left(\frac{H_{\mathrm{d}}}{R} \right)^{-2}\left(\frac{R}{\sqrt{R^{2}+z^{2}}}-1\right)\Bigg{]}, \tag{16}\] with \(\rho_{\mathrm{d,mid}}=\nicefrac{{\Sigma_{\mathrm{d}}(R)}}{{\sqrt{2\pi H_{ \mathrm{d}}^{2}}}}\). The resulting temperature and density structure is used to calculate the Planck mean opacities of the dust. We use the DSHARP opacity model by Birnstiel et al. (2018) as implemented in the dsharp_opac python package with the standard DSHARP particle properties. Thermal relaxation times of the gas can then be calculated from the disk structure and opacities via equations (5)-(8). For the given parameters in our simulations, we find that the thermal relaxation time is limited by the collision timescale outside of \(\sim 10\,\mathrm{au}\). At smaller radii, the disk might become optically thick, meaning the relaxation time of temperature perturbations depends on the respective length scale. We are therefore only modeling the parts of the disk around \(50\,\mathrm{au}\), where thermal relaxation operates in the optically thin regime. Figure 2 shows the size distributions and the vertical profile of the thermal relaxation times for the respective coagulation and turbulence parameters at \(50\,\mathrm{au}\) and \(100\,\mathrm{au}\). We find that the cooling times increase with height above the midplane. The reason for this is that cooling is achieved via collisions between dust particles and gas molecules, which become rarer at lower densities. This also means that models with larger particles have longer thermal relaxation times because of the reduced number densities of dust particles and the stronger settling. Higher fragmentation velocities are counteracting the VSI. Likewise, models with weaker turbulence parameter \(\alpha\) can also be expected to have less VSI activity, as demonstrated by our numerical simulations. ## 4 Pluto Simulations based on coagulation models We set up axisymmetric PLUTO simulations with the same radial structure as our DustPy models to study the evolution of VSI with the respective model's cooling times. Pressure forces act in the outward direction of the disk and therefore decrease the equilibrium rotation frequency of the gas, especially at the steep outer edge of the disk. 
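For orientation, the settling-mixing relations above (Equations 13-16) can be condensed into a few lines of Python; this is only a schematic transcription with our own function names, assuming consistent cgs units throughout.

```python
import numpy as np

def stokes_number(a, rho_m, sigma_g):
    """Midplane Stokes number of a compact grain of radius a, Eq. (14)."""
    return 0.5 * np.pi * a * rho_m / sigma_g

def dust_scale_height(H_g, delta, St):
    """Dust scale height in settling-mixing equilibrium, Eq. (15)."""
    return H_g * np.sqrt(delta / (delta + St))

def vertical_density(z, R, Sigma, H):
    """Vertical density profile of Eqs. (13) and (16); Sigma and H are the
    column density and scale height of the respective component."""
    rho_mid = Sigma / np.sqrt(2.0 * np.pi * H**2)
    return rho_mid * np.exp((H / R)**(-2) * (R / np.sqrt(R**2 + z**2) - 1.0))
```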
We define our hydrostatic initial rotation profile accordingly as \[\frac{\Omega^{2}(R,z)}{\Omega_{\mathrm{K}}^{2}}=\Bigg[\left(\frac{H_{\mathrm{g}}}{R}\right)\left(\beta_{T}+\beta_{\rho}-(\beta_{\Sigma}+2)\left(\frac{R}{R_{\mathrm{c}}}\right)^{\beta_{\Sigma}}\right)-\frac{\beta_{T}R}{\sqrt{R^{2}+z^{2}}}+\beta_{T}+1\Bigg], \tag{17}\] where \(\beta_{\rho}\) is the power law exponent of the midplane gas density \(\rho\propto R^{\beta_{\rho}}\) and \(\beta_{T}\) is the power law exponent of the radial temperature profile \(T\propto R^{\beta_{T}}\). Thermal relaxation is realized as in Pfeil and Klahr (2021), by analytically relaxing the gas pressure towards the equilibrium profile (determined by stellar irradiation). Density is kept constant in this cooling step, which makes a relaxation in pressure equal to a relaxation in temperature for an ideal equation of state: \[P^{(n+1)}=P_{\mathrm{eq}}+(P^{(n)}-P_{\mathrm{eq}})\exp\left(-\frac{\Delta t}{t_{\mathrm{thin}}^{\mathrm{NLTE}}}\right)\;\stackrel{\rho=\mathrm{const.}}{\Longleftrightarrow}\;T^{(n+1)}=T_{\mathrm{eq}}+(T^{(n)}-T_{\mathrm{eq}})\exp\left(-\frac{\Delta t}{t_{\mathrm{thin}}^{\mathrm{NLTE}}}\right), \tag{18}\] where \((n)\) denotes the number of the current simulation timestep of length \(\Delta t\). The equilibrium temperature \(T_{\mathrm{eq}}\) is defined by stellar irradiation (Equation 12). Cooling times, presented in the previous section, are derived from DustPy simulations (see Figure 2) and subsequently fitted as a function of local gas density and temperature for each simulation (for a detailed description of the fits, see Appendix B). Fitting the spatial distributions of the thermal relaxation times as functions of density and temperature also introduces uncertainties in the cooling times for PLUTO. For all models except one, these errors lie within \(25\,\%\) with respect to the real distribution of cooling times. For the case of the most settled particles (\(v_{\mathrm{frag}}=100\,\mathrm{cm}\,\mathrm{s}^{-1}\), \(\alpha=10^{-4}\)), however, the fitting function seems to diverge further from the real distribution and the fit deviates by up to \(58\,\%\) from the cooling times close to the midplane. This is likely due to the difference between this particular highly settled model and the other, less settled cases. Since the cooling times vary over several orders of magnitude throughout the simulation domain and between the models, we deem this uncertainty acceptable, also because the overall distribution of cooling times is still well reproduced (this can be seen in the matching contours in Figure 12). It is worth noting, however, that in this work, we only study the overall trends of VSI turbulence with the coagulation parameters and do not aim to exactly reproduce specific systems or observations. The resulting analytic cooling time prescriptions are used within our PLUTO simulations to calculate \(t_{\rm thin}^{\rm NLTE}\) from the local disk structure. Since cooling is dominated by the small grains, which predominantly move along with the gas, minor disturbances in the gas densities, as caused by the VSI, can also influence the cooling times in this model. We emphasize that this is a minor effect in our simulation, and does not have an impact on the resulting turbulence. It should be noted that our cooling time prescription, which is derived from dust coagulation models, is static throughout the simulation.
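The relaxation step of Equation (18) itself is simple; a schematic version (not the actual PLUTO implementation) reads:

```python
import numpy as np

def relax_toward_equilibrium(P, P_eq, dt, t_cool):
    """One cooling sub-step, Eq. (18): relax the pressure (equivalently the
    temperature, since the density is held fixed) toward the irradiation
    equilibrium on the local thermal relaxation time t_cool."""
    return P_eq + (P - P_eq) * np.exp(-dt / t_cool)
```

Here, `t_cool` would be the fitted \(t_{\rm thin}^{\rm NLTE}(\rho,T)\) evaluated cell by cell from the local disk structure.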
Although our coagulation models assumed a certain turbulent diffusivity \(\delta\) to calculate relative particle velocities, we set up our hydrodynamic simulations to be inviscid. This is because we want to study the onset of the VSI and the resulting turbulence strength. Applying the same diffusivities as for the coagulation models (\(\delta=1\times 10^{-4}-1\times 10^{-2}\)) as viscosity in PLUTO would likely stop the VSI from emerging in the first place (Barker and Latter, 2015). Note that setting up viscous simulations would also not be fully self-consistent since relative particle velocities in DustPy are inferred from perfectly isotropic turbulence and the resulting Kolmogorov cascade, which would not be the case for the developing VSI turbulence in our simulations. We carry out the calculations for 500 orbital periods at 50 au (\(=176\,777\) yr). Simulation domains are set up in spherical coordinates and extend from 25-150 au in the radial direction, and over \(\pm 3\) pressure scale heights from the midplane of the disk in the polar direction. We resolve one scale height at 50 au with 85 cells and employ logarithmic gridding in the radial direction to preserve the cells' aspect ratios, resulting in a \(2011_{r}\times 513_{\vartheta}\) grid. Periodic boundary conditions are set up in the azimuthal direction with only one grid cell, making our simulations axisymmetric. Radial and polar boundaries are set up as reflective for the orthogonal velocity components and as zero-gradient for the respective tangential velocity components. Pressure and density in the boundary cells are kept at the initial condition. In Figure 3, we show the vertical velocities in our simulations at the end of the simulation time. It is evident that the spatially varying cooling times set constraints on where the VSI can be active and where vertical motions are suppressed by buoyancy. As a comparison, we also show an isothermal simulation (i.e., ideal VSI), in which the resulting turbulence is present in the entire simulation domain and at higher turbulent Mach numbers. For the case of \(v_{\rm fr}=400\) cm s\({}^{-1}\) and \(\alpha=10^{-3}\), we find the disk to be completely quiescent outside of \(\sim 80\) au, due to the long cooling times. In this case, dust would settle into a very thin layer in the outer disk, which we will further investigate in the next sections. Similarly, the disk regions outside of \(\sim 100\) au show only very little VSI activity for the coagulation model with \(v_{\rm fr}=100\) cm s\({}^{-1}\) and \(\alpha=10^{-4}\).

Figure 2: Dust size distributions at 50 au (solid lines) and 100 au (dashed lines) of our DustPy models (left side). On the right-hand side, we show the respective vertical cooling time profiles, assuming vertical settling-mixing equilibrium for the given \(\alpha\) and the critical VSI cooling time. Models with larger particles also exhibit longer cooling times due to collisional decoupling between dust and gas. We also show the height-dependent cooling time for local VSI modes as purple lines (see Equation 3).

To characterize the development and strength of the VSI turbulence, we measure the Favre-averaged (i.e., density-weighted) turbulent Mach numbers over the whole simulation domains, where the average in a direction \(x\) (polar, radial, or both) is defined as \[\langle\mathcal{M}\rangle_{x}=\frac{\int\frac{\sqrt{v_{\rm r}^{2}+v_{\vartheta}^{2}}}{c_{\rm s}}\rho\,{\rm d}x}{\int\rho\,{\rm d}x}, \tag{19}\]
where \(v_{r}\) and \(v_{\vartheta}\) represent the radial and polar velocity components. Since our simulations are set up hydrostatically, these components measure turbulent fluctuations caused by the VSI. While velocities in our isothermal simulation saturate after \(\sim 100\) orbits at \(\langle\mathcal{M}\rangle\approx 4\times 10^{-2}\), all other, non-ideal simulations, reach lower Mach numbers and have longer growth time scales (see Figure 4). The vertical profile of the Mach numbers shows the typical vertical increase and a sharp upper cutoff, similar to the results in Pfeil and Klahr (2021). The collisional decoupling of dust particles and gas molecules is the reason for this behavior. Figure 4 also shows the three Mach numbers corresponding to the diffusivities chosen to calculate turbulent relative velocities between particles in our coagulation model (\(\alpha\)= \(1\times 10^{-4}\), \(1\times 10^{-3}\) and \(1\times 10^{-2}\)). As can be seen, the three lines do not exactly correspond to the measured Mach numbers of our simulations. This is, however, also not to be expected, since the direct conversion between Mach numbers and particle collision speed (see Equation 10) assumes a perfect Kolmogorov spectrum and, thus, isotropic turbulence which is not given for the VSI. The calculation of collision speeds would furthermore depend on the correlation time spectrum which was not taken into account here. Figure 5 depicts the radial dependence of the Mach numbers in our simulations. The lowest turbulence levels of \(\langle\mathcal{M}\rangle\approx 8\times 10^{-3}\) are reached in our simulations based on the DustPy model with \(\alpha=10^{-3}\) and \(v_{\rm fr}=400\,\rm cm\,s^{-1}\), i.e., in the model with the largest particles (\(a_{\rm max}(50\,\rm au)\approx 0.14\,\rm cm\)). For this simulation, we observe a decrease in turbulence outside of \(40\,\rm au\). At \(60\,\rm au\), turbulent Mach numbers have already decreased by a factor 10 compared to the inner regions. Also our models with \(v_{\rm fr}=200\,\rm cm\,s^{-1}\) and \(\alpha=10^{-3}\) and the model with \(v_{\rm fr}=100\,\rm cm\,s^{-1}\) and \(\alpha=10^{-4}\) show a radially decreasing level of turbulence in the outer disk. We conclude that the level of VSI turbulence is highly dependent on the physical details of the dust coagulation process. If dust grains can grow up to the fragmentation limit--which is to be expected in most parts of protoplanetary disks in the early evolutionary stages--we can expect weak collisional coupling between dust grains and gas molecules in the optically thin, outer regions, leading to inefficient cooling and only weak VSI turbulence. The magnitude of the impact of dust coagulation on the hydrodynamic turbulence depends mostly on the maximum size of the grains, where larger grains correspond to less cooling and, thus, stronger damping of VSI. ### Dust Dynamics in the _Pluto_ Simulations In the previous section, we have shown that the VSI activity in protoplanetary disks is highly sensitive to the properties of the present dust grain population, especially the largest grain size. However, we can not directly infer the VSI's feedback on the dust population. Dullemond et al. (2022) have clearly shown that the ideal VSI is inconsistent with the observed thickness of protoplanetary disks in Millimeter-wave observations with ALMA (Villenave et al., 2020, 2022). 
Our simulations show that the level of turbulent vertical velocities can vary by orders of magnitude across the disk, depending on the details of the dust size distribution. In this section, we explore how these different levels of turbulence impact the thickness of the dust layer. For this, we restart the simulations after the VSI has reached a saturated level of turbulence.

Figure 3: Vertical velocities in units of the local speed of sound in our six PLUTO runs after 500 orbital time scales at \(50\,\rm au\). The isothermal run shows a snapshot after only 200 orbits. White contours mark the position at which the critical cooling time for the VSI is reached (Equation 3), i.e., VSI is theoretically possible within the white lobes.

We add four dust fluids, resembling a power law size distribution \(n(a)\propto a^{p}\), and thus \(\Sigma_{\mathrm{d}}(a)\propto n(a)m(a)\propto a^{p+3}\). Normalizing to the total dust column density (column dust-to-gas ratio \(\mathcal{Z}=0.01\)) and integrating the distribution over the size bin \(i\) with boundaries \(a_{i}\) and \(a_{i+1}\), we get \[\Sigma_{\mathrm{d},i}=\begin{cases}\Sigma_{\mathrm{d},\mathrm{tot}}\,\frac{a_{i+1}^{p+4}-a_{i}^{p+4}}{a_{\mathrm{max}}^{p+4}-a_{\mathrm{min}}^{p+4}}&\text{for $p\neq-4$}\\ \Sigma_{\mathrm{d},\mathrm{tot}}\,\frac{\log(a_{i+1})-\log(a_{i})}{\log(a_{\mathrm{max}})-\log(a_{\mathrm{min}})}&\text{for $p=-4$}\end{cases}. \tag{20}\] The maximum grain sizes \(a_{\mathrm{max}}\) and exponents \(p\) are derived from the underlying DustPy models (measured at a distance of \(50\,\mathrm{au}\) as the size including \(99.9\,\%\) of the dust mass, see Table 1). Similar to the DustPy simulations, the minimum grain size is set to \(0.1\,\mu\mathrm{m}\), which is a typical size assumed for monomers in protoplanetary disks (Tazaki and Dominik, 2022) and which is constant throughout the simulations. We divide the power law size distribution into four sections, equally spaced in logarithmic size space between \(a_{\mathrm{min}}\) and \(a_{\mathrm{max}}\). The initial vertical dust distribution is determined by the midplane Stokes numbers and the level of turbulence assumed in the respective DustPy runs, following Equation 16. Dust is allowed to flow in from the outer boundary of the simulation domain with the initial vertical distribution. At the time of this work, the PLUTO code has no built-in dust fluids. Therefore, we make use of the available gas tracer fluids. To model radial dust drift and vertical settling we modify the tracer fluxes according to the respective grain sizes' relative velocity to the gas, which is given by the prescriptions of Nakagawa et al. (1986) (terminal velocity approximation). Each dust fluid is advected with the gas velocity plus the drift correction of the mass-averaged size of the respective size bin. In Appendix C we present tests of this method that verify its accuracy. We continue the previous, gas-only, VSI simulations with dust for another 150 orbits (measured at \(50\,\mathrm{au}\)).

Figure 4: Time evolution of vertical shear instability simulations based on the different dust models. Turbulent Mach numbers are shown as a function of time (radially and vertically Favre-averaged) and as a function of height above the midplane (time-averaged and radially Favre-averaged). In models with larger particles, cooling times are generally longer, which results in lower growth rates and lower Mach number turbulence. The vertical profiles on the right-hand side change accordingly. Cooling times in models with larger maximum particle size increase more rapidly with height above the midplane, which also cuts off the VSI turbulence. Isothermal models typically have vertically increasing turbulent velocities. The three dashed horizontal lines show the Mach numbers corresponding to the three \(\alpha\) values that we assumed for our coagulation models (see Equation 10). Note that the conversion between turbulent Mach numbers and diffusivities assumes a perfect Kolmogorov turbulence spectrum (see discussion in Section 6), which is likely not given for the anisotropic VSI turbulence.

Figure 5: Radial dependency of the turbulent Mach numbers in a polar and time Favre average over 200 orbits in our VSI simulations. VSI simulations based on DustPy models with larger particles have lower levels of turbulence. For our model with the largest particles (\(v_{\mathrm{fr}}=400\,\mathrm{cm}\,\mathrm{s}^{-1}\) and \(\alpha=10^{-3}\)), the outer disk beyond \(80\,\mathrm{au}\) is completely quiescent. The three dashed horizontal lines show the Mach numbers corresponding to the three \(\alpha\) values that we assumed for our coagulation models (see Equation 10). Note that such a conversion assumes a perfect Kolmogorov turbulence spectrum (see discussion in Section 6).

Figure 6 depicts the distribution of dust-to-gas ratios in our simulations after 150 orbits. In our model with \(\alpha=10^{-3}\) and \(v_{\mathrm{fr}}=400\,\mathrm{cm\,s^{-1}}\), we have the largest particles of \(\approx 0.14\,\mathrm{cm}\) radius, while the smallest particles are present in the model with \(\alpha=10^{-2}\) and \(v_{\mathrm{fr}}=100\,\mathrm{cm\,s^{-1}}\), with a maximum size of \(\approx 15\,\mu\mathrm{m}\) (see Table 1). As a comparison, we initialize the isothermal simulation with the largest grains, to get an estimate of the effect of ideal VSI on a grown dust population (as in Dullemond et al., 2022). The effect of the different levels of VSI turbulence, depending on the coagulation parameters and the respective thermal relaxation times, becomes visible in the dust-to-gas ratios, where the simulations with larger particles, longer cooling times, and less VSI turbulence have more settled dust layers. Especially the outer disk regions are affected by this, as can be seen in the cases with \(v_{\mathrm{fr}}>100\,\mathrm{cm\,s^{-1}}\) and \(\alpha<10^{-2}\). We can furthermore see that the isothermal simulation provides a good approximation for the models with the smallest particles. This is to be expected because the models with the smallest particles also have the shortest cooling times, making the VSI modes almost isothermal. To visualize the clear distinction between the inner VSI active region and the outer VSI inactive regions, we plot the time and radially averaged total dust-to-gas ratios in Figure 7. For the models with fully VSI active disks, we find flat-top or double-peaked dust distributions throughout the entire disks. In contrast, models with larger grains and inactive outer disks show flat-topped or double-peaked profiles in the inner disk regions and highly settled outer regions. A perfect flat-top distribution would indicate spatially homogeneous diffusion and could easily be fitted by an analytic expression (see Equation 21, Fromang and Nelson, 2009). The double hump on the other hand can not be a feature of isotropic turbulence and reflects the action of the quasi-periodic VSI motions.
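For concreteness, the size binning of Equation (20) used to initialize the four dust fluids can be sketched in a few lines of Python; the example values are taken from Table 1 for the \(v_{\rm fr}=400\,{\rm cm\,s^{-1}}\), \(\alpha=10^{-3}\) model, and the normalization \(\Sigma_{\rm d,tot}=1\) is an arbitrary placeholder.

```python
import numpy as np

def bin_column_densities(sigma_d_tot, a_min, a_max, p, n_bins=4):
    """Distribute the total dust column density over n_bins logarithmically
    spaced size bins following n(a) ~ a^p (Eq. 20, generic case p != -4)."""
    edges = np.logspace(np.log10(a_min), np.log10(a_max), n_bins + 1)
    frac = (edges[1:]**(p + 4) - edges[:-1]**(p + 4)) / (a_max**(p + 4) - a_min**(p + 4))
    return sigma_d_tot * frac  # sums to sigma_d_tot by construction

print(bin_column_densities(1.0, a_min=1e-5, a_max=0.147, p=-3.7))
```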
At this point, we can only speculate what the feedback of these dust distributions onto the VSI would be. Lin and Youdin (2017) and Lin (2019) studied the influence that dust backreaction could have on the VSI and found that this process generally damps the VSI turbulence. For the highly settled cases, with midplane dust-to-gas ratios near unity, one would have to include hydrodynamic backreaction, as in the work by Schafer et al. (2020); Schafer and Johansen (2022). In these scenarios, the presence of VSI would probably be further inhibited by the hydrodynamic feedback of the dust onto the gas. Cooling times would also increase significantly in these regions. The areas above the midplane would be in the collision-limited regime, whereas the midplane could become optically thick (see Section 5). ## 5 Radiative transfer post processing We have shown the impact of the dust grain sizes on the strength of the VSI and the morphology of the dust layer in the previous section. Now, we want to determine the visual appearance of the simulated disks in synthetic Millimeter-wave observations. Our goal is a qualitative comparison of our results with ALMA observations of edge-on or almost edge-on protoplanetary disks. Specifically, the works of Villenave et al. (2020, 2022, 2023) have shown that many protoplanetary disks appear settled in \(\lambda=1.25\,\mathrm{mm}\) images obtained with ALMA. Oph163131 is the most prominent example with a very thin dust disk of height \(H_{\mathrm{d,100\,au}}\approx 0.5\,\mathrm{au}\). Villenave et al. (2022) obtained this result by modeling the appearance of one of the disk's gaps. For our approach, we create radiation intensity maps of edge-on disks (\(i=90^{\circ}\)) from the dust distributions of the last snapshot of our hydrodynamic simulations with RADMC-3D. For comparison, we also simulate the intensities arising from steady-state dust distributions under the assumption of a fixed diffusivity. In this settling-mixing equilibrium, the vertical dust distribution can be written \[\varepsilon=\varepsilon_{\mathrm{mid}}\exp\left[-\frac{\mathrm{St}_{\mathrm{ mid}}}{\delta}\left(\exp\left(\frac{z^{2}}{2H_{\mathrm{g}}^{2}}\right)-1 \right)\right], \tag{21}\] (Fromang and Nelson, 2009). Opacities are calculated for each of the four populations using the standard DSHARP particle properties with the dsharp_opac python package (Birnstiel et al., 2018). We consider a photon package to be fully extinct after being scattered over a length of five optical depths. Our models are axisymmetric and we treat the anisotropic scattering angle for 60 angular sample points. Before running the ray tracting algorithm, we use the mctherm task to calculate the dust temperatures from a thermal Monte Carlo simulation. For this, we use \(10^{7}\) photon packages. To mimic the effect of a finite beam size in ALMA observations, we convolve our images with a circular Gaussian beam, which for DSHARP observations had a typical FWHM of \(35\,\mathrm{mas}\). We place our disk at a distance of \(100\,\mathrm{pc}\) to the observer. We show the resulting images for the VSI simulation with \(v_{\mathrm{fr}}=100\,\mathrm{cm\,s^{-1}}\) in Figure 8, \(v_{\mathrm{fr}}=200\,\mathrm{cm\,s^{-1}}\) in Figure 9, and for \(v_{\mathrm{fr}}=400\,\mathrm{cm\,s^{-1}}\) in Figure 10. The right-hand side of each figure depicts three minor axis cuts through the intensity map at the locations of the vertical lines in the images. The images within each figure are created from disk models with identical particle sizes. 
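The analytic comparison profile of Equation (21) is likewise compact; a minimal sketch (with a function name of our choosing) reads:

```python
import numpy as np

def dust_to_gas_ratio(z, H_g, St_mid, delta, eps_mid):
    """Vertical dust-to-gas ratio in settling-mixing equilibrium, Eq. (21)
    (Fromang & Nelson 2009), for a grain with midplane Stokes number St_mid."""
    return eps_mid * np.exp(-St_mid / delta * (np.exp(z**2 / (2.0 * H_g**2)) - 1.0))
```

This is the reference profile against which the VSI runs are compared for different assumed diffusivities \(\delta\) in rows b-d of Figures 8-10.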
As a result of optical depth effects, we find that the models with \(v_{\mathrm{fr}}=200\,\mathrm{cm\,s^{-1}}\) (Figure 9) \(v_{\mathrm{fr}}=400\,\mathrm{cm\,s^{-1}}\) (Figure 10), have a double-peaked intensity profile in the optically thick regions, marked by the hatched areas in each image. Above the midplane, these models have optical surfaces closer to the central star. Therefore, we observe the hotter inner regions above the midplane and the cooler outer regions in the disk midplane, as illustrated in Figure 11. Double-peaked profiles have already been observed in synthetic images of a VSI active disk in Blanco et al. (2021). Their work is based on the simulation presented in Flock et al. (2020) and also treats radiative transfer through radiative diffusion in combination with ray-tracing from the central star for up to \(10\,\mathrm{\SIUnitSymbolMicro m}\) dust particles. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \(M_{*}\) & \(R_{*}\) & \(T_{*}\) & \(M_{\mathrm{disk,g}}\) & \(\mathcal{Z}\) & \(v_{\mathrm{fr}}\) & \(\alpha_{\mathrm{turb}}\) & \(\rho_{\mathrm{m}}\) & \(a_{\mathrm{min}}\) & \(a_{\mathrm{max}}\) (50 au) & St\({}_{\mathrm{max}}\) (50 au) & \(a_{\mathrm{s}}\) (50 au) & St\({}_{\mathrm{s}}\) (50 au) \\ \([M_{\odot}]\) & \([R_{\odot}]\) & \([\mathrm{K}]\) & \([M_{*}]\) & & \([\mathrm{cm\,s^{-1}}]\) & & \([\mathrm{g\,cm^{-3}}]\) & \([\mathrm{cm}]\) & & & & [cm] & \\ \hline 1 & 2 & 5772 & 0.05 & 0.01 & 100 & \(10^{-3}\) & 1.67 & \(10^{-5}\) & \(1.08\times 10^{-2}\) & \(2.51\times 10^{-3}\) & \(1.99\times 10^{-4}\) & \(4.63\times 10^{-5}\) \\ " & " & " & " & " & 200 & \(10^{-3}\) & " & " & \(3.98\times 10^{-2}\) & \(9.26\times 10^{-3}\) & \(4.30\times 10^{-4}\) & \(9.99\times 10^{-5}\) \\ " & " & " & " & " & 400 & \(10^{-3}\) & " & " & \(1.47\times 10^{-1}\) & \(3.41\times 10^{-2}\) & \(1.26\times 10^{-3}\) & \(2.93\times 10^{-4}\) \\ " & " & " & " & " & 100 & \(10^{-4}\) & " & " & \(6.31\times 10^{-2}\) & \(1.47\times 10^{-2}\) & \(9.69\times 10^{-4}\) & \(2.25\times 10^{-4}\) \\ " & " & " & " & " & 100 & \(10^{-2}\) & " & " & \(1.58\times 10^{-3}\) & \(3.69\times 10^{-4}\) & \(8.52\times 10^{-5}\) & \(1.98\times 10^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Dust coagulation parameters of our five DustPy simulations and the respective maximum particle size measured at \(50\,\mathrm{au}\) in the DustPy simulation. This value was used as a maximum particle size in our PLUTO simulations with dust. We also show the respective Sauter mean radii and Stokes numbers. Figure 6: Total dust to gas ratios in VSI simulations restarted after 425 orbits with four passive dust fluids. Each simulation is started with a dust distribution similar to the one derived from the respective DustPy simulations. Snapshots are taken after 150 orbits of evolution. White contours mark the position at which the critical cooling time for the VSI is reached (Equation 3), i.e., VSI is theoretically possible within the white lobes. The disk model with the smaller particles (\(v_{\rm fr}=100\,{\rm cm\,s^{-1}}\)), is subject to the strongest VSI and the strongest vertical mixing (row a of Figure 8). Therefore, the disk midplane is not as strongly enriched and remains optically thin outside of \(\sim 45\,{\rm au}\). We are therefore not observing any double-peaked minor axis intensity profiles in these cases. The minor cut intensity profiles in the inner disk match best with the analytic profile with \(\delta=10^{-4}\) or \(\delta=10^{-3}\) (rows b and c in Figure 8). 
In the outer disk, they show almost no settling, since the VSI is still active under the given conditions (more comparable with large diffusivities as in row c in Figure 8). Similar to the conclusions of Dullemond et al. (2022), we confirm that such a disk structure is not consistent with observations of highly settled edge-on disks like Oph163131. In our disk model with \(v_{\rm fr}=200\,{\rm cm\,s^{-1}}\), we find a vertically extended and optically thick inner disk with the typical two intensity peaks. However, we can already see the effect of the radially increasing cooling times in the regions beyond \(100\,{\rm au}\). While the inner, VSI active regions appear the be more consistent with the analytic models of high diffusivity (row d in Figure 9), we can see that the outer regions are most consistent with a low diffusivity of \(\sim 10^{-4}\) (row b in Figure 9). This would still not be in agreement with observations of Oph163131, which find the disk to be highly settled at \(r\approx 80\,{\rm au}\). Ramping up the fragmentation threshold further, as in our model with \(v_{\rm fr}=400\,{\rm cm\,s^{-1}}\), results in a highly settled outer dust layer outside of \(\sim 60\,{\rm au}\), as can be seen in Figure 10. The minor axis cuts illustrate the transition from an optically thick vertical structure in the inner regions to a mostly optically thin profile in the outer regions, which occurs at the outer edge of the VSI active region. For the inner regions, we find a good agreement between the VSI simulation and the analytic model with \(\delta=10^{-3}\) (row c in Figure 10). Similar to Dullemond et al. (2022), we find that the VSI can still lift up large particles in these inner regions. In contrast, the outer regions are strongly settled, and more consistent with the analytic profile with \(\delta=10^{-4}\) (row b in Figure 10). At this level of settling, it is unlikely that the outer regions of the VSI simulation could still be distinguished from a fully settled disk (\(\delta=0\)), due to the applied beam smearing. Note that in this simulation, we allow dust to flow into the simulation domain with a vertical distribution equal to the initial condition (which assumes \(\delta=10^{-3}\)). Any remaining vertical extent of the dust layer in the outer disk therefore likely exists as a result of the boundary condition. ## 6 Discussion ### Other Modes of Thermal Relaxation We assumed the dust to be the only source of cooling in the outer regions of protoplanetary disks. However, molecules like CO, H\({}_{2}\)O, CO\({}_{2}\), etc, with electromagnetic dipole moments, might also contribute to the cooling of the gas through line emission when gas and dust are thermally decoupled (Woitke et al., 2009; Malygin et al., 2014). In this case, thermal energy must also be transferred from the bulk constituent of the disk, H\({}_{2}\), to the emitting species via collisions. Cooling the VSI modes could, thus, again become a matter of collision timescales at the very low densities of the outer disk. Figure 7: Radially and time-averaged dust-to-gas ratios in the inner and outer parts of our simulations. The inner regions are VSI active, forming plateau-like dust distributions in all simulations with a cutoff at the edges of the VSI active zones. The outer disk regions appear much more settled in that cases of \(v_{\rm fr}=200\,{\rm cm\,s^{-1}}\) and \(v_{\rm fr}=400\,{\rm cm\,s^{-1}}\), in which the outer regions are quiescent. 
At low temperatures, emission lines may also become extremely inefficient at cooling the gas at the required rate. Freeze-out of emitting molecules might also reduce the rate of thermal relaxation that can be achieved by emission line cooling. How much material can freeze out and thus be stopped from cooling the H\({}_{2}\), depends also on the availability of small grains. Cooling of the disk via gas emission lines is, thus, also dependent on the details of the dust population. Future studies should aim to incorporate some treatment of gas cooling via emission lines. Models for this exist (Woitke, 2015), but are very complex and currently not feasible for implementation in a hydrodynamic simulation. Furthermore, we have omitted the optically thick regions of protoplanetary disks (\(R<10\,\mathrm{au}\)) in our simulations. Optically thick in this context does not refer to the bulk optical depth of the disk (\(\tau\sim\Sigma\kappa\)), as discussed in the previous section, but to the optical depth of individual VSI flow structures, which in the inner disk measure only a fraction of the disk scale height in the radial direction(denoted as \(l\) in the following). We attempted to simulate these regions in Pfeil and Klahr (2021) by assuming a characteristic diffusion length scale. However, self-consistent modeling requires some treatment of radiative transfer, as in Stoll and Kley (2016) or Flores-Rivera et al. (2020). Our findings nonetheless allow us to make predictions about the effect of dust coagulation on the cooling times in these regions, based on the results obtained here. If radiative diffusion becomes the dominant channel for thermal relaxation, we can write Figure 8: Upper row a: RADMC-3D intensity maps of our VSI simulation with \(v_{\mathrm{fr}}=100\,\mathrm{cm\,s^{-1}}\) and \(\alpha=10^{-3}\), seen edge-on. Rows b, c, and d show intensity maps calculated from analytic dust distribution that assume different diffusivities \(\delta\). The grain sizes are identical in all simulations. We convolve the images with a typical ALMA beam with FWHM of \(35\,\mathrm{mas}\) for a distance of \(100\,\mathrm{pc}\) shown as a grey circle. Hatched areas mark regions that have optical depth \(\tau\geq 1\). Horizontal hatches correspond to areas for which the \(\tau=1\) surface lies on the far side of the disk. Diagonally hatched regions mark \(\tau=1\) surfaces that lie on the observer’s side of the disk. The panels on the right-hand side show minor axis cuts through the images along the vertical lines in the intensity maps. Purple lines in all plots are the minor axis cuts from the VSI simulation (panel a). the respective cooling time as \[t_{\rm LTE}^{\rm diff}=\frac{3}{16}\frac{C_{\rm V}\rho_{\rm small}\rho_{\rm g} \kappa_{\rm R}l^{2}}{\sigma_{\rm SB}T^{3}}, \tag{22}\] where \(\kappa_{\rm R}\) is the Rosseland mean opacity, which is mostly determined by the small grains of density \(\rho_{\rm small}\)(Lin and Youdin, 2015; Dullemond et al., 2022). If coagulation is increasing the maximum particle size, the density of small particles will be reduced, therefore reducing the diffusion timescale. At the same time, the size distribution-averaged opacity will also be reduced. Therefore, dust coagulation would effectively reduce the diffusion time scale and thus be beneficial for the VSI in the inner disk regions. 
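A schematic evaluation of Equation (22), in cgs units and with our own variable names, illustrates this scaling:

```python
SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def t_diff(c_v, rho_small, rho_gas, kappa_R, l, T):
    """Optically thick cooling time via radiative diffusion, Eq. (22);
    l is the radial extent of a VSI flow structure."""
    return 3.0 / 16.0 * c_v * rho_small * rho_gas * kappa_R * l**2 / (SIGMA_SB * T**3)

# Removing small grains lowers both rho_small and the size-averaged kappa_R,
# which shortens t_diff and is therefore beneficial for the VSI in the
# optically thick inner disk.
```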
### Implication for the Vertical Shear Instability We have shown that the Vertical Shear Instability is highly sensitive to the underlying dust size distribution, which determines the timescale of thermal relaxation. Manger et al. (2021) and Klahr et al. (2023) have shown that the VSI growth rate almost instantaneously drops to almost zero once the critical cooling time threshold is reached. This is also what we observe as a sudden cutoff in the VSI activity at large disk radii. Therefore, the VSI active zones in protoplanetary disks are not extending throughout the entire outer disk. Our simulations predict a VSI dead zone at large radii, which is caused by the reduced efficiency of cooling. Our simulations omit a treatment of dust backreaction onto the gas. Schafer et al. (2020) have shown, that if the dust can settle into a thin layer in the disk midplane before the VSI starts to grow, dust feedback can Figure 9: Upper row a: RADMC-3D intensity maps of our VSI simulation with \(v_{\rm fr}=200\,\rm cm\,s^{-1}\) and \(\alpha=10^{-3}\), seen edge-on. Rows b, c, and d show intensity maps calculated from analytic dust distribution that assume different diffusivities \(\delta\). The grain sizes are identical in all simulations. We convolve the images with a typical ALMA beam with FWHM of \(35\,\rm mas\) for a distance of \(100\,\rm pc\) shown as a grey circle. Hatched areas mark regions that have optical depth \(\tau\geq 1\). Horizontal hatches correspond to areas for which the \(\tau=1\) surface lies on the far side of the disk. Diagonally hatched regions mark \(\tau=1\) surfaces that lie on the observer’s side of the disk. The panels on the right-hand side show minor axis cuts through the images along the vertical lines in the intensity maps. Purple lines in all plots are the minor axis cuts from the VSI simulation (panel a). counteract the VSI. Since dust coagulation, settling, and the onset of the VSI, occur on comparable timescales, it is not trivial to predict the outcome of such a situation without a realistic disk simulation that treats all of the aforementioned effects simultaneously. Our results show that if some dust settling and coagulation can occur before the onset of the VSI, the effect of the reduced cooling time would reduce the VSI activity and therefore probably enhance the dampening effect of the dust's dynamic backreaction onto the gas. ### The Need for a Self-Consistent Three-Dimensional Model and the Limitations of our Approach Simulations that aim to study the VSI under realistic conditions can not ignore the implications of an evolved dust population, as presented in our and previous studies (see Fukuhara et al., 2021, 2023). Measurements of the spectral index in protoplanetary disks (Tazzari et al., 2016; Perez et al., 2012; Huang et al., 2018; Sierra et al., 2021) and polarization observations (Ohashi and Kataoka, 2019) imply that dust coagulation is occurring and that grains in the outer disk can reach sizes of between 0.1-1 mm, similar to the outcome of the DustPy models that our VSI simulations are based on. Note, however, that our studies are no self-consistent representations of protoplanetary disks. The dust size distributions used to calculate the cooling time in our setups are static. In a real disk, they would evolve together with the VSI. Settling and stirring of the dust layer would impact the cooling times. 
It is unclear if this would lead to some sort of equilibrium situation in which the dust stirring by the VSI can maintain a thick enough dust layer to Figure 10: Upper row a: RADMC-3D intensity maps of our VSI simulation with \(v_{\rm fr}=400\,\rm cm\,s^{-1}\) and \(\alpha=10^{-3}\), seen edge-on. Rows b, c, and d show intensity maps calculated from analytic dust distribution that assume different diffusivities \(\delta\). The grain sizes are identical in all simulations. We convolve the images with a typical ALMA beam with FWHM of \(35\,\rm mas\) for a distance of \(100\,\rm pc\) shown as a grey circle. Hatched areas mark regions that have optical depth \(\tau\geq 1\). Horizontal hatches correspond to areas for which the \(\tau=1\) surface lies on the far side of the disk. Diagonally hatched regions mark \(\tau=1\) surfaces that lie on the observer’s side of the disk. The panels on the right-hand side show minor axis cuts through the images along the vertical lines in the intensity maps. Purple lines in all plots are the minor axis cuts from the VSI simulation (panel a). support the necessary cooling times. Continuous coagulation of grains would counteract the turbulent mixing further. Fukuhara et al. (2023) presented an approach to study this equilibrium by using analytic, yet physically motivated, cooling time profiles. They iterated between VSI simulations and calculations of the resulting steady-state dust distribution from the measured turbulent diffusivity. In that way, they were able to reach a convergent state in which the VSI turbulence creates the necessary diffusivity to maintain the underlying cooling times. Their studies did, however, not consider the effect of the changing diffusivity on the grain size itself through coagulation and fragmentation. This poses an additional uncertainty in their and our studies. We can already see that the measured Mach numbers in our simulations do not always correspond to the \(\alpha\) values used in the underlying coagulation models (see Figure 4). Note that \(\mathcal{M}\) is only part of the generation of turbulent collision speeds (Ormel and Cuzzi, 2007). The turbulent spectrum in correlation time space is also required to calculate the acceleration that can be imposed on various particle sizes. Collision speeds can only be obtained from the large scale r.m.s. velocity \(U(L)\) and the associated length-scale \(L=\sqrt{\alpha}H\), for an ideal Kolmogorov turbulence cascade which causes isotropic turbulent diffusivities (Youdin and Lithwick, 2007; Binkert, 2023). If any source of additional turbulence would be present that causes the turbulent diffusivities used in our coagulation models, this would also have an effect on the developing VSI. Even small viscosities of \(\alpha=1\times 10^{-4}-1\times 10^{-3}\) are enough to hinder the evolution of the VSI (Barker and Latter, 2015). Future studies should try to apply a more realistic, self-consistent prescription of diffusivities in the coagulation model. In our cooling time calculations, we have also neglected the effects of radial drift. Drift-limited size distributions are characterized by smaller maximum particle sizes and are more top-heavy than fragmentation-limited distributions. This results in longer thermal accommodation timescales and would further inhibit the VSI turbulence. The effect of the drag force onto the gas was also not considered in our simulations. Schafer et al. 
(2020) and Schafer and Johansen (2022) have shown that backreaction can indeed inhibit the VSI turbulence close to the disk midplane if the dust has time to sediment before the VSI is saturated. Future studies should therefore aim to incorporate more realistic dust dynamics. In our two-dimensional simulations with dust, we have observed flat-top or double-peaked dust-to-gas ratio distributions. This reflects the periodic and non-isotropic nature of the VSI-driven turbulence, which is not accounted for in the coagulation simulations. However, as our simulations are two-dimensional, the prominence of these features might be artificially enhanced, as the \(\varphi\)-dimension is missing as a degree of freedom. Three-dimensional simulations (Manger and Klahr, 2018; Flock et al., 2020; Manger et al., 2021; Pfeil and Klahr, 2021) are needed for the study of the non-linear saturation and fully developed turbulent state of VSI-driven turbulence, before deriving turbulence properties such as diffusivity, correlation times, and energy spectra. The main conclusions of our study and Fukuhara et al. (2021), however, remain unchanged by all these considerations. Dust coagulation and dynamics are essential components in studies of cooling-time-sensitive instabilities like the VSI. This highlights the need for a more self-consistent numerical approach. Cooling times have to be constantly recalculated throughout a simulation from the present dust size distributions in order to study such systems. In the inner, optically thick parts of the disk, radiative transfer models have to be employed to study the effect of coagulation on diffusive radiative cooling.

Figure 11: Origin of the double-peaked intensity profiles in Figure 9 and Figure 10. The \(\tau=1\) surfaces for the layers above the midplane lie closer to the central star due to the lower densities. The respectively higher temperatures lead to higher intensities above the midplane. Here shown are the \(\tau=1\) surfaces for row c in Figure 9.

## 7 Summary and Conclusions

In this work, we studied the effect of evolved dust size distributions on the VSI activity in protoplanetary disks. We conducted hydrodynamic simulations based on five different dust coagulation models for different fragmentation velocities and assumed turbulence strengths, which resulted in maximum particle sizes between \(\sim 10\,\mu\mathrm{m}\) and \(\sim 0.1\,\mathrm{cm}\). Based on these dust size distributions, we calculated the cooling times for our subsequent hydrodynamic simulations. Our results show a strong dampening effect of dust coagulation on the VSI, as predicted by previous studies (Lin and Youdin, 2015; Fukuhara et al., 2021; Pfeil and Klahr, 2021; Dullemond et al., 2022; Fukuhara et al., 2023). The reason for this is the collisional decoupling between dust particles and gas molecules that is enhanced if dust coagulation is increasing the maximum particle size. Reduced collision rates inhibit the thermal accommodation of dust and gas and therefore reduce the cooling rate of the gas. The effect can be strong enough to hinder the development of the VSI, leading to a highly settled dust layer even for moderate fragmentation velocities of \(v_{\mathrm{fr}}\gtrsim 200\,\mathrm{cm}\,\mathrm{s}^{-1}\). At the same time, the inner regions--in which the gas and dust components remain well coupled--can maintain some level of VSI turbulence.
This finding is consistent with recent observations of highly settled dust layers in protoplanetary disks (Villenave et al., 2020, 2022). Our simulations also show that even a low level of VSI can still significantly alter the vertical distribution of dust, which we can observe in the inner disk regions of our simulations with the largest particles. Synthetic Millimeter-images of these VSI active regions are mostly consistent with analytic models that assume large diffusivities of \(\delta\sim 10^{-3}-10^{-2}\). At the same time, outer disk regions can appear completely settled in our simulations. We thus report the existence of a VSI dead zone in the outer regions of protoplanetary disks. The existence of the VSI dead zone in the outer regions of protoplanetary disks reconciles recent Millimeter-wave observations with models of hydrodynamic turbulence. Future studies of VSI-active disks should aim to incorporate a more self-consistent treatment of dust coagulation and dynamics. Additionally, cooling via gas emission lines has to be considered to gain a better understanding of the impact of thermal relaxation on the VSI in protoplanetary disks. For this, thermo-chemical modeling is required to track the amounts and the evolution of relevant species, which in fact also depends on the dust coagulation process. Modeling the optically thick parts of protoplanetary disks and the impact of stellar irradiation furthermore requires radiative transfer modeling. After applying our methodology to smooth disks in this article, we will extend our studies to disks with sub-structure in Part II. Specifically, Oph163131 (Villenave et al., 2020; Wolff et al., 2021; Villenave et al., 2022) and HD163296 (Dullemond et al., 2018; Rosotti et al., 2020; Doi and Kataoka, 2021) have been extensively surveyed with a focus on the dust diffusivities and provide good conditions for comparison with simulations. ## Acknowledgments T.P., H.K., and T.B. acknowledge the support of the German Science Foundation (DFG) priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" under grant Nos. BI 1816/7-2 and KL 1469/16-1/2. T.B. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 714769 and funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grants 361140270, 325594231, and Germany's Excellence Strategy - EXC-2094 - 390783311. Computations were conducted on the computational facilities of the University of Munich (LMU). SOFTWARE * PLUTO-4.4 (Mignone et al., 2007) * RADMC-3D (Dullemond et al., 2012) * Python with the packages: * NumPy (Harris et al., 2020) * SciPy (Virtanen et al., 2020) * matplotlib (Hunter, 2007) * DustPy (Stammler and Birnstiel, 2022) * dsharp_opac (Birnstiel, 2018)
2301.11211
Electrons trapped in graphene magnetic quantum dots with mass term
Owing to the Klein tunneling phenomenon, the permanent confinement or localization of electrons within a graphene quantum dot is unattainable. Nonetheless, a constant magnetic field can transiently ensnare an electron within the quantum dot, giving rise to what are known as quasi-bound states characterized by finite lifetimes. To prolong the retention of electrons within the quantum dot, we introduce a mass term into the Hamiltonian, thereby inducing an energy gap. We resolve the Dirac equation to ascertain the eigenspinors, and by ensuring their continuity at the boundaries, we investigate the scattering behavior. Our findings indicate that the presence of an energy gap can extend the lifetimes of these quasi-bound states within the quantum dot. In particular, we demonstrate that even in the absence of a magnetic field, the scattering efficiency attains significant levels when the energy gap gets closed to the incident energy of an electron traversing the quantum dot. It is found that an augmentation in the electron density within the quantum dot results in an enhancement of the electron-trapping time.
Mohammed El Azar, Ahmed Bouhlal, Ahmed Jellal
2023-01-26T16:41:35Z
http://arxiv.org/abs/2301.11211v2
# Electrons trapped in magnetic controlled gapped graphene-based quantum dots ###### Abstract Due to the Klein tunneling effect in graphene, it is impossible to confine or permanently localize electrons inside the graphene quantum dot. However, an electron can be transiently trapped inside the quantum dot under the effect of a constant magnetic field, which will have the so-called "quasi-bound states" characterized by a finite lifetime. In order to improve the trapping time of the electrons inside the quantum dot, we will add a mass term to the Hamiltonian, creating an energy gap. We solve the Dirac equation to determine the energy spectrum, using the continuity of the eigenspinors at the edges to study the scattering phenomenon. We find that the energy gap can increase the lifetime of the quasi bound states inside the quantum dot. In addition, we show that even in the absence of the magnetic field, the scattering efficiency can reach considerable levels when the energy gap is closed to the incident energy of the electron passing through the quantum dot. It is found that the density inside the quantum dot is enhanced, resulting in an improvement in the trapping time of the electrons. Graphene, circular quantum dot, magnetic field, energy gap, scattering phenomenon pacs: 81.05.ue; 81.07.Ta; 73.22.Pr ## I Introduction A two-dimensional semi-metal called graphene is distinguished by a linear band structure that is quite close to the Fermi level. It has a honeycomb lattice structure comprised entirely of carbon atoms. Researchers are very driven to enhance the conclusions established theoretically or experimentally in this context (see [1; 2]) because of the very specific electrical characteristics of graphene. The anomalous quantum Hall effect [3; 4], Klein tunneling [5], high electrical conductivity, and extremely high electron mobility [6; 7] are only a few of the extraordinary electronic characteristics of graphene. Several scientific studies have demonstrated graphene's interest in fundamental physics for more technological applications in a variety of fields (e.g., [8; 9; 10]). In-depth studies in the physics of materials like graphene are usually based on the interactions of these materials with external fields. They have shown that it can offer a perfect framework to study fundamental physics and interpret physical effects and phenomena such as Landau level quantization [11; 12; 13], Aharonov-Bohm effect [14; 15; 16]. Then graphene is one of the strongest materials tested so far, possessing remarkable conductivity qualities that serve to use graphene in technological applications such as integrated circuits, light sensing devices, and microelectronic devices [17; 18; 19]. Due to Klein's paradox, charge carriers cannot be localized in a small, constrained region of graphene by an electrostatic gate. The main objective of graphene-based electronics is to confine the particles that generate quantum dots [20; 21; 22; 23]. Since graphene does not have a band gap, there are not any traditional quantum dots in it that can localize electrons in areas of finite dimensions. Utilizing the quantum dot for relativistic electrons, which behave like massless Dirac fermions, will be necessary for graphene's future electronic uses [2; 5]. The normal incidence issue, which is the cause of perfect transmission (Klein paradox), is resolved by using zero-dimensional circular quantum dots [24]. 
A variety of quantum dot applications in electronics are made possible by the possibility of confining Dirac fermions in graphene, including solar panels, lasers [25], photo-detectors [26], quantum information processing, and quantum computers [27]. The localization of graphene quantum dots is prevented by Klein tunneling when an electron strikes the dot with normal incidence [5]. The trapping of electrons in quantum dots is currently a very interesting research topic in the field of condensed matter physics. It began decades ago and continues to receive a lot of attention from both a fundamental and an application standpoint. The electron trapping problem was first studied in a one-dimensional wire in the absence of a magnetic field [28; 29; 20], and then for a quantum dot with smooth [22] and sharp [30] boundary states. The trapping potential of a relativistic electron in graphene is dependent on several factors, in particular the transverse momentum [20; 22; 31]. Regarding a quantum dot, the trapping potential is closely related to the electron's angular momentum and becomes significantly more intense as it increases. Another factor that influences the lifetime is the sharpness of the confining potential. Trapping an electron in a graphene quantum dot is thus best accomplished when the confinement potential is smooth and electronic states have a large angular momentum. It is shown that Klein tunneling is significantly reduced in the resonant regime, and the lifetime increases with increasing magnetic field [32]. A gap between the two valence and conduction bands in the graphene band structure can be made using a variety of experimental procedures [9]. As a result of the sublattice's symmetry violation, the highest energy gap value might be 260 meV [33]. It is interesting to note that the energy gap value differs among testing techniques. By manipulating the structure of the interface between graphene and ruthenium, it has been shown that there are alternative experimental ways besides system breaking for opening gaps [34]. Furthermore, the graphene sheet's energy gap has been altered by altering the substrate, SiC is one such substrate [33]. Different band gaps are created based on depositing a graphene layer on other substrates [35; 36]. Inspired by the outcomes described above and, specifically, [32], we study the phenomenon of incident electron scattering from a gapped circular graphene quantum dot subjected to a constant magnetic field. This is an investigation into scattering efficiency \(Q\), probability density \(\rho\), and lifetime \(\tau\). These will allow us to show how the energy gap can affect the scattering phenomena of the present system. We first use the Dirac equation to analytically determine the solutions of the energy spectrum, and then we use the continuity at the interface to calculate the corresponding scattering efficiency outside and inside the quantum dot. We find that when the energy gap is closed to the incident energy of an electron that crosses the GQD, the efficiency can still be quite high even in the absence of a magnetic field. Also, we demonstrate that the probability density is strengthened, resulting in an improvement in the lifetime of the electrons. The main characteristics of the scattering phenomenon are analyzed in relation to the physical parameters of our present system, with the primary goal of determining the best situation to produce interesting and advantageous results. The present paper is organized as follows. In Sec. 
II, we establish a theoretical model describing our system and determine the solution of the energy spectrum. After matching the eigenspinors at the interface, we explicitly determine the quantities characterizing the scattering phenomenon in Sec. III. In Sec. IV, we numerically analyze our findings under various conditions of the physical parameters, namely the scattering efficiency \(Q\), the probability density \(\rho\), and the lifetime \(\tau\). Finally, we summarize our results. ## II Theoretical model Let us consider a gapped circular graphene quantum dot (GQD) subjected to a constant magnetic field, which is made of two regions as depicted in Fig. 1. The present system can be described by a single-valley Hamiltonian as follows \[H=v_{F}\vec{\sigma}\cdot(\vec{p}-e\vec{A})+\Delta\sigma_{z} \tag{1}\] where \(v_{F}=10^{6}\) ms\({}^{-1}\) is the Fermi velocity, \(\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) are the Pauli matrices and \(\Delta\) is the energy gap resulting from the mass term. It is convenient to select the vector potential in the symmetric gauge, \(\vec{A}=\frac{B}{2}(-y,x)\). Because of the circular symmetry, we can write the Hamiltonian in polar coordinates, knowing that \[\sigma_{r}=\left(\begin{array}{cc}0&e^{-i\theta}\\ e^{i\theta}&0\end{array}\right),\quad\sigma_{\theta}=\left(\begin{array}{cc}0&-ie^{-i\theta}\\ ie^{i\theta}&0\end{array}\right) \tag{2}\] and therefore we have \[H=\begin{pmatrix}\Delta&-i\hbar v_{F}e^{-i\theta}\left(\partial_{r}-\frac{i}{r}\partial_{\theta}-\frac{eBr}{2\hbar}\right)\\ -i\hbar v_{F}e^{i\theta}\left(\partial_{r}+\frac{i}{r}\partial_{\theta}+\frac{eBr}{2\hbar}\right)&-\Delta\end{pmatrix}. \tag{3}\] Figure 1: (color online) A graphene quantum dot of radius \(R\) is placed in the horizontal plane \(xy\), with the magnetic field \(B\) oriented perpendicular to the dot plane in the direction \(z\). The incident electron is represented by a plane wave \(\Phi_{k}^{\mathrm{i}}\) with energy \(E=\hbar v_{F}k\). When an electron approaches the quantum dot, it is either reflected (wave \(\Phi_{k}^{\mathrm{r}}\)) or transmitted (wave \(\Phi_{q}^{\mathrm{t}}\)). Given that the total angular momentum operator \(J_{z}=-i\hbar\partial_{\theta}+\frac{\hbar}{2}\sigma_{z}\) commutes with the Hamiltonian (1), i.e., \([H,J_{z}]=0\), the eigenspinors can be separated as \[\Phi(r,\theta)=\begin{pmatrix}\Phi^{A}(r)e^{im\theta}\\ i\Phi^{B}(r)e^{i(m+1)\theta}\end{pmatrix} \tag{4}\] where the integer \(m\) labels the eigenvalues of \(J_{z}\). Using the eigenvalue equation \(H\Phi(r,\theta)=E\Phi(r,\theta)\), we obtain \[\left(\partial_{r}+\frac{r}{2l_{B}^{2}}-\frac{m}{r}\right)\Phi^{A}(r)=-\frac{E+\Delta}{\hbar v_{F}}\Phi^{B}(r) \tag{5a}\] \[\left(\partial_{r}+\frac{m+1}{r}-\frac{r}{2l_{B}^{2}}\right)\Phi^{B}(r)=\frac{E-\Delta}{\hbar v_{F}}\Phi^{A}(r) \tag{5b}\] By injecting (5a) into (5b), we end up with a second-order differential equation for \(\Phi^{A}(r)\) \[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}+\frac{m+1}{l_{B}^{2}}-\frac{r^{2}}{4l_{B}^{2}}-\frac{m^{2}}{r^{2}}+q^{2}\right)\Phi^{A}(r)=0 \tag{6}\] where we have set \(q=\frac{\sqrt{|E^{2}-\Delta^{2}|}}{\hbar v_{F}}\) and the magnetic length \(l_{B}=\sqrt{\frac{\hbar}{eB}}\). In order to solve (6), we start by exploring the asymptotic limits that define the necessary physical behaviors depending on the value of \(r\).
In the limit \(r\rightarrow\infty\), (6) can be approximated by \[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{r^{2}}{4l_{B}^{2}}\right) \Phi^{A}(r)=0 \tag{7}\] which is the updated Bessel equation for the zero-order case, therefore having the solution \[\Phi^{A}(r)=c_{1}I_{0}\left(\frac{r^{2}}{4l_{B}^{2}}\right)+c_{2}K_{0}\left( \frac{r^{2}}{4l_{B}^{2}}\right) \tag{8}\] where \(I_{0}(x)\) and \(K_{0}(x)\) denote the zero order modified Bessel functions of the first and second kinds, respectively. We choose \(c_{1}=0\) and \(c_{2}=1\) to avoid the divergence of function \(I_{0}(x)\) when \(x\) reaches infinity. Now using the asymptotic behavior \(K_{0}(x)\underset{x\gg 1}{\sim}\frac{e^{-x}}{\sqrt{x}}\), to approximate \(\Phi^{A}(r)\) as \[\Phi^{A}(r)\sim 2l_{B}\frac{e^{-\frac{r^{2}}{4l_{B}^{2}}}}{r}. \tag{9}\] In the limit \(r\to 0\), (6) reduces to \[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{m^{2}}{r^{2}}\right)\Phi ^{A}(r)=0 \tag{10}\] which has the following solution \[\Phi^{A}(r)=\frac{c_{3}}{2}(r^{m}+r^{-m})+\frac{ic_{4}}{2}(r^{m}-r^{-m}) \tag{11}\] where \(c_{3}\) and \(c_{4}\) must be chosen in such a way that the solution adheres to the physical constraints. As a result, we examine the positive and negative values of \(m\) independently. Indeed, for \(m\geq 0\), \(\sim r^{-m}\) must vanish, which implies that \(c_{4}=-ic_{3}\) and, again by convention, we put \(c_{3}=1\). For \(m<0\), \(\sim r^{m}\) must vanish, then we replace \(c_{4}=ic_{3}\) with \(c_{3}=1\). Combining all to write the asymptotic behavior of \(\Phi^{A}(r)\) as \[\Phi^{A}(r)\sim r^{m},\qquad m\geq 0, \tag{12a}\] \[\Phi^{A}(r)\sim r^{-m},\qquad m<0. \tag{12b}\] Using the above analysis to write the solution of (6) as \[\Phi^{A\pm}(r)=r^{\pm m}\frac{e^{-r^{2}/4l_{B}^{2}}}{r/2l_{B}}\Xi_{q}^{A\pm}(r) \tag{13}\] knowing that the sign "+" stands for \(m\geq 0\), while "-" stands for the opposite case, \(m<0\). Now, we perform the variable change \(\eta=\frac{r^{2}}{2l_{B}^{2}}\) and use the transformation \(\Xi_{q}^{A\pm}(\eta)=\sqrt{\eta}\chi_{q}^{A\pm}(\eta)\) to write (13) as \[\Phi^{A\pm}(\eta)=l_{B}^{\pm m}2^{\frac{1+m}{2}}\eta^{\pm m/2}e^{-\eta/2}\chi_{ q}^{A\pm}(\eta) \tag{14}\] which can be injected into (6) to end up with the Kummer-type differential equations \[\eta\partial_{\eta}^{2}\chi_{q}^{A+}(\eta)+\left(m+1-\eta\right) \partial_{\eta}\chi_{q}^{A+}(\eta)+\frac{l_{B}^{2}q^{2}}{2}\chi_{q}^{A+}(\eta)=0 \tag{15a}\] \[\eta\partial_{\eta}^{2}\chi_{q}^{A-}(\eta)+\left(1-m-\eta\right) \partial_{\eta}\chi_{q}^{A-}(\eta)+\left(m+\frac{l_{B}^{2}q^{2}}{2}\right) \chi_{q}^{A-}(\eta)=0 \tag{15b}\] showing the confluent hypergeometric functions as solutions \[\chi_{q}^{A+}(\eta) ={}_{1}F_{1}\left(-\frac{l_{B}^{2}q^{2}}{2},m+1,\eta\right) \tag{16a}\] \[\chi_{q}^{A-}(\eta) ={}_{1}F_{1}\left(-l-\frac{l_{B}^{2}q^{2}}{2},1-m,\eta\right). \tag{16b}\] When we combine all of the above results, we get the following solutions to the second order differential equation (6) \[\Phi^{A+}(r) =r^{|m|}e^{-r^{2}/4l_{B}^{2}}\,{}_{1}F_{1}\left(-\frac{l_{B}^{2}q ^{2}}{2},m+1,\frac{r^{2}}{2l_{B}^{2}}\right) \tag{17a}\] \[\Phi^{A-}(r) =r^{|m|}e^{-r^{2}/4l_{B}^{2}}\,{}_{1}F_{1}\left(-m-\frac{l_{B}^{2 }q^{2}}{2},1-m,\frac{r^{2}}{2l_{B}^{2}}\right) \tag{17b}\] The other components of spinor (4) can be obtained by inserting (17a) and (17b) into (5a). 
This process yields \[\Phi^{B+}(r) =\frac{q}{2(m+1)}r^{|m|+1}e^{-r^{2}/4l_{B}^{2}}\,{}_{1}F_{1}\left( 1-\frac{l_{B}^{2}q^{2}}{2},m+2,\frac{r^{2}}{2l_{B}^{2}}\right) \tag{18a}\] \[\Phi^{B-}(r) =\frac{1}{q}r^{|m|-1}e^{-r^{2}/4l_{B}^{2}}\left[2m\,{}_{1}F_{1} \left(-m-\frac{l_{B}^{2}q^{2}}{2},1-m,\frac{r^{2}}{2l_{B}^{2}}\right)+\frac{(2 m+l_{B}^{2}q^{2})r^{2}}{2(1-m)l_{B}^{2}}\,{}_{1}F_{1}\left(1-m-\frac{l_{B}^{2}q^{2}}{2 },2-m,\frac{r^{2}}{2l_{B}^{2}}\right)\right] \tag{18b}\] In the forthcoming analysis, we will see how the above results can be worked out to study the scattering phenomenon in terms of different quantities. ## III Scattering problem In order to examine the scattering problem, we first explain how an electron scatters on a circular GQD of radius \(R\) in the presence of a constant magnetic field before identifying the key parameters that characterize the scattering problem. Consider an electron moving along the \(x\) direction with energy \(E=\hbar v_{F}k\), where \(k\) is the corresponding wave number. As a result, the incident electron can be described by a plane wave as follows \[\Phi_{k}^{i}(r,\theta)=\frac{1}{\sqrt{2}}e^{ikr\cos\theta}\binom{1}{1}=\frac{ 1}{\sqrt{2}}\sum_{m=-\infty}^{\infty}i^{m}\binom{J_{m}(kr)e^{im\theta}}{iJ_{m+ 1}(kr)e^{i(m+1)\theta}} \tag{19}\] where \(J_{m}(z)\) is the Bessel function of first kind. The following equation demonstrates how the reflected electron wave can be divided into partial waves since it must adhere to infinite boundary requirements for the scattering mechanism being researched [37] \[\Phi_{k}^{r}(r,\theta)=\frac{1}{\sqrt{2}}\sum_{m=-\infty}^{\infty}a_{m}^{r}i^{ m}\binom{H_{m}(kr)e^{im\theta}}{iH_{m+1}(kr)e^{i(m+1)\theta}} \tag{20}\] and \(H_{m}(x)\) is the Hankel function of first kind that is a linear combinations of \(J_{m}\) and Neumann \(Y_{m}\), i.e., \(H_{m}(x)=J_{m}(x)+iY_{m}(x)\). For a large value of \(x\), its asymptotic behavior is \[H_{m}(x)\underset{x\gg 1}{\sim}\sqrt{\frac{2}{\pi x}}e^{i(x-\frac{ix}{2}-\frac{ x}{4})}. \tag{21}\] The transmitted solution can be obtained from the previous analysis as \[\Phi_{q}^{t}(r,\theta)=\sum_{m=-\infty}^{-1}a_{m}^{t-}\binom{\Phi_{q}^{A-}(r) e^{im\theta}}{i\Phi_{q}^{B-}(r)e^{i(m+1)\theta}}+\sum_{m=0}^{\infty}a_{m}^{t+ }\binom{\Phi_{q}^{A+}(r)e^{im\theta}}{i\Phi_{q}^{B+}(r)e^{i(m+1)\theta}} \tag{22}\] where \(q\) represents the wave number associated to the electron inside the GQD as shown in Fig. 1. To investigate our system's scattering problem, we must first calculate the scattering coefficients \(a_{m}^{r}\) and \(a_{m}^{t}\) using the continuity of eigenspinors at the boundary condition \(r=R\). This is \[\Phi_{k}^{i}(R,\theta)+\Phi_{k}^{r}(R,\theta)=\Phi_{q}^{t}(R,\theta) \tag{23}\] giving rise to two conditions represented by two equations of \(a_{m}^{r}\) and \(a_{m}^{t}\) \[\frac{1}{\sqrt{2}}i^{m}J_{m}(kR)+\frac{1}{\sqrt{2}}i^{m}a_{m}^{r} H_{m}(kR)=a_{m}^{t}\Phi_{q}^{A\pm}(qR) \tag{24a}\] \[\frac{1}{\sqrt{2}}i^{m+1}J_{m+1}(kR)+\frac{1}{\sqrt{2}}i^{m+1}a_ {m}^{r}H_{m+1}(kR)=ia_{m}^{t}\Phi_{q}^{B\pm}(qR). \tag{24b}\] Consequently, we obtain \[a_{m}^{t\pm} =\frac{i\sqrt{2}e^{im\pi/2}}{\pi qR[H_{m}(kR)\Phi_{q}^{B\pm}(qR)- H_{m+1}(kR)\Phi_{q}^{A\pm}(qR)]} \tag{25a}\] \[a_{m}^{r\pm} =\frac{-J_{m}(kR)\Phi_{q}^{B\pm}(qR)+J_{m+1}(kR)\Phi_{q}^{A\pm}(q R)}{H_{m}(kR)\Phi_{q}^{B\pm}(qR)-H_{m+1}(kR)\Phi_{q}^{A\pm}(qR)}. 
\tag{25b}\] We now define the probability density function \(\rho=\Phi^{\dagger}\Phi\) and current density \(j=\Phi^{\dagger}\sigma\Phi\) using Hamiltonian (1), with the spinor \(\Phi\) being dependent on the region where \(\Phi=\Phi_{t}\) is inside the GQD and \(\Phi=\Phi_{i}+\Phi_{r}\) is outside. As a result, the radial component of the current is given by \[j_{\rm rad}^{r}(\theta)=\Phi^{\dagger}\begin{pmatrix}0&\cos\theta-i\sin\theta \\ \cos\theta+i\sin\theta&0\end{pmatrix}\Phi. \tag{26}\] Taking the asymptotic behavior (21) into account, we calculate \(j_{\rm rad}^{r}(\theta)\) as \[j_{\rm rad}^{r}(\theta)=\frac{4}{\pi kR}\sum_{m=-\infty}^{+\infty}|a_{m}^{r}|^ {2}+\frac{8}{\pi kR}\sum_{m<m^{\prime}}\Re(a_{m}^{r}a_{m^{\prime}}^{r})\cos[( m-m^{\prime})\theta]. \tag{27}\] At this level, we are interested in the quantities related to our system to determine their basic characteristics. Indeed, in the limit \(kr\rightarrow\infty\), (27) is used to calculate the effective scattering cross section \(\sigma\) defined by \[\sigma=\frac{I_{\rm rad}^{r}}{I^{\rm inc}/A_{u}} \tag{28}\] knowing that \(I_{\rm rad}^{r}\) represents the total reflected flux through the GQD of radius \(R\) and that the incident flux per unit area represents the term \(I^{\rm inc}/A_{u}\). Our calculation shows that the total reflected flux \(I_{\rm rad}^{r}\) takes the form \[I_{\rm rad}^{r}=\int_{0}^{2\pi}j_{\rm rad}^{r}(\theta)rd\theta=\frac{8}{k}\sum_ {m=-\infty}^{+\infty}|a_{m}|^{2} \tag{29}\] and \(I^{i}/A_{u}=1\) for the incident wave (19). To improve our study of the scattering problem of Dirac fermions in different sizes of circular quantum dots, we analyze the scattering efficiency \(Q\). This is defined as the ratio of the division of the scattering cross section to the geometrical cross section based on the lines shown in [38] \[Q=\frac{\sigma}{2R}=\frac{4}{kR}\sum_{m=-\infty}^{+\infty}|a_{m}^{r}|^{2}. \tag{30}\] Note that by requiring \(\Delta=0\), we recover the results found in [32]. The analytical results obtained so far will be analyzed numerically in order to study the effect of the energy gap and other parameters on the physical quantities that characterize the scattering phenomenon and also the confinement of Dirac fermions in a gapped GQD. These studies are reflected in the analysis of real space scattering and the evaluation of the lifetime corresponding to Dirac fermions inside the GQD. ## IV Results and discussions We numerically examine the scattering phenomenon of electrons on a gapped GQD when it is exposed to a constant magnetic field. Our studies are based on the analysis of the magnitude in terms of scattering efficiency \(Q\) and density \(\rho\) in the region close to the quantum dot and the lifetime of the quasi-bound states. Fig. 2 represents the contour plot of \(Q\) as a function of incident energy \(E\) and magnetic field \(B\) for the radius \(R=50\) nm and different values of the energy gap \(\Delta\). Fig. 2a shows that for \(\Delta=0\), the pattern of \(Q\) is almost oscillating and consists of six bands of high values of \(Q\), each of which corresponds to a scattering mode \(m=0,1,2,3,4,5\), which is in agreement with the results found previously [32; 39; 40; 22]. As shown in Figs. 2(b,c,d), including the energy gap \(\Delta\) reduces the resonance effect. As a result, we clearly observe that \(Q\) diminishes as long as \(\Delta\) approaches the incident energy. 
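As a practical aside, the expressions above can be evaluated numerically in a few lines. The following Python sketch is a minimal illustration (not the code used for the figures): it computes \(Q\) from Eqs. (17a), (18a), (25b) and (30) for the \(m\geq 0\) modes only, the \(m<0\) branch following analogously from Eqs. (17b) and (18b). The radial functions are assumed to be evaluated at the dot boundary \(r=R\), and the numerical constants (\(\hbar v_{F}\simeq 658\) meV nm and \(l_{B}\simeq 25.66/\sqrt{B[\mathrm{T}]}\) nm) are standard values not quoted in the text.

```python
import numpy as np
from scipy.special import jv, hankel1, hyp1f1

HBAR_VF = 658.2  # hbar*v_F in meV*nm for v_F = 1e6 m/s (assumed numerical value)

def phi_inside(m, q, r, lB):
    """Radial spinor components inside the dot for m >= 0, Eqs. (17a) and (18a).
    The m < 0 branch, Eqs. (17b) and (18b), is omitted here for brevity."""
    x = r**2 / (2.0 * lB**2)
    a = -(lB * q) ** 2 / 2.0
    phiA = r**m * np.exp(-x / 2.0) * hyp1f1(a, m + 1, x)
    phiB = q / (2.0 * (m + 1)) * r ** (m + 1) * np.exp(-x / 2.0) * hyp1f1(a + 1.0, m + 2, x)
    return phiA, phiB

def scattering_efficiency(E, Delta, B, R, m_max=8):
    """Q = (4/kR) * sum_m |a_m^r|^2, Eq. (30), truncated to m = 0..m_max."""
    k = E / HBAR_VF                              # incident wave number (1/nm)
    q = np.sqrt(abs(E**2 - Delta**2)) / HBAR_VF  # wave number inside the dot
    lB = 25.66 / np.sqrt(B)                      # magnetic length in nm, B in tesla
    Q = 0.0
    for m in range(m_max + 1):
        phiA, phiB = phi_inside(m, q, R, lB)
        # reflection coefficient a_m^r, Eq. (25b), with the radial functions at r = R
        a_r = (-jv(m, k * R) * phiB + jv(m + 1, k * R) * phiA) / (
            hankel1(m, k * R) * phiB - hankel1(m + 1, k * R) * phiA)
        Q += 4.0 / (k * R) * abs(a_r) ** 2
    return Q

# Example: E = 20 meV, Delta = 14 meV, B = 2 T, R = 50 nm
print(scattering_efficiency(20.0, 14.0, 2.0, 50.0))
```

Scanning \(E\), \(B\) or \(R\) with such a routine reproduces the qualitative structure of the efficiency maps discussed in this section.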
In addition, the more the value of \(\Delta\) is increased, the more the number of scattering modes is gradually decreased until it becomes only four modes when \(\Delta\) approaches \(40\) meV, as depicted in Fig. 2d. We note that when the magnetic field corresponding to the excitation of the scattering modes decreases, the value of \(\Delta\) increases. The scattering efficiency \(Q\) is plotted as a function of the magnetic field \(B\) for various values of the energy gap \(\Delta\) and incident energy \(E\) in Fig. 3. When \(\Delta\) exceeds \(6\) meV for \(E=7\) meV and \(B=0\), \(Q\) is not null, as shown in Fig. 3a. Furthermore, we observe that \(\Delta\) shifts the resonance peaks to the right. This leads to the suppression of a resonance peak at \(\Delta\geq 6\) meV. In Figs. 3(b,c,d), we choose \(E=20\), \(25\) and \(35\) meV where we see that the resonance peaks are always shifted to the right due to the energy gap. Some peaks are also suppressed as \(\Delta\) approaches \(E\) and \(Q\) takes specific values for \(B=0\). This tells us that \(Q\) survives and takes important values even in the absence of the magnetic field, except that one should be placed in the zone where \(\Delta\) is closest to \(E>20\) meV. We show the scattering efficiency \(Q\) as a function of the magnetic field \(B\) in Fig. 4 for E = 20 meV, three values of the energy gap \(\Delta=0\), \(14\), \(18\) meV and the quantum number \(m=0\), \(\cdots=5\) meV. The first three modes, \(m=-1,0,1\) are excited without resonance peaks in Fig. 4a for \(\Delta=0\) meV. On the other hand, for the other scattering modes \(m=2,3,4\), we observe resonance peaks for specific values of \(B\). When \(m\) is increased, the width of these peaks narrows and becomes very narrow for \(m=4\). These results are in good agreement with those obtained in [32]. In Figs. 4(b,c), we see that the behavior of \(Q\) is almost similar to that found in Fig. 4a except that the positions of Figure 2: (color online) The scattering efficiency \(Q\) as a function of the incident energy \(E\) and magnetic field \(B\) for \(R=50\) nm different values of \(\Delta\). (a): \(\Delta=0\) meV, (b): \(\Delta=15\) meV, (c): \(\Delta=25\) meV and (d): \(\Delta=39.5\) meV the resonance peaks are shifted to the right. We also observe the suppression of the \(m=4\) scattering mode. We emphasize that in the absence of the magnetic field, the addition of \(\Delta\) leads to a scattering efficiency \(Q\) no null, which becomes very important with the increase of \(\Delta\). As shown in Figs. 4(b,c), \(Q\) is solely due to the two scattering modes \(m=-1,0\). The scattering efficiency \(Q\) as a function of the magnetic field \(B\) and radius \(R\) of the GQD is shown in Fig. 5 for four different values of the energy gap \(\Delta=0\) meV, 14, 18 meV, 19.9 meV. In Fig. 5a for \(\Delta=0\), we observe that starting from \(R\approx 32\) nm, the wide bands correspond to \(m=1,2,3\) and narrow bands correspond to \(m=4,5\) begin to show up. These important increases of \(Q\) are designated as "scattering resonances" and are associated with a specific scattering mode. It is clear that the interaction is very low below \(R\approx 32\) nm. Furthermore, as shown in Fig. 5a, the interaction is very weak inside the GQD, regardless of its radius, below \(B\approx\)0.4 T. Now by increasing \(\Delta\) in Figs. 5(b,c,d), we see that bangs appear for large values of \(R\) and grow as \(\Delta\) increases. In particular, in Fig. 
5d, the bangs appear for \(R=45\) nm and an energy gap \(\Delta=19.9\) meV closes to the incident one. This shows that the bangs corresponding to the scattering modes shift to larger values of \(R\) as long as \(\Delta\) is increased. Figure 4: (color online) The scattering efficiency \(Q\) as a function of the magnetic field \(B\) for \(E=20\) meV, \(m=-1,0,1,2,3,4\), and three values of the energy gap (a): \(\Delta=0\) meV, (b): \(\Delta=14\) meV, (c): \(\Delta=18\) meV. Figure 3: (color online) The scattering efficiency \(Q\) as a function of the magnetic field \(B\) for different values of \(\Delta\) and \(E\). (a): \(E=7\) meV and \(\Delta=0,4,6,6.9\) meV, (b): \(E=20\) meV and \(\Delta=0,14,18,19.9\) meV, (c): \(E=25\) meV and \(\Delta=0,12,21,24.9\) meV, (d): \(E=35\) meV and \(\Delta=0,18,28,34.9\) meV Fig. 6 shows \(Q\) as a function of \(R\) for \(E=20\) meV, four values of \(\Delta\)m and the the magnetic field (a): \(B=0.8\) T, (b): \(B=2.2\) T, (c): \(B=3.2\) T. In Fig. 6a, we observe that the most relevant resonance peaks are concentrated in the radius interval 50-100 nm. However, as \(B\) increase, we see that this interval shifts to small values of \(R\), as shown in Figs. 6(b,c). More importantly, \(Q\) becomes significant when \(R\) is low and \(\Delta\) is close to the incident energy. When \(R\) is less than 20 nm, adding an energy gap \(\Delta\) increases scattering efficiency \(Q\), but when \(R\) exceeds 20 nm, \(Q\) decreases as \(\Delta\) increases. In Fig. 7, each column is dedicated to a scattering mode among those studied previously, and we examine the density in the field near the GQD for each mode under the effect of two values of the energy gap \(\Delta=0\) and 14 meV. The boundary of the GQD is represented by the black circle. The magnetic field values chosen correspond to the peaks of \(m=0,1,2,3\) presented in Fig. 4. Line (1) starts with the results for the four scattering modes \(m=0,1,2,3\) at zero energy gap. In Fig. 7a where \(B=0.4\) T, we see that the large values of the density are distributed in the outer part of the GQD, i.e., the electron wave is diffracted on the GQD boundary, as can be observed in Fig. 4a where only the \(m=-1,0\) modes are non-resonantly excited with a very low value of the scattering efficiency. Thus, in this case, we do not expect electron trapping effects inside the GQD. The mode \(m=1\) is resonantly excited with a broad peak in Fig. 7c for \(B=1.62\) T. With the presence of the diffraction bangs, we also observe that the majority of the density is localized outside the GQD, but now the near field density values are slightly larger. The mode \(m=2\) is excited with a narrower resonance peak in Fig. 7e for \(B=2.56\) T than in the previous case. In this case, the majority of the density is concentrated inside the GQD with a high scattering efficiency. As a result, the small resonance peak of the mode \(m=2\) makes it more likely that the electron will be trapped inside the GQD. The resonance of the mode \(m=3\) is very clear in Fig. 7g for \(B=3.8\) T, with a narrower peak than for \(m=2\). We see the density concentration inside the GQD with a higher scattering efficiency and the suppression of diffraction bangs. Therefore, the electron trapping effect inside the quantum dot is notable. From the aforementioned, we infer that the trapping effect increases with the resonance peak's narrowness. 
In line (2), we represent the results obtained at an energy gap \(\Delta=14\) meV for the four scattering modes \(m=0,1,2,3\) with (b): (\(m=0\),\(B=0.8\) T) and (d): (\(m=1\),\(B=2.1\) T). It is clearly seen that the density inside and at the GQD boundary is improved compared to those seen before, with the presence of diffraction bangs around the GQD. Comparing (f): (\(m=2\),\(B=3.03\) T) and (h): (\(m=3\),\(B=4.3\) T) with (e) and Figure 6: (color online) The scattering efficiency \(Q\) as a function of the radius \(R\) of the GQD for three values of the magnetic field strength (a): \(B=0.8\) T, (b): \(B=2.2\) T, (c): \(B=3.2\) T, and four values of the energy gap \(\Delta=0\), 14, 18, 19.9 meV. Figure 5: (color online) The scattering efficiency \(Q\) as a function of the magnetic field \(B\) and radius \(R\) of the GQD for an incident energy value of \(E=20\) meV and four values of the energy gap (a): \(\Delta=0\) meV, (b): \(\Delta=14\) meV, (c): \(\Delta=18\) meV, (d): \(\Delta=19.9\) meV. (g), we see that the density is more concentrated inside the GQD and has created a clearer cloud around the core of the GQD. Therefore, when an energy gap is added, the impact of electron trapping inside the GQD is improved. We now perform another analysis in terms of density in real space to evaluate the trapping time (the lifetime of the quasi-bound states). The analysis of the scattering resonances must be performed in terms of the complex incident electronic energy \(E=E_{r}-iE_{i}\), where \(E_{r}\) represents the resonance energy and \(E_{i}\) gives the lifetime of the quasi-bound states \(\tau\), with \(\tau=h/E_{i}\). We have previously indicated that the whole scattering process can be decomposed into several scattering sub-processes. We will now concentrate on the two scattering modes \(m=0,3\) to see how the energy gap \(\Delta\) affects the lifetime \(\tau\) of the quasi-bound states. We use continuity to find the complex energy of the incident electron by matching the transmitted wave function with the reflected wave function on the boundary \(r=R\)[39]. Since the incident energy of the incoming electron is not affected by the magnetic field, we treat the following transcendental equation for \(q\) and \(k\) \[\frac{\Phi_{q}^{A\pm}(qR)}{\Phi_{q}^{B\pm}(qR)}=\frac{H_{m}(kR)}{H_{m+1}(kR)}. \tag{31}\] The lifetime \(\tau\) as a function of the magnetic field \(B\) is shown in Fig. 8 for the two scattering modes \(m=0,3\) and four values of the energy gap \(\Delta=0,14,18,19.9\) meV. The first thing we notice is that \(\tau\) increases as \(B\) increases, which agrees with what we previously discovered by examining the scattering efficiency \(Q\). In Fig. 8a, \(\tau\) becomes visible from \(B=1.8\) T and increases more clearly from \(B\approx 3.35\) T for \(\Delta\) non-null. In Fig. 8b, we see that for a given \(\Delta\), \(\tau\) tends to increase from small values of \(B\) when compared to the case of \(\Delta=0\) for higher magnetic fields. We conclude that when \(\Delta\) is non-null, \(\tau\) improves even further. Figure 7: (color online) A density representation for a real space examination of electron scattering on a magnetically driven GQD. Each graph column is devoted to a given value of the quantum number (A): \(m=0\), (B): \(m=1\), (C): \(m=2\), and (D): \(m=3\). Each graph line represents a specific value of the energy gap (1): \(\Delta=0\) meV and (2): \(\Delta=14\) meV. 
Each panel corresponds to a given value of the magnetic field (a): \(B=0.4\) T, (b): \(B=0.8\) T, (c): \(B=1.62\) T, (d): \(B=2.1\) T, (e): \(B=2.56\) T, (f): \(B=3.03\) T, (g): \(B=3.8\) T, (h): \(B=4.3\) T. A black circle indicates the spatial extent of the GQD. ## V Conclusion We have presented a theoretical analysis of the electron scattering mechanism on a magnetically driven graphene quantum dot (GQD) in the presence of a mass term generating an energy gap in the spectrum. We first established a theoretical model for the interaction of Dirac fermions with a constant magnetic field in a circular GQD and identified the crucial variables. We then solved the corresponding Dirac equation and analytically determined the energy spectrum. Using the continuity of the eigenspinors, we calculated the different quantities characterizing the scattering phenomenon. In particular, the efficiency \(Q\) is found to depend on the magnetic field, the angular momentum, the radius of the GQD, the incident energy, and the energy gap. We then analyzed the results numerically to provide a general interpretation of the scattering phenomenon and to show how an electron at normal incidence can be trapped in a GQD for a certain period of time, the main objective being to improve this trapping time. We found that even in the absence of a magnetic field, the scattering efficiency can reach significant values when the energy gap is close to the incident energy of the electron crossing the GQD. For a non-null magnetic field, we showed that the resonance peaks with a higher scattering efficiency are those corresponding to the smallest values of the GQD radius. By analyzing the probability density, we showed that the diffraction phenomenon is dominant in the domain where the scattering is non-resonant, with a weak localization of the density in the GQD in the absence of an energy gap. When an energy gap is added, however, the density inside the GQD is enhanced. On the other hand, in the domain where the scattering is resonant, we observed the damping of the diffraction together with a strong localization of the density inside and at the boundary of the GQD, as well as noticeable trapping effects. The main result of this paper is that the possibility of trapping electrons in the GQD is improved under the influence of a mass term creating an energy gap in the energy spectrum. ###### Acknowledgements. We warmly thank Professor Adrian Pena for his valuable support.
2303.10776
Separation of electrons from pions in GEM TRD using deep learning
Machine learning (ML) is no new concept in the high-energy physics community, in fact, many ML techniques have been employed since the early 80s to deal with a broad spectrum of physics problems. In this paper, we present a novel technique to separate electrons from pions in the Gas Electron Multiplier Transition Radiation Detector (GEM TRD) using deep learning. The Artificial Neural Network (ANN) model is trained on the Monte Carlo data simulated using the ATHENA-based detector and simulation framework for the Electron-Ion Collider (EIC) experiment. The ANN model does a good job of separating electrons from pions.
Nilay Kushawaha, Yulia Furletova, Ankhi Roy, Dmitry Romanov
2023-03-19T21:56:38Z
http://arxiv.org/abs/2303.10776v2
# Separation of electrons from pions in GEM TRD using deep learning ###### Abstract Machine learning (ML) is not a completely new concept in the high energy physics community, in fact, many ML techniques have been employed since the early 80s to deal with a broad spectrum of physics problems. In this paper, we have presented a novel technique to separate electrons from pions in the Gas Electron Multiplier Transition Radiation Detector (GEM TRD) using deep learning. The Artificial Neural Network (ANN) model is trained on the Monte Carlo data simulated using the ATHENA based detector and simulation framework for Electron-Ion Collider (EIC) experiment. The ANN model does a good job of separating electrons from pions. keywords: Gas Electron Multiplier, Transition Radiation Detector, Artificial Neural Network, Transition Radiation (TR) photons, Energy deposit + ## 1 Introduction Particle identification is one of the major challenges in experimental physics. The identification of a stable particle is done either on the basis of their interaction or by determining their masses. In traditional particle physics experiments, particles are identified by the characteristic signature they leave in the detector. Conventionally, particle identification was done using the cut-based method where a threshold was fixed and if the signature of the particle was more than the threshold value, then it was classified as a signal. With the advancement of superior hardware and smart algorithms, various machine learning and deep learning techniques came into existence. Deep learning [4] and Artificial Neural Networks [6] have become the most popular tool for research, data-driven and prediction based applications. Transition Radiation Detectors (TRDs) are used for electron identification and for electron/hadron separation (in addition to calorimeter & ring-imaging Cherenkov detector) in some particle physics experiment [13]. In this paper, we will discuss the Gas Electron Multiplier Transition Radiation Detector(GEM TRD) [1], simulation of GEM TRD and the deep learning technique to separate electrons from pions. ## 2 Software Implementation of Detector Setup ### Physics Processes Transition radiation (TR) [1] is produced by charged particles when they cross the boundary between two media with different dielectric constants. When electrons travel through the radiator, TR photons are produced. The total TR energy is proportional to the \(\gamma\)-factor [13] of the charged particle. Some TR photons are absorbed in the 3 cm gas volume (Xe-based mixture) of GEM. The X-ray TR photons are extremely forward peaked, and therefore their clusters overlap with the \(\frac{dE}{dx}\) of charged particle [5]. In the case of pions, no TR photons are produced. Pions begin to produce TR at energies greater than 100 GeV. For the EIC experiment, the particles will be produced in the energy range of up to 50 GeV, therefore for our simulation we used particles with energy, E \(\sim\) 6 GeV [13]. ### Software Dependencies The ATHENA singularity container [11] contains all the necessary soft-wares required for the construction, simulation, visualization of the detector as well as particle generation, analysis, and reconstruction. DD4hep [9] is a software framework included in the singularity container for providing a complete solution to full detector description (geometry, materials, visualization, readout, alignment, calibration,etc.) 
for the full experiment life cycle which includes detector concept development, detector optimization, construction, operation. For simulating the data we are using the "ddsim" [8] simulation package provided by the ATHENA singularity container [11]. We are simulating the data for electrons and pions in two separate root files. More information about the data is discussed in section 2.4. ### Detector Simulation using DD4hep Software DD4hep software is used to create the detector and radiator geometries. Two disk-like shapes are created: one for the TR-radiator and one for GEM. The GEM disc has a thickness of 3 cm, and the material inside it is composed of xenon gas (Xe) and carbon dioxide (CO\({}_{2}\)) in the 80:20 ratio, the thickness of the radiator is 15 cm and is enclosed with thin sheets of mylar foil (CH\({}_{2}\) & Air). Figure 1 right shows the radiator along with the GEM TRD. The radiator and the GEM are separated by a gap of \(\sim\) 200 um filled with air. To set up the physics list for the sensitive GEM layer and the TR-radiator, we use the QSGP_BERT reference physics [14], which includes all relevant physics processes for particles with energies below 10 GeV. ### Data Generation \(\&\) Overall Setup We are simulating 1000k records each for electrons and pions in two separate root files using a 6 GeV particle gun for training the model. For testing the performance of the model we are simulating 500k mixed records. The root files provide information about the particle's position coordinate, energy deposit, drift time, and momentum information. In order to create features for the machine learning model, the Z-coordinate (Position Z) of the particles Figure 1: _left : GEM TRD module using DD4hep software with thickness 30 mm, middle : Radiator module using DD4hep software with thickness 150 mm, right : GEM TRD and radiator with a gap of 200 microns_ in the drift region is split into 69 bins with the corresponding energy deposit associated with it as the features. We plot a 1-D weighted histogram of the Z-coordinate for both electrons and pions in the 30 mm drift gap with the energy deposit (\(\frac{dE}{dx}\)) as the weight parameter as shown in figure 2. Particles enter the GEM from the left at value 3510 mm and the number corresponding to 3540 mm is the readout of the detector setup. The energy deposit of the electrons is initially quite high whereas the pion's energy deposit remains flat throughout the drift distance. When electron enters the radiator, it interacts with the material and generates soft and hard TR photons. The soft TR photons are mostly absorbed near the entrance window of the sensitive volume, resulting in the enhancement seen in the graph in case of electrons. Some hard TR photons are Figure 2: _DD4hep simulation of \(\frac{dE}{dx}\)+ TR photons vs drift distance for electrons with and without radiator (blue, red) and pions (green)_ absorbed along the drift volume of GEM or could leave the volume undetected. However, in the absence of radiator the electrons also have a flat energy deposit throughout the drift distance as shown in figure 2 in red color. Note, that the \(\frac{dE}{dx}\) of electrons without the radiator is higher than for pions. ## 3 Neural architecture and data generation ### Working of Artificial Neural Network Artificial Neural Networks (ANN) [6] have been around for quite a long time, they have been studied for many years in the hope of achieving human-like performance. 
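As a brief aside on the input features of Sec. 2.4, the sketch below illustrates the binning step: the hit z-positions of a track are histogrammed into 69 bins across the 30 mm drift region (entrance at 3510 mm, readout at 3540 mm), weighted by the deposited energy. The input arrays are hypothetical placeholders standing in for the quantities read from the simulated ROOT files.

```python
import numpy as np

# Hypothetical per-track hit arrays (in practice read from the simulation ROOT file)
z_hits = np.array([3512.1, 3515.7, 3521.3, 3533.9])  # hit z-positions in mm
e_dep = np.array([0.8, 1.3, 0.4, 0.6])               # corresponding energy deposits

# 69 bins spanning the 30 mm drift gap of the GEM TRD
edges = np.linspace(3510.0, 3540.0, 69 + 1)

# dE/dx-weighted histogram of z: one 69-dimensional feature vector per track
features, _ = np.histogram(z_hits, bins=edges, weights=e_dep)
print(features.shape)  # (69,)
```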
ANN is proven to be a powerful tool to find and understand the uniqueness of certain features of the data which are invisible to human eyes, look for hidden patterns and features in the data [3]. The working of an ANN can simply be understood as the mapping of function from one space to some other space either linearly or using some higher-order relation. It can be used for solving both classification as well as regression problems. The ANN described in our paper consists of three layers: * Input layer: It takes the input vectors from the user and multiplies the respective branch weights to it. * Hidden layer: The hidden layer is a collection of neurons that perform all computations on the input data. It is responsible for learning complex patterns from the data. * Output layer: It gives the final predicted output value based on the input features. The neural network model takes the input, multiplies some branch weight to it, concatenates them and applies an activation function on top of it. The prediction of neural network model is compared with the true label and the loss function is calculated. The loss function is then minimized using an optimizer [4] by tweaking the weights until we reach the global minima or a considerable loss value. ### Model Architecture We have designed an ANN model with one input layer, four hidden layers and an output layer using the Keras framework [10]. Figure 3 shows the complete ANN architecture along with the various parameters associated with it. The input layer contains 69 nodes or neurons to handle the input vectors. The hidden layers contain 500, 300, 200 and 100 neurons sequentially with "RELU" activation function [12]. The presence of dropout layer [4] and batch normalization [7] reduces overfitting of the model on the training data. In the output layer, we have one neuron with sigmoid activation function. The loss function we are using is binary cross entropy [10] as we have a binary problem statement. To decrease the loss function we are using the "ADAM" optimizer [15]. The ANN model is trained for 100 epochs and the predictions are made on the test data. Figure 3: _Artificial Neural Network architecture for the model with 69 features_ ### ANN Performance on Test Data We trained three distinct ANN models to compare the performance of the model on the test data. The first model has 29 features, the second has 49, and the third model has 69. However, when the predictions of all three models were compared, it was discovered that the model with 69 features outperformed the other two. The loss function decrement with respect to epochs for the three models is shown in figure 4. We can see that the loss value in middle figure is close to zero, both for training and validation. The accuracy of all the three models with respect to epochs is shown in figure 5. Again it can be seen that the model with the greater number of features Figure 4: _Change in loss value with respect to epochs for training and validation data, left: 29 features, middle: 69 features, right: 49 features_ Figure 5: _Improvement in accuracy with respect to epochs for training and validation data, left: 29 features, middle: 69 features, right: 49 features_ has larger accuracy. Table 1 shows the accuracy and F1 score 3 for the three ANN models. Footnote 3: F1 score is the harmonic mean of precision [17] and recall [16]. It is a special case of F-beta score [4]. 
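For completeness, here is a minimal Keras sketch of the architecture described in Sec. 3.2: an input of 69 features, hidden layers of 500, 300, 200 and 100 ReLU units with batch normalization and dropout, a single sigmoid output, binary cross-entropy loss and the Adam optimizer. The dropout rate is an assumed value (it is not quoted in the text), and the snippet is an illustrative reconstruction rather than the exact training script.

```python
from tensorflow.keras import layers, models

def build_ann(n_features=69, dropout_rate=0.2):
    """ANN of Sec. 3.2; dropout_rate is an assumed value (not given in the text)."""
    model = models.Sequential()
    model.add(layers.Dense(500, activation="relu", input_shape=(n_features,)))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(dropout_rate))
    for units in (300, 200, 100):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(dropout_rate))
    model.add(layers.Dense(1, activation="sigmoid"))  # electron (1) vs pion (0)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_ann()
# history = model.fit(X_train, y_train, epochs=100, validation_split=0.2)
```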
We can observe that the ANN model with 69 features beats all other models; however, we cannot raise the number of bins or features beyond 69 since the detector setup's electronics have a threshold resolution on the number of bins it can work with. The existing electronics can only handle 30 bins, but we have provided 69 bins in this study to illustrate that increasing the number of bins can lead to more efficient separation of electrons from pions, providing space for improvement within the detector setup's electronics. ## 4 Results and discussions Figure 6 shows the output of the Artificial Neural Network for all three models. We construct a table 2 with the electron efficiency and pion rejection factor for the different threshold values in order to set an acceptable cut on the output of the ANN model. We can simply identify a balance between \begin{table} \begin{tabular}{|l|l|l|l|} \hline Model & Accuracy & F1 score & F1 score \\ & & (0) & (1) \\ \hline ANNmodel29bins & 0.87 & 0.87 & 0.87 \\ \hline ANNmodel49bins & 0.90 & 0.90 & 0.90 \\ \hline ANNmodel69bins & 0.93 & 0.93 & 0.93 \\ \hline \end{tabular} \end{table} Table 1: _Accuracy and f1 score for three different models where (1) refers to electrons and (0) refers to pions_ the two parameters (electron efficiency and pion rejection factor) according to our needs. For example : if we put the threshold cut at around 0.6 for the ANN model with 69 features, we will get an electron efficiency of 93% and a pion rejection factor of 15. The tradeoff between electron efficiency and pion contamination w.r.t the various threshold values is shown in figure 7. ## 5 Conclusion Electron identification will be very important for the future Electron-Ion Collider (EIC) experiment due to the expected large hadron background. The GEM TRD module with 30 mm drift gap along with the 150 mm radiator provides a desirable \(e/\pi\) separation. The results presented from the Figure 6: _Signal vs background categorization for electrons and pions, left: 29 features, middle: 69 features, right: 49 features_ Figure 7: _Electron efficiency/pion contamination w.r.t threshold values for three different ANN models, left: 29 features, middle: 69 features, right: 49 features_ \begin{table} \begin{tabular}{|p{108.4pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Model & Threshold value & Electron efficiency & Pion rejection factor \\ \hline & 0.2 & 0.958 & 3.171 \\ & 0.4 & 0.895 & 6.616 \\ ANNmodel29bins & 0.6 & 0.829 & 11.670 \\ & 0.8 & 0.712 & 27.282 \\ & 0.9 & 0.594 & 59.663 \\ \hline & 0.2 & 0.952 & 5.466 \\ & 0.4 & 0.917 & 8.871 \\ ANNmodel49bins & 0.6 & 0.880 & 12.6 \\ & 0.8 & 0.823 & 21.865 \\ & 0.9 & 0.758 & 33.486 \\ \hline & 0.2 & 0.976 & 7.266 \\ & 0.4 & 0.957 & 10.368 \\ ANNmodel69bins & 0.6 & 0.938 & 15.613 \\ & 0.8 & 0.908 & 24.287 \\ & 0.9 & 0.891 & 30.149 \\ \hline \end{tabular} \end{table} Table 2: _Electron efficiency and pion rejection factor for the three ANN models with different threshold values_ DD4hep simulation of the GEM TRD setup and the radiator show that at 92% electron efficiency, a pion rejection factor of about 8.9 can be achieved for the ANN model with 49 bins. The electron efficiency and the pion rejection factor can further be increased by using an ANN model with greater number of bins/features. ## 6 Acknowledgements This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
2303.08139
Limit Shape of the Generalized Inverse Gaussian-Poisson Distribution
The generalized inverse Gaussian-Poisson (GIGP) distribution proposed by Sichel in the 1970s has proved to be a flexible fitting tool for diverse frequency data, collectively described using the item production model. In this paper, we identify the limit shape (specified as an incomplete gamma function) of the properly scaled diagrammatic representations of random samples from the GIGP distribution (known as Young diagrams). We also show that fluctuations are asymptotically normal and, moreover, the corresponding empirical random process is approximated via a rescaled Brownian motion in inverted time, with the inhomogeneous time scale determined by the limit shape. Here, the limit is taken as the number of production sources is growing to infinity, coupled with an intrinsic parameter regime ensuring that the mean number of items per source is large. More precisely, for convergence to the limit shape to be valid, this combined growth should be fast enough. In the opposite regime referred to as "chaotic", the empirical random process is approximated by means of an inhomogeneous Poisson process in inverted time. These results are illustrated using both computer simulations and some classic data sets in informetrics.
Leonid V. Bogachev, Ruheyan Nuermaimaiti, Jochen Voss
2023-03-14T12:26:31Z
http://arxiv.org/abs/2303.08139v1
# Limit Shape of the Generalized Inverse Gaussian-Poisson Distribution ###### Abstract The generalized inverse Gaussian-Poisson (GIGP) distribution proposed by Sichel in the 1970s has proved to be a flexible fitting tool for diverse frequency data, collectively described using the item production model. In this paper, we identify the limit shape (specified as an incomplete gamma function) of the properly scaled diagrammatic representations of random samples from the GIGP distribution (known as Young diagrams). We also show that fluctuations are asymptotically normal and, moreover, the corresponding empirical random process is approximated via a rescaled Brownian motion in inverted time, with the inhomogeneous time scale determined by the limit shape. Here, the limit is taken as the number of production sources is growing to infinity, coupled with an intrinsic parameter regime ensuring that the mean number of items per source is large. More precisely, for convergence to the limit shape to be valid, this combined growth should be fast enough. In the opposite regime referred to as "chaotic", the empirical random process is approximated by means of an inhomogeneous Poisson process in inverted time. These results are illustrated using both computer simulations and some classic data sets in informetrics. _Keywords:_ count data; frequency distributions; sources/items models; generalized inverse Gaussian-Poisson distribution; Young diagrams; limit shape; informetrics data _MSC 2020:_ Primary 62E17; Secondary 62P25 ## 1 Introduction In many applied situations, one deals with _count data_ in the form of sample frequencies of occurrence in one of the countably many categories ("boxes"), say, labelled by index \(j\in\mathbb{N}=\{1,2,\dots\}\) or \(j\in\mathbb{N}_{0}=\{0,1,2,\dots\}\). It is often appropriate to interpret occurrence in each box \(j\) as the corresponding number of batched "items" produced by one out of the plurality of contributing "sources"; for example, falling into box \(j=0\) is interpreted as no items produced by the source. The observed data set is then the vector \((M_{j})\) of the recorded counts in each box out of the total number of sources \(M=\sum_{j}M_{j}\), with the total number of items \(N=\sum_{j}jM_{j}\). Diverse real-life examples of such a scenario include: abundance data of various species such as butterflies, with different species treated as sources and the observed counts as items; the number of followers (items) of different accounts in Twitter (sources); repeat-buying data, with the number of units bought (items) by households (sources); the number of papers (items) produced by authors (sources); etc.1 In the latter example, papers may themselves play the role of sources, with citations as items. Altogether, this forms an interesting triangle of relationships, _authors-papers-citations (APC)_, which is one of the main subjects of investigation in _informetrics_[14]. Footnote 1: For more examples and further references, see, e.g., a monograph by Egghe [14] or a review paper by Clauset et al. [10]. A natural objective with such types of data is to explain the observed (relative) frequencies \((M_{j}/M)\) by fitting a suitable distributional model \((f_{j})\), preferably possessing some conceptual foundation and applicable to a variety of use cases. 
Of course, in any real-life data set the numbers \(M_{j}\) will reduce to zero for values \(j\) big enough, but this is reconciled with the modelling prediction \(M_{j}/M\approx f_{j}\) simply by the fact that the theoretical frequencies \(f_{j}\) tend to zero as \(j\to\infty\). However, adequate modelling of the long-tail frequencies is of importance in relation to understanding the behavior of extreme values in the count data (e.g., untypically high citations). A celebrated example of a theoretical frequency model is the _power law_, first proposed by Lotka [25] to describe the publication statistics in chemistry and physics, and based on the empirical observation that the sample frequencies \(M_{j}/M\) approximately follow a power law distribution of the form \(f_{j}=c_{\alpha}j^{-\alpha}\) (\(j\in\mathbb{N}\)), with some exponent \(\alpha>1\) and the normalization constant2\(c_{\alpha}>0\) such that \(\sum_{j=1}^{\infty}f_{j}=1\)[11, 14]. Footnote 2: The normalization constant \(c_{\alpha}\) is expressed via the Riemann zeta function, \(c_{\alpha}^{-1}=\sum_{j=1}^{\infty}j^{-\alpha}=\zeta(\alpha)\). In real-life examples, the power law exponent is typically in the range \(2<\alpha<3\)[10]. An evident heuristic tool to fit a power law model to the count data is by looking at the frequency plots (e.g., histograms) with logarithmic scales on both axes, whereby one seeks a straight-line fit, with the slope \(-\alpha\)[26]. An alternative approach [10], which provides the helpful smoothing of the discrete data, is via the complementary cumulative frequencies \(\bar{F}_{j}=\sum_{\ell\geq j}f_{\ell}\), where, using again the log-log plots, a good fit corresponds to a straight line, with slope \(1-\alpha\). More formally, the model can be fitted using standard statistical methods such as the maximum likelihood or ordinary least squares estimation [26]. The conventional explanation of universality of the power law is based on the principle of _cumulative advantage_, also expressed as the catchphrase "success breeds success", originally coined in the context of scientific productivity [31, 32, 15] (see also a more recent review [20] with a critique of cumulative advantage). Unfortunately, the utility of Lotka's power law for real data modelling is often limited by fitting to the data well only on a reduced range of count values, requiring a truncation of lower values (see an extensive discussion in [10]) or of higher (long-tail) values better described by a _stretched-exponential law_[24]. Numerous other attempts to fit theoretical distributions to a variety of count data sets included using the negative binomial distribution, the modified geometric distribution, the beta binomial distribution, and many more [21] (see the discussion and further references in [41, 20]), however, none of these distributional families proved to be sufficiently "universal" in explaining diverse count data sets, often failing to capture some characteristic features such as modality and long-tail behavior. In a series of papers, Sichel [36, 37, 38, 39, 40, 41] introduced and developed the so-called _generalized inverse Gaussian-Poisson (GIGP)_ model, proposed in an attempt to grasp a plausible production of items by respecting statistical differences in the individual productivity of sources (e.g., papers and authors, respectively). 
More precisely, a source is assumed to produce items according to a Poisson law with rate \(\lambda\), which is itself random with a specific choice of the _generalized inverse Gaussian (GIG)_ distribution density [36, 22]. In other words, GIGP distribution is a mixed Poisson distribution under the GIG mixing density (see [18] for a survey of Poisson mixture models for long-tailed count data). Sichel applied his GIGP model to a great variety of use cases and multiple data sets, from sentence-lengths and word frequencies in written prose [38, 39] to number of stones found in diamondiferous deposits [37] and scientific production (papers and/or citations) [41]. These examples have demonstrated a remarkable flexibility and versatility of the GIGP distribution family. In a more recent development, Yong [49] proposed to use combinatorial models of random integer partitions to mimic citation count data, where the constituent parts of the integer partition represent the author's papers with the corresponding numbers of citations, respectively. The main perceived advantage of this approach was to leverage the knowledge of so-called _limit shape_ for suitably scaled _Young diagrams_ visualizing parts in the (random) integer partition, which would then enable one to estimate statistically some citation metrics such as the \(h\)_-index3_ introduced by Hirsch [19]. Specifically, noting that the \(h\)-index corresponds geometrically to the location with equal coordinates at the upper boundary of the Young diagram, and using an explicit equation for the limit shape under the scaling \(\sqrt{N}\) along both axes, where \(N\gg 1\) is the total number of citations [44, 30], Yong came up with a simple estimate of the \(h\)-index, \(h\approx 0.54\,\sqrt{N}\), which he then tested using several data sets of mathematical citations [49]. Footnote 3: The \(h\)-index is defined as the maximum number \(h\) of the author’s papers, each one cited at least \(h\) times. In this paper, we apply the notion of limit shape to random samples from the GIGP distribution, taking as a large parameter its expected value \(\eta\) (together with the number of sources \(M\)). Under a suitable normalization, we obtain an explicit formula for the limit shape, given by the incomplete (upper) gamma function, \(\varphi_{\nu}(x)=\int_{x}^{\infty}s^{\nu-1}\,\mathrm{e}^{-s}\,\mathrm{d}s\) (\(x>0\)), indexed by the shape parameter \(\nu\geq-1\) of the GIGP distribution. In terms of empirical data analysis, this amounts to the corresponding scaling of the complementary cumulative frequency plots, which facilitates a quick visual check of the goodness-of-fit of the GIGP model, with an additional insight informed by the recommended scaling. A more careful analysis of the error bounds, based on asymptotic confidence intervals, is made possible by virtue of our result on asymptotically Gaussian fluctuations around the limit shape \(\varphi_{\nu}(x)\). As follows from the predicted limit shape \(\varphi_{\nu}(x)\), the upper tail of the GIGP model has a power-modulated exponential decay, thus strongly deviating from the power law behavior. In most practical examples, this discrepancy is not essential because of scarcity (or lack) of higher counts; however, the question of adequate modelling of extreme values is interesting, with the stretched exponential model mentioned above being an attractive alternative [24]. 
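To make the visual check just described concrete, the short sketch below evaluates \(\varphi_{\nu}(x)\) by numerical quadrature (which also covers \(-1\leq\nu\leq 0\), where standard incomplete-gamma routines are restricted to positive arguments) and forms the rescaled empirical boundary \(Y(Ax)/B\) from observed counts. The scaling constants \(A\) and \(B\) are treated as given inputs here; their GIGP-specific form is only specified later, in Section 4.1.

```python
import numpy as np
from scipy.integrate import quad

def limit_shape(x, nu):
    """phi_nu(x) = int_x^infty s^(nu-1) exp(-s) ds, the upper incomplete gamma."""
    value, _ = quad(lambda s: s ** (nu - 1.0) * np.exp(-s), x, np.inf)
    return value

def scaled_boundary(counts, A, B):
    """Empirical counterpart: x -> Y(Ax)/B, where counts[j] = M_j is the number
    of sources with output j, and (A, B) are the scaling constants (given)."""
    js = np.arange(len(counts))
    return lambda x: counts[js >= A * x].sum() / B

# Example: tabulate the limit shape for nu = 0.5
for x in (0.5, 1.0, 2.0):
    print(x, limit_shape(x, 0.5))
```

Plotting the empirical curve \(Y(Ax)/B\) against \(\varphi_{\nu}(x)\) on the same axes then gives the quick goodness-of-fit check referred to above.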
Note that the insightful property of having a limit shape with a non-trivial scaling is not automatic for count data models; for instance, a notable exception is the power law distribution \(f_{j}=c\,j^{-\alpha}\) because of being scale free: indeed, for any scaling factor \(A>0\), we have \(A^{\alpha}f_{Ax}\equiv f_{x}\left(x>0\right)\). In this regard, let us mention the _generalized power law (GPL)_ model recently introduced and studied by Nuermaimati et al. [29], which aims to bridge small values of counts (frequently truncated under the power law fit) and a power type upper tail. The conceptual justification of the GPL model is also based on the mixing idea as in Sichel [38], but under the different choices of the source production law (geometric instead of Poisson) and the mixing density (a beta distribution instead of the GIG one). The limit shape in the GPL model exists and is given by \(\varphi(x;\alpha)=(1+x)^{1-\alpha}\) (\(x\geq 0\)), where \(\alpha>0\) is the shape parameter of GPL. Although the long tail of the GPL limit shape has the same power decay as in the power law case above, the key difference is that GPL is not scale free, which ensures that the corresponding scaling is non-trivial. The rest of the paper is organized as follows. In Section 2.1, we define our main model of item production and set out basic notation and some elementary relations. The notions of Young diagrams and limit shape are introduced in Section 2.2, illustrated in Section 2.3 via the classic model of integer partitions. In Section 3.1, the GIGP distribution is defined with parameters \(\nu\in\mathbb{R}\), \(\alpha>0\) and \(0<\theta<1\), augmented in Section 3.2 by specification in the boundary case \(\alpha=0\), and followed in Section 3.3 by the asymptotic analysis of the GIGP expected value in the desired limit \(\theta\to 1-\) (Proposition 3.2). In particular, this analysis explains why we impose a restriction \(\nu\geq-1\). After these preparations, the limit shape problem is addressed in Section 4. First, in Section 4.1 the suitable scaling coefficients \(A\) and \(B\) are defined (Assumption 4.2), and our main result is stated as Theorem 4.1 about uniform convergence in probability (for \(x\geq\delta\), with any \(\delta>0\)) of rescaled Young diagrams to the limit shape \(\varphi_{\nu}(x)=\int_{x}^{\infty}\mathrm{e}^{-\lambda}\,\mathrm{d}s\) (\(x>0\)), illustrated using computer simulations for two example cases, \(\nu=0.5\) and \(\nu=-0.5\). Importantly, for this result to be valid, the \(y\)-scaling coefficient \(B\) must grow unboundedly (Assumption 4.3), which in turn implies that the number of sources \(M\) in the item production model should be large enough. Convergence to \(\varphi_{\nu}(x)\) is first shown in Section 4.3 for the expectation of Young diagrams (Theorem 4.2). Pointwise convergence in probability is then established in Section 4.4 (Theorem 4.4). In Section 4.5, we construct a suitable martingale in inverted time \(t=1/x\) (Lemma 4.5), which enables us to prove Theorem 4.1 by applying the Doob-Kolmogorov submartingale inequality [48, Sec. 4.6]. In Section 4.7, an alternative proof of Theorem 4.1 is given using the interpretation of the Young boundary as an empirical process and taking advantage of available concentration inequalities [8]. Fluctuations of random Young diagrams are studied in Section 5, where we establish the pointwise asymptotic normality (Theorem 5.1, unified by characterizing the limit as a Gaussian process in Theorems 5.2 and 5.3. 
The case where convergence to the limit shape fails due to the number of sources \(M\) not growing fast enough (referred to as a "chaotic" regime) is treated in Section 6, where we demonstrate that a Poisson approximation is a suitable replacement of a deterministic limit (Theorems 6.1, 6.2 and 6.3), illustrated by a computer simulation. In Section 7, the limit shape results are applied to some of the real-life frequency data sets considered earlier by Sichel [41] as a test bed for the proposed GIGP distribution goodness-of-fit. Here, our purpose it to demonstrate that the predicted limit shape, together with the adequate \((A,B)\)-scaling (based on estimated or predefined values of the GIGP parameters) provides a useful visualization tool for a quick and informative checkout of suitability of the GIGP model. The paper concludes with a summary of the main findings in Section 8. Lastly, Appendix A comprises a brief compendium of asymptotic formulas for the Bessel function \(K_{\nu}(z)\) involved in the definition of the GIGP distribution. These formulas (for small argument \(z\) or for large order \(\nu\)) are being extensively used in our asymptotic analysis. ## 2 Setting the scene ### Items production model Suppose there are \(M\) sources, each one producing a batch of items, and let \(X_{i}\) denote the random size of the batch produced by the \(i\)-th source (\(i=1,\ldots,M\)). The range of the output size can be \(j\in\mathbb{N}_{0}\) if empty output is allowed (e.g., citations of a paper), or it can be zero truncated, with \(j\in\mathbb{N}\) (e.g., papers of an author). The sources are independent of one another and their random outputs follow a common frequency distribution \((f_{j})\), that is, the random variables \((X_{i})\) are mutually independent and, for each \(i=1,\ldots,M\), \[\mathsf{P}(X_{i}=j)=f_{j}\qquad(j\in\mathbb{N}_{0}).\] _Remark 2.1_.: To streamline the notation, we keep writing \(j\in\mathbb{N}_{0}\), wherein the zero-truncated case is included with \(f_{0}=0\). _Remark 2.2_.: The _item production model_ introduced above can be rephrased as the classic _occupancy problem_, dealing with independent allocation of \(M\) particles over infinitely many boxes with probability distribution \((f_{j})\)[17]. We assume that the distribution \((f_{j})\) has finite mean, \[\eta:=\mathsf{E}(X_{i})=\sum_{j=0}^{\infty}jf_{j}<\infty. \tag{2.1}\] The total (random) number of produced items is given by the sum of the outputs, \[N=\sum_{i=1}^{M}X_{i}, \tag{2.2}\] with the expected value \[\mathsf{E}(N)=\sum_{i=1}^{M}\mathsf{E}(X_{i})=M\eta. \tag{2.3}\] It is useful to represent each \(X_{i}\) via "scanning" across the range of possible values \(j\), \[X_{i}=\sum_{j=0}^{\infty}jI_{\{X_{i}=j\}}, \tag{2.4}\] where \(I_{A}\) denotes the indicator of event \(A\) (i.e., with values \(1\) if \(A\) occurs and \(0\) otherwise). Of course, \[\mathsf{E}\big{(}I_{\{X_{i}=j\}}\big{)}=\mathsf{P}(X_{i}=j)=f_{j}\qquad(j\in \mathbb{N}_{0}). \tag{2.5}\] Consider the multiplicity \(M_{j}\) of output size \(j\in\mathbb{N}_{0}\) in the pooled production of items \((X_{i})\), \[M_{j}:=\#\big{\{}i\in\{1,\ldots,M\}\colon X_{i}=j\big{\}}=\sum_{i=1}^{M}I_{\{X _{i}=j\}}\qquad(j\in\mathbb{N}_{0}). \tag{2.6}\] Using (2.5), we find the expectation \[\mathsf{E}(M_{j})=\sum_{i=1}^{M}\mathsf{E}\big{(}I_{\{X_{i}=j\}}\big{)}=Mf_{j }\qquad(j\in\mathbb{N}_{0}). 
\tag{2.7}\] Note that the random variables \((M_{j})\) are not independent; indeed, they sum up to the number of sources, \[\sum_{j=0}^{\infty}M_{j}=\sum_{j=0}^{\infty}\sum_{i=1}^{M}I_{\{X_{i}=j\}}=\sum _{i=1}^{M}\sum_{j=0}^{\infty}I_{\{X_{i}=j\}}=\sum_{i=1}^{M}1=M.\] From the interpretation of the multiplicities \(M_{j}\), it is evident that the total (random) number of produced items is given by \[N=\sum_{j=0}^{\infty}jM_{j}. \tag{2.8}\] The same can be easily obtained using definition (2.2) and decompositions (2.4) and (2.6), \[N=\sum_{i=1}^{M}X_{i}=\sum_{i=1}^{M}\sum_{j=0}^{\infty}jI_{\{X_{i}=j\}}=\sum_ {j=0}^{\infty}j\sum_{i=1}^{M}I_{\{X_{i}=j\}}=\sum_{j=0}^{\infty}jM_{j}.\] The expected value of \(N\) can then be expressed using (2.7) and (2.1), \[\mathsf{E}(N)=\sum_{j=0}^{\infty}j\mathsf{E}(M_{j})=M\sum_{j=0}^{\infty}jf_{j }=M\eta, \tag{2.9}\] which is, of course, the same as (2.3). _Remark 2.3_.: In view of formulas (2.3) and (2.9), the sample mean \(\hat{\eta}=N/M\) is an unbiased estimator of the expected value \(\eta\), possessing all standard properties such as consistency and asymptotic normality. The advantage of this estimator is that it is _non-parametric_, in the sense that it does not require knowledge of any distributional model \((f_{j})\) behind the production output data. ### Young diagrams and limit shape It is useful to rank the sources according to their production output, that is, by considering the (descending) order statistics \(X_{1,M}\geq X_{2,M}\geq\cdots\geq X_{M,M}\); for example, \(X_{1,M}=\max_{1\leq i\leq M}\{X_{i}\}\) is the highest output score amongst \(M\) sources. The production profile is succinctly visualized by the _Young diagram_ formed by the left- and bottom-aligned row blocks of unit height and lengths \(X_{1,M}\geq X_{2,M}\geq\cdots\), respectively, with longer blocks positioned lower (see Fig. 1(a)). In particular, blocks corresponding to the output value \(j=0\) (if it is allowed) degenerate to vertical intervals (of height \(1\) each) placed on top of the rest of the Young diagram along the vertical axis. The upper boundary of the Young diagram is the graph of the (left-continuous) step function \[Y(x):=\sum_{j\geq x}\sum_{i=1}^{M}I_{\{X_{i}=j\}}=\sum_{j\geq x}M_{j}\qquad(x \geq 0) \tag{2.10}\] (see (2.6)). To highlight the dependence on \(x\), rewrite definition (2.10) in the form \[Y(x)=\sum_{j=0}^{\infty}M_{j}\mathbf{1}_{[0,j]}(x)\qquad(x\geq 0), \tag{2.11}\] where \(\mathbf{1}_{D}(x)\) is the indicator function of set \(D\) (i.e., \(\mathbf{1}_{D}(x)=1\) if \(x\in D\) and \(\mathbf{1}_{D}(x)=0\) otherwise). Figure 1: (a) Young diagram and the boundary \(Y(x)\) for \(M=6\) sources and ordered outputs \((X_{i,M})=(4,2,2,2,1,1)\), corresponding to counts \(M_{4}=1\), \(M_{2}=3\), \(M_{1}=2\). (b) The limit shape \(y=\varphi(x)\) for a randomized occupancy problem (i.e., with independent counts \(M_{j}\), see details in Section 2.3), given by the equation \(\mathrm{e}^{-x\pi/\sqrt{6}}+\mathrm{e}^{-y\pi/\sqrt{6}}=1\) (see (2.30)). The shaded area represents a simulated Young diagram under the scaling \(\sqrt{n}\) along both axes, with \(n=100\); horizontal blocks correspond to the ordered outputs \(X_{i,M}\) (with the sample value \(M=17\)). If \(M_{0}>0\) then the function \(Y(x)\) has an isolated peak at \(x=0\); otherwise, \(Y(x)\) is right-continuous at zero. 
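For a concrete illustration of definitions (2.6) and (2.10), here is a minimal Python sketch that computes the multiplicities \(M_{j}\) and evaluates the Young boundary \(Y(x)\) for the toy sample of Fig. 1(a).

```python
from collections import Counter

# Toy sample of Fig. 1(a): ordered outputs (4, 2, 2, 2, 1, 1) of M = 6 sources.
X = [4, 2, 2, 2, 1, 1]
M_j = Counter(X)          # multiplicities (2.6): {4: 1, 2: 3, 1: 2}

def Y(x, counts=M_j):
    """Young diagram boundary (2.10): number of sources with output >= x."""
    return sum(m for j, m in counts.items() if j >= x)

assert Y(0) == len(X)                                  # Y(0) = M, the number of sources
assert sum(j * m for j, m in M_j.items()) == sum(X)    # area under Y equals N
print([Y(x) for x in range(6)])                        # [6, 6, 4, 1, 1, 0]
```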
The value at the origin is the total number of sources, \[Y(0)=\sum_{j=0}^{\infty}M_{j}=M,\] whereas the area under the graph of \(Y(x)\) equals the total number of produced items, \[\int_{0}^{\infty}\!Y(x)\,\mathrm{d}x=\sum_{j=0}^{\infty}M_{j}\int_{0}^{\infty} \!\mathbf{1}_{[0,j]}(x)\,\mathrm{d}x=\sum_{j=0}^{\infty}j\,M_{j}=N\] (see (2.11) and (2.8)). Setting \[Z_{i}(x):=\sum_{j\geq x}I_{\{X_{i}=j\}}=I_{\{X_{i}\geq x\}}\qquad(i=1,\ldots,M), \tag{2.12}\] formula (2.10) can be expressed in the form \[Y(x)=\sum_{i=1}^{M}\sum_{j\geq x}I_{\{X_{i}=j\}}=\sum_{i=1}^{M}Z_{i}(x). \tag{2.13}\] The indicators \(Z_{1}(x),\ldots,Z_{M}(x)\) are independent and identically distributed Bernoulli random variables; specifically, \[\mathsf{P}(Z_{i}(x)=1)=\mathsf{P}(X_{i}\geq x)=\sum_{j\geq x}f_{j }=:\bar{F}(x), \tag{2.14}\] \[\mathsf{P}(Z_{i}(x)=0)=\mathsf{P}(X_{i}<x)=\sum_{j<x}f_{j}=:F(x), \tag{2.15}\] where \(\bar{F}(x)+F(x)=1\) for all\(x\geq 0\). Hence, \[\mathsf{E}\big{(}Z_{i}(x)\big{)}=\bar{F}(x),\qquad\mathsf{Var}\big{(}Z_{i}(x) \big{)}=\bar{F}(x)\big{(}1-\bar{F}(x)\big{)}=\bar{F}(x)F(x),\] and, for any \(0\leq x\leq x^{\prime}\), \[\mathsf{Cov}\big{(}Z_{i}(x),Z_{i}(x^{\prime})\big{)}=\bar{F}(x^{\prime})-\bar {F}(x)\,\bar{F}(x^{\prime})=\bar{F}(x^{\prime})F(x).\] It then follows easily from (2.13) that, for each \(x\geq 0\), \[\mathsf{E}\big{(}Y(x)\big{)}=M\bar{F}(x),\qquad\mathsf{Var}\big{(}Y(x)\big{)} =M\bar{F}(x)F(x). \tag{2.16}\] and, for \(0\leq x\leq x^{\prime}\), \[\mathsf{Cov}\big{(}Y(x),Y(x^{\prime})\big{)} =\sum_{i,i^{\prime}=1}^{M}\mathsf{Cov}\big{(}Z_{i}(x),Z_{i^{ \prime}}(x^{\prime})\big{)}\] \[=\sum_{i=1}^{M}\mathsf{Cov}\big{(}Z_{i}(x),Z_{i}(x^{\prime})\big{)} =M\bar{F}(x^{\prime})F(x), \tag{2.17}\] A useful visual insight into the structure of the production distribution may be obtained by looking at scaled Young diagrams, with some scaling coefficients \(A\) and \(B\), \[\widetilde{Y}(x)=\frac{1}{B}\,Y(Ax)=\frac{1}{B}\sum_{j\geq Ax}M_{j}=\frac{1}{B} \sum_{i=1}^{M}Z_{i}(Ax)\qquad(x\geq 0). \tag{2.18}\] The aim is to seek a _limit shape_\(x\mapsto\varphi(x)\) such that, with suitable \(A,B\to\infty\), \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}\to\varphi(x)\qquad(x>0), \tag{2.19}\] and, moreover, there is convergence in probability of \(\widetilde{Y}(x)\) to \(\varphi(x)\), that is, for any \(\varepsilon>0\), \[\mathsf{P}\big{(}|\widetilde{Y}(x)-\varphi(x)|>\varepsilon\big{)}\to 0 \qquad(x>0). \tag{2.20}\] _Remark 2.4_.: The reason for restricting the range of convergence in (2.19) and (2.20) to \(x>0\) is that, in some cases, it turns out that \(\varphi(0)=\infty\) (e.g., see Fig.1(b)). The notion of limit shape is motivated by similar topics in the theory of random integer partitions [43, 44]. This classic example is recalled briefly in Section 2.3 by way of illustration, although the setting there is somewhat different from the item production model. In the present paper, we address this problem for the GIGP distribution introduced in Section 3. ### Example: limit shape of integer partitions To illustrate the concept of limit shape, we start with a baseline example of the power law frequency distribution, \(f_{j}=j^{-a}/\zeta(a)\) (\(j\geq 1\)), with \(a>1\). Choose any \(A\to\infty\) such that \(B:=M/A^{a-1}\to\infty\); that is, \(1\ll A\ll M^{1/(a-1)}\). 
Then the scaled expected Young diagram boundary function specializes to \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}=\frac{A^{a-1}}{M}\sum_{ j\geq Ax}Mj^{-a}/\zeta(a) =\frac{1}{\zeta(a)}\sum_{j/A\geq x}\left(\frac{j}{A}\right)^{-a} \frac{1}{A} \tag{2.21}\] \[\to\frac{1}{\zeta(a)}\int_{x}^{\infty}s^{-a}\,\mathrm{d}s=\frac{ x^{-(a-1)}}{(a-1)\,\zeta(a)}, \tag{2.22}\] using that the sum in (2.21) is the Riemann integral sum of the integral in (2.22). Thus, the limit shape exists and is given by the right-hand side of (2.22), but as mentioned in the Introduction, this is of no practical use because the scaling parameter \(A\to\infty\) is arbitrary as long as \(A=o(M^{1/(a-1)})\) (which confirms that the power law distribution is _scale free_). The classic example of a frequency model possessing a meaningful limit shape comes from the theory of random integer partitions. Here, the values \(j=1,2,\dots\) are interpreted as candidate parts into an integer partition, and the corresponding multiplicity \(M_{j}\) is the number of times the part \(j\) is used, respectively. In particular, if \(M_{j}=0\) then the value \(j\) is not involved in the partition, and it is tacitly assumed that only finitely many of \(M_{j}\)'s are non-zero. The sum \(N=\sum_{j=1}^{\infty}jM_{j}\) yields the integer being partitioned into the sum of the parts \(j\) with \(M_{j}>0\). The standard model set-up there is different from the item production model described in Section 2.1. Namely, instead of the premise of \(M\) independent sources, with multiplicities \((M_{j})\) expressed by formula (2.6), the randomized partition model is defined by assuming that the multiplicities \((M_{j})\) are independent random variables with geometric distribution, \(M_{j}\sim\mathrm{Geom}(1-z^{j})\) (\(j\geq 1\)), that is, \[\mathsf{P}(M_{j}=m)=z^{jm}(1-z^{j})\qquad(m\geq 0), \tag{2.23}\] with the expected value given by \[\mathsf{E}(M_{j})=\frac{z^{j}}{1-z^{j}}\qquad(j\geq 1). \tag{2.24}\] The parameter \(z\in(0,1)\) is chosen specifically as \[z=\mathrm{e}^{-\kappa/\sqrt{n}},\qquad\kappa:=\frac{\pi}{\sqrt{6}}=\sqrt{ \zeta(2)}, \tag{2.25}\] where \(n\) is an external (large) parameter. Note that, for any \(z\in(0,1)\), \[\mathsf{P}(M_{j}>0)=1-\mathsf{P}(M_{j}=0)=1-(1-z^{j})=z^{j},\] and \[\sum_{j=1}^{\infty}\mathsf{P}(M_{j}>0)=\sum_{j=1}^{\infty}z^{j}=\frac{z}{1-z}<\infty.\] Therefore, by the Borel-Cantelli lemma (see, e.g., [35, Sec. II.10, p. 255]), the number of nonzero terms in the sequence of random multiplicities \((M_{j})\) is finite with probability \(1\). Due to the mutual independence of \(M_{j}\) and the geometric marginal distributions (2.23), the probability of a given sequence of multiplicities \(M_{j}=m_{j}\) (\(j\geq 1\)) (with finitely many nonzero terms) is expressed as follows, \[\mathsf{P}(M_{j}=m_{j},\,j=1,2,\dots)=\prod_{j=1}^{\infty}z^{jm_{j}}(1-z^{j}) =\frac{z^{N}}{G(z)}, \tag{2.26}\] where \(N=\sum_{j=1}^{\infty}j\,m_{j}\) and \[G(z)=\prod_{j=1}^{\infty}\frac{1}{1-z^{j}}\qquad(0<z<1).\] Formula (2.26) is an instance of the so-called _Boltzmann distribution_, with roots in statistical physics [2, 45] and many applications in probabilistic combinatorics [1] and computing [13]. Motivation for the choice of the Boltzmann distribution (2.26) is due to the fact that its conditioning leads to the uniform distribution on the corresponding subspace. 
Specifically, denoting by \(\Pi_{n}\) the set of all integer partitions of \(n\), it is easy to see that the conditional probability of any partition in \(\Pi_{n}\) with specific multiplicities of parts \(M_{j}=m_{j}\), conditioned on \(N=\sum_{j=1}^{\infty}jM_{j}=n\), is given by \[\mathsf{P}\big{(}M_{j}=m_{j},\,j\geq 1\,\big{|}\,N=\sum_{j}jM_{j}=n\big{)}=\frac{z^{n}/G(z)}{(z^{n}/G(z))\cdot\#\Pi_{n}}=\frac{1}{\#\Pi_{n}},\] which is the uniform distribution on \(\Pi_{n}\). Furthermore, the choice of the parameter \(z\) in the asymptotic form (2.25) is explained by the natural calibration condition \[\mathsf{E}(N)=\mathsf{E}\Big{(}\sum_{j=1}^{\infty}jM_{j}\Big{)}\sim n\qquad(n\to\infty). \tag{2.27}\] Indeed, using the mean formula (2.24) and seeking the parameter \(z\) in the form \(z=\mathrm{e}^{-\alpha_{n}}\), with \(\alpha_{n}\to 0\), the asymptotic equation (2.27) is rewritten as \[\mathsf{E}(N)=\sum_{j=1}^{\infty}\frac{j\,\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}=\frac{1}{\alpha_{n}^{2}}\sum_{j=1}^{\infty}\frac{\alpha_{n}j\,\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}\,\alpha_{n}\sim n. \tag{2.28}\] Observing that the sum in (2.28) is a Riemann integral sum, it follows that \[\sum_{j=1}^{\infty}\frac{\alpha_{n}j\,\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}\,\alpha_{n}\to\int_{0}^{\infty}\frac{s\,\mathrm{e}^{-s}}{1-\mathrm{e}^{-s}}\,\mathrm{d}s=\sum_{\ell=1}^{\infty}\int_{0}^{\infty}s\,\mathrm{e}^{-\ell s}\,\mathrm{d}s=\sum_{\ell=1}^{\infty}\frac{1}{\ell^{2}}=\zeta(2)=\frac{\pi^{2}}{6}=\kappa^{2}.\] Substituting this into equation (2.28), we obtain \(\alpha_{n}\sim\kappa/\sqrt{n}\), in line with (2.25). The expected limit shape in the partition model can now be easily computed [44, 5]: setting \(A=B=\sqrt{n}\), we have, for any \(x>0\), \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}=\frac{1}{B}\sum_{j\geq Ax}\mathsf{E}(M_{j})=\frac{1}{\sqrt{n}}\sum_{j\geq\sqrt{n}\,x}\frac{\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}\to\frac{1}{\kappa}\int_{\kappa x}^{\infty}\frac{\mathrm{e}^{-u}}{1-\mathrm{e}^{-u}}\,\mathrm{d}u=\frac{1}{\kappa}\sum_{\ell=1}^{\infty}\int_{\kappa x}^{\infty}\mathrm{e}^{-\ell u}\,\mathrm{d}u=\frac{1}{\kappa}\sum_{\ell=1}^{\infty}\frac{1}{\ell}\,\mathrm{e}^{-\ell\kappa x}=-\frac{1}{\kappa}\log{(1-\mathrm{e}^{-\kappa x})}. \tag{2.29}\] Thus, the limit shape \(y=\varphi(x)\) is given by the equation \[y=-\kappa^{-1}\log{(1-\mathrm{e}^{-\kappa x})}\qquad(x>0),\] or, in a more symmetric form, \[\mathrm{e}^{-\kappa x}+\mathrm{e}^{-\kappa y}=1\qquad(x,y>0), \tag{2.30}\] where \(\kappa=\pi/\sqrt{6}\) (see (2.25)). The plot of this function is shown in Fig. 1(b). Note that \(\varphi(0)=\infty\). According to the calculation in (2.29), this implies that the expected value of \(M\) grows faster than \(\sqrt{n}\). More precisely, we have \[\mathsf{E}(M)=\sum_{j=1}^{\infty}\frac{\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}=\sum_{j=1}^{m}\frac{\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}+\frac{1}{\alpha_{n}}\sum_{j>m}\frac{\alpha_{n}\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}, \tag{2.31}\] where \(m=[1/\alpha_{n}]\) and \(\alpha_{n}=\kappa/\sqrt{n}\) (see (2.25)). Arguing as before, we see that the last sum in (2.31) converges to the integral \(\int_{1}^{\infty}\mathrm{e}^{-u}\,(1-\mathrm{e}^{-u})^{-1}\,\mathrm{d}u<\infty\). 
Next, write \[\sum_{j=1}^{m}\frac{\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}=\frac{1}{\alpha_{n}}\sum_{j=1}^{m}\frac{1}{j}+\sum_{j=1}^{m}\bigg{(}\frac{\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}-\frac{1}{\alpha_{n}j}\bigg{)},\] where [27, 2.10.8] \[\sum_{j=1}^{m}\frac{1}{j}\sim\log m\sim-\log\alpha_{n}\] and \[\sum_{j=1}^{m}\bigg{(}\frac{\mathrm{e}^{-\alpha_{n}j}}{1-\mathrm{e}^{-\alpha_{n}j}}-\frac{1}{\alpha_{n}j}\bigg{)}\sim\frac{1}{\alpha_{n}}\int_{0}^{1}\!\bigg{(}\frac{\mathrm{e}^{-u}}{1-\mathrm{e}^{-u}}-\frac{1}{u}\bigg{)}\,\mathrm{d}u=O(\alpha_{n}^{-1}),\] noting that the integrand function has a finite limit at zero. As a result, \[\mathsf{E}(M)\sim\alpha_{n}^{-1}(-\log\alpha_{n})\sim\frac{\sqrt{n}}{2\kappa}\log n=\frac{\sqrt{6n}}{2\pi}\log n\qquad(n\to\infty). \tag{2.32}\] _Remark 2.5_.: Two different model settings discussed above, with independent outputs \(X_{i}\) (\(i=1,\ldots,M\)) as in the item production model (Section 2.1), or with independent multiplicities \(M_{j}\) (\(j\in\mathbb{N}_{0}\)) as in the randomized model of integer partitions (Section 2.3), are in fact closely connected and, in a sense, equivalent to one another. Indeed, randomization of certain parameters in combinatorial structures is a frequently used technical tool [1] aiming to overcome structural constraints, such as a prescribed sum of parts in integer partitions [16, 5]. As another example directly related to the item production model, in the occupancy problem (see Remark 2.2) it is conventional to use the so-called _poissonization_ [1, 7] by replacing the original (co-dependent) multiplicities \(M_{j}\) by independent Poisson random variables with mean \(Mf_{j}\), respectively (\(j\in\mathbb{N}_{0}\)) [6, 17]. In each of these settings, the anticipated equivalence is guaranteed via a suitable "bridge" between the original and randomized versions of the problem, such as a local limit theorem for the asymptotics of probabilities \(\mathsf{P}\big{(}\sum_{j}jM_{j}=n\big{)}\) in the case of integer partitions [16, 5], or a "depoissonization lemma" in the occupancy problem [17, 6]. ## 3 The GIGP model ### The GIGP distribution The _generalized inverse Gaussian-Poisson (GIGP)_ distribution introduced by Sichel [36, 41] is of the form \[f_{j}=\frac{\left(1-\theta\right)^{\nu/2}}{K_{\nu}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}\cdot\frac{\left(\frac{1}{2}\,\alpha\,\theta\right)^{j}}{j!}\,K_{\nu+j}(\alpha)\qquad(j\in\mathbb{N}_{0}), \tag{3.1}\] where the parameters have the range \(\nu\in\mathbb{R}\), \(\alpha>0\) and \(0<\theta<1\), and \(K_{\nu}(\cdot)\) is the _modified Bessel function of the second kind_ of order \(\nu\) [27, §10.25(i), §10.25(ii)]. As was mentioned in the Introduction, the GIGP model (3.1) is a mixed Poisson distribution, \[f_{j}=\int_{0}^{\infty}\frac{\lambda^{j}\mathrm{e}^{-\lambda}}{j!}\,g(\lambda)\,\mathrm{d}\lambda\qquad(j\geq 0), \tag{3.2}\] with the mixing density for the Poisson parameter \(\lambda\) chosen as a _generalized inverse Gaussian (GIG)_ density [36] (see also [22, p. 284]; Footnote 4: we follow the nomenclature of [36], and the connection with an alternative parameterization \((\theta,\psi,\chi)\) in [22] is via the maps \(\theta\mapsto\nu\), \(\psi\mapsto 2(1-\theta)/\theta\), \(\chi\mapsto\alpha^{2}\theta/2\)) 
\[g(\lambda)=\frac{\left(2\left(1-\theta\right)^{1/2}\!/\alpha\theta\right)^{\nu}}{2\,K_{\nu}\!\left(\alpha\left(1-\theta\right)^{1/2}\right)}\,\lambda^{\nu-1}\exp\!\left(-\frac{\left(1-\theta\right)\lambda}{\theta}-\frac{\alpha^{2}\theta}{4\lambda}\right)\qquad(\lambda>0). \tag{3.3}\] The normalization in (3.3) is due to one of the integral representations for the Bessel function [27, 10.32.10]. Representation (3.2) explains why formula (3.1) defines a probability distribution, \[\sum_{j=0}^{\infty}f_{j}=\int_{0}^{\infty}\sum_{j=0}^{\infty}\frac{\lambda^{j}\mathrm{e}^{-\lambda}}{j!}\,g(\lambda)\,\mathrm{d}\lambda=\int_{0}^{\infty}g(\lambda)\,\mathrm{d}\lambda=1,\] and it also leads to a curious identity for the Bessel functions, which does not seem to have been mentioned in the special functions literature, \[\sum_{j=0}^{\infty}\frac{\left(\frac{1}{2}\alpha\,\theta\right)^{j}K_{\nu+j}(\alpha)}{j!}=\frac{K_{\nu}\!\left(\alpha\left(1-\theta\right)^{1/2}\right)}{\left(1-\theta\right)^{\nu/2}}. \tag{3.4}\] From formula (3.2), the expression (3.1) is easily obtained using the normalization of the GIG density (3.3) with parameters \(\theta\) and \(\alpha\) replaced by \(\tilde{\theta}=\theta/(1+\theta)\) and \(\tilde{\alpha}=\alpha\,\sqrt{1+\theta}\), respectively. Furthermore, formula (3.2) implies that the expected value of the GIGP distribution (3.1) coincides with that of the GIG distribution (3.3), \[\eta=\sum_{j=0}^{\infty}jf_{j}=\int_{0}^{\infty}\sum_{j=0}^{\infty}j\,\frac{\lambda^{j}\mathrm{e}^{-\lambda}}{j!}\,g(\lambda)\,\mathrm{d}\lambda=\int_{0}^{\infty}\!\lambda\,g(\lambda)\,\mathrm{d}\lambda=\frac{\alpha\,\theta}{2\left(1-\theta\right)^{1/2}}\cdot\frac{K_{\nu+1}\!\left(\alpha\left(1-\theta\right)^{1/2}\right)}{K_{\nu}\!\left(\alpha\left(1-\theta\right)^{1/2}\right)}, \tag{3.5}\] where the last computation is based on the normalization in (3.3) with order \(\nu+1\). Footnote 5: Expression (3.5) follows directly from the definition (3.1) by using the identity (3.4) with order \(\nu+1\). As was pointed out by Sichel [41, p. 315], the frequencies (3.1) satisfy the recurrence relation \[f_{j+2}=\frac{\left(\nu+j+1\right)\theta}{j+2}\,f_{j+1}+\frac{\alpha^{2}\theta^{2}}{4\left(j+2\right)\left(j+1\right)}\,f_{j}\qquad(j\in\mathbb{N}_{0}),\] which can be obtained by integration by parts of the integral representation mentioned above after formula (3.3). The tail of the GIGP distribution (3.1) has a power-geometric decay, as can be shown using Stirling's formula [27, 5.11.3] and the asymptotics (A.7) of the Bessel function of large order, yielding \[f_{j}\sim\frac{\left(1-\theta\right)^{\nu/2}\left(\tfrac{1}{2}\alpha\right)^{-\nu}}{2K_{\nu}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}\,j^{\nu-1}\theta^{j}\qquad(j\to\infty). \tag{3.6}\] ### The boundary case \(\alpha=0\) The value \(\alpha=0\) can also be included in the GIGP class via the limit \(\alpha\to 0+\). To this end, we need to consider several cases for the value of the order \(\nu\). Namely, if \(\nu>0\) then, using the small argument asymptotics of the Bessel function (see (A.2)), we obtain from (3.1) \[f_{j}\sim\left(1-\theta\right)^{\nu}\,\frac{\Gamma(\nu+j)\,\theta^{j}}{\Gamma(\nu)\,j!}=\binom{\nu+j-1}{j}\left(1-\theta\right)^{\nu}\theta^{j}\qquad(j\in\mathbb{N}_{0}), \tag{3.7}\] where \(\Gamma(z):=\int_{0}^{\infty}s^{z-1}\,\mathrm{e}^{-s}\,\mathrm{d}s\) (\(z>0\)) is the gamma function [27, 5.2.1]. Formula (3.7) defines a _negative binomial distribution_ with parameters \(\nu\) and \(\theta\) [21, Sec. 
5.1], with the expected value given by \[\eta=\frac{\nu\,\theta}{1-\theta}. \tag{3.8}\] The latter expression is consistent with the limit of (3.5) as \(\alpha\to 0+\) (again using (A.2)). The tail behavior of (3.7) is retrieved with the aid of Stirling's formula [27, 5.11.3], \[f_{j}\sim\frac{\left(1-\theta\right)^{\nu}j^{\nu-1}\theta^{j}}{\Gamma(\nu)} \qquad(j\to\infty), \tag{3.9}\] which is formally in agreement with the limit of (3.6) as \(\alpha\to 0+\). However, for \(\nu\leq 0\) the limiting GIGP distribution degenerates to \(f_{0}=1\) and \(f_{j}=0\) for all \(j\geq 1\). Indeed, for \(\nu=0\) we get, using the asymptotic formula (A.4), \[f_{0}=\frac{K_{0}(\alpha)}{K_{0}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)} }\sim\frac{-\log\alpha}{-\log\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)} }\to 1.\] For \(\nu<0\), with the aid of the asymptotic formulas (A.1) and (A.2) we have \[f_{0}=\frac{\left(1-\theta\right)^{\nu/2}K_{\nu}(\alpha)}{K_{\nu}\big{(} \alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\left(1-\theta\right)^{\nu /2}\tfrac{1}{2}\,\Gamma(-\nu)\big{(}\tfrac{1}{2}\alpha\big{)}^{\nu}}{\tfrac{1 }{2}\,\Gamma(-\nu)\big{(}\tfrac{1}{2}\alpha\left(1-\theta\right)^{1/2}\big{)} ^{\nu}}=1.\] To rectify this degeneracy, we switch to the zero-truncated GIGP distribution defined by \[\mathsf{P}(X_{i}=j\,|\,X_{i}\geq 1)=\frac{f_{j}}{1-f_{0}}\qquad(j\in\mathbb{N})\] and taken in the limit as \(\alpha\to 0+\). We denote the resulting conditional frequencies by \((\tilde{f}_{j})\) (\(j\in\mathbb{N}\)), and the corresponding expected value by \(\tilde{\eta}\). We restrict analysis to the range \(-1<\nu\leq 0\), and consider separately the cases \(\nu=0\) and \(-1<\nu<0\) (see Remark 3.2 below for why the value \(\nu=-1\) is not compatible with \(\alpha=0\)). _Remark 3.1_.: The case \(\nu<-1\) with \(\alpha>0\) is excluded from consideration (see Proposition 3.2(e) and a comment before this proposition). Hence, it is of no interest for us to consider the limit \(\alpha\to 0\) here. Case \(\nu=0\) Applying the asymptotic formula (A.4), we obtain \[1-f_{0}=\frac{K_{0}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}-K_{0}(\alpha )}{K_{0}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\log(1- \theta)}{\log\alpha},\] whereas (A.2) and (A.5) give for \(j\geq 1\) \[f_{j}=\frac{\left(\tfrac{1}{2}\alpha\theta\right)^{j}}{j!}\cdot\frac{K_{j}( \alpha)}{K_{0}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{1}{- \log\alpha}\cdot\frac{\theta^{j}}{j},\] using that \(\Gamma(j)=(j-1)!\). Hence, \[\frac{f_{j}}{1-f_{0}}\sim\tilde{f}_{j}:=\frac{1}{-\log\left(1-\theta\right)} \cdot\frac{\theta^{j}}{j}\qquad(j\in\mathbb{N}), \tag{3.10}\] which is _Fisher's logarithmic series distribution_[21, Sec. 7.1.2]. Note that the tail behavior of (3.10) is automatically power-geometric akin to (3.9) (with \(\nu=0\)). The expected value of this distribution is easily computed, \[\tilde{\eta}=\frac{1}{-\log\left(1-\theta\right)}\sum_{j=1}^{\infty}\theta^{j }=\frac{\theta}{\left(1-\theta\right)\big{(}-\log\left(1-\theta\right)\big{)}}. 
\tag{3.11}\] Case \(-1<\nu<0\) With the aid of the asymptotic formula (A.6) we get \[1-f_{0}=\frac{K_{\nu}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}-\left(1-\theta\right)^{\nu/2}K_{\nu}(\alpha)}{K_{\nu}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\Gamma(\nu+1)}{\left(-\nu\right)\Gamma(-\nu)}\left(\tfrac{1}{2}\alpha\right)^{-2\nu}\left(1-\left(1-\theta\right)^{-\nu}\right),\] and furthermore, for \(j\geq 1\), \[f_{j}\sim\frac{\left(1-\theta\right)^{\nu/2}\left(\tfrac{1}{2}\alpha\theta\right)^{j}}{j!}\cdot\frac{\tfrac{1}{2}\,\Gamma(\nu+j)\left(\tfrac{1}{2}\alpha\right)^{-\nu-j}}{\tfrac{1}{2}\,\Gamma(-\nu)\left(\tfrac{1}{2}\alpha\left(1-\theta\right)^{1/2}\right)^{\nu}}\sim\frac{\Gamma(\nu+j)\,\theta^{j}}{\Gamma(-\nu)\,j!}\left(\tfrac{1}{2}\alpha\right)^{-2\nu}.\] Hence, \[\frac{f_{j}}{1-f_{0}}\sim\tilde{f}_{j}:=\frac{\left(-\nu\right)\Gamma(\nu+j)\,\theta^{j}}{\Gamma(\nu+1)\left(1-\left(1-\theta\right)^{-\nu}\right)j!}\qquad(j\in\mathbb{N}). \tag{3.12}\] This is an _extended negative binomial_ distribution [21, Sec. 5.12.2], with the expected value \[\tilde{\eta}=\frac{\left(-\nu\right)\theta\left(1-\theta\right)^{-\nu-1}}{1-\left(1-\theta\right)^{-\nu}}. \tag{3.13}\] The tail decay of the distribution (3.12) is easily obtained using Stirling's formula [27, 5.11.3], \[\tilde{f}_{j}\sim\frac{(-\nu)\,j^{\nu-1}\,\theta^{j}}{\Gamma(\nu+1)\left(1-(1-\theta)^{-\nu}\right)}\qquad(j\to\infty). \tag{3.14}\] _Remark 3.2_.: If \(\nu=-1\) then, using (A.1), (A.3) and (A.4), we have \[1-f_{0}=\frac{K_{1}\big{(}\alpha\,(1-\theta)^{1/2}\big{)}-(1-\theta)^{-1/2}\,K_{1}(\alpha)}{K_{1}\big{(}\alpha\,(1-\theta)^{1/2}\big{)}}\sim\tfrac{1}{2}\alpha^{2}\theta\,(-\log\alpha),\] and \[f_{1}=\frac{(1-\theta)^{-1/2}\left(\tfrac{1}{2}\alpha\,\theta\right)K_{0}(\alpha)}{K_{1}\big{(}\alpha\,(1-\theta)^{1/2}\big{)}}\sim\tfrac{1}{2}\alpha^{2}\theta\,(-\log\alpha)\,,\] hence \[\frac{f_{1}}{1-f_{0}}\sim\tilde{f}_{1}=1.\] Thus, the limiting conditional distribution \((\tilde{f}_{j})\) appears to be degenerate, with all mass concentrated at \(j=1\). This is unsuitable for modeling purposes, which explains why the "corner" case \(\nu=-1\), \(\alpha=0\) is excluded from consideration. ### Asymptotics of the GIGP mean As indicated by the integer partition example in Section 2.3, for the existence of a meaningful limit shape, the area of the Young diagram must grow faster than the number of constituent blocks (see (2.32)). In the context of the item production model, this means that the total number of items, \(N=\sum_{j}jM_{j}\), should be much larger than the number of sources, \(M=\sum_{j}M_{j}\). Recalling from (2.3) that the expected total number of items is given by \(\mathsf{E}(N)=M\eta\) (where \(\eta=\mathsf{E}(X_{i})\) is the expected number of items per source, see (2.1)), this implies that a suitable limiting regime is determined by \(\eta\to\infty\). In turn, from the expression (3.5) for the GIGP mean \(\eta\), one can hypothesize that the latter is achieved if \(\theta\approx 1\), while the parameters \(\alpha\) and \(\nu\) are kept fixed. This can be verified (cf. Proposition 3.2 below) using the known asymptotic formulas for the Bessel function \(K_{\nu}(z)\) with \(z\to 0\), adapted to our needs in Lemma 3.1 below. 
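Before turning to the asymptotic analysis, the mean formula (3.5) can also be checked numerically. The following Python sketch (illustrative only; it assumes SciPy's `kv` implementation of the Bessel function \(K_{\nu}\)) evaluates \(\eta\) from (3.5) and compares it with the leading-order approximation \(\nu/(1-\theta)\) obtained in Proposition 3.2(a) below; the two agree to leading order as \(\theta\to 1-\).

```python
import numpy as np
from scipy.special import kv   # modified Bessel function K_nu of the second kind

def gigp_mean(nu, alpha, theta):
    """GIGP mean eta, formula (3.5)."""
    z = alpha * np.sqrt(1.0 - theta)
    return alpha * theta / (2.0 * np.sqrt(1.0 - theta)) * kv(nu + 1, z) / kv(nu, z)

# As theta -> 1-, eta should grow like nu/(1-theta) (case nu > 0, Proposition 3.2(a)).
nu, alpha = 0.5, 2.0
for theta in (0.9, 0.99, 0.999, 0.9999):
    print(f"theta={theta}: eta={gigp_mean(nu, alpha, theta):10.2f}, "
          f"nu/(1-theta)={nu / (1.0 - theta):10.2f}")
```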
**Lemma 3.1**.: _For \(\alpha>0\) and \(\nu\in\mathbb{R}\) fixed, the following asymptotics hold as \(\theta\to 1-\),_ \[K_{\nu}\big{(}\alpha\,(1-\theta)^{1/2}\big{)}\sim\begin{cases}\tfrac{1}{2} \,\Gamma(\nu)\big{(}\tfrac{1}{2}\alpha\big{)}^{-\nu}(1-\theta)^{-\nu/2}&(\nu>0 ),\\ \tfrac{1}{2}\big{(}-\log\,(1-\theta)\big{)}&(\nu=0),\\ \tfrac{1}{2}\,\Gamma(-\nu)\big{(}\tfrac{1}{2}\alpha\big{)}^{\nu}(1-\theta)^{ \nu/2}&(\nu<0).\end{cases} \tag{3.15}\] Proof.: The leading terms of the asymptotics (3.15) follow directly from formulas (A.2) for \(\nu\neq 0\) (with the aid of (A.1) for \(\nu<0\)) and (A.5) for \(\nu=0\), Using this lemma, we can characterize more precisely the asymptotic behavior of the GIGP mean in the limit as \(1-\theta\to 0+\). In particular, this analysis reveals that the desired growth to infinity is in place for \(\nu\geq-1\), but fails for \(\nu<-1\). **Proposition 3.2**.: _The expected values \(\eta\) and \(\tilde{\eta}\) of the GIGP \((\alpha>0)\) and zero-truncated GIGP \((\alpha=0)\) distributions, respectively, have the following asymptotics as \(\theta\to 1-\)._ * \(\nu>0\)_,_ \(\alpha\geq 0\)_:_ \[\eta\sim\frac{\nu}{1-\theta}.\] (3.16) * \(\nu=0\)_,_ \(\alpha\geq 0\)_:_ \[\eta\sim\frac{1}{(1-\theta)\big{(}-\log{(1-\theta)}\big{)}},\qquad\tilde{\eta }\sim\frac{1}{(1-\theta)\big{(}-\log{(1-\theta)}\big{)}}.\] (3.17) * \(-1<\nu<0\)_,_ \(\alpha\geq 0\)_:_ \[\eta\sim\frac{\Gamma(\nu+1)\big{(}\frac{1}{2}\alpha\big{)}^{-2\nu}}{\Gamma(- \nu)\left(1-\theta\right)^{\nu+1}},\qquad\tilde{\eta}\sim\frac{-\nu}{\left(1- \theta\right)^{\nu+1}}.\] (3.18) * \(\nu=-1\)_,_ \(\alpha>0\)_:_ \[\eta\sim\big{(}\frac{1}{2}\alpha\big{)}^{2}\big{(}-\log{(1-\theta)}\big{)}.\] (3.19) * \(\nu<-1\)_,_ \(\alpha>0\)_:_ \[\eta\sim\frac{\big{(}\frac{1}{2}\alpha\big{)}^{2}}{-\nu-1}.\] (3.20) Proof.: Consider cases (a)-(e) using the asymptotic formulas of Lemma 3.1. * For \(\alpha>0\), using the first line of (3.15) for orders \(\nu\) and \(\nu+1\), we have \[\frac{K_{\nu+1}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}{K_{\nu}\big{(} \alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\frac{1}{2}\,\Gamma(\nu+1) \big{(}\frac{1}{2}\,\alpha\left(1-\theta\right)^{1/2}\big{)}^{-\nu-1}}{\frac{ 1}{2}\,\Gamma(\nu)\big{(}\frac{1}{2}\,\alpha\left(1-\theta\right)^{1/2}\big{)} ^{-\nu}}=\frac{\nu}{\frac{1}{2}\alpha\left(1-\theta\right)^{1/2}},\] (3.21) where we also used the recurrence property of the gamma function, \(\Gamma(\nu+1)=\nu\,\Gamma(\nu)\)[27, 5.5.1]. Substituting this into (3.5) gives (3.16). If \(\alpha=0\) then (3.16) readily follows from (3.8). * For \(\alpha>0\), formulas (3.15) with \(\nu=0\) and \(\nu=1\) give \[\frac{K_{1}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}{K_{0}\big{(} \alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\left(\frac{1}{2}\alpha \left(1-\theta\right)^{1/2}\right)^{-1}}{-\log{(1-\theta)}},\] (3.22) and the first formula in (3.17) follows from (3.5). If \(\alpha=0\) then formula (3.11) immediately gives the second formula in (3.17). * For \(\alpha>0\), using the symmetry relation (A.1), similarly to (3.21) we obtain \[\frac{K_{\nu+1}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}{K_{\nu}\big{(} \alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\Gamma(\nu+1)\big{(}\frac{1 }{2}\alpha\left(1-\theta\right)^{1/2}\big{)}^{-2\nu-1}}{\Gamma(-\nu)},\] and the first formula in (3.18) then follows from (3.5). The second formula in (3.18) is immediate from (3.13). * Follows from (3.5) using the symmetry relation (A.1) and the asymptotic ratio (3.22). 
* Again using (A.1) and the first line of (3.15) with orders \(-\nu>0\) and \(-\nu-1>0\), we obtain \[\frac{K_{\nu+1}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}{K_{\nu}\big{(} \alpha\left(1-\theta\right)^{1/2}\big{)}}\sim\frac{\Gamma(-\nu-1)\big{(}\frac{ 1}{2}\alpha\left(1-\theta\right)^{1/2}\big{)}}{\Gamma(-\nu)},\] and (3.20) follows from (3.5), again using the recurrence \(\Gamma(z+1)=z\,\Gamma(z)\)[27, 5.5.1], now with \(z=-\nu-1\). Thus, the proof of Proposition 3.2 is complete. Proposition 3.2 describes the growth of the expected value \(\eta\) (for \(\alpha>0\)) or \(\tilde{\eta}\) (for \(\alpha=0\)) in terms of the small parameter \(1-\theta\). For the purposes of the GIGP model fitting, it is useful to express \(1-\theta\) through \(\eta\) or \(\tilde{\eta}\), respectively, by solving the asymptotic equations (3.16), (3.17), (3.18), and (3.19). **Proposition 3.3**.: _Under the conditions of Proposition 3.2, the following asymptotics hold._ * \(\nu>0\)_,_ \(\alpha\geq 0\)_:_ \[1-\theta\sim\frac{\nu}{\eta}.\] * \(\nu=0\)_,_ \(\alpha\geq 0\)_:_ \[1-\theta\sim\frac{1}{\eta\log\eta},\qquad 1-\theta\sim\frac{1}{\tilde{\eta}\log \tilde{\eta}}.\] * \(-1<\nu<0\)_,_ \(\alpha\geq 0\)_:_ \[1-\theta\sim\left(\frac{\Gamma(\nu+1)\big{(}\frac{1}{2}\alpha\big{)}^{-2\nu}}{ \Gamma(-\nu)\,\eta}\right)^{1/(\nu+1)},\qquad 1-\theta\sim\left(\frac{-\nu}{ \tilde{\eta}}\right)^{1/(\nu+1)}.\] * \(\nu=-1\)_,_ \(\alpha>0\)_:_ \[\log\left(1-\theta\right)\sim-\frac{4\eta}{\alpha^{2}}.\] _Remark 3.3_.: Formula (3.19) provides only the logarithmic asymptotics of \(1-\theta\), but this suffices for the estimation purposes. ## 4 The limit shape in the GIGP model ### Scaling coefficients and the main theorem Let the frequencies \(f_{j}\) (\(j\in\mathbb{N}_{0}\)) be given by the GIGP distribution formula (3.1) with parameters \(0<\theta<1\), \(\nu\geq-1\) and \(\alpha\geq 0\), excluding the "corner" pair \(\nu=-1\), \(\alpha=0\). The case \(\alpha=0\) is understood as the limit of conditional probabilities \(\mathsf{P}(X_{i}=j\,|\,X_{i}>0)=f_{j}/(1-f_{0})\) (\(j\in\mathbb{N}\)) as \(\alpha\to 0+\) (see Section 3.2). Given the random vector of observed multiplicities \((M_{j})\) produced by \(M\) sources, our aim is to study the asymptotics of scaled Young diagrams with the boundary (see (2.18)) \[\widetilde{Y}(x):=\frac{Y(Ax)}{B}=\frac{1}{B}\sum_{j\geq Ax}M_{j}=\frac{1}{B} \sum_{i=1}^{M}Z_{i}(Ax)\qquad(x\geq 0). \tag{4.1}\] We proceed under the following assumptions on the limiting regime, including the specification of the scaling coefficients \(A\) and \(B\). _Assumption 4.1_.: The number of sources is large, \(M\to\infty\). In addition, the intrinsic parameter \(\theta\in(0,1)\) is assumed to be close to its upper limit \(1\), that is, \(\theta\to 1-\), which guarantees that the mean number of items per source is large (see Proposition 3.2). 
_Assumption 4.2_.: The \(x\)-scaling coefficient \(A\) is chosen to be \[A=\frac{1}{-\log\theta}\sim\frac{1}{1-\theta}\to\infty\qquad(\theta\to 1-), \tag{4.2}\] whereas the \(y\)-scaling coefficient \(B\) is specified according to particular domains in the space of parameters \(\nu\) and \(\alpha\) as follows: * \(\nu>0\), \(\alpha\geq 0\) : \[B=\frac{M}{\Gamma(\nu)}.\] (4.3) * \(\nu=0\), \(\alpha\geq 0\) : \[B=\frac{M}{-\log\left(1-\theta\right)}.\] (4.4) * \(-1\leq\nu<0\), \(\alpha>0\) : \[B=\frac{M\left(\frac{1}{2}\alpha\right)^{-2\nu}\left(1-\theta\right)^{-\nu}}{\Gamma(-\nu)}.\] (4.5) * \(-1<\nu<0\), \(\alpha=0\) : \[B=\frac{M\left(-\nu\right)\left(1-\theta\right)^{-\nu}}{\Gamma(\nu+1)}.\] (4.6) _Assumption 4.3_.: The \(y\)-scaling coefficient \(B\) defined in Assumption 4.2 is large, \(B\to\infty\). For \(\nu>0\), this is automatic according to (4.3) (as long as \(M\to\infty\)), but for \(\nu\leq 0\) we must assume in addition that \(M\gg-\log\left(1-\theta\right)\) if \(\nu=0\) and \(M\gg\left(1-\theta\right)^{\nu}\) if \(\nu<0\). _Remark 4.1_.: The need to impose an additional condition in Assumption 4.3 on the joint limiting behavior of the external parameter \(M\to\infty\) and the intrinsic GIGP parameter \(\theta\to 1-\) for \(\nu\leq 0\) shows that, in order to have a manifested limit shape in the data, the number of sources, \(M\), must be sufficiently large. We will clarify the opposite situation below in Section 6. For \(\nu\geq-1\), consider the function \[\varphi_{\nu}(x):=\int_{x}^{\infty}\!s^{\nu-1}\,\mathrm{e}^{-s}\,\mathrm{d}s\qquad(x>0), \tag{4.7}\] which is the _(upper) incomplete gamma function_ [27, 8.2.2]. The following is our main result, establishing convergence in probability of the scaled Young diagrams (see (4.1)) to the limit shape \(\varphi_{\nu}(x)\). **Theorem 4.1**.: _Under Assumptions 4.1, 4.2 and 4.3, for any \(\varepsilon>0\) and any \(\delta>0\) we have_ \[\mathsf{P}\left(\sup_{x\geq\delta}\big{|}\widetilde{Y}(x)-\varphi_{\nu}(x)\big{|}\geq\varepsilon\right)\to 0. \tag{4.8}\] The proof of Theorem 4.1 is developed below in Sections 4.3 to 4.7. ### Graphical illustration using computer simulations In this section, we illustrate the limit shape approximation using computer simulated data in two example cases, with \(\nu=0.5\) and \(\nu=-0.5\) (see Fig. 2, left panels). The other parameter settings are as follows: \(\alpha=2\), \(\theta=0.99\), and \(M=1000\). The plots depict the data as the upper boundary of the Young diagram \(Y(x)\) defined in (2.10) and the theoretical GIGP complementary distribution function \(\bar{F}(x)\) (see (2.14) and (3.1)), along with the limit shape scaled back to the original frequencies of counts, that is, \(x\mapsto B\,\varphi_{\nu}(x/A)\), where \(A=-1/\log\theta\doteq 99.49916\) (see (4.2)) and \(B\doteq 564.1896\) for \(\nu=0.5\) or \(B\doteq 56.41896\) for \(\nu=-0.5\) (see (4.3) and (4.5), respectively). In both cases, the plots show a very good fit of the limit shape in the bulk of the observed values. 
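A simulation of this kind is easy to reproduce. The Python sketch below (an illustrative sketch, not the code behind Fig. 2) samples from the GIGP model via its mixed Poisson representation (3.2)-(3.3) and prints the Young boundary \(Y(Ax)\) next to the back-scaled limit shape \(B\,\varphi_{\nu}(x)\). It assumes SciPy's `geninvgauss(p, b, scale=s)` parameterization of the GIG density, which matches (3.3) under \(p=\nu\), \(b=\alpha(1-\theta)^{1/2}\), \(s=\alpha\theta/(2(1-\theta)^{1/2})\).

```python
import numpy as np
from math import gamma, log
from scipy.stats import geninvgauss, poisson
from scipy.integrate import quad

rng = np.random.default_rng(0)
nu, alpha, theta, M = 0.5, 2.0, 0.99, 1000

# Mixed Poisson sampling (3.2)-(3.3): lambda ~ GIG, then X ~ Poisson(lambda).
b = alpha * np.sqrt(1.0 - theta)                  # assumed SciPy shape parameter
s = alpha * theta / (2.0 * np.sqrt(1.0 - theta))  # assumed SciPy scale parameter
lam = geninvgauss.rvs(nu, b, scale=s, size=M, random_state=rng)
X = poisson.rvs(lam, random_state=rng)

A = -1.0 / log(theta)        # x-scaling (4.2)
B = M / gamma(nu)            # y-scaling (4.3), case nu > 0

def Y(x):
    """Young diagram boundary (2.10)."""
    return int(np.sum(X >= x))

def phi(x):
    """Limit shape (4.7): upper incomplete gamma integral."""
    return quad(lambda t: t ** (nu - 1.0) * np.exp(-t), x, np.inf)[0]

for x in (0.2, 0.5, 1.0, 2.0):
    print(f"x={x}:  Y(Ax)={Y(A * x)},  B*phi_nu(x)={B * phi(x):.1f}")
```

By Theorem 4.1, the two printed columns should be of comparable magnitude for moderate \(x\), mirroring the agreement visible in Fig. 2.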
The inspection of the tail behavior is facilitated by observing from (4.7) that \[\varphi_{\nu}(x)=-\int_{x}^{\infty}\!s^{\nu-1}\,\mathrm{d}(\mathrm{e}^{-s})=x^{\nu-1}\,\mathrm{e}^{-x}+(\nu-1)\!\int_{x}^{\infty}\!s^{\nu-2}\,\mathrm{e}^{-s}\,\mathrm{d}s\sim x^{\nu-1}\,\mathrm{e}^{-x}\qquad(x\to\infty).\] Therefore, according to (4.1) and (4.8), it may be expected that, for large enough \(x\), \[y=Y(Ax)\approx B\,\varphi_{\nu}(x)\approx B\,x^{\nu-1}\,\mathrm{e}^{-x},\] or, taking the logarithm, \[\log Y(Ax)+x\approx\log B+(\nu-1)\log x. \tag{4.9}\] Hence, switching from \((x,y)\) to the new coordinates \[u=\log x,\qquad v=\log y+x, \tag{4.10}\] a transformed data plot may be expected to be close to a straight line with slope \(\nu-1\), and likewise the tails of the theoretical GIGP distribution function and of the limit shape. This is illustrated for the simulated data in Fig. 2 (right panels), showing a reasonable linearization of the long tails in both cases, \(\nu=0.5\) and \(\nu=-0.5\). The graphical method described above can be used for a quick visual check of the suitability of the GIGP frequency model even before estimating the model parameters, by first experimenting with the scaling coefficient \(A=-1/\log\theta\) (see (4.2)) aiming to get a linearized data plot (thus producing a crude estimate for the parameter \(\theta\)), followed by reading off the fitted slope (which estimates the parameter \(\nu-1\)), and then exploiting the fitted intercept (close to \(\log B\), see (4.9)) to get an estimate for the parameter \(\alpha\) using one of the formulas (4.3) to (4.6). We will apply this method to some real data sets in Section 7. Figure 2: Illustration of the limit shape approximation using \(M=1000\) random values \((X_{i})\) simulated using the GIGP model (3.1) with parameters \(\theta=0.99\), \(\alpha=2\), and (a) \(\nu=0.5\) or (b) \(\nu=-0.5\). In the left panels, the black stepwise plots represent the upper boundary \(Y(x)\) of the corresponding Young diagrams, together with the GIGP complementary distribution function \(\bar{F}(x)\) shown as blue dotted plots, while the smooth red curves represent the back-scaled limit shape, \(x\mapsto B\,\varphi_{\nu}(x/A)\). In the right panels, the tails are shown in transformed coordinates (4.10), with the same line and color coding. ### Convergence of expected Young diagrams We start our proof of Theorem 4.1 by showing that convergence to the limit shape \(\varphi_{\nu}(x)\) holds for the expected Young diagrams. From (2.16) and (2.18), we have \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}=\frac{M\bar{F}(Ax)}{B}. \tag{4.11}\] **Theorem 4.2**.: _Under Assumptions 4.1 and 4.2,_ \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}\to\varphi_{\nu}(x)\qquad(x>0), \tag{4.12}\] _uniformly in \(x\geq\delta\) for any \(\delta>0\)._ _Remark 4.2_.: Note that Assumption 4.3 is not needed in Theorem 4.2. The following useful criterion for uniform convergence of monotone functions (adapted to the half-line domain) is well known (see, e.g., [34, Sec. 0.1]). **Lemma 4.3**.: _Let a sequence of monotone functions on \((0,\infty)\), uniformly bounded on \([\delta,\infty)\) for any \(\delta>0\), converge pointwise to a continuous (monotone) function. Then this convergence is uniform on \([\delta,\infty)\), for any \(\delta>0\)._ Noting that the limiting function \(\varphi_{\nu}(x)\) in (4.7) is continuous and monotone decreasing on \((0,\infty)\), by Lemma 4.3 it suffices to prove pointwise convergence (4.12), for each \(x>0\). 
_Remark 4.3_.: In calculations below, we confine ourselves to the leading asymptotics (3.6) of terms in the series \(\bar{F}(Ax)\) (see (4.11)). A more careful analysis involving control over the approximation errors is straightforward by using the classic Euler-Maclaurin summation formula [27, SS 2.10(i)] and uniform asymptotic expansions of the Bessel function of large order [27, SS10.41(ii)]. Proof of Theorem 4.2.: The proof below is broken down according to various sub-domains of the parameters \(\nu\) and \(\alpha\) (see Assumption 4.2). First, we consider the cases with \(\alpha>0\), where the GIGP distribution is supported on \(j\in\mathbb{N}_{0}\), and then switch to the boundary cases with \(\alpha=0\), where the support is reduced to \(j\in\mathbb{N}\). * \(\alpha>0\) Using the asymptotic approximation (3.6) of the frequencies \(f_{j}\) (with \(j\geq Ax\geq A\delta\gg 1\)), from (4.11) we obtain \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}=\frac{M}{B}\sum_{j\geq Ax}f_{j}\sim \frac{M\left(1-\theta\right)^{\nu/2}\left(\frac{1}{2}\alpha\right)^{-\nu}}{2 BK_{\nu}\big{(}\alpha\left(1-\theta\right)^{1/2}\big{)}}\sum_{j\geq Ax}j^{\nu-1} \theta^{j}.\] (4.13) Recalling that \(A=\left(-\log\theta\right)^{-1}\sim\left(1-\theta\right)^{-1}\), for the last sum in (4.13) we have \[A^{-\nu}\sum_{j\geq Ax}j^{\nu-1}\theta^{j}=\sum_{j\geq Ax}\left(\frac{j}{A} \right)^{\nu-1}\mathrm{e}^{-j/A}\,\frac{1}{A}\to\int_{x}^{\infty}s^{\nu-1} \mathrm{e}^{-s}\,\mathrm{d}s=\varphi_{\nu}(x),\] (4.14) which is evident by interpreting (4.14) as the Riemann integral sum converging to the integral on the right. Furthermore, the asymptotics of the denominator in (4.13) is obtained from formulas (3.15) (see Lemma 3.1). Hence, returning to (4.13) and recalling the definitions (4.2) of \(A\) and (4.3), (4.4), (4.5) of \(B\), we easily obtain (4.12). * \(\alpha=0\) Using the tail approximations (3.9) (\(\nu>0\)), (3.10) (\(\nu=0\)) and (3.14) (\(-1<\nu<0\)), from (4.11) we obtain, similarly to (4.13) and (4.14), \[\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}\sim\frac{MC_{\nu}(\theta)}{B}\sum_{j \geq Ax}j^{\nu-1}\theta^{j}\sim\frac{MC_{\nu}(\theta)A^{\nu}}{B}\,\varphi_{\nu }(x),\] (4.15) where \(A\sim\left(1-\theta\right)^{-1}\) (see (4.2)) and \[C_{\nu}(\theta):=\begin{cases}(1-\theta)^{\nu}/\Gamma(\nu)&(\nu>0),\\ (-\log(1-\theta))^{-1}&(\nu=0),\\ (-\nu)/\Gamma(\nu+1)&(-1<\nu<0).\end{cases}\] Now, using the specifications (4.3), (4.4), or (4.6), it is immediate to see that the right-hand side of (4.15) is reduced to \(\varphi_{\nu}(x)\). This completes the proof of Theorem 4.2. ### Pointwise convergence of random Young diagrams Before addressing a stronger Theorem 4.1 stating the uniform convergence in probability, we start with a simpler statement about pointwise convergence of \(\widetilde{Y}(x)\) for any \(x>0\). **Theorem 4.4**.: _Under Assumptions 4.1, 4.2 and 4.3, the mean squared deviation of \(\widetilde{Y}(x)\) from the limit shape \(\varphi_{\nu}(x)\) is asymptotically small,_ \[\mathsf{E}\big{(}\big{|}\widetilde{Y}(x)-\varphi_{\nu}(x)\big{|}^{2}\big{)} \to 0, \tag{4.16}\] _uniformly in \(x\geq\delta\) for any \(\delta>0\). This implies convergence in probability, \(\widetilde{Y}(x)\stackrel{{\mathrm{p}}}{{\to}}\varphi_{\nu}(x)\), that is, for any \(\varepsilon>0\),_ \[\sup_{x\geq\delta}\mathsf{P}\big{(}\big{|}\widetilde{Y}(x)-\varphi_{\nu}(x) \big{|}\geq\varepsilon\big{)}\to 0. 
\tag{4.17}\] Proof.: By the standard decomposition of the mean squared deviation, we have \[\mathsf{E}\big{(}\big{|}\widetilde{Y}(x)-\varphi_{\nu}(x)\big{|}^{2}\big{)}= \mathsf{Var}\big{(}\widetilde{Y}(x)\big{)}+\big{(}\mathsf{E}\big{(}\widetilde {Y}(x)\big{)}-\varphi_{\nu}(x)\big{)}^{2}. \tag{4.18}\] Using formulas (2.16) and (2.18), the variance term in (4.18) is estimated as follows, \[\mathsf{Var}\big{(}\widetilde{Y}(x)\big{)} =\frac{M\bar{F}(Ax)F(Ax)}{B^{2}}\] \[\leq\frac{M\bar{F}(Ax)}{B^{2}}\sim\frac{\varphi_{\nu}(x)}{B}\to 0, \tag{4.19}\] according to (4.11), (4.12), and also Assumption 4.3, which guarantees that \(B\to\infty\). By Theorem 4.2, convergence in (4.19) is uniform on \([\delta,\infty)\), for every \(\delta>0\). As for the second term on the right-hand side of (4.18), due to Theorem 4.2 it is asymptotically small, uniformly on every interval \([\delta,\infty)\). Hence, the limit (4.16) follows. Finally, convergence in probability (4.17) is a standard consequence of (4.16) due to Chebyshev's inequality [35, Sec. II.6, p. 192]. ### Auxiliary martingale Recalling the definition (2.12) and interpreting \(Z_{i}(x)\) as a random process with "time" \(x\in[0,\infty)\), consider a rescaled process in inverted time \(t=1/x\), \[\widetilde{Z}_{i}(t):=\frac{1-Z_{i}(1/t)}{F(1/t)}=\frac{I_{\{X_{i}<1/t\}}}{F(1/t )}=\begin{cases}1/F(1/t),&0<t<1/X_{i},\\ 0,&1/X_{i}\leq t<\infty.\end{cases} \tag{4.20}\] Note that, with probability 1, \[\lim_{t\to 0+}\widetilde{Z}_{i}(t)=\frac{I_{\{X_{i}<\infty\}}}{F(\infty)}=1,\] so \(\widetilde{Z}_{i}(t)\) can be extended by continuity to the origin by setting \(\widetilde{Z}_{i}(0):=1\). Clearly, \[\mathsf{E}\big{(}\widetilde{Z}_{i}(t)\big{)} =\frac{\mathsf{P}(X_{i}<1/t)}{F(1/t)}=1, \tag{4.21}\] \[\mathsf{Var}\big{(}\widetilde{Z}_{i}(t)\big{)} =\frac{F(1/t)\big{(}1-F(1/t)\big{)}}{F(1/t)^{2}}=\frac{\bar{F}(1 /t)}{F(1/t)}, \tag{4.22}\] using that \(\bar{F}(1/t)=1-F(1/t)\). Let \(\mathcal{F}_{i}(t)=\sigma\{\widetilde{Z}_{i}(s),\,s\leq t\}\) (\(t\geq 0\)) denote the smallest sigma-algebra containing all events \(\{Z_{i}(1/s)=1\}=\{X_{i}\geq 1/s\}\) with \(0\leq s\leq t\). Consider the product sigma-algebra \(\mathcal{F}(t)=\mathcal{F}_{1}(t)\otimes\cdots\otimes\mathcal{F}_{M}(t)\), and define the random process \[W(t):=\widetilde{Z}_{1}(t)+\cdots+\widetilde{Z}_{M}(t)-M\qquad(t\geq 0). \tag{4.23}\] From (4.20), it is easy to see that the process \(W\) is _cadlag_ (i.e., its paths are everywhere right-continuous and have left limits). Furthermore, we have (see (4.21) and (4.22)) \[\mathsf{E}\big{(}W(t)\big{)}=0,\qquad\mathsf{Var}\big{(}W(t)\big{)}=\frac{M \,\bar{F}(1/t)}{F(1/t)}, \tag{4.24}\] since the random variables \(\widetilde{Z}_{1}(t),\ldots,\widetilde{Z}_{M}(t)\) in the sum (4.23) are mutually independent. **Lemma 4.5**.: _The process \(W(t)\) is a martingale with respect to the filtration \(\mathcal{F}(t)\)\((t\geq 0)\), that is, for any \(t\geq s\geq 0\) we have \(\mathsf{E}\big{(}W(t)\,|\,\mathcal{F}(s)\big{)}=W(s)\), with probability \(1\),_ Proof.: Let \(t\geq s\geq 0\). Due to definition (4.23) and independence of \((X_{i})\), we have \[\mathsf{E}\big{(}W(t)\,|\,\mathcal{F}(s)\big{)}=\sum_{i=1}^{M}\mathsf{E}\big{(} \widetilde{Z}_{i}(t)\,|\,\mathcal{F}_{i}(s)\big{)}-M, \tag{4.25}\] so it suffices to prove that \[\mathsf{E}\big{(}\widetilde{Z}_{i}(t)\,|\,\mathcal{F}_{i}(s)\big{)}= \widetilde{Z}_{i}(s). 
\tag{4.26}\] By definition, information contained in the sigma-algebra \(\mathcal{F}_{i}(s)\) relates to the threshold events of the form \(\{X_{i}\geq 1/r\}\) for all \(0\leq r\leq s\). In view of the target random variable in (4.26) expressed through \(Z_{i}(1/t)=I_{\{X_{i}\geq 1/t\}}\), this essentially amounts to the knowledge of whether \(X_{i}\geq 1/s\) or not. Suppose first that \(X_{i}<1/s\), that is, \(\widetilde{Z}_{i}(s)=1/F(1/s)\) (see (4.20)). Then \[\mathsf{E}\big{(}\widetilde{Z}_{i}(t)\,|\,X_{i}<1/s\big{)} =\frac{\mathsf{P}(X_{i}<1/t\,|\,X_{i}<1/s)}{F(1/t)}\] \[=\frac{\mathsf{P}(X_{i}<1/t)}{F(1/t)\,\mathsf{P}(X_{i}<1/s)}\] \[=\frac{1}{F(1/s)}=\widetilde{Z}_{i}(s).\] Similarly, if \(X_{i}\geq 1/s\geq 1/t\) then \(\widetilde{Z}_{i}(s)=0\) and \[\mathsf{E}\big{(}\widetilde{Z}_{i}(t)\,|\,X_{i}\geq 1/s\big{)} =\frac{\mathsf{P}(X_{i}<1/t\,|\,X_{i}\geq 1/s)}{F(1/t)}\] \[=\frac{\mathsf{P}(X_{i}<1/t,X_{i}\geq 1/s)}{F(1/t)\,\mathsf{P}(X _{i}\geq 1/s)}\] \[=0=\widetilde{Z}_{i}(s).\] Thus, the relation (4.26) holds true, and in view of (4.25) and the definition (4.23) the proof of Lemma 4.5 is complete. The martingale \(W(t)\) will be used in the next section for the estimation of the uniform distance between the functions \(\widetilde{Y}(x)\) and \(\varphi_{\nu}(x)\). ### Uniform convergence of random Young diagrams Pointwise convergence established in Theorem 4.4 can be strengthened to the uniform convergence away from the origin, yielding our main result stated above as Theorem 4.1. Proof of Theorem 4.1.: Note that \[\sup_{x\geq\delta}\big{|}\widetilde{Y}(x)-\varphi_{\nu}(x)\big{|}\leq\sup_{x \geq\delta}\big{|}\widetilde{Y}(x)-\mathsf{E}\big{(}\widetilde{Y}(x)\big{)} \big{|}+\sup_{x\geq\delta}\big{|}\mathsf{E}\big{(}\widetilde{Y}(x)-\varphi_ {\mu}(x)\big{)}\big{|},\] where \(\mathsf{E}\big{(}\widetilde{Y}(x)\big{)}=M\bar{F}(Ax)/B\) (see (4.11)). Hence, by virtue of Theorem 4.2, it suffices to consider the deviations \[\sup_{x\geq\delta}\bigg{|}\widetilde{Y}(x)-\frac{M\bar{F}(Ax)}{B}\bigg{|}\geq\varepsilon.\] Using the definitions (4.20), (4.23) and the relation \(\bar{F}(x)=1-F(x)\), we have \[\widetilde{Y}(x)-\frac{M\bar{F}(Ax)}{B} =\frac{1}{B}\left(\sum_{i=1}^{M}Z_{i}(Ax)-M\bar{F}(Ax)\right)\] \[=-\frac{1}{B}\left(\sum_{i=1}^{M}\bigl{(}1-Z_{i}(Ax)\bigr{)}-MF( Ax)\right)\] \[=-\frac{F(Ax)}{B}\left(\sum_{i=1}^{M}\widetilde{Z}_{i}(1/Ax)-M\right)\] \[=-\frac{F(Ax)}{B}\,W(1/Ax).\] Since \(0\leq F(Ax)\leq 1\) and \(t:=1/Ax\in[0,1/A\delta]\), this implies \[\sup_{x\geq\delta}\bigg{|}\widetilde{Y}(x)-\frac{M\bar{F}(Ax)}{B}\bigg{|}\leq \frac{1}{B}\sup_{t\leq 1/A\delta}\bigl{|}W(t)\bigr{|}.\] Hence, by the Doob-Kolmogorov submartingale inequality (see, e.g., [48, Theorem 6.16, p. 101]) applied to the martingale \(W(t)\) (see Lemma 4.5) and using formulas (4.24), we obtain \[\mathsf{P}\biggl{(}\sup_{x\geq\delta}\bigg{|}\widetilde{Y}(x)- \frac{M\bar{F}(Ax)}{B}\bigg{|}\geq\varepsilon\biggr{)} \leq\mathsf{P}\biggl{(}\sup_{t\leq 1/A\delta}\bigl{|}W(t) \bigr{|}\geq B\varepsilon\biggr{)}\] \[\leq\frac{\mathsf{Var}\bigl{(}W(1/A\delta)\bigr{)}}{B^{2} \varepsilon^{2}}\] \[=\frac{M\bar{F}(A\delta)}{F(A\delta)\,B^{2}\varepsilon^{2}}\sim \frac{\varphi_{\nu}(\delta)}{B\varepsilon^{2}}\to 0, \tag{4.27}\] where at the last step we used that \(F(A\delta)\to 1\) (as \(A\to\infty\)) and the limit (4.19) (with \(x=\delta\)). This completes the proof of Theorem 4.1. ### Young boundary as an empirical process The random function \(Y(x)\) given by (2.13) can be viewed as an _empirical process_[8, Ch. 
11], determined by an independent random sample \((X_{1},\ldots,X_{M})\) through a family of test functions \((g_{x},\,x\geq 0)\), defined by \[g_{x}(j):=\mathbf{1}_{[x,\infty)}(j)\qquad(j\in\mathbb{N}_{0}). \tag{4.28}\] Namely, using (2.12) and (2.13), we can write \[Y(x)=\sum_{i=1}^{M}Z_{i}(x)=\sum_{i=1}^{M}\mathbf{1}_{[x,\infty)}(X_{i})=\sum_{i=1}^{M}g_{x}(X_{i}).\] The advantage of this approach is that there are sharp upper bounds for the variance of the supremum of an empirical process. Note that \[\sup_{x\geq\delta}\,\left|\widetilde{Y}(x)-\frac{M\bar{F}(Ax)}{B}\right|\leq\frac{1}{B}\sup_{x\geq\delta}\sum_{i=1}^{M}\big{|}g_{Ax}(X_{i})-\bar{F}(Ax)\big{|}=\frac{1}{B}\sup_{x\geq\delta}\sum_{i=1}^{M}\big{|}\tilde{g}_{x}(X_{i})\big{|}, \tag{4.29}\] where \[\tilde{g}_{x}(j):=g_{Ax}(j)-\bar{F}(Ax)\qquad(j\in\mathbb{N}_{0}). \tag{4.30}\] According to [8, Theorem 11.1, pp. 314, 316], \[\mathsf{Var}\!\left(\sup_{x\geq\delta}\sum_{i=1}^{M}\big{|}\tilde{g}_{x}(X_{i})\big{|}\right)\leq\sum_{i=1}^{M}\mathsf{E}\!\left(\sup_{x\geq\delta}\big{|}\tilde{g}_{x}(X_{i})\big{|}^{2}\right)\!. \tag{4.31}\] Using (4.28) and (4.30), we have \[\big{|}\tilde{g}_{x}(X_{i})\big{|}^{2}\leq\big{(}Z_{i}(Ax)+\bar{F}(Ax)\big{)}^{2}=Z_{i}(Ax)^{2}+2\,Z_{i}(Ax)\bar{F}(Ax)+\bar{F}(Ax)^{2}\leq Z_{i}(Ax)+2\,Z_{i}(Ax)+\bar{F}(Ax),\] hence \[\sup_{x\geq\delta}\big{|}\tilde{g}_{x}(X_{i})\big{|}^{2}\leq 3\,Z_{i}(A\delta)+\bar{F}(A\delta),\] and therefore (see (4.31)) \[\mathsf{Var}\!\left(\sup_{x\geq\delta}\sum_{i=1}^{M}\big{|}\tilde{g}_{x}(X_{i})\big{|}\right)\leq 3M\bar{F}(A\delta)+M\bar{F}(A\delta)=4M\bar{F}(A\delta).\] Returning to (4.29), this gives, together with Chebyshev's inequality, \[\mathsf{P}\!\left(\sup_{x\geq\delta}\,\left|\widetilde{Y}(x)-\frac{M\bar{F}(Ax)}{B}\right|\geq\varepsilon\right)\leq\mathsf{P}\!\left(\sup_{x\geq\delta}\sum_{i=1}^{M}\big{|}\tilde{g}_{x}(X_{i})\big{|}\geq B\varepsilon\right)\leq\frac{4M\bar{F}(A\delta)}{B^{2}\varepsilon^{2}}\sim\frac{4\,\varphi_{\nu}(\delta)}{B\,\varepsilon^{2}}\to 0,\] using that \(B\to\infty\). This furnishes an alternative proof of Theorem 4.1 (cf. (4.27)). ## 5 Fluctuations of random Young diagrams Recalling that \(\widetilde{Y}(x)\) is a (normalized) sum of independent indicators \(Z_{i}(Ax)=I_{\{X_{i}\geq Ax\}}\), \(i=1,\ldots,M\) (see (4.1)), it is natural to expect that \(\widetilde{Y}(x)\) is asymptotically normal, with mean \(\mathsf{E}\!\left(\widetilde{Y}(x)\right)=M\bar{F}(Ax)/B\sim\varphi_{\nu}(x)\) and variance \(M\bar{F}(Ax)F(Ax)/B^{2}\sim\varphi_{\nu}(x)/B\) (see (4.11) and (4.12)). However, a standard central limit theorem is not directly applicable because the "success" probability \(\mathsf{P}\!\left(Z_{i}(Ax)=1\right)=\bar{F}(Ax)\) is not constant (and, moreover, it tends to \(0\)), so we have to re-prove this statement using the method of characteristic functions. **Theorem 5.1**.: _Under Assumptions 4.1, 4.2 and 4.3, for any \(x>0\),_ \[\varUpsilon(x):=\sqrt{\frac{B}{\varphi_{\nu}(x)}}\left(\widetilde{Y}(x)-\frac{M\bar{F}(Ax)}{B}\right)\stackrel{{\mathrm{d}}}{{\longrightarrow}}\mathcal{N}(0,1), \tag{5.1}\] _where \(\mathcal{N}(0,1)\) is a standard normal law (i.e., with zero mean and unit variance), and \(\stackrel{{\mathrm{d}}}{{\rightarrow}}\) denotes convergence in distribution._ Proof.: Substituting (4.1), the left-hand side of (5.1) is rewritten as \[\varUpsilon(x)=\frac{1}{\sqrt{B\,\varphi_{\nu}(x)}}\sum_{i=1}^{M}\bigl{(}Z_{i}(Ax)-\bar{F}(Ax)\bigr{)}. 
\tag{5.2}\] The characteristic function of (5.2) is given by \[\psi(t;x):=\mathsf{E}\bigl{(}\mathrm{e}^{\mathrm{i}t\varUpsilon(x)}\bigr{)}=\mathrm{e}^{-\mathrm{i}\tilde{t}M\bar{F}(Ax)}\left(1+\bar{F}(Ax)\bigl{(}\mathrm{e}^{\mathrm{i}\tilde{t}}-1\bigr{)}\right)^{M}, \tag{5.3}\] where \[\tilde{t}=\frac{t}{\sqrt{B\,\varphi_{\nu}(x)}},\qquad t\in\mathbb{R}. \tag{5.4}\] Choosing the principal branch of the logarithm function \(\mathbb{C}\setminus\{0\}\ni z\mapsto\log z\in\mathbb{C}\) (i.e., such that \(\log 1=0\)), we can rewrite (5.3) as \[\log\psi(t;x)=-\mathrm{i}\tilde{t}\,M\bar{F}(Ax)+M\log(1+w), \tag{5.5}\] where \[w:=\bar{F}(Ax)\bigl{(}\mathrm{e}^{\mathrm{i}\tilde{t}}-1\bigr{)}. \tag{5.6}\] Since \(A\to\infty\) and \(B\to\infty\) (by Assumptions 4.2 and 4.3), we have \(\tilde{t}\to 0\) and \(w\to 0\), hence \[\log(1+w)=w-\tfrac{1}{2}w^{2}+O(|w|^{3}).\] Therefore, Taylor expanding \(\mathrm{e}^{\mathrm{i}\tilde{t}}=1+\mathrm{i}\tilde{t}-\tfrac{1}{2}\tilde{t}^{2}+O(\tilde{t}^{3})\) and substituting (5.4) and (5.6), formula (5.5) is elaborated as follows, \[\log\psi(t;x)=-\frac{M\bar{F}(Ax)F(Ax)\,t^{2}}{2B\,\varphi_{\nu}(x)}+O\biggl{(}\frac{M\bar{F}(Ax)}{B^{3/2}}\biggr{)}\to-\frac{t^{2}}{2},\] using that \(M\bar{F}(Ax)/B\sim\varphi_{\nu}(x)\) and \(F(Ax)\to 1\) for any \(x>0\). Thus, \(\psi(t;x)\to\mathrm{e}^{-t^{2}/2}\), which is the characteristic function of the normal distribution \(\mathcal{N}(0,1)\), as claimed. Similarly, Theorem 5.1 can be extended to the finite-dimensional convergence. **Theorem 5.2**.: _Under Assumptions 4.1, 4.2 and 4.3, the random process \((\varUpsilon(x),x>0)\) defined in (5.1) and (5.2), converges, in the sense of convergence of finite-dimensional distributions, to a Gaussian random process \((\Xi(x),x>0)\) with zero mean and covariance function_ \[K(x,x^{\prime}):=\mathsf{Cov}\bigl{(}\Xi(x),\Xi(x^{\prime})\bigr{)}=\sqrt{\frac{\varphi_{\nu}(x^{\prime})}{\varphi_{\nu}(x)}}\qquad(0<x\leq x^{\prime}).\] Proof.: The proof proceeds along the same lines as in Theorem 5.1 via asymptotic analysis of the multivariate characteristic functions for any finite arrays \(0<x_{1}\leq\cdots\leq x_{m}\) (\(m\in\mathbb{N}\)), \[\psi(t_{1},\ldots,t_{m};x_{1},\ldots,x_{m})=\mathsf{E}\!\left(\exp\biggl{(}\mathrm{i}\sum_{k=1}^{m}t_{k}\varUpsilon(x_{k})\biggr{)}\right)\qquad(t_{1},\ldots,t_{m}\in\mathbb{R}).\] The limiting covariance function is easy to compute: using (5.2) and (2.17) we have, for \(0<x\leq x^{\prime}\), \[\mathsf{Cov}\!\left(\varUpsilon(x),\varUpsilon(x^{\prime})\right)=\frac{M\bar{F}(Ax^{\prime})F(Ax)}{B\sqrt{\varphi_{\nu}(x)\,\varphi_{\nu}(x^{\prime})}}\to\sqrt{\frac{\varphi_{\nu}(x^{\prime})}{\varphi_{\nu}(x)}},\] as required. We conclude this section by observing that the limiting Gaussian process \(\Xi(x)\) can be represented, in the distributional sense, through a standard Brownian motion \((B_{t},t\geq 0)\) (i.e., with mean zero and covariance function \(\mathsf{Cov}(B_{t},B_{t^{\prime}})=\min\left\{t,t^{\prime}\right\}\)). **Theorem 5.3**.: _The following distributional representation holds,_ \[\Xi(x)\stackrel{{\mathrm{d}}}{{=}}\frac{B_{\varphi_{\nu}(x)}}{\sqrt{\varphi_{\nu}(x)}}\qquad(x>0). 
\tag{5.7}\] _In particular, a rescaled process in inverted time \(t=1/x\), defined by_ \[\widetilde{\Xi}(t):=\sqrt{\varphi_{\nu}(1/t)}\;\Xi(1/t)\qquad(t\geq 0), \tag{5.8}\] _has independent increments._ Proof.: It suffices to observe that the covariance function of the process on the right-hand side of (5.7) is given by (for any \(0<x\leq x^{\prime}\)) \[\frac{1}{\sqrt{\varphi_{\nu}(x)\,\varphi_{\nu}(x^{\prime})}}\,\mathsf{Cov}\! \left(B_{\varphi_{\nu}(x)},B_{\varphi_{\nu}(x^{\prime})}\right)=\frac{1}{ \sqrt{\varphi_{\nu}(x)\,\varphi_{\nu}(x^{\prime})}}\,\varphi_{\nu}(x^{\prime} )=K(x,x^{\prime}),\] using that \(\varphi_{\nu}(x^{\prime})\leq\varphi_{\nu}(x)\). Finally, independence of increments for the process (5.8) is a straightforward consequence of the same property for the Brownian motion \(B_{t}\). ## 6 Poisson approximation in the "chaotic" regime In this section, we consider the case wherein Assumption 4.3 is not satisfied, so that the \(y\)-scaling coefficient \(B\) is bounded (which is only possible for \(\nu\leq 0\), see formulas (4.3) to (4.6)). We call this case _chaotic_ because convergence of the random variable \(\widetilde{Y}(x)\) to the limit shape \(\varphi_{\nu}(x)\) does not hold here (cf. Theorem 4.4), despite convergence of the expected value \(\mathsf{E}\!\left(\widetilde{Y}(x)\right)=M\bar{F}(Ax)/B\to\varphi_{\nu}(x)\) (Theorem 4.2). The root cause of this failure is that, although \(\widetilde{Y}(x)\) is a normalized sum of independent Bernoulli variables \(Z_{i}(Ax)=I_{\{X_{i}\geq Ax\}}\) (see (4.1)), the success probability \(\mathsf{P}\!\left(X_{i}\geq Ax\right)=\bar{F}(Ax)\) tends to zero, which is not offset by a fast enough growth of the number of terms \(M\) (see Remark 4.1). ### One-dimensional distributions We start by studying one-dimensional distributions of \(Y(x)\), that is, at a given point \(x>0\). For orientation, consider a stylized case where \(B=1\), then \(M\bar{F}(Ax)\to\varphi_{\nu}(x)\) and, according to the classic Poisson "law of small numbers" [46], the binomial distribution of the sum \(\widetilde{Y}(x)=Z_{1}(Ax)+\cdots+Z_{M}(Ax)\) is asymptotically close to a Poisson distribution with parameter \(\lambda=\varphi_{\nu}(x)\). That is to say, the sums \(\widetilde{Y}(x)\) do not settle down to a deterministic constant (like in a law of large numbers) but, due to a persistent "small" randomness, admit a non-degenerate (Poisson) approximation without any normalization. This observation is generalized as follows. **Theorem 6.1**.: _Suppose that Assumptions 4.1 and 4.2 are satisfied but Assumption 4.3 is not, so that \(B=O(1)\). Then the distribution of the random variable \(Y(Ax)\) for \(x>0\) is approximated by a Poisson distribution with parameter \(M\bar{F}(Ax)\sim B\,\varphi_{\nu}(x)\) and with the corresponding error in total variation distance (or in Kolmogorov's uniform distance) bounded by \(O(M^{-1})=o(1)\)._ Proof.: This is an immediate consequence of a well-known approximation for the binomial distribution of the total number of successes in \(n\) independent Bernoulli trials, with success probability \(p\), by a Poisson distribution with parameter \(\lambda=np\), with the error bounded by \(\sigma^{2}=np^{2}\) (see, e.g., [3, 28]). In our case, \(\lambda=M\bar{F}(Ax)\) and \(\sigma^{2}=M\bar{F}(Ax)^{2}\sim\big{(}B\,\varphi_{\nu}(x)\big{)}^{2}\!/M=O(1/ M)=o(1)\). The Poisson approximation stated in Theorem 6.1 is illustrated in Fig. 
3 using \(100\) simulated samples (of size \(M=35\) each) from the GIGP distribution (3.1) with parameters \(\nu=-0.5\), \(\alpha=2\), and \(\theta=0.99\). The \(y\)-scaling coefficient computed from (4.5) is given by \(B\doteq 1.974664\), confirming that this is a chaotic regime (i.e., where Assumption 4.3 is not satisfied). The \(x\)-scaling coefficient (4.2) specializes to \(A\doteq 99.49916\). The left panel in Fig. 3 shows the sample Young diagrams superimposed on one another using transparent shading (in blue), so that darker places correspond to a more frequent occurrence. As anticipated, there is no convergence to a deterministic limit shape, but an emerging "typical" boundary of the blue diagrams clearly indicates the expected limiting curve. In the right panel of Fig. 3, we choose a trial value \(x_{0}=0.2\) and plot a histogram of the observed frequencies of the random values \(Y(Ax_{0})\), where \(Ax_{0}\doteq 19.89983\). A visual inspection supports a reasonable match with a Poisson distribution with mean \(M\bar{F}(Ax_{0})\doteq 4.342498\). This is confirmed by Pearson's \(\chi^{2}\)-test, with the bins labelled by the values of \(j\) from \(0\) to \(9\) and the respective observed frequencies \(o_{j}\). Since the expected frequencies \(e_{0}\doteq 1.3004\) and \(e_{9}\doteq 3.340067\) are less than \(5\), we follow a common recommendation and combine the bins \(j=0\) and \(j=9\) with \(j=1\) and \(j=8\), respectively. The grouped \(\chi^{2}\)-statistic is calculated to yield \(1.972246\) on \(10-2-1=7\) degrees of freedom, with a \(p\)-value of \(96.14\%\), so the goodness-of-fit test is comfortably passed. ### Finite-dimensional distributions The result of Theorem 6.1 can be generalized to finite-dimensional distributions. It is convenient to set \(Y(\infty)=0\), using that, with probability \(1\), \(Y(x)\to 0\) as \(x\to\infty\). **Theorem 6.2**.: _Under the hypotheses of Theorem 6.1, for any finite array \(0<x_{1}<\cdots<x_{k}<x_{k+1}=\infty\), the increments \(\{Y(Ax_{i})-Y(Ax_{i+1}),\,i=1,\ldots,k\}\) are asymptotically independent, with marginal distributions approximated by Poisson distributions with parameters \(\lambda_{i}=M\big{(}\bar{F}(Ax_{i})-\bar{F}(Ax_{i+1})\big{)}\), respectively, with the error (in total variation) bounded by \(O(M^{-1})=o(1)\)._ _Remark 6.1_.: The leftmost increment \(Y(0)-Y(Ax_{1})=M-Y(Ax_{1})\) is excluded from the statement, because it is (linearly) expressible through the other increments, \[Y(0)-Y(Ax_{1})=M-\sum_{i=1}^{k}\bigl{(}Y(Ax_{i})-Y(Ax_{i+1})\bigr{)}.\] Proof of Theorem 6.2.: Note that the joint distribution of the increments is _multinomial_, with parameter \(M\) (the number of trials) and probabilities \(p_{i}=\bar{F}(Ax_{i})-\bar{F}(Ax_{i+1})\), corresponding to outcomes belonging to the intervals \([Ax_{i},Ax_{i+1})\), respectively (\(i=1,\ldots,k\)). The claim then follows by a general Poisson approximation theorem (see [28]). In particular, denoting \(p:=p_{1}+\cdots+p_{k}\), the error in total variation is known to be bounded by \(O(Mp^{2})\). In our case, \(p=\sum_{i=1}^{k}\bigl{(}\bar{F}(Ax_{i})-\bar{F}(Ax_{i+1})\bigr{)}=\bar{F}(Ax_{1})\), and the estimate \(O(M^{-1})\) readily follows as in Theorem 6.1. _Remark 6.2_.: To get a sense of why asymptotic independence of increments in Theorem 6.2 is true, it is helpful to verify that the covariance between the neighboring increments
vanishes in the limit. Indeed, for any \(0<x<x^{\prime}<x^{\prime\prime}\leq\infty\) we have, with the aid of (2.17), \[\mathsf{Cov}\big{(}Y(Ax)-Y(Ax^{\prime}),\,Y(Ax^{\prime})-Y(Ax^{\prime\prime})\big{)}\] \[\qquad=M\bar{F}(Ax^{\prime})\,F(Ax)-M\bar{F}(Ax^{\prime\prime})\,F(Ax)\] \[\qquad\qquad-M\bar{F}(Ax^{\prime})\,F(Ax^{\prime})+M\bar{F}(Ax^{\prime\prime})\,F(Ax^{\prime})\] \[\qquad\qquad\to\varphi_{\nu}(x^{\prime})-\varphi_{\nu}(x^{\prime\prime})-\varphi_{\nu}(x^{\prime})+\varphi_{\nu}(x^{\prime\prime})=0,\] again using the convergence in Theorem 4.2.

Figure 3: Illustration of Poisson statistics in the chaotic regime. The left panel shows superimposed Young diagrams of \(100\) random samples of size \(M=35\) each, generated from the GIGP model (3.1) with parameters \(\nu=-0.5\), \(\alpha=2\), and \(\theta=0.99\). The right panel shows the histogram of observed frequencies \(o_{j}\) of the random values \(Y(Ax_{0})\), where \(Ax_{0}\doteq 19.89983\). For orientation, the mean of the approximating Poisson distribution is given by \(M\bar{F}(Ax_{0})\doteq 4.342498\).

The finite-dimensional approximations of Theorem 6.2 can be unified in terms of a suitable Poisson process considered in inverted time. Specifically, consider an inhomogeneous Poisson process \((\xi_{t},\,t\geq 0)\), \(\xi_{0}=0\), with integrated rate function \(\varLambda(t)=M\bar{F}(A/t)\sim B\,\varphi_{\nu}(1/t)\), that is, \(\mathsf{E}(\xi_{t})=\varLambda(t)\). **Theorem 6.3**.: _Under the hypotheses of Theorems 6.1 and 6.2, the distribution of the process \((Y(Ax),\,0<x\leq\infty)\) is approximated by the distribution of the random process \((\xi_{1/x},\,0<x\leq\infty)\)._ Thus, unlike the "regular" case of Sections 4 and 5, where the random Young diagrams enjoy convergence to the limit shape \(\varphi_{\nu}(x)\), in the chaotic case considered in the present section, the role of the function \(\varphi_{\nu}(x)\) is to determine the (integrated) rate of the Poisson approximation. ## 7 Real Data Examples In this section, we look at how well the theoretical limit shape \(\varphi_{\nu}(x)\) conforms to some real data sets studied earlier by Sichel [41]. ### Lotka's data set: author productivity We start with a classic data set considered by Lotka in his seminal paper [25], comprising the counts of the number of papers (items) published by authors (sources) in _Chemical Abstracts_ during 1907-1916. This data set is usually considered a baseline example of power-law statistics of counts [10, 25], but Sichel [41, pp. 316-317 and Table 2] argued that a GIGP model with predefined parameters \(\nu=-0.5\), \(\alpha=0\) and an estimated \(\theta=0.96876\) is a better fit to the data (with a \(p\)-value of \(91.8\%\) in Pearson's \(\chi^{2}\)-test). The distinctive difference between the two models is of course the long-tail behavior, either power or power-geometric, respectively. To examine the goodness-of-fit graphically, similarly to Section 4.2 we first plot the empirical Young diagram \(Y(x)\), compared with the fitted GIGP complementary distribution function \(\bar{F}(x)\) and contrasted with the theoretical limit shape scaled back to the original coordinates, that is, \(x\mapsto B\,\varphi(x/A)\). Here, \(M=6891\), and the scaling coefficients calculated from (4.2) and (4.6) are given by \(A\doteq 31.5076\) and \(B\doteq 343.5839\). Although the value of \(A\) is not particularly large (because \(\theta\) is not extremely close to \(1\)), a large value of \(B\) confirms a reasonable predisposition of the data for a good limit shape approximation.
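
As a complement to the plots, the following minimal Python sketch (ours, not part of the original analysis) shows how the scaled-back limit shape \(x\mapsto B\,\varphi_{\nu}(x/A)\) can be evaluated and compared with an empirical Young diagram; the variable `paper_counts` is a hypothetical placeholder for the per-author counts in Lotka's data set, the value of \(B\) is simply taken as quoted above rather than recomputed from (4.6), and \(A\) is obtained from \(A=-1/\log\theta\).

```python
import numpy as np
from scipy.integrate import quad

def limit_shape(x, nu=-0.5):
    """phi_nu(x) = integral_x^infinity s^(nu-1) * exp(-s) ds."""
    value, _ = quad(lambda s: s ** (nu - 1.0) * np.exp(-s), x, np.inf)
    return value

def young_diagram(counts, xs):
    """Empirical Y(x): number of sources with at least x items each."""
    counts = np.asarray(counts)
    return np.array([(counts >= x).sum() for x in xs])

theta = 0.96876                  # Sichel's fitted scale parameter for Lotka's data
A = -1.0 / np.log(theta)         # x-scaling coefficient, approx. 31.5
B = 343.5839                     # y-scaling coefficient as quoted in the text

xs = np.arange(1, 201)
scaled_limit = B * np.array([limit_shape(x / A) for x in xs])
# paper_counts would hold the number of papers per author (hypothetical input);
# plotting young_diagram(paper_counts, xs) against scaled_limit reproduces the
# comparison described for the left panel of Fig. 4(a).
```

Evaluating \(\varphi_{\nu}\) by direct numerical integration avoids special-casing the incomplete gamma function for negative values of \(\nu\).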
The left panel in Fig. 4(a) demonstrates an excellent fit to the bulk of the data for both the GIGP model as well as the limit shape (scaled back to the original coordinates, \(x\mapsto B\,\varphi(x/A)\)). The visual inspection of the tails in the right panel of Fig. 4(a) (in transformed coordinates (4.10)) confirms a good fit but only for moderately large observed values \(j\), whereas the region \(u\geq 4.5\), corresponding to values \(j\geq\mathrm{e}^{4.5}\approx 90\), reveals increasing deviations from the GIGP prediction. This suggests that very large values in the tail of Lotka's data require a different fitting model, such as a stretched-exponential approximation [24]. Incidentally, upon a closer look at the upper extremes in the Lotka data set, there is just a handful larger than \(90\), namely, \(95\), \(107\), \(109\), \(114\), and \(346\). A surprising maximum \(346\) looks like a genuine outlier, three times bigger than the runner-up! Interestingly, this record is attributed to Professor Emil Abderhalden, a prolific and controversial Swiss biochemist and physiologist who worked in the first half of the 20th century [47]. Being rather extraordinary, perhaps this individual record needs to be removed from statistical analysis. Figure 4: GIGP model fit to real data sets. Black plots represent the data, blue dotted plots show the fitted GIGP complementary distribution functions, and smooth red lines depict the graphs of the scaled-back limit shape. The right panel shows the tail versions of these plots in transformed coordinates (4.10). ### Chen's data set: journal use For our second real data example, we revisit the data set from Chen [9], considered by Sichel [41, pp. 318-319 and Table 4] for the sake of testing a GIGP model. The data comprised counts of use (items) of physics journals in the M.I.T. Science Library in 1971, recorded per each volume (sources) taken from the shelves for reading or photocopying. The total number of sources involved (i.e., the number of volumes ever requested) was \(M=138\). Sichel fitted a GIGP model with predefined values \(\nu=0\), \(\alpha=0\) and an estimated \(\theta=0.99369\). He tested goodness-of-fit via Pearson's \(\chi^{2}\)-test, observing a reasonably high \(p\)-value of \(31.2\%\), thus not signalling any significant mismatch. To cross-examine the fit using our methods, similarly as in Section 7.1 we plot the empirical Young diagram's boundary \(Y(x)\) along with the fitted GIGP function \(\bar{F}(x)\) and scaled-back limit shape \(B\,\varphi(x/A)\) (see Fig. 4(b), left panel), where the scaling coefficients calculated from (4.2) and (4.4) are given by \(A\doteq 157.9781\) and \(B\doteq 27.24247\). It is worth pointing out that, in contrast to Lotka's data set considered in Section 7.1, here we have quite a large value for the \(x\)-scaling coefficient \(A\) but a relatively small value of the \(y\)-scaling parameter \(B\). At first glance, the plots in Fig. 4(b) (left panel) seem to conform to the GIGP model; however, one cannot help noticing a visible deviation from the theoretical prediction around the value \(x=100\). This is confirmed by looking at the tail plots in transformed coordinates (4.10) (see Fig. 4(b), right panel), where \(x=100\) corresponds to \(u=\log 100\doteq 4.60517\). To test statistically whether the deviations are significant, we can use the asymptotic normality of \(Y(x)\) due to Theorem 5.1. 
Specifically, setting \(x=100\) and standardizing according to formula (5.1), we calculate \(\Upsilon(100/A)\doteq-3.413073\), with an extremely small \(p\)-value of \(0.032\%\). Thus, the deviation is highly significant, which implies that the GIGP model is not an accurate fit, at least for moderately large values starting from about \(x=70\). ## 8 Conclusion In this paper, we have investigated the asymptotic convergence of the item production profile (visualized via Young diagrams and properly scaled) to the limit shape for the class of GIGP distributions, introduced by Sichel [36] and successfully applied to a variety of count data sets, including frequent use cases in informetrics [41]. The limit is taken when the number of production sources is large, \(M\gg 1\), but also subject to a natural assumption of an asymptotically large number of items per source, leading to the parametric regime \(\theta\approx 1\), where \(\theta\in(0,1)\) is one of the GIGP scale parameters. The family of the resulting limit shapes has been identified as the incomplete gamma function \(\varphi_{\nu}(x)=\int_{x}^{\infty}s^{\nu-1}\,\mathrm{e}^{-s}\,\mathrm{d}s\), where \(\nu\geq-1\) is the GIGP shape parameter (represented as the order of the Bessel function \(K_{\nu}(z)\) involved in the definition of the GIGP probability distribution). The suitable \(x\)-scaling coefficient is universal, \(A=-1/\log\theta\to\infty\), but the \(y\)-scale \(B\) (proportional to the number of sources \(M\)) depends on the sign of \(\nu\). In particular, it follows that \(B\to\infty\) if \(\nu>0\) (which guarantees convergence to the limit shape), but this may fail for \(\nu\leq 0\), for instance, due to \(M\) not being large enough. In the latter case (termed "chaotic") we show that the production profile is approximated by a suitable Poisson process with rate defined in terms of the limit shape \(\varphi_{\nu}(x)\). In the regular case (i.e., where the convergence to the limit shape holds), we also show the asymptotic normality of fluctuations. Our theoretical results are illustrated using computer simulations (with \(\nu=0.5\) and \(\nu=-0.5\)), showing an excellent match of the limit shape to empirical data in regular cases but also confirming the Poisson statistics in a chaotic regime. We also propose a simple transformation of the data leading to linearized tails of the distribution, which may aid the visual diagnostics of the tenability of the GIGP model. When applied to real-life data sets, our methods provide a novel approach to extracting useful information about the GIGP model fit. One such example is the classic Lotka data on author productivity [25], where the fitted GIGP model enjoys an excellent match, which is readily visible in our graphical plots. However, the plots of properly enhanced upper tails reveal a certain departure from the GIGP model, which is not captured by the usual aggregated goodness-of-fit tools such as Pearson's \(\chi^{2}\)-test or the Kolmogorov-Smirnov test based on the uniform distance between probability distributions [41]. The identified extremes can be traced back and interpreted in the original data, in particular flagging up a very special "outlier" as mentioned in Section 7.1. Furthermore, in our second example using Chen's data on journal usage [9], our approach has revealed a certain location in the range of counts with a noticeable deviation of the data frequency plot from the theoretical GIGP prediction.
By virtue of our result on asymptotic normality of fluctuations, we have shown that these deviations are highly significant. This is to be compared with a reasonably confident "pass" of goodness-of-fit based on traditional tools [41], again demonstrating the superiority of our methods. We expect that the approach developed in this paper will also be useful in the wider context of various count data. ## Appendix A Asymptotic formulas for the Bessel function The following is a list of useful properties of the Bessel function \(K_{\nu}(z)\), including some asymptotic formulas under various regimes for the argument \(z\) and the order \(\nu\). For ease of use, we collect these facts here, with reference to the NIST handbook [27]. **Lemma A.1** ([27], 10.27.3).: _For any \(\nu\) and \(z\),_ \[K_{-\nu}(z)=K_{\nu}(z).\] (A.1) **Lemma A.2**.: _Let \(\nu\) be fixed and \(z\to 0+\)._ 1. ([27], 10.30.2) _If_ \(\nu>0\) _then_ \[K_{\nu}(z)\sim\tfrac{1}{2}\Gamma(\nu)\left(\tfrac{1}{2}z\right)^{-\nu}.\] (A.2) 2. ([27], 10.31.1 with the aid of 10.25.2) _If_ \(\nu=1\) _then_ \[K_{1}(z)=z^{-1}+\tfrac{1}{2}z\log z+O(z).\] (A.3) 3. ([27], 10.31.2 with the aid of 10.25.2) _If_ \(\nu=0\) _then_ \[K_{0}(z)=-\log\bigl{(}\tfrac{1}{2}z\bigr{)}-\gamma+O(z^{2}\log z),\] (A.4) _where_ \(\gamma=0.5772\dots\) _is Euler's constant_ ([27], 5.2.3). _In particular,_ \[K_{0}(z)\sim-\log z.\] (A.5) 4. ([27], 10.27.4 and 10.25.2 with the aid of 5.5.3) _For_ \(-1<\nu<0\)_,_ \[K_{\nu}(z)=\tfrac{1}{2}\,\Gamma(-\nu)\,\bigl{(}\tfrac{1}{2}z\bigr{)}^{\nu}+\frac{\Gamma(\nu+1)}{2\nu}\,\bigl{(}\tfrac{1}{2}z\bigr{)}^{-\nu}+O(z^{\nu+2}).\] (A.6) **Lemma A.3** ([27], 10.41.2).: _If \(z\neq 0\) is fixed and \(\nu\to+\infty\), then_ \[K_{\nu}(z)\sim\sqrt{\frac{\pi}{2\nu}}\,\Bigl{(}\frac{\mathrm{e}z}{2\nu}\Bigr{)}^{-\nu}.\] (A.7)
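
For readers who want a quick numerical sanity check of the small-argument asymptotics above, the short sketch below (ours, not part of the original text) compares SciPy's modified Bessel function of the second kind against the leading-order formulas (A.2) and (A.5); the particular values of \(z\) and \(\nu\) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import gamma, kv

z = 1e-3

# (A.2): for fixed nu > 0 and z -> 0+, K_nu(z) ~ (1/2) * Gamma(nu) * (z/2)^(-nu)
nu = 0.5
print(kv(nu, z), 0.5 * gamma(nu) * (z / 2.0) ** (-nu))   # both approx. 39.6

# (A.5): K_0(z) ~ -log z; the difference tends to log 2 - gamma, cf. (A.4)
print(kv(0.0, z), -np.log(z))                            # approx. 7.02 vs 6.91
```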
2302.07123
On the Arrow of Time and Organized Complexity in the Universe
There is a widespread assumption that the universe in general, and life in particular, is getting more complex with time. This paper formulates this assumption as a macroscopic law of increasing complexity and presents a hypothesis that this macroscopic law emerges in the universe. For this formulation, we represent any object of complexity as the source of the observed value of the object, which is a probability distribution and treated in a unified manner for various objects. To define the degree of complexity, we utilize a quantitative definition of the complexity of organized matters, organized complexity [15]. We then apply this hypothesis to the fine-tuning problem of the universe about the fundamental physical constants, which appear to be fine-tuned for life. This hypothesis explains the problem such that the fundamental physical constants are fine-tuned for the emergence of this macroscopic law, which would be more plausible and clearly defined than for life. An (approximate) reduction of this macroscopic law to fundamental physical laws would certify this law and concretely evaluate the conditions of the fundamental physical constants so that this law emerges.
Tatsuaki Okamoto
2023-02-14T15:31:56Z
http://arxiv.org/abs/2302.07123v3
# On the Arrow of Time and Organized Complexity in the Universe ###### Abstract This paper presents a new hypothesis on a macro law in the universe, the _law of increasing complexity_, to formulate the assumption that the universe we observe and the biosphere on Earth are getting more diverse and complex with time. This formulation utilizes a quantitative definition of the complexity of organized matters, _organized complexity_ (OC) [6]. We then apply this law to the _coincidence_ (or fine-tuning) problem about the fundamental physical constants. We introduce a new principle, the _principle of increasing complexity_, on the law of increasing complexity and explain the coincidence with this new principle without using the anthropic principle. The principle implies that an (approximate) reduction of this macro law to fundamental physical laws would lead to a concrete analysis of the coincidence problem of fundamental physical constants. ## I Introduction ### Arrow of Time We often see macro phenomena in the assemblies of many elements which do not occur on several elements [1; 2]. For example, the arrow of time (or asymmetry in time), such as the law of increasing entropy, appears for the assemblies of many atoms and molecules (in a macro world), but the arrow of time does not appear (or symmetric in time) for several atoms and molecules (in a micro world). We call such macro phenomena _emergence_ in this paper. The law of increasing entropy for an isolated system is an _emergent macro law_. It is deemed one of the physical laws governing the universe, as it is difficult to reduce to fundamental physical laws strictly. Here note that Boltzmann's H-theorem is not a strict reduction to fundamental laws but an approximate reduction to Newtonian mechanics under some assumptions and for limited cases [3]. A dissipative structure, which is characterized by the appearance of order or self-organization in a non-equilibrium thermodynamic open system, an assembly of many atoms and molecules (in a macro world), introduced by Ilya Prigogine is also a typical example of emergence, where such a structure does not appear for several atoms and molecules (in a micro world) [4]. Accordingly, using only fundamental physical laws to explain cosmic phenomena is insufficient, but we also need emergent macro laws, such as the law of increasing entropy. As human beings, we tend to differentiate between artificial objects and things existing in the natural world. This way of thinking no doubt originates in the idea that humankind is something special, different from nature. However, humans and other life forms on Earth are all born of nature within this universe, and humans and artificial objects are simply a part of nature. In other words, all cosmic phenomena are a product of physical laws rooted in the universe. The present form of the universe and Earth is a consequence of these laws, including emergent macro laws. 
If we examine the transitions that have taken place in the form of the universe from its birth to the present, we will see that a profound flow exists from the state immediately after the Big Bang, when not even elementary particles existed, to the formation of elementary particles and light atoms, the formation and evolution of stars, galaxies, and clusters of galaxies, the formation of heavy atoms by supernova explosions, the formation of diverse molecules, the beginning of life on Earth, the emergence of diverse life forms and development of the biosphere, and the beginning and evolution of the human race and the noosphere (civilization and society). _In this universe, the extent of diverse and complex things is increasing over time._ Observing such temporal transitions in the universe, it would be natural to think that there may be a macro law behind these transitions in the universe, that is, a law in the manner of "in this universe, the extent of diverse and complex things is increasing over time." There are undoubtedly many people who have already come to this idea [5]. However, this idea has yet to bear fruit as a bona fide law. We then have a question. _Can we formulate the assumption, "in this universe, the extent of diverse and complex things is increasing over time," as a macro law?_ ### Coincidence and Anthropic Principle The fundamental physical laws include a variety of physical constants, such as the gravitational constant, speed of light, Planck's constant, and electron charge. However, there is yet to be an answer to how these physical constants came to take on their currently measured values. The only thing that can be said is that these values were selected by chance. If this were the case, it could not be denied that another universe different from ours may have physical constants with entirely different values. Many physicists think that numerous universes apart from ours would have different physical constants. A strange thing here is that the physical constants of our universe have been set to values conducive to the creation of life and the human race in such an intricately balanced way that they could hardly be a product of chance. Life and the human race could not have existed if they were anything different. For example, there is a constant that determines the magnitude of the nuclear force called the strong force that binds elementary particles in the atomic nucleus. A constant also determines the magnitude of the electromagnetic force that binds electrons to the atomic nucleus and gives repulsion between protons. If the values of these two constants were to deviate even slightly and upset the balance between them (e.g., the electromagnetic force were to be slightly stronger), atoms but hydrogen could not exist, which would mean no carbon, and by extension, no life or human beings on Earth. To answer the question as to why the laws and physical constants of our universe have been set to values suitable for the creation of life and the human race, the _anthropic principle_ was introduced. It is that this universe is inherently oriented to producing human life (or intelligent organisms), or the fundamental physical laws of this universe are set up so that human life (or intelligent organisms) appears. Though there is a possibility that diverse universes exist apart from ours, a universe from among those that just happened to have physical constants appropriate for human life would have to be our universe. There are several variations of the anthropic principle. 
For example, a universe having no agent like the human race capable of being conscious of that universe is equivalent to a non-existing universe, or a universe that "exists" is compelled to produce intelligent organisms that are conscious of that universe. There are criticisms of the anthropic principle and its variations. Why focus on human beings (or intelligent organisms)? Couldn't other living things (such as amoebas or octopi) be considered instead of human beings? How should extremely vague concepts like intelligent life and consciousness be defined? Are human beings (or intelligent organisms) the culmination of evolution in the universe? Do they occupy a special place in the universe? We then have a question. _Can we explain the coincidence without using the anthropic principle?_ ### Contribution In this paper, we answer these questions. We first present a new hypothesis on a macro law in the universe, the _law of increasing complexity_, to answer the question given in Section I.1. In this formulation, we utilize a quantitative definition of the complexity of organized matters, _organized complexity_ (OC) [6] (see Appendix A) to formally capture the "extent of diverse and complex things." We then apply this law to the _coincidence_ (or fine-tuning) problem about the fundamental physical constants. We introduce a new principle, the _principle of increasing complexity_, on the law of increasing complexity: _the fundamental physical laws of this universe are set up so that the law of increasing complexity emerges._ We explain the coincidence with this new principle without using the anthropic principle as an answer to the question given in Section I.2. Then, an (approximate) reduction of the law of increasing complexity to fundamental physical laws would clarify the conditions for fundamental physical constants so that this law emerges in the universe, which would lead to a concrete analysis of the coincidence problem of fundamental physical constants. ### Notations \(X:=Y\) denotes that \(X\) is defined by \(Y\). The sets of natural, and real numbers are denoted by \(\mathbb{N}\), and \(\mathbb{R}\), respectively. The set of \(n\)-bit strings is denoted by \(\{0,1\}^{n}\) (\(n\in\mathbb{N}\)), \(\{0,1\}^{*}:=\cup_{n\in\mathbb{N}}\{0,1\}^{n}\). When \(x\in\{0,1\}^{*}\), \(|x|\) denotes the bit length of \(x\). When \(a,b\in\mathbb{R}\), \([a,b]\) denotes set \(\{x\mid x\in\mathbb{R}\), \(a\leq x\leq b\}\subset\mathbb{R}\). A probability distribution over \(\{0,1\}^{n}\) is \(\{(a,p_{a})\mid a\in\{0,1\}^{n},p_{a}\in[0,1],\sum_{a\in\{0,1\}^{n}}p_{a}=1\}\). When \(A\) is a probability distribution, \(a\overset{\mathbb{R}}{\leftarrow}A\) denotes that element \(a\in\{0,1\}^{n}\) is randomly selected from \(A\) ## II New Hypothesis ### Law of Increasing Complexity This section answers the question given in Section I.1. We introduce the _law of increasing complexity_ as a new macro (emergent) law in the universe, which formulates the assumption that the universe we observe and the biosphere on Earth are getting more diverse and complex with time. This law uses the quantitative definition of organized matters, _organized complexity_ (OC) [6]. Roughly, the organized complexity (OC) for an object (probability distribution) is given by the shortest size of a stochastic automaton form of circuit, oc-circuit, for simulating the object. See Appendix A for an informal explanation of OC. 
This definition solves the issues of previous definitions and satisfies most requirements for the quantitative definition of the complexity of organized matters [6]. Let \(\mathcal{O}\) be an observing system. For example, we have observed the biosphere on Earth by using a huge number of observing devices at various places, which constitute an observing system, \(\mathcal{O}\). The output of \(\mathcal{O}\) is that of the whole devices, which is an enormous amount of observation data totally expressed by a finite bit string, \(\{0,1\}^{n}\) for some \(n\in\mathbb{N}\). Here note that, without loss of general ity, observed data can be expressed in binary form since any physically observed data are bounded and have only finite precision (with rounding error). Let \(\mathcal{O}(A)\in\{0,1\}^{n}\) (for some \(n\in\mathbb{N}\)) be the output (observation data) of \(\mathcal{O}\) observing object \(A\) (e.g., the biosphere on Earth at a time), and \(\tilde{\mathcal{O}}(A)\) be the source of \(\mathcal{O}(A)\), which is a probability distribution over \(\{0,1\}^{n}\) and \(\mathcal{O}(A)\stackrel{{\text{\tiny{\sf R}}}}{{=}}\tilde{ \mathcal{O}}(A)\). (See [6] for reasoning that the source of observation data is a probability distribution. It is the same as the information source in Shannon's information theory [8; 9].) Let \(\mathsf{OC}(X,\delta)\) be the organized complexity (OC) of a distribution \(X\) over \(\{0,1\}^{n}\) (for some \(n\in\mathbb{N}\) ), at precision level \(\delta\) (\(0\leq\delta<1\)) [6] (Appendix A). Let \[C_{\mathcal{O},\delta}(A):=\mathsf{OC}(\tilde{\mathcal{O}}(A),\delta).\] **Definition 1**: _We now introduce an order relation between two observing systems \(\mathcal{O}_{0}\) and \(\mathcal{O}_{1}\), where any object \(A\) observable by \(\mathcal{O}_{1}\) can be observed by \(\mathcal{O}_{0}\). We denote \(\mathcal{O}_{0}\geq\mathcal{O}_{1}\) iff for any object \(A\) observable by \(\mathcal{O}_{1}\) and any precision \(\delta(>0)\), \(C_{\mathcal{O}_{0},\delta}(A)\geq C_{\mathcal{O}_{1},\delta}(A)\)._ **Hypothesis (Law of Increasing Complexity)** _Let \(\mathcal{S}\) be an open system as an assembly of many elements where both energy and matter enter or leave. The state of \(\mathcal{S}\) temporally transitions and let \(\mathcal{S}_{t}\) be \(\mathcal{S}\) at time \(t\)._ _There exists an open system \(\mathcal{S}\) in the universe, along with a time interval \(I_{\mathcal{S}}:=[T_{0},T_{1}]\) (\(T_{0},T_{1}\in\mathbb{R}\), \(0<T_{0}<T_{1}\)), an observing system \(\mathcal{O}_{\mathcal{S}}\), and a precision level \(\delta_{\mathcal{S}}\) such that for any observing system \(\mathcal{O}\geq\mathcal{O}_{\mathcal{S}}\), any precision level \(\delta\leq\delta_{\mathcal{S}}\), and any time \(T\in I_{\mathcal{S}}\), there exists \(s>0\) (\(s\in\mathbb{R}\)),_ \[C_{\mathcal{O},\delta}(\mathcal{S}_{T+s})>C_{\mathcal{O},\delta}(\mathcal{S}_{ T}).\] We assume that this law emerges for the following open systems: * a subspace of the universe, for example, the subspace within the spherical surface of the universe around \(13.8\) billion years ago that is now observed from Earth (and now has expanded to around \(47\) billion light-years away from Earth), * the biosphere on Earth, which is the assembly of living things as well as the environment on Earth, where for open system \(\mathcal{S}\), the interval, \(I_{\mathcal{S}}\), is from the beginning of \(\mathcal{S}\) to some future. 
The interval for the biosphere on Earth should be up to some time before its extinction caused by the increasing solar activity. It is still unknown how long the law of increasing complexity will continue to emerge for a subspace of the universe since we have no established theory for the distant future of the universe. For the law of increasing complexity, we assume that an observing system \(\mathcal{O}\) can observe an open system \(\mathcal{S}\) at any time in the interval \(I_{\mathcal{S}}\). For the biosphere on Earth, we infer the observation at a past time from fossil digs, gene analysis, historical remains, and so on with some precision \(\delta\). For a subspace of the universe, we infer a past observation from various astronomical observations with different directions and distances (times) and the assumption that the form of any finite region anywhere in the universe should be about the same as that extracted from the results of observations made to date. ### Physical Laws and Computation Theory In physics, mathematics is the language used to describe physical laws and theories. Various (new) forms of mathematics have been used to provide a suitable language for describing physical laws. Issae Newton developed a new mathematical technique, calculus, to describe his theory of dynamics. Albert Einstein described his theory of general relativity using Riemannian geometry, a new mathematical theory that had only recently come into existence. It may seem odd that the law of increasing complexity introduced here uses a computation concept, _oc-circuit_, despite being a law targeting physical phenomena in the universe. However, computation theory is a purely mathematical theory founded in the 1930s by mathematicians such as Alan Turing and Kurt Godel [7]. In the same way that various forms of mathematics were used to describe physical laws, computation theory is a form appropriate for describing the law of increasing complexity. ## III Beyond the Anthropic Principle ### Principle of Increasing Complexity We will answer the question given in Section I.2. First, we will introduce a new principle, the _principle of increasing complexity_, using the law of increasing complexity given in the previous section. We consider the law of increasing complexity as a macro law governing _emergent_ phenomena that, like the law of increasing entropy, is _difficult to reduce to fundamental physical laws strictly_. However, it is most certainly true that macro laws emerge from fundamental physical laws, that is, that they are based on fundamental physical laws. Accordingly, if fundamental physical laws were to change, any macro law emergent from them would have to change in some way. If the physical constants of fundamental physical laws were to change, the law of increasing complexity might not emerge. With this in mind, the _principle of increasing complexity_ can be naturally derived as follows. **(Principle of Increasing Complexity)** _The fundamental physical laws of this universe are set up so that the law of increasing complexity emerges._ This principle of increasing complexity prescribes the relationship between micro fundamental physical laws and the macro law of increasing complexity via the concept of emergence. In contrast to the anthropic principle described using the difficult-to-define concept of human beings or intelligent organisms, the principle of increasing complexity is described with clearly defined notions. 
### Explaining the Coincidence with the New Principle The anthropic principle (the fundamental physical laws of this universe are set up so that human life or intelligent organisms appears) was introduced to explain the coincidence of some physical phenomena and laws of this universe (e.g., the coincidence of physical constants). An example of reasoning regarding physical constants often took the following forms. If the values of the physical constants were other than what they are now, no atoms but hydrogen would be able to form, and no human life would appear. Hence, the values of the physical constants should be what they are now under the anthropic principle. Alternatively, if the values of the physical constants were other than what they are now, no stars would be born, which means no heavy atoms or various kinds of molecules and, consequently, no human life. In most forms of such reasoning, declarations of the form "no human life would appear" could be replaced by "no law of increasing complexity would emerge" without losing validity. For example, if the values of the physical constants were other than what they are now, no atoms but hydrogen would be able to form, no structure nor order would be born, and, as a consequence, no law of increasing complexity would emerge. Therefore, the values of the physical constants should be what they are now under the _principle of increasing complexity_. Note that there may exist some gap between this new principle and the anthropic principle since the principle of increasing complexity does not always imply the appearance of intelligent organisms like human beings. In this sense, the principle of increasing complexity is more general than the anthropic principle. One of the most crucial problems about this emergent macro law, the law of increasing complexity, is to (approximately) reduce this macro law to fundamental physical laws. For the law of increasing entropy, Boltzmann's H-theorem is an approximate reduction of this law to Newtonian mechanics. An approximate reduction for the law of increasing complexity should be more complicated than H-theorem since organized matters are more complicated and diverse than random matters. The principle of increasing complexity is about the relationship between the law of increasing complexity and fundamental physical laws (and constants). An (approximate) reduction for the law of increasing complexity would clarify this relationship or the conditions for fundamental physical constants so that this macro law emerges in the universe and would lead to a concrete analysis of the coincidence problem of fundamental physical constants. Dissipative structures and self-organizations have been extensively studied to investigate some order and organization that appeared in a natural environment. These studies should help find an (approximate) reduction for the law of increasing complexity. ## IV Epilogue There is an anecdote about Max Plank, a celebrated German physicist, who, on expressing his desire to study physics at university, was discouraged from doing so by the people around him since they considered physics at that time (in the latter half of the 19th century) to be a well-established academic discipline in which no more major discoveries could be expected. Most physicists of this period believed that all physical phenomena could be explained in terms of Newtonian dynamics and Maxwell's theory of electromagnetism and considered that physics had become a _complete academic field_. 
This situation changed at the end of the 19th century when the results of several experiments and observations that were not initially thought to be of any consequence began to spread dark clouds over the entire field of physics. On April 27, 1900, the British physicist William Thomson, Lord Kelvin, gave a lecture at Britain's Royal Society titled Nineteenth-Century Clouds over the Dynamical Theory of Heat and Light. In this lecture, he described how the Michelson-Morley experiment of 1887 involving the speed of light refuted the existence of the virtual substance known as the ether, which was thought to be the medium for light waves, and how the wavelength distribution of black body radiation could not be explained by statistical-mechanical techniques based on Newtonian dynamics. It was these two phenomena that Lord Kelvin likened to dark clouds hanging over 19th-century physics. It was not long after this lecture that 20th-century physics got off to a resounding start in dissipating these clouds through such revolutionary theories as Plank's quantum theory and Einstein's theory of photoelectric effect and special theory of relativity. What was it that brought about this dramatic transition to 20th-century physics from the so-called _complete field_ of physics covered by ominous dark clouds in the latter half of the 19th century? It was none other than the expansion of physical phenomena as targets of study made possible by advances in experimental and observational technologies. Physics is an attempt to explain observed phenomena by physical laws as concise as possible, which means that physical laws can change depending on what is being observed. The physical phenomena that had been targeted by physicists up to the middle of the 19th century were within the range of what could be seen in daily life or observed in space by the astronomical telescopes of the time. Newtonian dynamics and the theory of electromagnetism were sufficient to explain that range of phenomena, and in this sense, physics had become a _complete academic field_. Targets of observation in physics up to the present have been expanding in both an increasingly micro direction and an increasingly macro direction. The results of observations in the micro direction have promoted advances in the theory of elementary particles in relation to the minute structure of matter, and the results of observations in the macro direction have borne fruit in the field of cosmology in relation to gigantic structures in the universe. Of interest here is that this expansion of target physical phenomena in both micro and macro directions enables us to peer into the universe's past. It has dramatically increased our knowledge of the universe's history since its birth. In fact, observing the far reaches of the universe is equivalent to observing the universe's past, and investigating the structure of elementary particles means learning about how elementary particles and matter in space came into existence. In 1964, Arno Penzias and Robert Wilson of Bell Telephone Laboratories constructed a microwave antenna for radio astronomy and satellite communications. They began observations with the antenna turned toward space, and discovered that radio waves corresponding to black body radiation at an absolute temperature of about 3 K arrived from space in all directions. This provided strong evidence that the universe began with the Big Bang, and for this achievement, Penzias and Wilson received the 1978 Nobel Prize for Physics. 
Up to that time, mainstream thinking was centered on steady-state cosmology, which holds that the universe is not so changing over time. Big Bang cosmology was thought to be more of a fantasy than anything else, but this discovery of cosmic background radiation led to its widespread recognition. Since then, precise observations made by COBE, WMAP, and other observational satellites, as well as space telescopes and terrestrial astronomical telescopes, have been steadily clarifying the history of the universe since its birth. Research into the appearance of life forms on Earth and their subsequent evolution began with the publication of "On the Origin of Species" by Charles Darwin in 1859. Details on the appearance of organisms on Earth and the history of their evolution have been uncovered through research based on geochronology and fossils and, more recently, on gene analysis. In particular, research based on gene analysis, which has become a major field of study, has been making much progress. The results of analysis based on geological ages and fossils have been found to be highly consistent with the results of gene analysis, thereby validating established theories on the evolution of life forms on Earth. As we described earlier, physical laws change according to what is being targeted for observation. Just as quantum mechanics could not have been formulated from observation targets used in the former half of the 19th century, attempts to find laws using the history of transitions in this universe as a target of observation could not have been made 60 years ago when steady-state cosmology was the mainstream belief and no DNA-based research of the evolution of organisms was being conducted. Now that this is possible, this paper aims to make such an attempt. ## V Concluding remark This paper introduced a new macro law in the universe, the _law of increasing complexity_, to formulate an assumption that the universe we observe and the biosphere on Earth are getting more diverse and complex with time. We then applied this law to the _coincidence_ (or fine-tuning) problem about the fundamental physical constants, where we introduced a new principle, the _principle of increasing complexity_. We explained the coincidence with this principle without using the anthropic principle. A major open problem is to (approximately) reduce this macro law to fundamental physical laws. Such an (approximate) reduction would clarify the conditions for fundamental physical constants so that this macro law emerges in the universe, leading to a concrete analysis of the coincidence problem of fundamental physical constants.
2310.18212
Robustness of Algorithms for Causal Structure Learning to Hyperparameter Choice
Hyperparameters play a critical role in machine learning. Hyperparameter tuning can make the difference between state-of-the-art and poor prediction performance for any algorithm, but it is particularly challenging for structure learning due to its unsupervised nature. As a result, hyperparameter tuning is often neglected in favour of using the default values provided by a particular implementation of an algorithm. While there have been numerous studies on performance evaluation of causal discovery algorithms, how hyperparameters affect individual algorithms, as well as the choice of the best algorithm for a specific problem, has not been studied in depth before. This work addresses this gap by investigating the influence of hyperparameters on causal structure learning tasks. Specifically, we perform an empirical evaluation of hyperparameter selection for some seminal learning algorithms on datasets of varying levels of complexity. We find that, while the choice of algorithm remains crucial to obtaining state-of-the-art performance, hyperparameter selection in ensemble settings strongly influences the choice of algorithm, in that a poor choice of hyperparameters can lead to analysts using algorithms which do not give state-of-the-art performance for their data.
Damian Machlanski, Spyridon Samothrakis, Paul Clarke
2023-10-27T15:34:08Z
http://arxiv.org/abs/2310.18212v2
# Robustness of Algorithms for Causal Structure Learning to Hyperparameter Choice ###### Abstract Hyperparameters play a critical role in machine learning. Hyperparameter tuning can make the difference between state-of-the-art and poor prediction performance for any algorithm, but it is particularly challenging for structure learning due to its unsupervised nature. As a result, hyperparameter tuning is often neglected in favour of using the default values provided by a particular implementation of an algorithm. While there have been numerous studies on performance evaluation of causal discovery algorithms, how hyperparameters affect individual algorithms, as well as the choice of the best algorithm for a specific problem, has not been studied in depth before. This work addresses this gap by investigating the influence of hyperparameters on causal structure learning tasks. Specifically, we perform an empirical evaluation of hyperparameter selection for some seminal learning algorithms on datasets of varying levels of complexity. We find that, while the choice of algorithm remains crucial to obtaining state-of-the-art performance, hyperparameter selection in ensemble settings strongly influences the choice of algorithm, in that a poor choice of hyperparameters can lead to analysts using algorithms which do not give state-of-the-art performance for their data. **Keywords:** Hyperparameters, model selection, causal discovery, structure learning, performance evaluation, misspecification, robustness. ## 1 Introduction Uncovering causal graphs is an immensely useful tool in data-driven decision-making as it helps understand the underlying data generating process. A large number of causal structure learning algorithms incorporate Machine Learning (ML) methods. These, in turn, heavily rely on hyperparameters for accurate predictions (Bergstra et al., 2011). In addition, there has been growing evidence that correctly specified hyperparameters can close the performance gap between State-of-the-Art (SotA) and other methods (Paine et al., 2020; Zhang et al., 2021; Machlanski et al., 2023; Tonshoff et al., 2023). _Are hyperparameters as important in structure recovery?_ Hyperparameter optimisation is extremely challenging in structure learning as the true graphs are inaccessible outside of simulated environments. This inability to reliably tune could be one of the reasons behind the struggle to apply some of the algorithms to real data problems (Kaiser and Sipos, 2021), or why hyperparameters are often completely neglected in this area. On the one hand, benchmarks and evaluation frameworks (e.g. Raghu et al. (2018); Tu et al. (2019)) usually focus on finding a learning algorithm that works best under specific circumstances but without considering hyperparameters as part of the problem. On the other hand, studies that address hyperparameter tuning (e.g. Strobl (2021); Biza et al. (2022)) consider individual algorithms but not the impact of tuning (or the lack of it) on selecting the best algorithm for the available data. Understanding how hyperparameters affect algorithm choice, as well as individual methods, is clearly missing but can be a crucial next step towards more stable causal discovery in real data applications. To make matters worse, the evaluation metrics used for tuning can be imperfect and sometimes favour specific learning methods (Curth and van der Schaar, 2023). 
This brings us to the core questions of this paper: _Do different algorithms perform similarly given access to a hyperparameter oracle? How robust are they against misspecified hyperparameters?_ In this work, we set out to address these questions and investigate the impact hyperparameters have on graph recovery performance of individual algorithms, as well as on the best algorithm choice (see Figure 1). We start by showing how a single hyperparameter plays a crucial role in the simplest graph problem (two variables). More extensive experiments strengthen this observation and confirm it as a more general phenomenon. The experimental setup involves many seminal structure learning algorithms tested against real and simulated datasets. **Contributions.** This paper offers the following contributions: * Compare algorithms' performances and their winning percentages across hyperparameters. * Compare algorithms' performances under well-specified and misspecified hyperparameters. * Compare algorithms' winning percentages under well-specified and misspecified hyperparameters. **Related work.** This work connects with the existing literature mainly through the topics of performance evaluation and hyperparameter analysis. The performance of structure learning algorithms has been evaluated from a number of different perspectives, such as mixed data types (Raghu et al., 2018) or time series data (Assaad et al., 2022). In an attempt to strengthen the evaluation, there have been efforts to develop testing environments that closely resemble real-life datasets. Some examples include simulators based on gene regulatory networks (Van den Bulcke et al., 2006) or neuropathic pain pathology (Tu et al., 2019). Grunbaum et al. (2023) take evaluation further by proposing to test algorithms on the parts of real-life datasets that are known a priori. Furthermore, to improve reproducibility, Rios et al. (2021) developed a benchmarking platform that covers a wide range of learning methods and data scenarios. The importance of hyperparameters and their impact on performance have been mostly studied in other areas outside of structure learning. In the offline Reinforcement Learning (RL) setting, Paine et al. (2020) reported, among other aspects, that robustness to hyperparameter choices is an important issue and that careful tuning can deliver close to optimal policies. Zhang et al. (2021), on the other hand, make a case for hyperparameter tuning in model-based RL. Furthermore, according to Machlanski et al. (2023) and Tonshoff et al. (2023), hyperparameters alone can be responsible for reaching, or even surpassing, SotA performance levels. Hyperparameters in structure learning have mostly been discussed in the context of tuning. One common approach is to select hyperparameters that result in stable structure predictions across random data samples (Liu et al., 2010; Sun et al., 2013; Strobl, 2021). Another strand of work performs out-of-sample validation for tuning purposes based on predictive accuracy of models fitted in accordance with the recovered graph structure (Biza et al., 2022), or assigned scores developed specifically for structure recovery tuning (Chobtham and Constantinou, 2023). Metrics based on regression error have been also considered (Marx and Vreeken, 2019), though in the context of two variables. **Structure.** In Section 2 we briefly discuss the basics of structure learning. 
Section 3 demonstrates the importance of hyperparameters in structure learning via a bivariate example, further motivating more extensive numerical experiments presented in Section 4. Section 5 concludes the paper and offers potential future work directions. ## 2 Structure Learning We briefly describe the notation and data assumptions used throughout the paper as well as important details of structure learning methods necessary to read the technical parts of the document. For a more detailed review, see recent literature (e.g. Eberhardt (2017); Glymour et al. (2019)). ### Graphs Let \(\mathcal{G}=(V,\mathcal{E})\) be a graph with nodes/vertices \(V=\{1,\ldots,p\}\) and edges \(\mathcal{E}\subseteq V^{2}\). Edges are pairs of nodes \((j,k)\in\mathbf{V}\) where \((v,v)\notin\mathcal{E}\) to exclude self-cycles. Nodes \(j,k\) are **adjacent** in \(\mathcal{G}\) if either \((j,k)\in\mathcal{E}\) or \((k,j)\in\mathcal{E}\). An edge is **undirected** if \((j,k)\in\mathcal{E}\) and \((k,j)\in\mathcal{E}\), whereas it is **directed** if only one pair appears in \(\mathcal{E}\); if this pair is \((j,k)\) then \(j\) is called a **child of parent \(k\)**. The set of parents of \(j\) in \(\mathcal{G}\) is denoted by \(\mathbf{PA}_{j}^{\mathcal{G}}\). We call \(\mathcal{G}\) undirected if all its edges are undirected; conversely, \(\mathcal{G}\) consisting only of directed edges is directed. A **mixed** graph consists of both directed and undirected edges. The **skeleton** of any directed or mixed graph \(\mathcal{G}\) is an equivalent graph with all directed edges replaced by undirected ones. A **fully connected** graph \(\mathcal{G}\) is one where all pairs of nodes are adjacent. A (directed) **path** is a sequence of nodes connected by (directed) edges. A **partially directed acyclic graph** (PDAG) is a mixed graph such that there is no pair \((j,k)\) such that there are directed paths from \(j\) to \(k\) and vice versa. Then, \(\mathcal{G}\) is a **directed acyclic graph** (DAG) if it is a PDAG and is directed. Two graphs are _Markov equivalent_, or belong to the same _equivalence class_, when they involve the same sets of _d-separations_(Pearl, 2000). A **completed PDAG** (CPDAG) can encode such a class of graphs, in which undirected edges mean that the graphs within the class may contain a directed edge in either direction; directed edges denote agreement in edge direction in subsumed graphs. ### Assumptions Now consider a vector of random variables \(\mathbf{X}=(X_{1},\ldots,X_{p})\) generated according to an unknown data generating process (DGP) leading to joint distribution \(\mathcal{L}(\mathbf{X})\). The node \(j\in\mathbf{V}\) represents random variable \(X_{j}\) and the edge between nodes \(j\) and \(k\) in \(\mathcal{E}\) is directed if and only if \(X_{k}\) is used in the DGP to generate \(X_{j}\). We further assume the following. **Assumption 1** (Sufficiency).: There are no hidden confounders. **Assumption 2** (Markov condition).: Two variables are independent in \(\mathcal{L}(\mathbf{X})\) if they are _d-separated_ in \(\mathcal{G}\). **Assumption 3** (Faithfulness).: Two variables are _d-separated_ in \(\mathcal{G}\) if they are independent in \(\mathcal{L}(\mathbf{X})\). Figure 1: Summary of the main idea of the paper. We explore various structure learning algorithms and investigate how hyperparameters affect their performance. Notation: \(h_{1}\) and \(h_{2}\) are different hyperparameter values; \(X\) denotes i.i.d. 
data provided to algorithms; \(\hat{A}\) is the recovered adjacency matrix, while \(\mathcal{G}(\hat{A})\) is a causal graph based on \(\hat{A}\); SHD is structural Hamming distance (lower is better). Note how recovered graphs differ between different hyperparameters of the same algorithm (green edges are correct; red incorrect). _sim_mean_ are hyperparameters that achieved the best **average** performance across all simulations. ### 2.3 Learning Methods The goal of causal structure learning is to infer (or identify) graph \(\mathcal{G}\) given the distribution \(\mathcal{L}(\mathbf{X})\). If it is possible to do this, we say \(\mathcal{G}\) is **identifiable** from \(\mathcal{L}\). Traditional methods, such as PC (Spirtes and Glymour, 1991) or GES (Chickering, 2002), were often built around Assumptions 1-3 above, which are in most cases not enough to identify a unique DAG solution, only the class of CPDAGs. If, however, one assumes the DGP is a Structural Causal Model (SCM), then identification is possible, for example, if the DAG is a linear SCM with additive non-Gaussian noise: \(X_{j}=f_{j}(X_{\mathbf{PA}_{j}^{\mathcal{G}}})+\epsilon_{j}\) (Shimizu et al., 2006). Subsequent research has extended this result to nonlinear SCMs with additive noise (Hoyer et al., 2008), linear models with Gaussian noise terms of equal variances (Peters and Buhlmann, 2014), and additive models of the form \(X_{j}=\sum_{k\in\mathbf{PA}_{j}^{\mathcal{G}}}f(X_{k})+\epsilon_{j}\) (Buhlmann et al., 2014). More recent approaches also involve neural network-based algorithms that specifically restate the learning task as a continuous optimisation problem (Zheng et al., 2018, 2020). ## 3 Hyperparameters in Structure Learning ### 3.1 Bivariate Example An illustrative example, strongly inspired by Marx and Vreeken (2019), is the classic _cause-effect pairs_ challenge (Guyon et al., 2019), which consists of two (synthetically generated here) variables \(X\) and \(Y\), with the goal of establishing the existence and direction of the causal link between them (\(X\to Y\), \(X\gets Y\), no link) given only observed data. One possible solution is to fit two regressors \(y=f(x)\) and \(x=g(y)\), and predict the causal direction based on the lower prediction error of the two models (\(\epsilon_{f}=[y-\hat{f}(x)]^{2}\) and \(\epsilon_{g}=[x-\hat{g}(y)]^{2}\); no link if the errors are comparable). As shown in Figure 2, changing the hyperparameter that controls the number of regression parameters can result in a different causal direction being predicted. This is precisely what constitutes the problem, since the true DGP and the correct hyperparameter value are unknown. **Observation 1**.: Incorrect hyperparameters can cause prediction mistakes. **Observation 2**.: There might be more than one correct and incorrect hyperparameter choice. The problem grows in complexity as the number of graph nodes and edges increases. This is because each edge corresponds to a potentially different function to approximate, which may require a different hyperparameter value (function complexity) to obtain the correct answer. In addition, many algorithms provide multiple hyperparameters to tune, making even more room for further mistakes and effectively increasing the chance of hyperparameter misspecification. In fact, even the bivariate example can involve more hyperparameters by, for instance, introducing a threshold such that the algorithm predicts 'no link' if \(|\epsilon_{f}-\epsilon_{g}|<threshold\).
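
To make the bivariate example concrete, the sketch below (ours, using scikit-learn) implements the two-regression heuristic with polynomial regressors and sweeps the polynomial degree, i.e., the hyperparameter highlighted in Figure 2, together with a 'no link' threshold; the data-generating process, the candidate degrees, and the threshold value are arbitrary illustrative choices rather than settings from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)    # toy DGP with true direction X -> Y

def regression_error(inputs, targets, degree):
    """Mean squared error of a degree-d polynomial regression of targets on inputs."""
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(inputs.reshape(-1, 1), targets)
    return np.mean((targets - model.predict(inputs.reshape(-1, 1))) ** 2)

tau = 0.05                                    # 'no link' threshold (a second hyperparameter)
for degree in (1, 3, 10):                     # the regression-complexity hyperparameter
    eps_f = regression_error(x, y, degree)    # y = f(x)
    eps_g = regression_error(y, x, degree)    # x = g(y)
    if abs(eps_f - eps_g) < tau:
        decision = "no link"
    else:
        decision = "X -> Y" if eps_f < eps_g else "X <- Y"
    print(f"degree={degree}: eps_f={eps_f:.3f}, eps_g={eps_g:.3f}, decision={decision}")
```

Both error terms depend on the degree, so the reported decision can change as the hyperparameters vary, which is exactly the misspecification problem discussed above.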
To this end, we make the following claim and set up the following key definition: **Claim 1**.: The existence and direction of an edge in the predicted graph strongly depends on algorithm's hyperparameters. **Definition 1** (Hyperparameter Misspecification).: Mistakes in predicted graph structure arising from incorrect hyperparameters. ### General Form of the Problem Let \(\mathbf{X}\in\mathbb{R}^{n\times p}\) represent i.i.d. data of \(n\) observations and \(p\) features. Furthermore, let \(A\in\{0,1\}^{p\times p}\) be a binary **adjacency matrix** of directed graph \(\mathcal{G}(A)\) such that \(a_{jk}=1\) if \((j,k)\in\mathcal{E}\) and \(a_{jk}=0\) otherwise. A **weighted** adjacency matrix \(W\in\mathbb{R}^{p\times p}\) is defined such that \(A(W)_{jk}=1\) if \(w_{jk}\neq 0\) and zero otherwise, which results in a weighted graph \(\mathcal{G}(W)\). Let us also define distance \(d(A,B)\) between two adjacency matrices \(A\) and \(B\) such that \(d(A,B)=0\) if and only if \(\mathcal{G}(A)=\mathcal{G}(B)\) are the same graph. From now on, let us denote \(A\) as the true adjacency matrix and \(\hat{A}\) its estimate obtained by a programme \(P\) from i.i.d. data \(X\) using programme options \(O\) so that \[\hat{A}=P(O,X), \tag{1}\] where \(O\) generally involves the user specifying algorithm \(S\) and hyperparameter values \(H\). Therefore, given \(K\) user-specified candidate programmes \(P=\{P_{1},\ldots,P_{K}\}\) and \[\hat{A}_{k}=P(S_{k},H_{k},X), \tag{2}\] where \(S_{k}\) and \(H_{k}\) are the algorithm and hyperparameter choices associated with program candidate \(k\), then the best program is \[k^{*}=\operatorname*{argmin}_{k\in\{1,\ldots,K\}}d(A,\hat{A}_{k}), \tag{3}\] Note that \(k^{*}\) is generally not identifiable unless \(A\) is known. Furthermore, identification of \(\mathcal{G}\) does not guarantee \(\hat{A}_{k^{*}}=A\) as \(\hat{A}_{k^{*}}\) depends on algorithms \(S\) and their ability to identify \(A\), as well as the choice of their hyperparameters \(H\). More generally, when considering different algorithms \(S\) and hyperparameters \(H\), Equation 3 is the standard **model selection** problem, whereas if the choice of algorithms is fixed to a specific value, leaving hyperparameters as the only variable, the task reduces to **hyperparameter tuning**. In practical terms, obtaining the distance \(d(A,\hat{A})\) is not feasible, as the true matrix \(A\) and its corresponding graph \(\mathcal{G}(A)\) are inaccessible outside of simulated environments. As algorithms can vary substantially in design, the appropriate way to compare them requires the use of distance measures that incorporate the ground truth \(A\). This renders model selection impractical in structure learning problems. Tuning hyperparameters of a single algorithm might be feasible by comparing its relative scores across explored hyperparameter values. ### Common Hyperparameters Despite differences in algorithms, many of them share similar hyperparameters. Commonly used ones are briefly described here. Figure 2: The number of allowed regression _parameters_, a **hyperparameter**, clearly affects prediction error of the two models and can determine predicted causal direction (see decision colours at the bottom). The algorithm predicts \(X\to Y\) if \(\epsilon_{f}<\epsilon_{g}\), \(X\gets Y\) if \(\epsilon_{f}>\epsilon_{g}\), and is inconclusive otherwise. Note that the true causal direction is unknown in practice. 
**Significance level of independence tests.** Refers to the _p-value_ of independence tests and the desired level of certainty. Decreasing the value (increasing certainty) will usually result in fewer predicted edges. Often named as _alpha_ and incorporated by traditional and pairwise algorithms. **Sparsity.** A penalty term that encourages sparser solutions. Higher values result in fewer predicted edges. Similar in mechanism to _L1 regularisation_ which discards less relevant features. Often employed by regression-based solutions, especially if they perform some form of feature selection. **Model complexity.** A penalty that encourages simpler models to avoid overfitting (_L2 regularisation_). As shown in the example in Section 3.1, its influence on the final prediction is complicated. Usually applies to solutions that model the assumed form of SEMs. **Post-prunning threshold.** Many SEM-based methods output weighted adjacency matrices \(W\) that need to be converted to the binary form of \(A\). This is usually done by applying a threshold below which all edges are set to zero. That is, \(a_{ij}=1\) if \(w_{ij}>w\_thresh\); \(0\) otherwise.. Note that _alpha_ and \(w\_threshold\) are algorithm agnostic and can be transferred between methods, whereas the other two hyperparameters may differ in value between algorithms. ## 4 Experiments Since our analysis in Section 3 is based merely on a simple and artificial example, our next step is to study the influence of hyperparameters more rigorously in a more general setting. We devise a set of experiments consisting of diverse data sets of various size and difficulty (Section 4.1), processed by a representative set of structure learning algorithms (Section 4.2). Different hyperparameter selection scenarios are also detailed (Section 4.4). The experimental framework is implemented through Benchpress (Rios et al., 2021), a benchmarking platform to evaluate structure learning algorithms. Performances of all algorithms are collected from Benchpress, followed by some mild post-processing of the results to suit our analysis of hyperparameters. All numerical experiments can be fully replicated using the code and data that are available online at: [https://github.com/dmachlanski/benchpress-dm](https://github.com/dmachlanski/benchpress-dm). ### Graphs and Data #### 4.1.1 Simulations We follow recent literature in structure learning when it comes to simulating different DGPs. The simulation procedure starts with generating a random DAG \(\mathcal{G}\) with \(p\) nodes and \(d\) edges, built according to a random graph model. The resulting graph is binary, with \(A\in\{0,1\}^{p\times p}\). Next, i.i.d. data \(\mathrm{X}\in\mathbb{R}^{n\times p}\) are sampled from a simulated SEM of choice, with \(n\) being the sample size. Each individual combination of settings is repeated for \(10\) seeds and forms a single experiment. In our experiments, we explore \(p\in\{10,20,50\}\), \(d\in\{1p,4p\}\), and \(n\in\{200,1000\}\). Included random graph models are Erdos-Renyi (ER) (Erdos and Renyi, 1959) and Barabasi-Albert (Barabasi and Albert, 1999), with the latter also known as scale-free (SF). We also explore \(n=10,000\) but only for sparse ER graphs with \(p=50\) nodes due to computational limitations. As for explored SEMs, we include the following: **Linear Non-Gaussian (gumbel)**. \(X=XW^{T}+z\in\mathbb{R}^{p}\), with \(W\in\mathbb{R}^{p\times p}\) as edge weights assigned independently from \(U([-2,-0.5]\cup[0.5,2])\) and based on \(A\). 
Noise \(z\) follows the Gumbel distribution \(z\sim\text{Gumbel}(0,I_{p\times p})\). **Nonlinear Gaussian (gp)**. \(X_{j}=f_{j}(X_{\mathbf{PA}_{j}^{\mathcal{G}}})+z_{j}\) for all \(j\in[p]\) in the topological order of \(\mathcal{G}\). Noise \(z_{j}\) follows Gaussian distribution \(z_{j}\sim\mathcal{N}(0,1)\), \(j=1,\dots,p\). Where functions \(f_{j}\) represent a draw from a Gaussian process with a unit bandwidth RBF kernel. Note that both settings have been shown to be identifiable. That is, linear non-Gaussian additive models (Shimizu et al., 2006) and nonlinear additive models (Hoyer et al., 2008). #### 4.1.2 Real Datasets We also tested structure learning algorithms against real or semi-real datasets. The most popular ones in the literature are _protein signaling_ and _SynTReN_. **Protein signaling** comes from Sachs et al. (2005) which measures protein and phospholipid expression levels in human cells. The ground truth causal graph has been established and accepted by the experts in the field. We use the second dataset that is already logged and standardised and consists of \(n=902\) observations, \(p=11\) nodes and \(d=17\) edges. **SynTReN** is a generator of synthetic transcriptional regulatory networks and related gene expression data that simulate a real experiment (Van den Bulcke et al., 2006)1. We use the same data as in Lachapelle et al. (2019), which consist of 10 random seeds, \(n=500\) samples and \(p=20\) nodes. Footnote 1: [http://bioinformatics.intec.ugent.be/kmarchal/SynTReN/index.html](http://bioinformatics.intec.ugent.be/kmarchal/SynTReN/index.html) ### Structure Learning Algorithms We consider in our setup the following algorithms. Due to high computational demands, we only focus on well-established and seminal algorithms that, in our view, effectively represent different classes of solutions. More details about the hyperparameters involved can be found in Appendix A.1 and A.2. * **PC**(Spirtes & Glymour, 1991). Peter and Clark algorithm. Constraint-based approach that starts with a fully-connected undirected graph and removes edges based on conditional independence tests. Next, it attempts to orient as many of the remaining edges as possible. The result is a CPDAG. **Hyperparameters:**_alpha_ (significance level for conditional independence tests). * **FCI**(Spirtes et al., 1993). Fast Causal Inference. Constraint-based. An important generalisation of PC to unknown confounding variables. **Hyperparameters:**_alpha_ (significance level for conditional independence tests). * **FGES**(Ramsey et al., 2017). Fast Greedy Equivalence Search. Optimised and parallelised version of the original score-based GES algorithm (Chickering, 2002). It starts with an empty graph and adds an edge that yields maximum score improvement until no significant score gain is achieved. Then it removes edges in the same greedy manner until a plateau. **Hyperparameters:**_penaltyDiscount_ (sparsity penalty). * **LiNGAM**(Shimizu et al., 2006). Linear Non-Gaussian Acyclic Model. Assumes linear SEMs and non-Gaussian noise that enters additively: \(X_{j}=\sum_{k\in\mathbf{PA}_{j}^{G}}w_{jk}X_{k}+\epsilon_{j}\). **Hyperparameters:**_max_iter_ (FastICA (Hyvarinen, 1999)), _thresh_ (post-prunning threshold). * **ANM**(Hoyer et al., 2008). Additive Noise Model. Assumes nonlinear SEMs and additive noise: \(X_{j}=f_{j}(X_{\mathbf{PA}_{j}^{G}})+\epsilon_{j}\). **Hyperparameters:**_alpha_ (significance level for the independence test). * **CAM**(Buhlmann et al., 2014). Causal Additive Models. 
Assumes a generalised additive noise model with additive noise and functions: \(X_{j}=\sum_{k\in\mathbf{PA}_{j}^{G}}f(X_{k})+\epsilon_{j}\). **Hyperparameters:**_cutoff_ (variable selection threshold). * **NOTEARS**(Zheng et al., 2018). Score-based continuous DAG optimisation with a smooth acyclicity regularisation term. Assumes linear SEMs with additive noise. **Hyperparameters:**_lambda1_ (sparsity term), _max_iter_ (optimisation steps) and _w_threshold_ (post-prunning threshold). * **NOTEARS MLP**(Zheng et al., 2020). Nonlinear extension of _NOTEARS_ by incorporating the Multi-Layer Perceptron (MLP). Assumes nonlinear SEMs with additive noise. **Hyperparameters:**_lambda1_ (sparsity term), _lambda2_ (regularisation strength), _w_threshold_ (post-prunning threshold), _hidden_units_ (number of units in the hidden layer). Many traditional algorithms, such as PC, FCI and FGES, make the standard set of assumptions that involve sufficiency, faithfulness and Markov condition. These, however, are often not enough to identify a unique DAG as a solution, which is a major drawback of these methods (they output CPDAGs). Making assumptions about distributions and functional forms of the data generating process seems to be critical to overcome this issue (all methods above except for PC, FCI and FGES output DAGs). ### Evaluation In order to compare algorithms' performances, we employ the commonly used _structural Hamming distance_ (SHD) metric, which is provided via Benchpress (Rios et al., 2021, appendix A.1.). For the convenience of the reader, we briefly describe it here as well. Let us define \(E\) and \(E^{\prime}\) as a set of edges of the true and predicted DAG respectively. Then, for \(e\in E^{\prime}\), true positives (TP) and false positives (FP) are assigned as follows: \[TP(e)=\begin{cases}1&\text{if $e\in E$ and correctly oriented}\\ 0.5&\text{if $e\in E$ and incorrectly oriented}\\ 0&\text{otherwise}\end{cases} \tag{4}\] \[FP(e)=\begin{cases}1&\text{if $e\notin E$}\\ 0.5&\text{if $e\in E$ and incorrectly oriented}\\ 0&\text{otherwise}\end{cases} \tag{5}\] where TP and FP are sums of all TP(e) and FP(e) scores respectively. The _structural Hamming distance_ (SHD) aggregates the number additions, removals and reversals in predicted edges so they match the true ones (\(E=E^{\prime}\)). It can be defined as: \[SHD=|E|-TP+FP \tag{6}\] Note the SHD defined as above allows to evaluate mixed graphs, that is, compare DAGs to CPDAGs. If, for instance, a predicted undirected edge exists in \(E\) but is supposed to be directed, it will result in \(TP=0.5\) and \(FP=0.5\), ultimately leading to \(SHD=1\). This shows that the need to orient an undirected edge is treated equally as the need to add, remove or reverse an edge so \(E=E^{\prime}\). Such evaluation puts algorithms outputting CPDAGs at a disadvantage compared to DAG-only methods. We justify it on the grounds that the main focus of this study is DAG recovery, hence any predicted undirected edge is treated as any other mistake. The ability to evaluate mixed graphs is an important feature for this study as it allows us to compare algorithms outputting CPDAGs and DAGs. ### Hyperparameters All incorporated learning algorithms have at least one hyperparameter. We collect performances of algorithms across all hyperparameter combinations (exhaustive grid search; see Appendix A.1). 
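As a concrete reference for the SHD just defined and the exhaustive grid described here, below is a minimal sketch covering the DAG-vs-DAG case of Equations 4-6 (the 0.5-scored undirected/CPDAG edges are omitted), together with the kind of per-algorithm grid summary from which selection strategies can be read off; all edge sets and grid values are illustrative assumptions.

```python
def shd(E_true, E_pred):
    """Structural Hamming distance (Eqs. 4-6) for two sets of directed edges (j, k)."""
    tp = fp = 0.0
    for (j, k) in E_pred:
        if (j, k) in E_true:      # present and correctly oriented
            tp += 1.0
        elif (k, j) in E_true:    # present but reversed
            tp += 0.5
            fp += 0.5
        else:                     # spurious edge
            fp += 1.0
    return len(E_true) - tp + fp

# Example: true graph 1->2, 2->3; prediction reverses one edge and adds one.
E_true = {(1, 2), (2, 3)}
E_pred = {(2, 1), (2, 3), (1, 3)}
print(shd(E_true, E_pred))        # 2.0: one reversal plus one extra edge

# Given SHDs collected over an exhaustive hyperparameter grid of one algorithm,
# the selection strategies described next are simple reductions of such a table.
grid_shd = {"alpha=0.001": 7.0, "alpha=0.01": 4.5, "alpha=0.05": 9.0}   # illustrative
best, worst = min(grid_shd.values()), max(grid_shd.values())
```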
To better understand the influence of hyperparameters on structure recovery performance, we experiment with four different hyperparameter selection strategies described below. **BEST.** To simulate the choice of the best hyperparameters (as if we had access to a hyperparameter oracle), we pick hyperparameter values that achieved **the lowest** SHD in that particular data setting. Each data setting can have a different set of the best hyperparameters. **WORST.** Identified similarly to 'best' except the criterion here is **the highest** SHD. **DEFAULT.** Default hyperparameter values recommended by the authors of an algorithm. See Appendix A.2 for specific values. **SIM_MEAN.** An alternative to 'default'. We found a single set of hyperparameter values per algorithm that achieved **the lowest average** SHD across all simulations. In other words, these are _simulation-derived_ default values. See Appendix A.1 for identified values. ### Results Presented results employ the following naming convention with respect to the data generating process: _graph_p_ (number of nodes), _graph_d_ (edge density), _graph_type_ (graph models; ER or SF), _data_n_ (sample size), _data_sem_ (SEM type; gumbel or gp). Error bars are standard errors unless stated otherwise. Note only the most important results are presented in the main content of the paper. The rest of the results, which do not change the conclusions, are in Appendix B. Figure 3: SHD performances depending on the quality of selected hyperparameters. Numbers are averaged across data seeds as well as sample sizes and graph types due to negligible differences. #### 4.5.1 Performance vs. Hyperparameter Quality As shown in Figure 3, achieved performance is clearly affected by both algorithm and hyperparameter choices. Even assuming access to a hyperparameter oracle ('best'), selecting different algorithms will have a significant impact on the result. Furthermore, fixed hyperparameters seem to be a viable strategy as they are relatively close in SHD to the best cases. The differences between simulation-derived and paper-default values ('sim_mean' and 'default' respectively) are negligible in most cases. The worst hyperparameters, on the other hand, can result in performances substantially worse than the fixed ones. This shows the risks of hyperparameter misspecification, the degree of which clearly varies across algorithms (different robustness). SHDs against dense graphs are generally high even with the best hyperparameters, suggesting that those cases should be avoided with current tools. For the sake of clarity, consider the upper left subfigure (\(graph\_p=10\), \(graph\_d=1\), \(data\_sem=gumbel\)) as an example. First, we can see that SHD varies significantly across algorithms even if they have access to a hyperparameter oracle, as the blue bars range from around 2 SHD (FGES) to as much as over 15 (ANM). Next, we can observe that the orange and green bars that refer to 'default' or 'sim_mean' hyperparameters are in most cases not that far from blue bars ('best' hyperparameters) within the same algorithm, showing that a fixed set of hyperparameter values can work surprisingly well. Looking at the red bars that represent the 'worst' hyperparameter values, it is clear that SHDs associated with them are much higher than with any other bar colour. This shows how misspecified hyperparameters can degrade an algorithm's performance. 
The differences between best and worst cases (blue and red respectively), which we read as robustness to misspecification, can also vary between algorithms. For instance, ANM is fairly robust due to its small worst-best difference in SHD, whereas NOTEARS, despite its low SHD in the best case, has a substantially higher SHD in the worst case (high worst-best difference as a result), showing its poor robustness to misspecification. #### 4.5.2 Performance Distribution Across Hyperparameters As per Figure 4, all algorithms perform similarly when averaged across all simulations and hyperparameters. But as simulation circumstances change, algorithms respond differently. Specifically, not only do the best and worst hyperparameter cases vary across algorithms, but also the proportion of relatively good performances. A high proportion of low SHD values, for example, could be read as a proxy for robustness to hyperparameter misspecification. Figure 6 confirms that increased sample size generally helps, even with relatively large graphs (\(p=50\)), although some algorithms require more data to notice significant benefits (see LiNGAM in gumbel and NOTEARS_MLP in gp). Positive effects can be noticed with respect to improved best hyperparameter cases and an increased proportion of good performances. This case also confirms that relatively big and sparse graphs can be recovered with high accuracy given the right hyperparameters. Figure 4: Distributions of SHD performances across all hyperparameters. #### 4.5.3 Winning Algorithms vs. Hyperparameter Quality Previous analysis revealed that the best algorithm choice may depend on specific DGP properties as well as hyperparameter choices. To make it clearer, we collect winning algorithms across different DGP properties and hyperparameter selection strategies. An algorithm with the lowest SHD in a given setting wins. The cumulative winning percentages are presented in Figure 7. The results confirm that no one algorithm wins in all settings (i.e, there are no free lunches). Specific DGP properties may favour certain methods. When looking at how winning odds change depending on different hyperparameter choices, it is clear that the best algorithm selection depends not only on DGP properties, but also on the type of available hyperparameters. For instance, in order to minimise the consequences of potential hyperparameter misspecification, it might be best to choose the algorithm that wins most frequently in a specific setting with the worst hyperparameters. In addition, it is clear that sample size and the type of the simulated graph do not impact winning odds across hyperparameters in a significant way. #### 4.5.4 Semi-Synthetic and Real Data We put our simulation-derived findings to a test by performing structure recovery on SynTReN (Figure 8) and Sachs (Figure 9) datasets. All numbers are compared to SotA performances retrieved from Lachapelle et al. (2019), which are \(33.7\pm 3.7\) and \(12\) SHD for SynTReN and Sachs respectively. Both cases generally confirm that fixed hyperparameters (_sim_mean_ and _default_) can work almost as well as the best hyperparameters, and that even the best hyperparameters may not be enough to reach the best possible performance as some algorithms perform better than other under those conditions. It is also clear from both cases that hyperparameters play an important role and, in fact, can decide whether an algorithm reaches or beats SotA. 
For instance, against SynTReN, both NOTEARS methods and LiNGAM seem to be good options under the best and fixed hyperparameters. But under the worst hyperparameters, NOTEARS methods can be extremely inaccurate, making ANM the safest choice in this case. In the Sachs dataset, this is no longer the case with ANM, showing that the best algorithm pick indeed strongly depends on DGP properties. All algorithms except ANM can, in fact, beat SotA on Sachs data. However, when it comes to robustness to hyperparameter misspecification and safety of use, NOTEARS methods appear to be the most risky, with LiNGAM being extraordinarily robust, as it beats SotA even with its worst hyperparameters. Figure 5: across all hyperparameters Figure 6: Performances across all hyperparameters for ER graphs with \(p=50\) and \(d=1\). ANM was excluded due to long execution time against \(10,000\) samples. ## 5 Conclusion In this work, we have successfully shown that hyperparameters play an important role in causal structure learning. However, the way hyperparameters influence the methods is somewhat different than recent results from the ML literature. More specifically, Machlanski et al. (2023); Tonshoff et al. (2023) found that many learners can reach SotA performance levels with the right hyperparameters, reducing the importance of model selection. But in this study, we observe that algorithms still differ significantly in performance even with access to a hyperparameter oracle. However, reliable tuning is not always available in structure learning, leading to hyperparameter misspecification and prediction errors. This is where hyperparameters become important as we showed that different learning algorithms vary in robustness to hyperparameter misspecification, and that strong performance under the right hyperparameters does not necessarily translate to misspecification robustness. As a consequence, an algorithm that is the best pick under correct hyperparameters, might be a suboptimal choice when its hyperparameters are misspecified; another algorithm with better misspecification robustness might be safer to use, especially in those cases where minimising the consequences of potential misspecification is a priority. Thus, overall, the best algorithm choice may depend not only on the properties of the data generating process, but also on the quality of selected hyperparameter values. In terms of secondary findings, default hyperparameters seem to perform surprisingly well in many cases, and hence may constitute a viable alternative to tuning, though the risk of potential misspecification should be carefully considered when doing so. Another interesting observation is that relatively large sparse graphs (50 nodes) can be recovered with high accuracy, subject to large sample size and the right hyperparameters. Figure 7: Winning percentage depending on hyperparameter quality and simulation properties. Figure 8: Performances against the SynTReN dataset. Numbers are averaged across data seeds. Figure 9: Performances against the Sachs dataset. ### Limitations This study is naturally limited by our choice of explored algorithms, hyperparameters and simulation properties. It is worth noting, however, that we do not intend to identify the best possible learning algorithm or hyperparameters. 
On the contrary, the objective of this work is to show that the appropriate algorithm choices are nuanced, as also recently shown in the treatment effect estimation domain (Curth and van der Schaar, 2023), and that hyperparameters should be part of that subtle decision-making process, further confirming the importance of hyperparameters reported in the literature (Paine et al., 2020; Zhang et al., 2021; Machlanski et al., 2023; Tonshoff et al., 2023). More extensive search spaces are unlikely to negate such conclusions. ### Future work Provided our observation that increasing the sample size generally increases the proportion of good performances across all hyperparameters of an algorithm, a natural next step would be to examine whether the hyperparameter misspecification problem vanishes as the sample size approaches infinity. As hyperparameter tuning is one possible strategy of mitigating misspecification with finite sample sizes, further work into reliable tuning schemes might help stabilise performance and facilitate better adoption of algorithms in real-world datasets. Another issue is the barrier of using tuning metrics, as obtaining them often requires algorithm modifications that cannot be done without expert knowledge of the method. Making the metrics easier to use could facilitate further research on the topic, as well as provide means for good tuning practices. Finally, choosing an algorithm appropriate for the problem at hand has a considerable impact on the final performance. Although some hyperparameter tuning metrics are general enough to perform algorithm selection as well (e.g. Liu et al. (2010); Biza et al. (2022)), doing so on real-life datasets is still a challenge. One promising direction could be validation that incorporates domain knowledge (Grunbaum et al., 2023). #### Acknowledgments All three authors (DM, SS and PC) were supported by the ESRC Research Centre on Micro-Social Change (MiSoC) - ES/S012486/1. This research was supported in part through computational resources provided by the Business and Local Government Data Research Centre BLG DRC (ES/S007156/1) funded by the Economic and Social Research Council (ESRC).
2308.02000
On the Transition from Neural Representation to Symbolic Knowledge
Bridging the huge disparity between neural and symbolic representation can potentially enable the incorporation of symbolic thinking into neural networks at a fundamental level. Motivated by how humans gradually build complex symbolic representations from prototype symbols that are learned through perception and environmental interactions, we propose a Neural-Symbolic Transitional Dictionary Learning (TDL) framework that employs an EM algorithm to learn a transitional representation of data, which compresses high-dimension information of visual parts of an input into a set of tensors as neural variables and discovers the implicit predicate structure in a self-supervised way. We implement the framework with a diffusion model by regarding the decomposition of the input as a cooperative game, then learn predicates by prototype clustering. We additionally use RL, enabled by the Markov property of diffusion models, to further tune the learned prototypes by incorporating subjective factors. Extensive experiments on 3 abstract compositional visual object datasets, which require the model to segment parts without any visual features like texture, color, or shadows apart from shape, and on 3 neural/symbolic downstream tasks demonstrate that the learned representation enables interpretable decomposition of visual input and smooth adaptation to downstream tasks which are not available with existing methods.
Junyan Cheng, Peter Chin
2023-08-03T19:29:35Z
http://arxiv.org/abs/2308.02000v1
# On the Transition from Neural Representation to Symbolic Knowledge ###### Abstract Bridging the huge disparity between neural and symbolic representation can potentially enable the incorporation of symbolic thinking into neural networks from essence. Motivated by how human gradually builds complex symbolic representation from the prototype symbols that are learned through perception and environmental interactions. We propose a Neural-Symbolic **Transitional Dictionary Learning (TDL)** framework that employs an EM algorithm to learn a transitional representation of data that compresses high-dimension information of visual parts of an input into a set of tensors as neural variables and discover the implicit predicate structure in a self-supervised way. We implement the framework with a diffusion model by regarding the decomposition of input as a cooperative game, then learn predicates by prototype clustering. We additionally use RL enabled by the Markovian of diffusion models to further tune the learned prototypes by incorporating subjective factors. Extensive experiments on 3 abstract compositional visual objects datasets that require the model to segment parts without any visual features like texture, color, or shadows apart from shape and 3 neural/symbolic downstream tasks demonstrate the learned representation enables interpretable decomposition of visual input and smooth adaption to downstream tasks which are not available by existing methods. ## 1 Introduction Symbols, such as languages, mathematics, and signs, are crucial in human thinking and represent key aspects of System II intelligence Chandler (2022); Kahneman (2011). However, modern Neural Networks (NN) often lack essential capabilities like interpretability, compositionality, and logical reasoning, which are inherent in symbolic thinking. As a result, there is growing interest in imbuing NN with System II intelligence. Existing approaches overlook a crucial fact: the neural representations learned by current NN, focused on data compression, are not well-suited for System II tasks that require explicit structural information, such as properties and relations. For instance, while humans perceive a character as strokes organized by structures, typical machine learning algorithms view it as a combination of principal components. This difference in representation hinders the adoption of symbolic thinking in NN. To bridge this gap between neural and symbolic representations, we propose a Transitional Dictionary Learning (TDL) framework. It leverages the Expectation-Maximization (EM) algorithm to learn a transitional representation that combines the advantages of both neural and symbolic representations. This representation compresses high-dimensional information into tensors as neural variables and learns predicate structures within and between those variables. The learned representation allows the model to decompose input images into interpretable visual parts and embeds implicit relations. Examples are presented in Figure 1. To implement the framework, we model the decomposition process as a cooperative game of parts solved by a diffusion model. We use prototype clustering on the decomposed parts to learn implicit predicates. To obtain parts aligned with human intuition, we experiment with Reinforcement Learning (RL) to incorporate subjective human factors and preferences that evolved through complex symbol emergence processes. 
To evaluate whether the model really learned a dictionary of meaningful concepts and relations or only a visual pattern recognizer, we experiment with our method on a novel task of decomposing abstract compositional visual objects. As shown in Figure 1, the objects in our datasets contain no visual features such as edges, color differences, or texture, which are the basis for vision models to distinguish different objects and parts; the only way to separate the strokes out is the concept of strokes. Therefore, decomposing the abstract objects in our datasets into meaningful parts gives a strong indicator that the model discovers symbolic concepts and relations. To the best of our knowledge, there is no existing method that can achieve this. We conduct experiments on three datasets comprising abstract compositional visual objects and evaluate them across three downstream tasks involving symbol grounding and transfer learning. The results demonstrate that our method effectively learns the transitional representation, allowing interpretable decomposition of data and seamless adaptation to neural and symbolic tasks. Ours is the first method that learns to segment abstract objects into parts without relying on visual features like texture, edges, shadows, or color differences. Our supplementary materials include code and datasets, publicly available for further exploration. ## 2 Dictionary Learning for Transitional Representation We formulate the transitional representation in Section 2.1. We learn such a transitional representation by optimizing the features of both neural and symbolic representations simultaneously under a dictionary learning framework: the neural part is illustrated in Section 2.2 and the symbolic part is discussed in Section 2.3, and we combine them to obtain our TDL framework. Finally, we derive a metric for evaluating the learned dictionary in Section 2.3.3. ### Transitional Representation An ideal transitional representation should provide structural information like properties and relations of the compositions of the input while not losing the compression ability of high-dimension data. The structural information can be represented as _predicates_, which are broadly applied in symbolic AI, are hypothesized to be able to represent all intelligent activities Newell and Simon (1976), and are isomorphic to typical symbolic systems from the Curry-Howard correspondence Howard (1980) and its extensions. Our transitional representation learns implicit predicates of different arities and orders in multiple dictionaries. We assume that an input \(x\) can be represented as a set of atomic sentences \(\Omega=\{\rho_{1}^{1}(\cdot),\rho_{2}^{1}(\cdot),...,\rho_{1}^{2}(\cdot, \cdot),\rho_{2}^{2}(\cdot,\cdot),...\}\) of predicates of different order and arity. For a 1-ary sentence \(\rho^{1}(r)\): when \(\rho^{1}\) is first-order, \(r\) is a logical variable tensor represented by a NN; when \(\rho^{1}\) is higher-order, \(r\) is the neural representation of another sentence. Similar for N-ary cases. The model learns to predict logic variables \(R\), which compress the high-dimension information of an input \(x\), by \(P(R|x,D)\), which embeds potential sentences \(P(\Omega|R,D,x)\) that can depict the input in a meaningful way, based on \(D\), the learned dictionaries of predicates of different arity and order. 
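One way to picture this formulation is as a container that holds the neural variables \(R\) alongside the atomic sentences \(\Omega\) that reference them. The sketch below is only an illustrative data structure under that reading, not the paper's actual implementation; all names and sizes are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import torch

@dataclass
class Sentence:
    predicate_id: int      # index into the dictionary of the given arity/order
    arity: int
    args: Tuple[int, ...]  # indices of neural variables, or of other sentences
                           # when the predicate is higher-order

@dataclass
class TransitionalRepresentation:
    variables: torch.Tensor                       # R: (m, d) neural variables, one per part
    sentences: List[Sentence] = field(default_factory=list)

# e.g. two parts related by a hypothetical 2-ary predicate with id 3:
rep = TransitionalRepresentation(variables=torch.randn(2, 16))
rep.sentences.append(Sentence(predicate_id=3, arity=2, args=(0, 1)))
```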
We use a deep learning model to implicitly learn the dictionary, which allows it to use multiple vectors as neural variables to compress an input while learning to let the output vectors be implicitly organized by discovered predicates. There are infinitely many ways of choosing the variables, and the challenge is threefold: firstly, the variables should accurately compress the information from the original input; secondly, the variables are expected to be meaningful and interpretable, for example, we wish for compressed strokes instead of principal components to be the variables when given a handwritten character; thirdly, the found variables should be the ones that can be reused for composing different characters. Figure 1: Interpret samples from three datasets as visual parts marked with different colors, shallowness represents confidence. The left in each pair is input, the right is interpretation. ### Learning Neural Features The first challenge discussed above, compressing high-dimension information, corresponds to the neural features of the learned representation. We formulate it in the dictionary learning framework. Given the dataset \(X=\{x_{1},...,x_{n}\}\), we learn a dictionary \(D=\{d_{1},...,d_{n}\}\), \(D\in\mathbb{R}^{d\times n}\), to explain the data. We assume an input \(x_{i}\) is composed by \(f(\overrightarrow{r_{i}};D)\), where \(\overrightarrow{r_{i}}=(r_{i}^{1},...,r_{i}^{m})\) with \(r_{i}^{j}\in\mathbb{R}^{d}\) is the set of neural variables, each variable is transformed from a prototype \(d_{k}\in D\) in the dictionary by \(r_{i}^{j}=T(d_{k})\), and \(f\) is a composition function that describes how \(x_{i}\) is composed of \(\overrightarrow{r_{i}}\). The target of sparse dictionary learning Kreutz-Delgado et al. (2003) is \[\arg\min_{D,r_{i}}\sum_{i}\epsilon(x_{i},f(\overrightarrow{r_{i}};D))+\lambda|\overrightarrow{r_{i}}|,\quad|d_{i}|<1,\forall d_{i}\in D \tag{1}\] It learns to give parsimonious explanations with sparse representations and compress the high-dimension information by minimizing the reconstruction error \(\epsilon\). Specifically, the original sparse dictionary learning applies the reconstruction function \(f(\overrightarrow{r_{i}};D)=D\overrightarrow{r_{i}}\) and error \(\epsilon(x_{i},f(\overrightarrow{r_{i}};D))=||x_{i}-f(\overrightarrow{r_{i}};D)||_{2}^{2}\), which assumes the data generation process to be linear and the reconstruction to be in pixel space. In Formula 1, we relax these assumptions to enable more complex compositional processes, such as the composition of parts instead of the linear combination of vector components. A representation that meets the first challenge can be learned by optimizing this formula. ### Learning Symbolic Features To tackle the remaining two challenges we discussed at the end of Section 2.1, we need to learn to let the variables \(\overrightarrow{r_{i}}\) embed predicate structure. To simplify the problem, we restrict the variables of an image to be the parts of it. However, there are still infinitely many ways to segment an image into parts, while we expect only the "meaningful" ones. The criteria of "meaningful" can be varied, including both objective and human subjective ones. We regard the objective ones as learnable without supervision, while the subjective ones should be learned by aligning with humans. 
We assume that the objective criteria already allow a good initial segmentation, which can be tuned to a subjectively good one with low-cost tunings. We will focus on self-supervised learning towards the objective criteria in this section, and discuss how to align with human preferences later. #### 2.3.1 Reduce to Subword tokenization Finding objective criteria that allow us to segment an input image into meaningful parts is non-trivial. One criterion, actually implied in the third challenge, is that the parts should be easily reusable to form compositions with each other. Regarding the reusable parts as prototypes in the dictionary, our task is to find the most common reusable, composable prototypes from a set of images to build the dictionary. This is identical to the subword tokenization problem, which tries to build the dictionary by finding frequently used subwords in a corpus of sentences. Subword means the split points inside a sentence can be arbitrary instead of only spaces or commas. A subword tokenizer learns to segment the sequence of characters from sentences into subwords based on a learned dictionary, while in images, the model learns to segment parts from the pixel matrix using the learned dictionary. By analogy (pixels as characters, parts as words, images as sentences, the dataset as a corpus), and ignoring for now the differences in structure (i.e. sequence vs matrix) and dictionary (i.e. discrete vs continuous), we may reduce our problem to subword tokenization, and then put the simplified parts back. Subword tokenization can be formulated as decomposing a given sentence string \(s\) into a sequence of subwords \(\overrightarrow{s}=(s^{1},...,s^{n})\) using a dictionary \(D\) where \(s^{i}\in D\). Kudo and Richardson (2018) introduce ULM, which solves this problem with an EM algorithm: it initializes a huge dictionary randomly or heuristically, then iterates E and M steps that remove terms from the dictionary until the dictionary reaches an ideal size. In the E-step, sentences are decomposed by the Viterbi algorithm with the current dictionary \(D^{(t)}\) as \(\overrightarrow{s_{i}}=\arg\max_{\overrightarrow{s_{i}}\in U(s_{i})}P(\overrightarrow{s_{i}})\), where \(P(\overrightarrow{s_{i}})=\prod_{j=1}^{m}P(s_{i}^{j})\) assumes a 1-gram Harris (1954) language model, \(U(s_{i})\) is the set of all possible decompositions of \(s_{i}\), and the likelihood of the corpus is \(L(D^{(t)})=\sum_{i=1}^{N}P(\overrightarrow{s_{i}})\). In the M-step, the contribution of each dictionary term is measured by \(l(d)=L(D^{(t)})-L(D^{(t)}-\{d\})\), the likelihood loss if a term \(d\) is removed; the expectation is then maximized by removing the terms with the lowest contribution (e.g. 20%) from \(D^{(t)}\) to get \(D^{(t+1)}\). #### 2.3.2 Neural-Symbolic Transitional Dictionary Learning The ULM learns an efficient dictionary of terms that can be frequently reused, and we extend this EM algorithm back to our problem for learning a dictionary of reusable prototypes. To put it in our problem, we firstly generalize sentence inputs to our visual input \(x_{i}=f(\overrightarrow{r_{i}};D)\), where \(\overrightarrow{r_{i}}\) are neural variables that correspond to visual parts, then generalize the Viterbi algorithm to a decomposition function \(\overrightarrow{r_{i}}=g(x_{i};\theta,D)\) parameterized by \(\theta\) and dictionary \(D\) to sample the candidate segmentations of an image. The reusability of words in ULM relies on the 1-gram language model. 
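Before carrying this over to images, a minimal sketch of the string-level loop just described may help: a Viterbi E-step under a 1-gram dictionary followed by contribution-based pruning. The corpus, probabilities and pruning fraction are illustrative, and this simplified version skips the probability re-estimation of the full ULM.

```python
import math

def viterbi_segment(sentence, log_prob):
    """Best decomposition of `sentence` into dictionary terms under a 1-gram model."""
    n = len(sentence)
    best = [0.0] + [-math.inf] * n          # best[i] = log-prob of sentence[:i]
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - 10), i):  # cap candidate subword length at 10
            piece = sentence[j:i]
            if piece in log_prob and best[j] + log_prob[piece] > best[i]:
                best[i], back[i] = best[j] + log_prob[piece], j
    pieces, i = [], n
    while i > 0:
        pieces.append(sentence[back[i]:i])
        i = back[i]
    return pieces[::-1], best[n]

def corpus_loglik(corpus, log_prob):
    return sum(viterbi_segment(s, log_prob)[1] for s in corpus)

def prune_step(corpus, log_prob, drop_frac=0.2):
    """M-step: drop the terms whose removal costs the least corpus log-likelihood."""
    base = corpus_loglik(corpus, log_prob)
    keep = {t for t in log_prob if len(t) == 1}   # keep single characters to stay decomposable
    contrib = {t: base - corpus_loglik(corpus, {k: v for k, v in log_prob.items() if k != t})
               for t in log_prob if t not in keep}
    n_drop = max(1, int(len(contrib) * drop_frac))
    worst = sorted(contrib, key=contrib.get)[:n_drop]
    return {k: v for k, v in log_prob.items() if k not in worst}

corpus = ["abab", "abc", "cab"]
logp = {t: math.log(1 / 6) for t in ["a", "b", "c", "ab", "ba", "abc"]}
print(viterbi_segment("abab", logp)[0])   # ['ab', 'ab']
logp = prune_step(corpus, logp)           # D^(t) -> D^(t+1): drops the unused term 'ba'
```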
Extending to images, the likelihood of a dataset can be computed as \(\log P(X)=\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\log P(r_{i}^{j})\), where \(m_{i}\) is the length of \(\overrightarrow{r_{i}}\) and \(r_{i}^{j}=T(d_{q})\) for some \(d_{q}\in D\). Maximizing it is essentially clustering the \(r_{i}^{j}\) with \(d_{q}\) as centroids; it implicitly discovers the property or category information of the parts, i.e. the _1-ary predicate_. However, the 1-gram assumption may omit rich relations between parts. A natural extension is the N-gram, where the likelihood of an input \(x_{i}\) is \(\log P(\overrightarrow{r_{i}})=\sum_{j=1}^{m_{i}}\log P(r_{i}^{j}|r_{i}^{1:j-1})\); for example, a 2-gram model is \(\log P(\overrightarrow{r_{i}})=\sum_{j=1}^{m_{i}}\log P(r_{i}^{j}|r_{i}^{j-1})\). However, the Markov assumption of the N-gram does not fit non-sequential structures such as images, thus we use a joint probability \(P(r_{i}^{p},r_{i}^{q},...,r_{i}^{k})\) over an arbitrary subset of the decomposed terms instead. It implicitly models the association among the \(N\) terms, which gives relational information and thus learns an _N-ary predicate_; for example, when \(N=2\), it models the highly-correlated pairs \((r_{i}^{p},r_{i}^{q})\) that may exhibit some relationship. The higher-order predicates model more complex relations, like the association over associations, e.g. \(P((r_{i}^{k},r_{i}^{j};\rho_{1}),(r_{i}^{p},r_{i}^{q};\rho_{2}))\), the association between two pairs associated with predicates \(\rho_{1}\) and \(\rho_{2}\) respectively. We can learn predicates of different order and arity by using different probabilistic assumptions. Based on this idea, a flexible EM framework for learning predicates of any order and arity through different probabilistic assumptions on \(P(\overrightarrow{r_{i}})\) can be given: \[E-Step:Q(\theta|\theta^{(t)})=E_{D\sim P(\cdot|R,\theta^{(t)})}[\log P(R,D|\theta)] \tag{2}\] \[M-Step:\theta^{(t+1)}=\arg\max_{\theta}Q(\theta|\theta^{(t)}) \tag{3}\] where \(R=\{r_{i}^{j},i\in[1,n],j\in[1,m_{i}]\}\) is the set of all terms decomposed from dataset \(X\). By learning multiple dictionaries of predicates of different arity and order over the decomposed parts \(R\), the decomposed parts will tend to be the ones that can be frequently reused in different relations and categories over the dataset, which meets the remaining challenges we discussed in Section 2.1 and thus introduces the _"symbolic"_ features to the representation. By optimizing this EM framework in Formulas 2 and 3, along with the sparse dictionary learning in Formula 1 at the same time, the learned representation should embed both neural and symbolic features. We propose it as our **Transitional Dictionary Learning (TDL)** framework. We will introduce our implementation of the TDL framework in Section 3. #### 2.3.3 Clustering Information Gain Given the learned representation, a problem is how to evaluate whether it meets our objective criteria of learning reusable compositions. We may evaluate it by checking whether the parts shown in the decompositions can be clustered into a few centroids, which would mean the parts are highly reusable. 
Based on this idea, we propose the Clustering Information Gain (CIG), which compares the Mean Clustering Error (MCE) of the decomposed terms in the test set, defined as \(MCE=[\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}(\min_{d\in D}||r_{i}^{j}-d||_{2})/m_{i}]/n\propto-L(\theta)\), with the MCE of a random decomposition, \(MCE_{rand}\), which serves as a worst-case baseline obtained when the decomposed terms are randomly scattered, while the best case of MCE is 0, when the compositions are perfectly explained by learned concepts. CIG is defined as \(1-\frac{MCE}{MCE_{rand}}\), which gives a score normalized in \([0,1]\) and therefore an intuitive indicator. More details in Appendix D.4. ## 3 Method To implement the TDL framework, we first need a segmentation network that, given an image as input, outputs a set of masks, each corresponding to a part, for Formula 2; we compose the parts and compare the result with the input for Formula 1, and then cluster the decomposed parts from the training set for Formula 3. We use a diffusion model that solves the cooperative game of parts as the segmentation network in Section 3.1, then we introduce prototype clustering in Section 3.2. Figure 2 shows an overview of our architecture. Furthermore, we propose RL to tune the representation with subjective criteria in Section 3.3. ### 3.1 Game-Theory inspired Decomposition Precise unsupervised pixel-wise masks of the input can be obtained from attention maps Dosovitskiy et al. (2020); Hamilton et al. (2022), from clustering pixel-wise features Amir et al. (2021) with Transformers, or generated by diffusion models Wu et al. (2022) with U-Nets. In our tasks, the dataset sizes are relatively small, which makes training a Transformer-based model ineffective, while Wu et al. (2022) show how diffusion models generate accurate masks on small datasets; we therefore also apply a diffusion model to implement the segmentation network. We formulate the segmentation network as encoder and decoder parts. Given a visual input \(x_{i}\in R^{H\times W\times C}\) (we assume 2D input here, which can be easily extended to 3D voxels), the encoder network \(\overrightarrow{r_{i}}=g(x_{i};\theta,D)\) generates a set of neural variables \(\overrightarrow{r_{i}}\) that can be decoded as parts of the image, \(x_{i}^{j}=f_{g}(r_{i}^{j})\), by \(f_{g}\); the input can be simply reconstructed through \(x_{i}=\sum_{j=1}^{m_{i}}x_{i}^{j}\). The encoder can be viewed as the downsampling part of a UNet while the decoder is the upsampling part. #### 3.1.1 Decomposition as cooperative game The challenge is that different inputs may have different numbers of parts, which demands a self-organized way to dynamically preserve only meaningful parts. Inspired by Gemp et al. (2021), which models PCA as a competitive game where players are candidate principal components that move by changing their values, we also model the decomposition process as a cooperative game of the parts. We randomly initialize \(N_{P}\) neural variables \(\overrightarrow{r}\) as players that move by adjusting their values, where \(N_{P}\) is a hyper-parameter; the players cooperatively reconstruct the input while competing by avoiding repeating each other, and the ones whose corresponding part \(x^{j}=f_{g}(r^{j})\) is empty are dropped. The utilities of the game are modeled by a game-theoretic (GT) loss \[L_{GT}=L_{Reconstruction}+\alpha_{1}L_{overlap}+\alpha_{2}L_{resources}+\alpha_{3}L_{norm} \tag{4}\] It evaluates the equilibrium state reached by the \(N_{P}\) players with image parts \(\overrightarrow{x}=(x^{1},x^{2},...,x^{N_{P}})\). 
\(L_{overlap}\) prevents the players from repeating each other, acting as competition; \(L_{resources}\) avoids the extreme case where one player does the entire job while the others are empty; \(L_{norm}\) regularizes the action space; and the **Reconstruction loss** \(L_{Reconstruction}\) is the cooperative target. We leave more details about the loss function design in Appendix B.2. #### 3.1.2 Game Playing as Markov Process Players move in the direction that maximizes the utility \(-L_{GT}\); the optimal move for a player \(j\) is given by the gradient \(\nabla_{r_{i}^{j}}L_{GT}\), which is expensive to compute. Instead, we train an actor network \(r_{i}^{j}(t+1)=Actor(s_{i}^{j}(t);\theta_{A},D)\) to output the move for player \(j\) based on the current state \(s_{i}^{j}(t)\) and dictionaries \(D\in R^{N_{D}\times dim_{D}}\). The state encodes useful information for taking actions about competitors, cooperators, and the input. The game proceeds for \(K\) steps; in each step, players take actions and then broadcast their moves to update each other's states. This results in a Markov process \(P(r_{i}^{j}(t+1)|s_{i}^{j}(t);D)\) for each player \(j\), which we implement as a score-based diffusion model, SMLD Song et al. (2021). Figure 2: Overview of our architecture. The diffusion model decomposes a training image into visual parts for computing the decomposition loss, including the reconstruction loss; the parts are then clustered with memory banks of different dictionaries for computing the clustering error. 
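As a toy illustration of this K-step game, the sketch below lets each player take noisy gradient-based moves on a stand-in utility; in the actual method a trained actor/score network supplies the move and \(L_{GT}\) decodes parts through the diffusion model, so every quantity here is an illustrative assumption.

```python
import torch

N_P, d, K, eps = 8, 16, 20, 1e-2      # players, latent size, game steps, step size

def toy_utility_loss(r):
    """Stand-in for L_GT: a cooperative reconstruction term plus a competitive
    overlap penalty (the paper's actual terms are specified in its Appendix B.2)."""
    target = torch.ones(r.shape[1])
    recon = ((r.sum(dim=0) - target) ** 2).mean()            # cooperate: reconstruct
    gram = r @ r.t()
    overlap = (gram.abs().sum() - gram.diagonal().abs().sum()) / (N_P * (N_P - 1))
    return recon + 0.1 * overlap                              # compete: avoid redundancy

r = torch.randn(N_P, d, requires_grad=True)                   # players' neural variables
for t in range(K):
    loss = toy_utility_loss(r)
    (grad,) = torch.autograd.grad(loss, r)
    with torch.no_grad():
        r -= eps * grad                                       # Langevin-style move; a trained
        r += (2 * eps) ** 0.5 * 0.05 * torch.randn_like(r)    # actor/score net replaces `grad`
print(toy_utility_loss(r).item())                             # loss shrinks as the game converges
```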
(2018), we run K-Means on set \(\overrightarrow{m}+M\) for each dictionary, where \(\overrightarrow{m}=(m_{1}^{1},m_{1}^{2},...,m_{2}^{1},m_{2}^{2},...,m_{B}^{N_{ P}})\) is the set of mapped logic variables in a batch of length \(B\), we drop unwanted terms like empty ones and randomly sample pairs to improve efficiency. The clustering gives assignments \(C\) for each term in \(\overrightarrow{m}\), we create pseudo labels \(Y\) for terms in \(\overrightarrow{m}\) by assigning prototypes in dictionary \(D\) for their nearest clustering centroids given by \(C\). Then try to move the prototype closer to the centroid and the samples closer to the prototype in latent space by minimizing the Cross-Entropy (CE) loss \(\min_{D,\theta}CELoss(dist(\overrightarrow{m},D),Y)\), where \(dist\) is a distance metric (e.g. L2 distance), compute the current assignment of each term. The **clustering loss** is defined as the summation of CE loss of all dictionaries \(L_{Clustering}=\gamma\sum_{D_{i}\in D}CELoss(dist_{i},E_{i})\). We optimize it with decomposition loss as \(L_{TDL}=L_{Clustering}+L_{Decomposition}\) to implement the TDL framework. Detail about the architecture can be found in Appendix B.1.2. ### Tuning by Reinforcement Learning Benefiting from the Markovian of the game-playing process, we naturally apply Proximal Policy Optimization (PPO) Schulman et al. (2017) to our model. PPO iteratively samples several sequences of the game by the actor-network and then uses these samples to optimize the policy. We train a critic model in addition to predicting the value of a given state \(v_{i}^{j}(t)=Critic(s_{i}^{j}(t);\theta_{C},D)\) for PPO. We design a heuristic reward function to encourage "energy-saving" shapes with three criteria, smoothness, solidity, and continuity. See Appendix D.5 for details. We also do an experiment on RL from human feedback in Appendix D.2. ## 4 Experiment and Analysis Figure 3: Comparison between our method and baselines on the OmniGlot test set. Our method can learn interpretable strokes compared to the baselines that failed to give effective strokes. We perform experiments to evaluate whether the TDL framework can learn the representation that embeds both neural and symbolic features. And also explores how we can utilize the representation for the downstream neural and symbolic tasks. We introduce the experiment settings in Section 4.1, and the results are discussed in Section 4.2, we further do downstream tasks in 4.3, and we also do extensive additional experiments in Appendix D to comprehensively analyze the proposed methods. ### Experiment setting We choose 3 datasets of _abstract_ composable visual objects: **LineWorld** is synthesized by the babyARC engine Wu et al. (2022), it uses _lines_ as the basic concept and _parallel_ and _perpendicular_ as basic relations to randomly draw non-overlapping shapes of "F", "E", "T", etc., in each image. **OmniGlot** Lake et al. (2015) consists of handwritten characters with recorded strokes. Each character is implicitly composed hierarchically as subparts and parts from strokes Lake et al. (2015). **ShapeNet5** consists of 3D shapes in 5 categories (bed, chair, table, sofa, lamp) with shared basic elements Achlioptas et al. (2019) from the ShapeNet database Chang et al. (2015). We voxelized it by binvox Nooruddin and Turk (2003), Min (2004 - 2019), we replace 2D conv layers with 3D in all methods for this dataset. We further derive 3 downstream tasks based on them, which we will discuss later. 
We compare our method with 3 state-of-the-art unsupervised part segmentation methods which are the most similar to our work with completely different motivations, DFF Collins et al. (2018), SCOPS Hung et al. (2019), and UPD Choudhury et al. (2021), which decomposes images into parts as a heatmap of \(k\) channels, each channel represents a part. Full details about our baselines, datasets, and splits of the train, dev, test set, hardware settings, and our hyper-param searches can be found in Appendix C. ### Self-supervised Learning of Transitional representation Results for SSL of transitional representation are listed in the first three columns in Table 1. We train an Auto-Encoder for each dataset as a reference for the reconstruction error as an indicator of whether high-dimension information is preserved. The inputs LineWorld and ShapeNet5 are binary thus we use IoU for a better intuitive. CIG is proposed in Section 2.3.3. SP is a heuristic Shape score that evaluates whether the parts are natural or not by three continuity, solidity, and smoothness, normalized between 0 and 1.0. The full details of SP are provided in Appendix D.5. Our model achieves 58.0, 68.5, 54.6 CIG, 82.6, 70.6, and 60.1 SP in the three datasets respectively, significantly better than the baselines which rely on visual features. And the huge advantage keeps even without RL. With a low reconstruction error of 94.3 IoU, 1.8 MAE, and 79.8 IoU compared to the reference. It shows the learned representation can both preserve high-dimension information and learns meaningful parts. Figure 3 shows a comparison of results between ours and baselines, our method can decompose the inputs into human interpretable strokes, while the baselines do not work for such abstract inputs. We provide extensive additional samples in Appendix G and a qualitative study in Appendix D.6 which shows that our method has much better human interpretability, and the metrics SP and CIG can be good predictors for human interpretability. ### Adapt to downstream tasks We experiment on two neural and symbolic tasks, symbol grounding and transfer learning. We pre-train our model and baselines then finetune on those tasks. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**LineWorld**} & \multicolumn{3}{c}{**OmniGlot**} & \multicolumn{3}{c}{**ShapeNet5**} & \multicolumn{3}{c}{**LW-G**} & \multicolumn{1}{c}{**OG-G**} \\ \cline{2-11} & IoU & CIG & SP & MAE & CIG & SP & IoU & CIG & SP & IoU & Acc. & IoU \\ \hline AE & 97.7 & - & 0.9 & - & 85.1 & - & - & - & - & - \\ \hline DFF & - & 33.1 & 38.3 & - & 36.9 & 33.3 & - & 20.1 & 19.2 & 43.1 & 28.8 & 42.8 \\ SCO. & - & 35.7 & 42.4 & - & 38.6 & 38.9 & - & 23.1 & 24.3 & 46.8 & 26.4 & 46.9 \\ UPD & - & 36.3 & 42.8 & - & 42.8 & 37.4 & - & 25.4 & 22.6 & 46.2 & 28.7 & 48.9 \\ \hline **Ours** & 94.3 & **58.0** & **82.6** & 1.8 & **68.5** & **77.6** & 79.8 & **54.6** & **60.1** & **78.4** & **74.8** & **75.9** \\ **w/o RL** & 93.7 & 57.0 & 71.9 & 2.0 & 65.1 & 68.0 & 78.8 & 52.9 & 54.4 & 78.2 & 74.3 & 75.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on SSL and symbol grounding, values are multiplied by 100, the higher the better. Ours and w/o RL are our models with and without RL with the heuristic reward model. We compare with a reference Auto-Encoder (AE), and DFF Collins et al. (2018), SCOPS (SCO.) Hung et al. (2019), UPD Choudhury et al. (2021). 
#### 4.3.1 Symbol grounding We design two grounding tasks. LW-G is also synthesized with the babyARC engine, with given concept masks and relation tuples as labels. The target is to segment the concepts (i.e. lines) and then predict the relations between them, evaluated by IoU and top-1 accuracy respectively. We align the prediction with the ground truth by the minimal overall IoU when evaluating. We pre-train the models on LineWorld. We add a relation prediction head on top of the baselines, while our method directly adapts the 2-ary predicate dictionary. OG-G is a subset of OmniGlot with the stroke masks as ground truth; the target is to segment the input into the ground truth strokes. We align prediction and ground truth similarly to LW-G, then evaluate by IoU. We pre-train the models on OmniGlot without the OG-G samples. Results are reported in the LW-G and OG-G columns of Table 1: we achieve 78.4 IoU and 74.8 Acc. on LW-G and 75.9 IoU on OG-G, largely outperforming the baselines, whose relation prediction did not converge due to incorrect segmentations. This shows that the learned representation, which models the underlying predicate distribution, can be smoothly transferred to concrete predicates. #### 4.3.2 Transfer learning We pre-train on a set of shapes from "chair" and 4 other similar categories, then perform transfer learning on small sets of 230\(\sim\)550 samples from the unseen categories "Bed", "Lamp", "Sofa", "Table" used in ShapeGlot Achlioptas et al. (2019), and compare with the model without pretraining. The results in Table 2 show that the learned representations are reusable and effectively benefit the generalization of the neural representation to unseen classes. Without the pre-training, the samples for each class are not even adequate for learning an effective decomposition. ## 5 Related Works We give a brief review of the related works here and leave a longer discussion to Appendix A. **Symbol Emergence**. Studies of primitive symbols in mirror systems Wicker et al. (2003), language evolution theory Petitto (2005), Loockvane (1999) and Broca's area Arbib (2011), Tomasello (2008), Higuchi et al. (2009) support the hypothesis that rudimentary symbols emerged in the prehuman brain without human communication Taniguchi et al. (2018). **Neural-Symbolic AI (NSAI)**. Specific NSAI applications Segler et al. (2018), Amizadeh et al. (2020), Young et al. (2019), Inala et al. (2020) and general NSAI methods Wang et al. (2019), Cornelio et al. (2023), Dong et al. (2019), Goyal et al. (2021), Vlastelica et al. (2021), Karpas et al. (2022) have been proposed; however, they ignore the NS representation problem. **Compositional Learning**. Compositional knowledge in NNs has been explored in Hinton (2021), Garau et al. (2022), Chen et al. (2020), Mendez and Eaton (2022). Concept learning composes concepts from primitives Lake et al. (2015), Wu et al. (2022), Mao et al. (2019), Cao et al. (2021) and discovers relations Shanahan et al. (2020), Kipf et al. (2020). Compared to current methods, we give a general framework to learn predicates without priors. **Unsupervised Segmentation and Parsing**. Unsupervised segmentation Kim et al. (2020), Van Gansbeke et al. (2021), Melas-Kyriazi et al. (2022), parsing Lin et al. (2020), Bear et al. (2020), Lou et al. (2022), and co-part segmentation Collins et al. (2018), Hung et al. (2019), Choudhury et al. (2021), Yu et al. (2022), He et al. (2022) have been studied. However, they rely on visual features or priors.
## 6 Discussion In this work, we proposed to learn a transitional representation inspired by prehuman symbol emergence. However, we do not cover all sides of symbol emergence, such as environmental interactions and social communications Taniguchi et al. (2018), Ma et al. (2022), which have been studied in embodied Gupta et al. (2021) and phylogenetic Ma et al. (2022) intelligence and which suggest a pathway to intelligence by learning from _observation, interaction_, and then _communication_. On the implementation side, the study of multi-trace diffusion Mariani et al. (2023) may largely improve the performance by accelerating the convergence of the game. The game-theoretic design also enables efficient decentralized training Gemp et al. (2021). Finally, symbolic reasoning may be achieved by combining the representation with symbolic modules Mendez and Eaton (2022). \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Bed**} & \multicolumn{3}{c}{**Lamp**} & \multicolumn{3}{c}{**Sofa**} & \multicolumn{3}{c}{**Table**} \\ \cline{2-13} & IoU & CIG & SP & IoU & CIG & SP & IoU & CIG & SP & IoU & CIG & SP \\ \hline w/ PT & 67.3 & 48.1 & 52.9 & 61.1 & 42.1 & 49.1 & 62.2 & 46.8 & 45.2 & 68.3 & 50.1 & 54.6 \\ w/o PT & 18.1 & 19.0 & 13.2 & 18.3 & 19.9 & 14.6 & 21.5 & 18.9 & 19.8 & 19.9 & 22.1 & 17.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Transfer learning on ShapeGlot; all values are multiplied by 100, the higher the better. ## 7 Conclusion In this work, we introduce the TDL framework to learn a transitional representation that combines the advantage of neural representations, which compress high-dimension information, with that of symbolic representations, which learn concepts and relations as predicates. Experiments on SSL and downstream tasks on the abstract visual object datasets show that the learned representation enables meaningful decomposition of objects and smoothly adapts to downstream neural and symbolic tasks, which is not possible with existing methods that rely on visual features instead of concepts. We believe our work can contribute to representation learning for NSAI and inspire researchers to uncover the mystery of System II intelligence.
2303.00565
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks
Sharpness aware minimization (SAM) optimizer has been extensively explored as it can generalize better for training deep neural networks via introducing extra perturbation steps to flatten the landscape of deep learning models. Integrating SAM with adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically to train large-scale deep neural networks without theoretical guarantee due to the triple difficulties in analyzing the coupled perturbation step, adaptive learning rate and momentum step. In this paper, we try to analyze the convergence rate of AdaSAM in the stochastic non-convex setting. We theoretically show that AdaSAM admits a $\mathcal{O}(1/\sqrt{bT})$ convergence rate, which achieves linear speedup property with respect to mini-batch size $b$. Specifically, to decouple the stochastic gradient steps with the adaptive learning rate and perturbed gradient, we introduce the delayed second-order momentum term to decompose them to make them independent while taking an expectation during the analysis. Then we bound them by showing the adaptive learning rate has a limited range, which makes our analysis feasible. To the best of our knowledge, we are the first to provide the non-trivial convergence rate of SAM with an adaptive learning rate and momentum acceleration. At last, we conduct several experiments on several NLP tasks, which show that AdaSAM could achieve superior performance compared with SGD, AMSGrad, and SAM optimizers.
Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao
2023-03-01T15:12:42Z
http://arxiv.org/abs/2303.00565v1
# AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks ###### Abstract Sharpness aware minimization (SAM) optimizer has been extensively explored as it can generalize better for training deep neural networks via introducing extra perturbation steps to flatten the landscape of deep learning models. Integrating SAM with adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically to train large-scale deep neural networks without theoretical guarantee due to the triple difficulties in analyzing the coupled perturbation step, adaptive learning rate and momentum step. In this paper, we try to analyze the convergence rate of AdaSAM in the stochastic non-convex setting. We theoretically show that AdaSAM admits a \(\mathcal{O}(1/\sqrt{bT})\) convergence rate, which achieves linear speedup property with respect to mini-batch size \(b\). Specifically, to decouple the stochastic gradient steps with the adaptive learning rate and perturbed gradient, we introduce the delayed second-order momentum term to decompose them to make them independent while taking an expectation during the analysis. Then we bound them by showing the adaptive learning rate has a limited range, which makes our analysis feasible. To the best of our knowledge, we are the first to provide the non-trivial convergence rate of SAM with an adaptive learning rate and momentum acceleration. At last, we conduct several experiments on several NLP tasks, which show that AdaSAM could achieve superior performance compared with SGD, AMSGrad, and SAM optimizers. Sharpness-aware minimization, Adaptive learning rate, Non-convex optimization, linear speedup. ## I Introduction Sharpness-aware minimization (SAM) [1] is a powerful optimizer for training large-scale deep learning models by explicitly minimizing the gap between the training performance and generalization performance. It has achieved remarkable results in training various deep neural networks, including ResNet [1, 2, 3], vision transformers [4, 5], and language models [6, 7, 8], on extensive benchmarks. However, SAM-type methods suffer from several issues when training deep neural networks, especially the huge computation cost and the heavy hyper-parameter tuning procedure. In each iteration, SAM needs to compute the gradient twice compared with classic optimizers, such as SGD, Adam [9], and AMSGrad [10], due to the extra perturbation step. Hence, SAM requires one extra forward and backward propagation for each parameter update, resulting in roughly double the computation cost of the classic optimizers. Moreover, as there are two steps during the training process, it needs twice as many hyper-parameters, which makes the learning rate tuning unbearable and costly. Adaptive learning rate optimization methods [11] scale the gradients based on the historical gradient information to accelerate convergence by tuning the learning rate automatically. Such methods, e.g. Adagrad [12], Adam [9], and AMSGrad [10], have been proposed for solving computer vision, natural language processing, and generative neural network tasks [11, 13, 14, 15]. Recently, several works have tried to ease the learning rate tuning in SAM by inheriting the triple advantages of SAM, adaptive learning rate, and momentum acceleration. For example, [16] and [17] train ViT models and NLP models with adaptive learning rates and momentum acceleration, respectively.
Although remarkable performance has been achieved, their convergence is still unknown since the adaptive learning rate and momentum acceleration are used in SAM. Directly analyzing its convergence is complicated and difficult due to the three coupled steps of optimization, i.e., the adaptive learning rate estimation is coupled with the momentum step and perturbation step of SAM. In this paper, we analyze the convergence rate of SAM with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, in the non-convex stochastic setting. To circumvent the difficulty in the analysis, we develop a novel technique to decouple the three-step training of SAM from the adaptive learning rate and momentum step. The analysis procedure is mainly divided into three parts. The first part analyzes the procedure of SAM. Then we analyze the second step, which adopts the adaptive learning rate method. We introduce a second-order momentum term from the previous iteration, which is related to the adaptive learning rate and independent of SAM when taking an expectation. Then we can bound the term composed of the SAM step and the previous second-order momentum, because the adaptive learning rate has a limited range. In the last part, we analyze the momentum acceleration that is combined with SAM and the adaptive learning rate. The momentum acceleration leads to an extra term in the convergence analysis. Here, we introduce an auxiliary sequence to absorb it and show that its summation over all iterations is controllable. We prove that AdaSAM enjoys a linear speedup property with respect to the batch size, i.e., a \(\mathcal{O}(1/\sqrt{bT})\) rate, where \(b\) is the mini-batch size. Empirically, we apply AdaSAM to train the RoBERTa model on the GLUE benchmark to evaluate our theoretical findings. We show that AdaSAM achieves the best performance in experiments, where it wins 6 of the 8 tasks, and the linear speedup can be clearly observed. In the end, we summarize our contributions as follows: * We present the first convergence guarantee of the adaptive SAM method with momentum acceleration under the stochastic non-convex setting. Our results suggest that a large mini-batch can help convergence due to the established linear speedup with respect to batch size. * We conduct a series of experiments on various tasks. The results show that AdaSAM outperforms most of the state-of-the-art optimizers and the linear speedup is verified. ## II Preliminary and Related Work In this section, we first describe the basic problem setup and then introduce several related works on the SAM, adaptive learning rate and momentum steps. ### _Problem Setup_ In this work, we focus on stochastic nonconvex optimization \[\min_{x\in\mathbb{R}^{d}}f(x):=\mathbb{E}_{\xi\sim D}f_{\xi}(x), \tag{1}\] where \(d\) is the dimension of the variable \(x\), \(D\) is the unknown distribution of the data samples, \(f_{\xi}(x)\) is a smooth and possibly non-convex function, and \(f_{\xi_{i}}(x)\) denotes the objective function at the sampled data point \(\xi_{i}\) according to the data distribution \(D\). In machine learning, it covers empirical risk minimization as a special case, and \(f\) is the loss function when the dataset \(D\) covers \(N\) data points, i.e., \(D=\{\xi_{i},i=1,2,\ldots,N\}\). Problem (1) reduces to the following finite-sum problem: \[\min_{x\in\mathbb{R}^{d}}f(x):=\frac{1}{N}\sum_{i}f_{\xi_{i}}(x).
\tag{2}\] Notations.Without additional declaration, we represent \(f_{i}(x)\) as \(f_{\xi_{i}}(x)\) for simplification, which is the \(i\)-th loss function while \(x\in\mathbb{R}^{d}\) is the model parameter and \(d\) is the parameter dimension. We denote the \(l_{2}\) norm as \(\|\cdot\|_{2}\). A Hadamard product is denoted as \(a\odot b\) where \(a\),\(b\) are two vectors. For a vector \(a\in\mathbb{R}^{d}\), \(\sqrt{a}\) is denoted as a vector that the \(j\)-th value, \((\sqrt{a})_{(j)}\), is equal to the square root of \(a_{j}\). ### _Related Work_ Sharpness-aware minimizationMany works try to improve the generalization ability during training the deep learning model. Some methods such as dropout [18], weight decay [19], and regularization methods [20, 21] provide an explicit way to improve generalization. Previous work shows that sharp minima may lead to poor generalization whereas flat minima perform better [22, 23, 24]. Therefore, it is popular to consider sharpness to be closely related to the generalization. Sharpness-aware minimization (SAM) [1] targets to find flat minimizers explicitly by minimizing the training loss uniformly in the entire neighborhood. Specifically, SAM aims to solve the following minimax saddle point problem: \[\min_{x}\max_{\|\delta\|\leq\rho}f(x+\delta)+\lambda\|x\|_{2}^{2}, \tag{3}\] where \(\rho\geq 0\) and \(\lambda\geq 0\) are two hyperparameters. That is, the perturbed loss function of \(f(x)\) in a neighborhood is minimized instead of the original loss function \(f(x)\). By using Taylor expansion of \(f(x+\delta)\) with respect to \(\delta\), the inner max problem is approximately solved via \[\delta^{*}(x) =\operatorname*{arg\,max}_{\|\delta\|\leq\rho}f(x+\delta)\] \[\approx\operatorname*{arg\,max}_{\|\delta\|\leq\rho}f(x)+\delta^ {\top}\nabla f(x)\] \[=\operatorname*{arg\,max}_{\|\delta\|\leq\rho}\delta^{\top} \nabla f(x)=\rho\frac{\nabla f(x)}{\|\nabla f(x)\|}.\] By dropping the quadratic term, (3) is simplified as the following minimization problem \[\min_{x}f\left(x+\rho\frac{\nabla f(x)}{\|\nabla f(x)\|}\right). \tag{4}\] The stochastic gradient of \(f\left(x+\rho\frac{\nabla f(x)}{\|\nabla f(x)\|}\right)\) on a batch data \(b\) includes the Hessian-vector product, SAM further approximates the gradient by \[\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}\right) \approx\nabla_{x}f_{b}(x)\big{|}_{x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b }(x)\|}}.\] Then, along the negative direction \(-\nabla_{x}f_{b}(x)\big{|}_{x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}}\), SGD is applied to solve the surrogate minimization problem (4). It is easy to see that SAM requires twice gradient back-propagation, i.e., \(\nabla f_{b}(x)\) and \(\nabla_{x}f_{b}(x)\big{|}_{x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}}\). Due to the existence of hyperparameter \(\rho\), one needs to carefully tune both \(\rho\) and learning rate in SAM. In practice, \(\rho\) is predefined to control the radius of the neighborhood. Recently, Several variants of SAM are proposed to improve its performance. For example, [16, 8, 17] have empirically incorporated adaptive learning rate with SAM and shown impressive generalization accuracy, while their convergence analysis has never been studied. ESAM [25] proposes an efficient method by sparsifying the gradients to alleviate the double computation cost of backpropagation. ASAAM [17] modifies SAM by adaptively scaling the neighborhood so that the sharpness is invariant to parameters re-scaling. 
GSAM [16] simultaneously minimizes the perturbed function and a newly defined surrogate gap function to further improve the flatness of minimizers. Liu et al. [26] also study SAM in the large-batch training scenario and periodically update the perturbed gradient. Recently, [3, 8] improve the efficiency of SAM by adopting a sparse gradient perturbation technique. [27, 28] extend SAM to the federated learning setting with a significant performance gain. On the other hand, there are some works analyzing the convergence of SAM, such as [29], without considering the normalization step, i.e., the normalization in \(\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}\). Adaptive optimizer. Adaptive optimizers can automatically adjust the learning rate based on the history of gradients. The first adaptive method, Adagrad [12], can achieve a better result than other first-order methods under the convex setting. When training deep neural networks, however, Adagrad decreases the learning rate rapidly, which degrades performance. Adadelta [30] is proposed to remedy this situation and introduces a learning rate based on an exponential average of the history gradients. Adam [9] additionally adds a momentum step to stabilize the training process, and it shows great performance in many tasks. However, Reddi et al. [10] give a counterexample showing that it cannot converge even when the objective function is convex, and propose an alternative method called AMSGrad with a convergence guarantee. Then, many works [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] have been proposed to study the convergence of adaptive methods and their variants in the nonconvex setting. However, their analysis techniques cannot be directly extended to establish the convergence of SAM with an adaptive learning rate due to the coupled perturbation step and adaptive learning rate. Momentum acceleration. Momentum methods such as Polyak's heavy ball method [45], Nesterov's accelerated gradient descent method [46] and the accelerated projected method [47] are used to optimize the parameters of deep neural networks. In practice, they have been used for acceleration in federated learning tasks [48], non-negative latent factor models [49] and recommender systems [50]. There are many theoretical works [51, 52, 53] that focus on analyzing momentum acceleration for optimizing non-convex problems. [54] shows that tuning the momentum is important when training deep neural networks. [55] first points out linear convergence results for the stochastic momentum method. [56] proposes a class of accelerated zeroth-order and first-order momentum methods to solve mini-optimization and minimax-optimization problems. [57] extend the momentum method by introducing an RNA scheme and a constrained formulation of RNA which has non-linear updates. [58] propose a heuristic adaptive restart method and [59] propose a scheduled restart momentum accelerated SGD method named SRSGD which helps reduce the training time. [60] adds a momentum term onto the distributed gradient algorithm. ## III Methodology In this section, we introduce SAM with adaptive learning rate and momentum acceleration, dubbed AdaSAM, to stabilize the training process of SAM and ease the learning rate tuning. Then, we present the convergence results of AdaSAM. At last, we give the proof sketch for the main theorem. ### _AdaSAM Algorithm_ AdaSAM for solving Problem (1) is described in Algorithm 1.
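As a reading aid for Algorithm 1 below, one AdaSAM iteration could be sketched in PyTorch-style pseudocode roughly as follows. This is only an illustrative sketch under simplifying assumptions (a single parameter tensor that requires gradients, a generic scalar-valued `loss_fn(x, batch)`, no weight decay); it is not the authors' released implementation.

```python
import torch

def adasam_step(x, state, loss_fn, batch, gamma, rho, beta1=0.9, beta2=0.999):
    """One AdaSAM iteration on a single parameter tensor x (illustrative only).

    state holds m, v, v_hat; initially m = 0 and v = v_hat = eps**2
    for a small eps > 0, as in Algorithm 1.
    """
    # Perturbation step: gradient s_t at the current point x_t.
    s, = torch.autograd.grad(loss_fn(x, batch), x)
    delta = rho * s / (s.norm() + 1e-12)          # delta(x_t) = rho * s_t / ||s_t||

    # SAM gradient g_t at the perturbed point x_t + delta(x_t).
    x_adv = (x + delta).detach().requires_grad_(True)
    g, = torch.autograd.grad(loss_fn(x_adv, batch), x_adv)

    # Momentum and AMSGrad-style second moment.
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2
    state["v_hat"] = torch.maximum(state["v_hat"], state["v"])
    eta = 1.0 / torch.sqrt(state["v_hat"])

    # Update: x_{t+1} = x_t - gamma * (m_t elementwise-times eta_t).
    x_new = (x - gamma * state["m"] * eta).detach().requires_grad_(True)
    return x_new, state
```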
In each iteration, a mini-batch gradient estimation \(g_{t}\) at the point \(x_{t}+\delta(x_{t})\) with batchsize \(b\) is computed, i.e., \[g_{t}=\nabla_{x}f_{b}(x)|_{x_{t}+\delta(x_{t})}=\frac{1}{b}\sum_{i\in B}\nabla f_{\xi_{t_{i}}}(x_{t}+\delta(x_{t})).\] Here, \(\delta(x_{t})\) is the extra perturbation step of SAM, which is given as follows \[\delta(x_{t})=\rho\frac{s_{t}}{\|s_{t}\|},\ \mathrm{where}\ s_{t}=\nabla_{x}f_{b}(x)|_{x_{t}}=\frac{1}{b}\sum_{i\in B}\nabla f_{\xi_{t_{i}}}(x_{t}).\] Then, the momentum term of \(g_{t}\) and the second-order moment term \([g_{t}]^{2}\) are accumulatively computed as \(m_{t}\) and \(v_{t}\), respectively. AdaSAM then updates the iterate along \(-m_{t}\) with the adaptive learning rate \(\gamma\eta_{t}\). **Remark 1**.: _Below, we give several comments on AdaSAM:_ * _When \(\beta_{2}=1\), the adaptive learning rate reduces to the diminishing one used in SGD, and AdaSAM recovers the classic SAM optimizer._ * _If we drop the 8-th line \(\hat{v}_{t}=\max(\hat{v}_{t-1},v_{t})\), then our algorithm becomes a variant of Adam. The counterexample in [10] showing that Adam does not converge also holds for this SAM variant, while AdaSAM can converge._
```
Input: Initial parameters \(x_{0}\), \(m_{-1}=0\), \(\hat{v}_{-1}=\epsilon^{2}\) (a small positive scalar to avoid the denominator diminishing), base learning rate \(\gamma\), neighborhood size \(\rho\) and momentum parameters \(\beta_{1}\), \(\beta_{2}\).
Output: Optimized parameter \(x_{T+1}\)
1  for iteration \(t\in\{0,1,2,...,T-1\}\) do
2    Sample mini-batch \(B=\{\xi_{t_{1}},\xi_{t_{2}},...,\xi_{t_{|B|}}\}\);
3    Compute gradient \(s_{t}=\nabla_{x}f_{B}(x)|_{x_{t}}=\frac{1}{b}\sum_{i\in B}\nabla f_{t_{i}}(x_{t})\);
4    Compute \(\delta(x_{t})=\rho_{t}\frac{s_{t}}{\|s_{t}\|}\);
5    Compute SAM gradient \(g_{t}=\nabla_{x}f_{B}(x)|_{x_{t}+\delta(x_{t})}\);
6    \(m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}\);
7    \(v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})[g_{t}]^{2}\);
8    \(\hat{v}_{t}=\max(\hat{v}_{t-1},v_{t})\);
9    \(\eta_{t}=1/\sqrt{\hat{v}_{t}}\);
10   \(x_{t+1}=x_{t}-\gamma m_{t}\odot\eta_{t}\);
11 end for
```
**Algorithm 1** AdaSAM: SAM with adaptive learning rate and momentum acceleration ### _Convergence Analysis_ Before presenting the convergence results of the AdaSAM algorithm, we first introduce some necessary assumptions. **Assumption 1** (\(L\)-smooth).: _\(f_{i}\) and \(f\) are differentiable with the gradient Lipschitz property: \(\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L\|x-y\|\), \(\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|\), \(\forall x,y\in\mathbb{R}^{d}\), \(i=1,2,...,N\), which also implies the descent inequality, i.e., \(f_{i}(y)\leq f_{i}(x)+\langle\nabla f_{i}(x),y-x\rangle+\frac{L}{2}\|y-x\|^{2}\)._ **Assumption 2** (Bounded variance).: _The estimator of the gradient is unbiased and the variance of the stochastic gradient is bounded, i.e.,_ \[\mathbb{E}\nabla f_{i}(x)=\nabla f(x),\quad\mathbb{E}\|\nabla f_{i}(x)-\nabla f(x)\|^{2}\leq\sigma^{2}.\] _When the mini-batch size \(b\) is used, we have \(\mathbb{E}\|\nabla f_{b}(x)-\nabla f(x)\|^{2}\leq\frac{\sigma^{2}}{b}\)._ **Assumption 3** (**Bounded stochastic gradients**).: _The stochastic gradient is uniformly bounded, i.e.,_ \[\left\|\nabla f_{i}(x)\right\|_{\infty}\leq G,\ \text{for any}\ i=1,\ldots,N.\] **Remark 2**.: _The above assumptions are commonly used in the proof of convergence for adaptive stochastic gradient methods such as [31, 32, 61, 62]._ Below, we briefly explain the main idea of analyzing the convergence of the AdaSAM algorithm.
First, we discuss the difficulty of applying the adaptive learning rate to SAM. We notice that the main step involving the adaptive learning rate in the convergence analysis is to estimate the expectation \(\mathbb{E}[x_{t+1}-x_{t}]=-\gamma\mathbb{E}m_{t}\odot\eta_{t}=-\gamma\mathbb{E}(1-\beta_{1})g_{t}\odot\eta_{t}-\gamma\mathbb{E}\beta_{1}m_{t-1}\odot\eta_{t}\), which is conditioned on the filtration \(\sigma(x_{t})\). In this part, we first consider the situation \(\beta_{1}=0\), which excludes the momentum. Then, we apply a delay technique to disentangle the dependence between \(g_{t}\) and \(\eta_{t}\), that is, \[\mathbb{E}g_{t}\odot\eta_{t} =\mathbb{E}[g_{t}\odot\eta_{t-1}]+\mathbb{E}[g_{t}\odot(\eta_{t}-\eta_{t-1})]\] \[=\nabla f(x_{t})\odot\eta_{t-1}+\mathbb{E}[g_{t}\odot(\eta_{t}-\eta_{t-1})].\] The second term \(\mathbb{E}[g_{t}\odot(\eta_{t}-\eta_{t-1})]\) is dominated by the first term \(\nabla f(x_{t})\odot\eta_{t-1}\). Then, it is not difficult to obtain the convergence result of stochastic gradient descent with an adaptive learning rate, such as AMSGrad. However, when we apply the same strategy to AdaSAM, we find that \(\mathbb{E}g_{t}\odot\eta_{t-1}\) cannot be handled similarly because \(\mathbb{E}g_{t}=\mathbb{E}\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\left\|\nabla f_{b}(x)\right\|}\right)\neq\nabla f(x_{t})\). Inspired by [29, Lemma 16], our key observation is that \[\mathbb{E}\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\left\|\nabla f_{b}(x)\right\|}\right) \approx\mathbb{E}\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f(x)}{\left\|\nabla f(x)\right\|}\right)\] \[=\nabla_{x}f\left(x+\rho\frac{\nabla f(x)}{\left\|\nabla f(x)\right\|}\right)\] and we prove that the other terms, such as \(\mathbb{E}\left(\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\left\|\nabla f_{b}(x)\right\|}\right)-\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f(x)}{\left\|\nabla f(x)\right\|}\right)\right)\odot\eta_{t-1}\), have small values that do not dominate the convergence rate. On the other hand, when we apply the momentum steps, we find that the term \(\mathbb{E}m_{t-1}\odot\eta_{t}\) cannot be ignored. By introducing an auxiliary sequence \(z_{t}=x_{t}+\frac{\beta_{1}}{1-\beta_{1}}(x_{t}-x_{t-1})\), we have \(\mathbb{E}[z_{t+1}-z_{t}]=\mathbb{E}[\frac{\beta_{1}}{1-\beta_{1}}\gamma m_{t-1}\odot(\eta_{t-1}-\eta_{t})-\gamma g_{t}\odot\eta_{t}]\). The first term contains the momentum term, which has a small value because it involves the difference of the adaptive learning rates \(\eta_{t-1}-\eta_{t}\). Thus, it is diminishing without hurting the convergence rate.
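For completeness, the identity for \(z_{t+1}-z_{t}\) used above (and in (9) below) can be derived directly from the update rule of Algorithm 1; we record the short computation here as a reading aid. Since \(z_{t+1}-z_{t}=\frac{1}{1-\beta_{1}}(x_{t+1}-x_{t})-\frac{\beta_{1}}{1-\beta_{1}}(x_{t}-x_{t-1})\) and \(x_{t+1}-x_{t}=-\gamma m_{t}\odot\eta_{t}\), we have \[z_{t+1}-z_{t}=-\gamma\Big(\frac{1}{1-\beta_{1}}m_{t}\odot\eta_{t}-\frac{\beta_{1}}{1-\beta_{1}}m_{t-1}\odot\eta_{t-1}\Big).\] Substituting \(\frac{1}{1-\beta_{1}}m_{t}=\frac{\beta_{1}}{1-\beta_{1}}m_{t-1}+g_{t}\), which follows from \(m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}\), gives \[z_{t+1}-z_{t}=\frac{\gamma\beta_{1}}{1-\beta_{1}}m_{t-1}\odot(\eta_{t-1}-\eta_{t})-\gamma g_{t}\odot\eta_{t}.\]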
**Theorem 1**.: _Under Assumptions 1, 2 and 3, and with \(\gamma\) a fixed number satisfying \(\gamma\leq\frac{\epsilon}{16L}\), for the sequence \(\{x_{t}\}\) generated by Algorithm 1 we have the following convergence rate_ \[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|_{2}^{2}\leq\frac{2G(f(x_{0})-f^{*})}{\gamma T}+\frac{8G\gamma L}{\epsilon}\frac{\sigma^{2}}{b\epsilon}+\Phi \tag{5}\] _where_ \[\Phi=\frac{45GL^{2}\rho_{t}^{2}}{\epsilon}+\frac{2G^{3}}{(1-\beta_{1})T}d(\frac{1}{\epsilon}-\frac{1}{G})+\frac{6\gamma^{2}L^{2}\beta_{1}^{2}}{(1-\beta_{1})^{2}}\frac{dG^{3}}{\epsilon^{3}}\] \[+\frac{2(4+(\frac{\beta_{1}}{1-\beta_{1}})^{2})\gamma LG^{3}}{\epsilon}d(\epsilon^{-2}-G^{-2})+\frac{8G\gamma L}{\epsilon}\frac{L\rho_{t}^{2}}{\epsilon}, \tag{6}\] _in which \(T\) is the number of iterations, \(f^{*}\) is the minimal value of the function \(f\), \(\gamma\) is the base learning rate, \(b\) is the mini-batch size, and \(d\) is the dimension of the parameter \(x\). \(\beta_{1}\), \(G\), \(L\), \(\epsilon\), \(\sigma^{2}\), \(d\) are fixed constants._ Theorem 1 characterizes the convergence rate of the sequence \(\{x_{t}\}\) generated by AdaSAM with respect to the stochastic gradient residual. The first two terms on the right hand side of Inequality (5) dominate the convergence rate. Compared with the first two terms, \(\Phi\) is small when we set the neighborhood size \(\rho\) and the learning rate \(\gamma\) to small values related to a large iteration number \(T\). Then, we obtain the following corollary directly. **Corollary 1** (**Mini-batch linear speedup**).: _Under the same conditions as in Theorem 1, when we choose the base learning rate \(\gamma=O(\sqrt{\frac{b}{T}})\) and neighborhood size \(\rho=O(\sqrt{\frac{1}{bT}})\), the following result holds:_ \[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|_{2}^{2} =O\left(\frac{1}{\sqrt{bT}}\right)+O\left(\frac{1}{bT}\right)+O\left(\frac{1}{T}\right)\] \[+O\left(\frac{1}{b^{\frac{1}{2}}T^{\frac{3}{2}}}\right)+O\left(\frac{b^{\frac{1}{2}}}{T^{\frac{3}{2}}}\right)+O\left(\frac{b}{T}\right).\] _When \(T\) is sufficiently large, we achieve the linear speedup convergence rate with respect to the mini-batch size \(b\), i.e.,_ \[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|_{2}^{2}=O\left(\frac{1}{\sqrt{bT}}\right). \tag{7}\] **Remark 3**.: _Two comments are given about the above results:_ * _To reach an \(O(\delta)\) stationary point, when the batch size is 1, it needs \(T=O(\frac{1}{\delta^{2}})\) iterations. When the batch size is \(b\), we need to run \(T=O(\frac{1}{b\delta^{2}})\) steps. The method with batch size \(b\) is thus \(b\) times faster than with batch size 1, which means that it has the mini-batch linear speedup property._ * _According to [37, 63, 64], AdaSAM can be extended to the distributed setting and achieves a linear speedup property with respect to the number of workers in the Parameter-Server setting._ ### _Proof Sketch_ In this part, we give the proof sketch of Theorem 1. For the complete proof, please see the Appendix. Below, we first introduce an auxiliary sequence \(z_{t}=x_{t}+\frac{\beta_{1}}{1-\beta_{1}}(x_{t}-x_{t-1})\). By applying the \(L\)-smooth condition, we have \[f(z_{t+1})\leq f(z_{t})+\langle\nabla f(z_{t}),z_{t+1}-z_{t}\rangle+\frac{L}{2}\|z_{t+1}-z_{t}\|^{2}.
\tag{8}\] Applying it to the sequence \(\{z_{t}\}\) and using the delay strategy yield \[f(z_{t+1})-f(z_{t})\] \[\leq\langle\nabla f(z_{t}),\frac{\gamma\beta_{1}}{1-\beta_{1}}m_{t-1} \odot(\eta_{t-1}-\eta_{t})\rangle+\frac{L}{2}\|z_{t+1}-z_{t}\|^{2}\] \[+\langle\nabla f(z_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i}(x _{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot(\eta_{t-1}-\eta_{t})\rangle\] \[+\langle\nabla f(z_{t})-\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B} \nabla f_{i}(x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle\] \[+\langle\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|})\odot\eta_{t-1}\rangle\] \[+\langle\nabla f(x_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|})\odot\eta_{t-1}\] \[-\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_ {t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle. \tag{9}\] From the Lemma 5, Lemma 6, Lemma 7 in appendix, we can bound the above terms in (III-A) as follows \[\langle\nabla f(z_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot(\eta_{t-1}-\eta_{t})\rangle\] \[\leq\gamma G^{2}\|\eta_{t-1}-\eta_{t}\|_{1}, \tag{10}\] \[\langle\nabla f(z_{t}),\frac{\gamma\beta_{1}}{1-\beta_{1}}m_{t-1} \odot(\eta_{t-1}-\eta_{t})\rangle\] \[\leq\frac{\gamma\beta_{1}}{1-\beta_{1}}G^{2}\|\eta_{t-1}-\eta_{t }\|_{1},\] (11) \[\langle\nabla f(x_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|})\odot\eta_{t-1}\] \[-\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_ {t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle\] \[\leq\frac{\gamma}{2\mu^{2}}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1 }}\|^{2}+\frac{2\mu^{2}\gamma L^{2}\rho_{t}^{2}}{\epsilon}. \tag{12}\] Then we substitute them into the (III-A), and take the conditional expectation to get \[\mathbb{E}f(z_{t+1})-f(z_{t})\] \[\leq\mathbb{E}\langle\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B }\nabla f_{i}(x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|}) \odot\eta_{t-1}\rangle\] \[+\frac{\gamma}{2\mu^{2}}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}} \|^{2}+\frac{\gamma}{1-\beta_{1}}G^{2}\|\eta_{t-1}-\eta_{t}\|_{1}\] \[+\mathbb{E}\langle\nabla f(z_{t})-\nabla f(x_{t}),-\frac{\gamma} {b}\sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot\eta_ {t-1}\rangle\] \[+\frac{2\mu^{2}\gamma L^{2}\rho_{t}^{2}}{\epsilon}+\frac{L}{2} \mathbb{E}\|z_{t+1}-z_{t}\|^{2}, \tag{13}\] where \(\mu>0\) is a constant to be determined. 
Next, from the Lemma 8, Lemma 10 and Lemma 9 in Appendix, we have \[\mathbb{E}\langle\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B} \nabla f_{i}(x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|}) \odot\eta_{t-1}\rangle\] \[\leq-\gamma\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}}\|^{2}+ \mathbb{E}\frac{\gamma}{2\alpha^{2}}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}}\|^ {2}\] \[\quad+\frac{\gamma\alpha^{2}L^{2}\rho_{t}^{2}}{2\epsilon}, \tag{14}\] \[\frac{L}{2}\mathbb{E}\|z_{t+1}-z_{t}\|^{2}\leq\frac{LG^{2}\gamma ^{2}\beta_{1}^{2}}{(1-\beta_{1})^{2}}\mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}\] \[\quad+\gamma^{2}L(3\frac{1+\beta}{\beta\epsilon}(\frac{L\rho_{t}^{ 2}}{\epsilon}+\frac{\sigma^{2}}{b\epsilon}+\mathbb{E}\|\nabla f(x_{t})\odot \sqrt{\eta_{t-1}}\|^{2})\] \[\quad+(1+\beta)G^{2}\mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}),\] (15) \[\mathbb{E}\langle\nabla f(z_{t})-\nabla f(x_{t}),-\frac{\gamma}{b} \sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle\] \[\leq\frac{\gamma^{3}L^{2}\beta_{1}^{2}}{2\epsilon(1-\beta_{1})^{ 2}}(\frac{1}{\lambda_{1}^{2}}+\frac{1}{\lambda_{2}^{2}}+\frac{1}{\lambda_{3}^{ 2}})\frac{dG_{\infty}^{2}}{\epsilon^{2}}+\frac{\gamma L^{2}\rho_{t}^{2}}{2 \epsilon}(\lambda_{2}^{2}+4\lambda_{3}^{2})\] \[\quad+\frac{\gamma\lambda_{1}^{2}}{2}\|\nabla f(x_{t})\odot\sqrt{ \eta_{t-1}}\|^{2}. \tag{16}\] Next, we substitute it into the (III-A). Taking the expectation over all history information yields \[\mathbb{E}f(x_{t+1})-\mathbb{E}f(x_{t})\] \[\leq-\gamma(1-\frac{1}{2\mu^{2}}-\frac{1}{2\alpha^{2}}-\frac{3 \gamma L(1+\beta)}{\beta\epsilon}-\frac{\lambda_{1}^{2}}{2})\mathbb{E}\|\nabla f (x_{t})\odot\sqrt{\eta_{t-1}}\|^{2}\] \[+\frac{2\mu^{2}\gamma L^{2}\rho_{t}^{2}}{\epsilon}+\frac{\gamma}{1 -\beta_{1}}G^{2}\mathbb{E}\|\eta_{t-1}-\eta_{t}\|_{1}+\frac{\gamma\alpha^{2}L^{2 }\rho^{2}}{2\epsilon}\] \[+\frac{\gamma^{3}L^{2}\beta_{1}^{2}}{2\epsilon(1-\beta_{1})^{2}}( \frac{1}{\lambda_{1}^{2}}+\frac{1}{\lambda_{2}^{2}}+\frac{1}{\lambda_{3}^{3}}) \frac{dG_{\infty}^{2}}{\epsilon^{2}}+\frac{\gamma L^{2}\rho_{t}^{2}}{2\epsilon}( \lambda_{2}^{2}+4\lambda_{3}^{2})\] \[+\gamma^{2}LG^{2}((\frac{\beta_{1}}{1-\beta_{1}})^{2}+1+\beta) \mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}\] \[+\frac{3\gamma^{2}L(1+\beta)}{\beta\epsilon}(\frac{L\rho_{t}^{2}}{ \epsilon}+\frac{\sigma^{2}}{b\epsilon}). \tag{17}\] We set \(\mu^{2}=\alpha^{2}=8\), \(\beta=3\), \(\lambda_{1}^{2}=\frac{1}{4}\), \(\lambda_{2}^{2}=\lambda_{3}^{2}=1\) and we choose \(\frac{2\gamma L}{\epsilon}\leq\frac{1}{8}\). Note that \(\eta_{t}\) is bounded. We have \[\frac{\gamma}{2G}\mathbb{E}\|\nabla f(x_{t})\|^{2}\leq\frac{\gamma}{2} \mathbb{E}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}}\|^{2} \tag{18}\] \[\leq-\mathbb{E}f(x_{t+1})+\mathbb{E}f(x_{t})+\frac{45\gamma L^{2 }\rho_{t}^{2}}{2\epsilon}+\frac{4\gamma^{2}L}{\epsilon}(\frac{L\rho_{t}^{2}}{ \epsilon}+\frac{\sigma^{2}}{b\epsilon})\] \[+\frac{\gamma}{1-\beta_{1}}G^{2}\mathbb{E}\|\eta_{t-1}-\eta_{t }\|_{1}+\frac{3\gamma^{3}L^{2}\beta_{1}^{2}}{(1-\beta_{1})^{2}}\frac{dG_{ \infty}^{2}}{\epsilon^{3}}\] \[+(4+(\frac{\beta_{1}}{1-\beta_{1}})^{2})^{2}\gamma^{2}LG^{2} \mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}. 
\tag{19}\] Then, telescoping it from \(t=0\) to \(t=T-1\), and assuming \(\gamma\) is a constant, it follows that \[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|^{2}\leq\frac{2G(f(x_{0})-f^{*})}{\gamma T}+\frac{8G\gamma L}{\epsilon}\frac{\sigma^{2}}{b\epsilon}\] \[+\frac{45GL^{2}\rho_{t}^{2}}{\epsilon}+\frac{2G^{3}}{(1-\beta_{1})T}d(\frac{1}{\epsilon}-\frac{1}{G})+\frac{6\gamma^{2}L^{2}\beta_{1}^{2}}{(1-\beta_{1})^{2}}\frac{dG^{3}}{\epsilon^{3}}\] \[+\frac{8G\gamma L}{\epsilon}\frac{L\rho_{t}^{2}}{\epsilon}+\frac{2(4+(\frac{\beta_{1}}{1-\beta_{1}})^{2})\gamma LG^{3}}{T}d(\epsilon^{-2}-G^{-2}), \tag{20}\] which completes the proof. ## IV Experiments In this section, we apply AdaSAM to train language models and compare it with SGD, AMSGrad, and SAM to show its effectiveness. Due to space limitations, more experiments, including visualization, task descriptions, implementation details and result descriptions, are placed in the Appendix. ### _Experimental Setup_ **Tasks and Datasets.** We evaluate AdaSAM on a popular benchmark, _i.e._ the General Language Understanding Evaluation (GLUE) benchmark [65], which consists of several language understanding tasks including sentiment analysis, question answering and textual entailment. For a fair comparison, we report the results based on single-task training, without multi-task or ensemble training. We evaluate the performance with the Accuracy ("_Acc_") metric for most tasks, except the F1 scores for QQP and MRPC, the Pearson-Spearman correlations ("_Pcor/Scor_") for STS-B and the Matthews correlation ("_Mcc_") for CoLA. Higher values of these metrics indicate better performance. **Implementations.** We conduct our experiments using a widely-used pre-trained language model, RoBERTa-large1, in the open-source toolkit fairseq2, with 24 transformer layers and a hidden size of 1024. For fine-tuning on each task, we use different combinations of hyper-parameters, including the learning rate, the number of epochs, the batch size, _etc_3. Fig. 1: The loss and evaluation metric v.s. steps on MRPC, RTE, CoLA, SST-2, STS-B, MNLI, QQP, and QNLI (\(\beta_{1}=0.9\)). In particular, for RTE, STS-B and MRPC of the GLUE benchmark, we first fine-tune the pre-trained RoBERTa-large model on the MNLI dataset and continue fine-tuning the RoBERTa-large-MNLI model on the corresponding single-task corpus for better performance, as many prior works did [66, 7]. All models are trained on an NVIDIA DGX SuperPOD cluster, in which each machine contains 8\(\times\)40GB A100 GPUs. Footnote 3: Due to the space limitation, we show the details of the dataset and training setting in Appendix A. ### _Results on GLUE Benchmark_ Table I shows the performance of SGD, SAM, AMSGrad, and AdaSAM. For AdaSAM, we tune the neighborhood size of the perturbation parameter over 0.01, 0.005, and 0.001. The results show that AdaSAM outperforms AMSGrad on 6 of the 8 tasks, except for QNLI and QQP. Overall, it improves the average score by 0.28 over AMSGrad. On the other hand, Table I indicates that SAM is better than SGD on 7 of the 8 tasks, except for RTE, and SAM can significantly improve performance. Comparing the results in Table I, we can find that the adaptive learning rate method is better than SGD tuned with a handcrafted learning rate. AdaSAM achieves the best metric on 6 tasks, namely CoLA, SST-2, MRPC, STS-B, RTE, QNLI, and MNLI. In general, AdaSAM is better than the other methods.
In addition, Figure 1 shows the convergence speed in terms of the loss and evaluation metrics vs. the number of steps during training. The loss curve of AdaSAM decreases faster than those of SAM and SGD in all tasks, and it has a similar decreasing speed to that of AMSGrad. The evaluation metric curves of AdaSAM and AMSGrad show that AdaSAM is better than SGD and SAM and decreases the loss as fast as AMSGrad in all tasks. ### _Mini-batch Speedup_ In this part, we test the performance with different batch sizes to validate the linear speedup property. The experiments are conducted on the MRPC, RTE, and CoLA tasks. The batch size is set to 4, 8, 16, and 32, respectively. We scale the learning rate as \(\sqrt{N}\), similar to [67], where \(N\) is the batch size. The results show that the training loss decreases faster as the batch size increases, and the loss curve with batch size 32 needs nearly half as many iterations as the curve with batch size 16. ### _Ablation Study_ In this subsection, we conduct experiments in which the momentum hyper-parameter \(\beta_{1}\) is set to 0 to evaluate the influence of the momentum acceleration and the adaptive learning rate. Table II shows that AdaSAM outperforms AMSGrad on 6 of the 8 tasks, except for SST-2 and RTE. In Table II, we also compare SGD and SAM, and without the momentum, SAM outperforms SGD on all tasks. Under this situation, AdaSAM without the momentum acceleration is better than the other methods. When comparing the results of Table I and Table II, we find that both the adaptive learning rate method and the momentum acceleration are helpful for the model's generalization ability. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **CoLA** & **SST-2** & **MRPC** & **STS-B** & **RTE** & **MNLI** & **QNLI** & **QQP** & **Avg.** \\ & mcc. & Acc. & Acc./F1 & Pcor./Scor. & Acc. & m/mm. & Acc. & F1/ Acc. & \\ \hline SGD & 0 & 51.722 & 68.38/ 81.22 & 5.55/ 7.2 & 51.27 & 32.51/ 32.42 & 53.32 & 0 / 63.18 & 37.23 \\ SAM(\(\rho=\)0.01) & 41.91 & 95.3 & 68.38/ 81.22 & 9.21/ 10.38 & 53.07 & 87.99/ 87.8 & 51.24 & 83.44/ 87.27 & 63.1 \\ SAM(\(\rho=\)0.005) & 58.79 & 81.54 & 68.38/ 81.22 & 13.52/ 16.6 & 53.79 & 88.42/ 88.15 & 92.95 & 83.84/ 87.7 & 67.91 \\ SAM(best) & 58.79 & 95.3 & 68.38/ 81.22 & 13.52/ 16.6 & 53.79 & 88.42/ 88.15 & 92.95 & 83.84/ 87.7 & 67.90 \\ AMSGrad & 63.78 & 96.44 & 89.71/ 92.44 & 89.98/ 90.35 & 87.36 & 90.65/ 90.35 & 94.53 & 88.59/ 91.27 & 88.79 \\ \hline AdaSAM(\(\rho=\)0.01) & 69.23 & 96.22 & 89.96/ 92.84 & 88.83/ 89.07 & 87 & 90.83/ 90.41 & 94.8 & 88.67/ 91.38 & 89.1 \\ AdaSAM(\(\rho=\)0.005) & 68.47 & 96.22 & 89.96/ 92.82 & 91.59/ 91.22 & 73.65 & 90.75/ 90.42 & 94.73 & 88.72/ 91.46 & 88.33 \\ AdaSAM(best) & 69.23 & 96.22 & 89.96/ 92.84 & 91.59/ 91.22 & 87 & 90.83/ 90.42 & 94.8 & 88.72/ 91.46 & 89.52 \\ \hline \hline \end{tabular} \end{table} TABLE II: Results of SGD, SAM, AMSGrad and AdaSAM on the GLUE benchmark without momentum, i.e., \(\beta_{1}=0\). Fig. 2: The linear speedup verification of AdaSAM with batch sizes of 4, 8, 16, and 32. When there is no momentum term, SAM with an adaptive learning rate improves the average score by 0.74 over AMSGrad. With a momentum term, AdaSAM improves the average score by 0.28 over AMSGrad. This shows that the adaptive method can improve the performance with or without momentum acceleration, and it achieves the best performance with momentum acceleration.
We can also find that momentum acceleration improves the performance of SAM, AMSGrad and AdaSAM. ## V Conclusion In this work, we study the convergence rate of the sharpness-aware minimization optimizer with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, in the stochastic non-convex setting. To the best of our knowledge, we are the first to provide the non-trivial \(\mathcal{O}(1/\sqrt{bT})\) convergence rate of AdaSAM, which achieves a linear speedup property with respect to the mini-batch size \(b\). We have conducted extensive experiments on several NLP tasks, which verify that AdaSAM can achieve superior performance compared with the AMSGrad and SAM optimizers. Future work includes extending AdaSAM to the distributed setting and reducing the cost of the two gradient back-propagations.
2302.01552
Self-similar quantum groups
We introduce the notion of self-similarity for compact quantum groups. For a finite set $X$, we introduce a $C^*$-algebra $\mathbb{A}_X$, which is the quantum automorphism group of the infinite homogeneous rooted tree $X^*$. Self-similar quantum groups are then certain quantum subgroups of $\mathbb{A}_X$. Our main class of examples are called finitely-constrained self-similar quantum groups, and we find a class of these examples that can be described as quantum wreath products by subgroups of the quantum permutation group.
Nathan Brownlowe, David Robertson
2023-02-03T05:21:08Z
http://arxiv.org/abs/2302.01552v1
# Self-similar quantum groups ###### Abstract. We introduce the notion of self-similarity for compact quantum groups. For a finite set \(X\), we introduce a \(C^{*}\)-algebra \(\mathbb{A}_{X}\), which is the quantum automorphism group of the infinite homogeneous rooted tree \(X^{*}\). Self-similar quantum groups are then certain quantum subgroups of \(\mathbb{A}_{X}\). Our main class of examples are called finitely-constrained self-similar quantum groups, and we find a class of these examples that can be described as quantum wreath products by subgroups of the quantum permutation group. Brownlowe was supported by the Australian Research Council grant DP200100155, and both authors were supported by the Sydney Mathematical Research Institute. groups in [11]. Wang was motivated by one of Connes' questions from his noncommutative geometry program: what is the _quantum_ automorphism group of a space? Wang's work in [11] provided an answer for finite spaces; in particular, Wang formally defined the notion of a quantum automorphism group, and then showed that his quantum permutation group \(A_{s}(n)\) is the quantum automorphism group of the space with \(n\) points. For three or fewer points this algebra is commutative, and hence indicating no quantum permutations; but for four or more points, remarkably the algebra is noncommutative and infinite-dimensional. Since the appearance of [11], follow-up work progressed in multiple directions, including the results of Bichon in [2] in which he introduced quantum automorphisms of finite graphs. These algebras are quantum subgroups of the quantum permutation groups. Bichon used this construction to define the quantum dihedral group \(D_{4}\). Later still in [1], Banica and Bichon classified all the compact quantum groups acting on four points; that is, all the compact quantum subgroups of \(A_{s}(4)\). Quantum automorphisms of infinite graphs have recently been considered by Rollier and Vaes in [8], and by Voigt in [15]. Our current work is the result of us asking the question: is there a reasonable notion of self-similarity for _quantum_ groups? We answer this question in the affirmative for compact quantum groups. We do this by first constructing the quantum automorphism group \(\mathbb{A}_{X}\) of the homogeneous rooted tree \(X^{*}\), and then identifying the quantum analogue of the restriction maps \(g\mapsto g|_{w}\) for \(g\in\operatorname{Aut}(X^{*})\), \(w\in X^{*}\). We then define a self-similar quantum group to be any quantum subgroup \(A\) of \(\mathbb{A}_{X}\) for which the restriction maps factor through the quotient map \(\mathbb{A}_{X}\to A\). We characterise self-similar quantum groups in terms of a certain homomorphism \(A\otimes C(X)\to C(X)\otimes A\), which can be thought of as quantum state-transition function. The main class of examples we examine are quantum analogues of finitely constrained self-similar groups. In our main theorem about these examples we describe a class of finitely constrained self-similar groups as free wreath products by quantum subgroups of quantum permutation groups. We start with a small preliminaries section in which we collect all the required definitions from the literature on compact quantum groups. In Section 3 we then identify a compact quantum group \(\mathbb{A}_{X}\) which we prove is the quantum automorphism group of the homogeneous rooted tree \(X^{*}\). 
The \(C^{*}\)-algebra \(\mathbb{A}_{X}\) is a noncommutative, infinite-dimensional \(C^{*}\)-algebra whose abelianisation is the algebra of continuous functions on the automorphism group of the tree \(X^{*}\). In Section 4 we introduce the notion of self-similarity for compact quantum groups, and we characterise self-similar quantum groups \(A\) in terms of morphisms \(A\otimes C(X)\to C(X)\otimes A\), mimicking the fact that classical self-similar actions are governed by the maps \(G\times X\to X\times G\colon(g,x)\mapsto(g\cdot x,g|_{x})\). In Section 5 we define finitely constrained self-similar quantum groups, which are the quantum analogues of the classical finitely constrained self-similar groups studied in [4, 9]. In particular, we consider subalgebras \(\mathbb{A}_{d}\) of \(\mathbb{A}_{X}\), which are the quantum automorphism groups of the finite subtrees \(X^{[d]}\) of \(X^{*}\) of depth \(d\). To each quantum subgroup \(\mathbb{P}\) of \(\mathbb{A}_{d}\), we construct a quantum subgroup \(A_{\mathbb{P}}\), which we prove is a self-similar quantum group. We then build on the work of Bichon in [3] by constructing free wreath products of compact quantum groups by quantum subgroups of the quantum permutation group (which corresponds to the subalgebra \(\mathbb{A}_{1}\) of \(\mathbb{A}_{X}\)), and we prove that every \(A_{\mathbb{P}}\) coming from a quantum subgroup \(\mathbb{P}\) of \(\mathbb{A}_{1}\) is canonically isomorphic to the free wreath product \(A_{\mathbb{P}}*_{w}\mathbb{P}\). ## 2. Preliminaries In this section we collect some basics on compact quantum groups. We start with Woronowicz's definition of a compact quantum group [14]. **Definition 2.1**.: A _compact quantum group_ is a pair \((A,\Phi)\) where \(A\) is a unital \(C^{*}\)-algebra and \(\Phi:A\to A\otimes A\) is a unital \(*\)-homomorphism such that 1. \((\Phi\otimes\mathrm{id})\Phi=(\mathrm{id}\otimes\Phi)\Phi\), 2. \(\overline{(A\otimes 1)\Phi(A)}=A\otimes A=\overline{(1\otimes A)\Phi(A)}\). We call \(\Phi\) the _comultiplication_ and (1) is called _coassociativity_. _Remark 2.2_.: It is proved in [14] that \((A,\Phi)\) is a compact quantum group if and only if there is a family of matrices \(\{a^{\lambda}=(a^{\lambda}_{i,j})\in M_{d_{\lambda}}(A):\lambda\in\Lambda\}\) for some indexing set \(\Lambda\) such that 1. \(\Phi(a^{\lambda}_{i,j})=\sum_{k=1}^{d_{\lambda}}a^{\lambda}_{i,k}\otimes a^{\lambda}_{k,j}\) for all \(\lambda\in\Lambda\) and \(1\leq i,j\leq d_{\lambda}\), 2. \(a^{\lambda}\) and its transpose \((a^{\lambda})^{T}\) are invertible elements of \(M_{d_{\lambda}}(A)\) for every \(\lambda\in\Lambda\), 3. the \(*\)-subalgebra \(\mathcal{A}\) of \(A\) generated by the entries \(\{a^{\lambda}_{i,j}:1\leq i,j\leq d_{\lambda},\lambda\in\Lambda\}\) is dense in \(A\). _Example 2.3_.: A key example for us is Wang's _quantum permutation groups_ \((A_{s}(n),\Phi)\) from [11]. Here, \(n\) is a positive integer, and \(A_{s}(n)\) is the universal \(C^{*}\)-algebra generated by elements \(a_{ij}\), \(1\leq i,j\leq n\), satisfying \[a^{2}_{ij}=a_{ij}=a^{*}_{ij}\text{ for all }1\leq i,j\leq n,\] \[\sum_{j=1}^{n}a_{ij}=1\text{ for all }1\leq i\leq n,\] \[\sum_{i=1}^{n}a_{ij}=1\text{ for all }1\leq j\leq n.\] The comultiplication \(\Phi\) satisfies \(\Phi(a_{ij})=\sum_{k=1}^{n}a_{ik}\otimes a_{kj}\) for all \(1\leq i,j\leq n\).
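To make these relations concrete, consider the smallest nontrivial case \(n=2\) as a small worked example (consistent with the fact, recalled in the introduction, that \(A_{s}(n)\) is commutative for three or fewer points). The row and column relations force \(a_{12}=a_{21}=1-a_{11}\) and \(a_{22}=a_{11}\), so writing \(p:=a_{11}\) the generating matrix is \[(a_{ij})=\begin{pmatrix}p&1-p\\ 1-p&p\end{pmatrix},\] and \(A_{s}(2)\) is the universal unital \(C^{*}\)-algebra generated by the single projection \(p\); in particular it is commutative. The comultiplication becomes \[\Phi(p)=a_{11}\otimes a_{11}+a_{12}\otimes a_{21}=p\otimes p+(1-p)\otimes(1-p).\]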
**Definition 2.4**.: If \((A_{1},\Phi_{1})\) and \((A_{2},\Phi_{2})\) are compact quantum groups, then a _morphism_ \(\pi\) from \((A_{1},\Phi_{1})\) to \((A_{2},\Phi_{2})\) is a homomorphism of \(C^{*}\)-algebras \(\pi\colon A_{1}\to A_{2}\) satisfying \((\pi\otimes\pi)\circ\Phi_{1}=\Phi_{2}\circ\pi\). **Definition 2.5**.: Let \((A,\Phi)\) be a compact quantum group. A _Woronowicz ideal_ is an ideal \(I\) of \(A\) such that \(\Phi(I)\subseteq\ker(q\otimes q)\), where \(q\) is the quotient map \(A\to A/I\). Then \((A/I,\Phi^{\prime})\), where \(\Phi^{\prime}\colon A/I\to A/I\otimes A/I\) satisfies \(\Phi^{\prime}\circ q=(q\otimes q)\circ\Phi\), is a compact quantum group called a _quantum subgroup_ of \((A,\Phi)\). **Definition 2.6**.: A _(left) coaction_ of a compact quantum group \((A,\Phi)\) on a unital \(C^{*}\)-algebra \(B\) is a unital \(*\)-homomorphism \(\alpha:B\to A\otimes B\) satisfying 1. \((\mathrm{id}\otimes\alpha)\alpha=(\Phi\otimes\mathrm{id})\alpha\), 2. \(\overline{\alpha(B)(A\otimes 1)}=A\otimes B\). We refer to (1) as the _coaction identity_ and (2) is known as the _Podleś condition_. ## 3. Quantum automorphisms of a homogeneous rooted tree In this section we introduce a compact quantum group \(\mathbb{A}_{X}\) which we prove is the quantum automorphism group of the infinite homogeneous rooted tree \(X^{*}\). We start with the notion of an action of a compact quantum group on \(X^{*}\). Note that for \(n\geq 0\) we write \(X^{n}\) for all the words in \(X\) of length \(n\), and then the tree \(X^{*}\) can be identified with \(\bigcup_{n\geq 0}X^{n}\), where \(X^{0}=\{\varnothing\}\) and \(\varnothing\) is the root of the tree. **Definition 3.1**.: Let \(X\) be a finite set and let \((A,\Phi)\) be a compact quantum group. An action of \(A\) on the homogeneous rooted tree \(X^{*}\) is a system \[\alpha=(\alpha_{n}:C(X^{n})\to A\otimes C(X^{n}))\] of left coactions, such that for any \(m<n\) the diagram commutes, where \(i_{m,n}:C(X^{m})\to C(X^{n})\) is the injective homomorphism satisfying \[i_{m,n}(p_{w})=\sum_{w^{\prime}\in X^{n-m}}p_{ww^{\prime}}.\] We now define the main object of interest in this section, the \(C^{*}\)-algebra \(\mathbb{A}_{X}\), before proving that it is indeed a compact quantum group in Theorem 3.4. At some point in the later stages of this project we became aware of [8], and their notion of the quantum automorphism group \(\operatorname{QAut}\Pi\) of a locally finite connected graph \(\Pi\). A straightforward argument shows that \(\mathbb{A}_{X}\) is \(\operatorname{QAut}\Pi\) for \(\Pi\) the homogeneous rooted tree, but we include the proof of Theorem 3.4 for completeness. **Definition 3.2**.: Let \(X\) be a finite set. Define \(\mathbb{A}_{X}\) to be the universal \(C^{*}\)-algebra generated by elements \(\{a_{u,v}:u,v\in X^{n},n\geq 0\}\) subject to the following relations: 1. \(a_{\varnothing,\varnothing}=1\), 2. for any \(n\geq 0\), \(u,v\in X^{n}\), \(a_{u,v}^{*}=a_{u,v}^{2}=a_{u,v}\), 3. for any \(n\geq 0\), \(u,v\in X^{n}\) and \(x\in X\), \[a_{u,v}=\sum_{y\in X}a_{ux,vy}=\sum_{z\in X}a_{uz,vx}.\] _Remarks 3.3_.: 1. For each \(d\in\mathbb{N}\) we denote by \(\mathbb{A}_{d}\) the subalgebra of \(\mathbb{A}_{X}\) generated by \(\{a_{u,v}:u,v\in X^{d}\}\). Note that \(\mathbb{A}_{1}\) is Wang's quantum permutation group \(A_{s}(|X|)\) from Example 2.3. 2.
We can interpret (3) as follows: each projection \(a_{u,v}\) decomposes as an \(|X|\times|X|\) square of projections \(\{a_{ux,vy}:x,y\in X\}\) with a magic square type property where every row and column sums to \(a_{u,v}\). For example, if \(X=\{0,1,2\}\) we have the following structure. 3. Repeated applications of (3) from Definition 3.2 show that for all \(u,u^{\prime},v,v^{\prime},w\in X^{n},n\in\mathbb{N}\), we have \[u\neq u^{\prime},v\neq v^{\prime}\implies a_{u,w}a_{u^{\prime},w}=0=a_{w,v}a_{w, v^{\prime}},\] and that for all \(u=u_{1}\cdots u_{n},v=v_{1}\cdots v_{n}\in X^{n},n\in\mathbb{N}\), and \(x,y\in X\) we have \[a_{x,y}a_{u,v}=a_{u,v}a_{x,y}=\begin{cases}a_{u,v}&\text{if }u_{1}=x,v_{1}=y\\ 0&\text{otherwise.}\end{cases}\] We will freely use these two identities without comment throughout the rest of the paper. **Theorem 3.4**.: _The \(C^{*}\)-algebra \(\mathbb{A}_{X}\) is a compact quantum group with comultiplication \(\Delta:\mathbb{A}_{X}\to\mathbb{A}_{X}\otimes\mathbb{A}_{X}\) satisfying_ \[\Delta(a_{u,v})=\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,v},\] _for all \(u,v\in X^{n}\) and \(n\geq 1\)_ Proof.: To see that \(\Delta\) exists, it's enough to show that the elements \[b_{u,v}:=\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,v}\] for \(u,v\in X^{n}\) and \(n\geq 1\) satisfy Definition 3.2. Firstly, \(b_{\varnothing,\varnothing}=\Delta(a_{\varnothing,\varnothing})=a_{\varnothing, \varnothing}\otimes a_{\varnothing,\varnothing}=1\otimes 1\). For (2), we have \[b_{u,v}^{*}=\sum_{w\in X^{n}}a_{u,w}^{*}\otimes a_{w,v}^{*}=\sum_{w\in X^{n}}a_ {u,w}\otimes a_{w,v}=b_{u,v}\] and \[b_{u,v}^{2} =\left(\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,v}\right)^{2}\] \[=\sum_{w,z\in X^{n}}a_{u,w}a_{u,z}\otimes a_{w,v}a_{z,v}\] \[=\sum_{w\in X^{n}}a_{u,w}^{2}\otimes a_{w,v}^{2}\] \[=\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,v}\] \[=b_{u,v}.\] For (3) fix \(u,v\in X^{n}\) and \(x\in X\). Then \[b_{u,v} =\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,v}\] \[=\sum_{w\in X^{n}}\sum_{z\in X}a_{ux,wz}\otimes a_{w,v}\] \[=\sum_{w\in X^{n}}\sum_{z\in X}a_{ux,wz}\otimes\sum_{y\in X}a_{ wz,vy}\] \[=\sum_{y\in X}\sum_{w\in X^{n+1}}a_{ux,w}\otimes a_{w,vy}\] \[=\sum_{y\in X}b_{ux,vy}.\] So by the universal property of \(\mathbb{A}_{X}\) there is a homomorphism \(\Delta:\mathbb{A}_{X}\to\mathbb{A}_{X}\otimes\mathbb{A}_{X}\) such that \[\Delta(a_{u,v})=\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,v}.\] For coassociativity, we have \[(\operatorname{id}\otimes\Delta)\circ\Delta(a_{u,v}) =\sum_{w\in X^{n}}a_{u,w}\otimes\Delta(a_{w,v})\] \[=\sum_{w\in X^{n}}a_{u,w}\otimes\left(\sum_{z\in X^{n}}a_{w,z} \otimes a_{z,v}\right)\] \[=\sum_{z\in X^{n}}\left(\sum_{w\in X^{n}}a_{u,w}\otimes a_{w,z} \right)\otimes a_{z,v}\] \[=\sum_{z\in X^{n}}\Delta(a_{u,z})\otimes a_{z,v}\] \[=(\Delta\otimes\operatorname{id})\circ\Delta(a_{u,v}).\] Finally, we show that that the set of matrices \[\{a_{n}=(a_{u,v})_{u,v\in X^{n}}\in M_{X^{n}}(\mathbb{A}_{X}):n\geq 1\}\] satisfy the conditions of Definition 2.2. Conditions (1) and (3) are clear. For (2) we show that given any \(n\geq 1\) the matrix \(a_{n}\) is invertible with inverse given by \((a_{n})^{T}\). Given \(u,v\in X^{n}\) we have \[(a_{n}(a_{n})^{T})_{u,v}=\sum_{w\in X^{n}}a_{u,w}a_{v,w}=\delta_{u,v}\sum_{w \in X^{n}}a_{u,w}=\delta_{u,v}1_{A}.\] Likewise, we can show \(((a_{n})^{T}a_{n})_{u,v}=\delta_{u,v}1_{A}\) and hence \((a_{n})^{T}=a_{n}^{-1}\) as required. 
_Remark 3.5_.: The canonical dense \(*\)-subalgebra of \(\mathbb{A}_{X}\) is the \(*\)-subalgebra generated by the projections \(\{a_{u,v}:u,v\in X^{n},n\geq 0\}\). This is a Hopf \(*\)-algebra with counit \(\varepsilon\colon\mathbb{A}_{X}\to\mathbb{C}\) and coinverse \(\kappa\colon\mathbb{A}_{X}\to\mathbb{A}_{X}\) satisfying \(\varepsilon(a_{u,v})=\delta_{u,v}\) and \(\kappa(a_{u,v})=a_{v,u}\), for \(u,v\in X^{n}\), \(n\in\mathbb{N}\). We now show that \((\mathbb{A}_{X},\Delta)\) is the quantum automorphism group (in the sense of [11, Definition 2.3]) of the homogeneous rooted tree. **Proposition 3.6**.: _There is an action \(\gamma=(\gamma_{n})_{n=1}^{\infty}\) of \(\mathbb{A}_{X}\) on \(X^{*}\). Moreover, if \(\alpha=(\alpha_{n})_{n=1}^{\infty}\) is an action of a compact quantum group \((A,\Phi)\) on \(X^{*}\) then there is a quantum group homomorphism \(\pi:\mathbb{A}_{X}\to A\) such that \((\pi\otimes\operatorname{id})\circ\gamma_{n}=\alpha_{n}\) for any \(n\geq 1\)._ Proof.: For any \(n\geq 1\), the elements \[q_{w}:=\sum_{w^{\prime}\in X^{n}}a_{w,w^{\prime}}\otimes p_{w^{\prime}}\in \mathbb{A}_{X}\otimes C(X^{n})\] for each \(w\in X^{n}\) are mutually orthogonal projections and satisfy \[\sum_{w\in X^{n}}q_{w}=\sum_{w,w^{\prime}\in X^{n}}a_{w,w^{\prime}}\otimes p_{w^ {\prime}}=1\otimes 1\] Therefore there is a unital \(*\)-homomorphism \(\gamma_{n}:C(X^{n})\to\mathbb{A}_{X}\otimes C(X^{n})\) satisfying \(\gamma_{n}(p_{w})=q_{w}\). We have \[(\Delta\otimes\operatorname{id})\gamma_{n}(p_{w}) =\sum_{w^{\prime}\in X^{n}}\Delta(a_{w,w^{\prime}})\otimes p_{w^ {\prime}}\] \[=\sum_{w^{\prime},z\in X^{n}}a_{w,z}\otimes a_{z,w^{\prime}} \otimes p^{\prime}_{w}\] \[=\sum_{z\in X^{n}}a_{w,z}\otimes\alpha_{n}(p_{z})\] \[=(\operatorname{id}\otimes\gamma_{n})\gamma_{n}(p_{w}),\] and so each \(\gamma_{n}\) satisfies the coaction identity. For a fixed \(v\in X^{n}\) we have \[\sum_{u\in X^{n}}\gamma_{n}(p_{u})(a_{u,v}\otimes 1)=\sum_{u,w\in X^{n}}a_{u,w}a _{u,v}\otimes p_{w}=\sum_{u\in X^{n}}a_{u,v}\otimes p_{v}=1\otimes p_{v}.\] Multiplying by any element \(a\otimes 1\in\mathbb{A}_{X}\otimes 1\) shows that \(\gamma_{n}(C(X^{n}))(\mathbb{A}_{X}\otimes 1)\) contains the elements \(a\otimes p_{v}\) of \(\mathbb{A}_{X}\otimes C(X^{n})\) and hence the required density is satisfied. Finally, fix \(m<n\) and \(w\in X^{m}\). Then \[(\operatorname{id}\otimes i_{m,n})\gamma_{m}(p_{w}) =\sum_{z\in X^{m}}a_{w,z}\otimes i_{m,n}(p_{z})\] \[=\sum_{z\in X^{m}}\sum_{z^{\prime}\in X^{n-m}}a_{w,z}\otimes p_{ zz^{\prime}}\] \[=\sum_{z\in X^{m}}\sum_{z^{\prime}\in X^{n-m}}\sum_{w^{\prime} \in X^{n-m}}a_{ww^{\prime},zz^{\prime}}\otimes p_{zz^{\prime}}\] \[=\sum_{w^{\prime}\in X^{n-m}}\alpha_{n}(p_{ww^{\prime}})\] \[=\gamma_{n}(i_{m,n}(p_{w})),\] and so the collection \(\gamma=(\gamma_{n})_{n=1}^{\infty}\) defines an action of \((\mathbb{A}_{X},\Delta)\) on the homogeneous rooted tree \(X^{*}\). Now suppose \((\alpha_{n})_{n=1}^{\infty}\) is an action of a compact quantum group \((A,\Phi)\) on \(X^{*}\). Let \(b_{\varnothing,\varnothing}:=1\in A\) and for \(n\geq 1\) and \(u,v\in X^{n}\) define \(b_{u,v}\in A\) to be the unique elements satisfying \[\alpha_{n}(p_{u})=\sum_{v\in X^{n}}b_{u,v}\otimes p_{v}.\] The coaction identity for \(\alpha_{n}\) says that \[\Phi(b_{u,v})=\sum_{w\in X^{n}}b_{u,w}\otimes b_{w,v} \tag{3.1}\] for any \(u,v\in X^{n}\). We claim that the collection \(\{b_{u,v}:u,v\in X^{n},n\geq 0\}\subseteq A\) satisfies Definition 3.2. Condition (1) is by definition. 
For (2) and (3), we appeal to the universal property of the quantum permutation groups \(A_{s}(|X|^{n})\) for \(n\geq 1\). Since for any \(n\geq 1\), \(\alpha_{n}\) defines a coaction of \((A,\Phi)\) on \(C(X^{n})\), [11, Theorem 3.1] says that the elements \(\{b_{u,v}:u,v\in X^{n}\}\) satisfy conditions (3.1)-(3.3) of [11, Section 3]. Condition (3.1) is precisely (2). Conditions (3.1) and (3.2) say that for any \(v\in X^{n}\) we have \[\sum_{u\in X^{n}}b_{u,v}=1_{A}=\sum_{w\in X^{n}}b_{v,w}.\] For any \(u\in X^{n}\) and \(x\in X\) we have \[p_{ux}\leq\sum_{y\in X}p_{uy}=i_{n,n+1}(p_{u}),\] and hence \[\sum_{v\in X^{n}}\sum_{y\in X}b_{ux,vy}\otimes p_{vy} =\alpha_{n+1}(p_{ux})\] \[\leq\alpha_{n+1}(i_{n,n+1}(p_{u}))\] \[=(\operatorname{id}_{A}\otimes i_{n,n+1})\alpha_{n}(p_{u})\] \[=\sum_{v\in X^{n}}\sum_{y\in X}b_{u,v}\otimes p_{vy}.\] It follows that \(b_{ux,vy}\leq b_{u,v}\) for any \(x,y\in X\). Therefore, for any \(u,v\in X^{n}\) and \(x\in X\) we have \[b_{u,v}=b_{u,v}\left(\sum_{w\in X^{n}}\sum_{y\in X}b_{ux,wy}\right)=\sum_{y\in X }b_{ux,vy}.\] Likewise for any \(y\in X\) we have \(b_{u,v}=\sum_{x\in X}b_{ux,vy}\) and (3) holds. Therefore, the universal property of \(\mathbb{A}_{X}\) provides a homomorphism \(\pi:\mathbb{A}_{X}\to A\) satisfying \(\pi(a_{u,v})=b_{u,v}\). It follows from (3.1) that \((\pi\otimes\pi)\circ\Delta=\Phi\otimes\pi\) and so \(\pi\) is a compact quantum group homomorphism. The identity \((\pi\otimes\operatorname{id})\circ\gamma_{n}=\alpha_{n}\) is immediate. **Proposition 3.7**.: _For \(|X|\geq 2\) the \(C^{*}\)-algebra \(\mathbb{A}_{X}\) is non-commutative and infinite dimensional._ Proof.: Without loss of generality, assume \(X=\{0,1\}\). Let \(B\) be the universal unital \(C^{*}\)-algebra generated by two (non-commuting) projections \(p\) and \(q\). It is known from [7] that \(B\cong C^{*}(\mathbb{Z}_{2}*\mathbb{Z}_{2})\), which is non-commutative and infinite dimensional. Define the matrix \[(b_{u,v})_{u,v\in X^{2}}=\begin{pmatrix}p&1_{B}-p&0&0\\ 1_{B}-p&p&0&0\\ 0&0&q&1_{B}-q\\ 0&0&1_{B}-q&q\end{pmatrix}\in M_{4}(B).\] Define \(b_{\varnothing,\varnothing}=b_{0,0}=b_{1,1}=1_{B}\), \(b_{0,1}=b_{1,0}=0\) and for \(u,v\in X^{2}\) and \(w,w^{\prime}\in X^{*}\) define \(b_{uw,vw^{\prime}}:=\delta_{w,w^{\prime}}b_{u,v}\). Then these elements satisfy the relations in Definition 3.2 and hence there is a surjective homomorphism \(\mathbb{A}_{X}\to B\). Since \(B\) is non-commutative and infinite-dimensional so is \(\mathbb{A}_{X}\). _Remark 3.8_.: The group \(\operatorname{Aut}(X^{*})\) of automorphisms of a homogeneous rooted tree \(X^{*}\) is compact totally disconnected Hausdorff group under the permutation topology. A neighbourhood basis of the identity is given by the family of subgroups \[\{G_{u}:=\{g\in\operatorname{Aut}(X^{*}):g\cdot u=u\}:u\in X^{*}\},\] and since the orbit of any \(u\in X^{*}\) is finite, each of these open subgroups is closed and hence compact. Cosets of these subgroups are of the form \(G_{u,v}:=\{g\in G:g\cdot v=u\}\). Then \(\{G_{u,v}:u,v\in X^{*}\}\) is a basis of compact open sets for the topology on \(\operatorname{Aut}(X^{*})\). It follows that the indicator functions \(f_{u,v}:=1_{G_{u,v}}\) span a dense subset of \(C(\operatorname{Aut}(X^{*}))\). It is easily checked that the elements \(f_{u,v}\) satisfy (1)-(3) of Definition 3.2 and the universal property of \(C(\operatorname{Aut}(X^{*}))\) then implies that it is the abelianisation of \(\mathbb{A}_{X}\). ## 4. 
Self-similarity If \(g\in\operatorname{Aut}(X^{*})\) and \(x\in X\), the _restriction_\(g|_{x}\) is the unique element of \(\operatorname{Aut}(X^{*})\) satisfying \[g\cdot(xw)=(g\cdot x)g|_{x}\cdot w\quad\text{for all $w\in X^{*}$}.\] A subgroup \(G\leq\operatorname{Aut}(X^{*})\) is called _self-similar_ if \(G\) is closed under taking restrictions. That is, whenever \(g\in G\) and \(x\in X\), the restriction \(g|_{x}\) is an element of \(G\). With the topology inherited from \(\operatorname{Aut}(X^{*})\), the restriction map \(G\to G\colon g\mapsto g|_{x}\) is continuous. If \(G\) is any group acting on \(X^{*}\) by automorphisms, we call the action _self-similar_ if the image of \(G\) in \(\operatorname{Aut}(X^{*})\) is self-similar. To have a reasonable notion of self-similarity for quantum subgroups of \(\mathbb{A}_{X}\), we need to understand how restriction manifests itself in the function algebra \(C(\operatorname{Aut}(X^{*}))\). Given \(x\in X\) and \(u,v\in X^{n}\) we have \[\{g:g|_{x}\cdot u=v\} =\left(\bigcup_{y\in X}\{g:g\cdot x=y\}\right)\cap\{g:g|_{x}\cdot u =v\}\] \[=\bigcup_{y\in X}\left(\{g:g\cdot x=y\}\cap\{g:g|_{x}\cdot u=v\}\right)\] \[=\bigcup_{y\in X}\{g:g\cdot(xu)=yv\},\] and hence the corresponding indicator functions satisfy \[1_{\{g:g|_{x}\cdot u=v\}}=\sum_{y\in X}1_{\{g:g\cdot(xu)=yv\}}.\] This formula motivates the following result. **Proposition 4.1**.: _For each \(x\in X\) there is a homomorphism \(\rho_{x}:\mathbb{A}_{X}\to\mathbb{A}_{X}\) satisfying_ \[\rho_{x}(a_{u,v})=\sum_{y\in X}a_{yu,xv}, \tag{4.1}\] _for all \(u,v\in X^{n}\)._ We illustrate the formula for a restriction map in Figure 1 by considering \(X=\{0,1,2\}\) and looking at what the restriction map \(\rho_{1}\) does to the projection \(a_{1,2}\). Proof of Proposition 4.1.: Fix \(x\in X\). We show that the elements \[\{b_{u,v}:=\rho_{x}(a_{u,v}):u,v\in X^{n},n\geq 1\}\] satisfy the conditions of Definition 3.2. For (1) we have \[b_{\varnothing,\varnothing}=\rho_{x}(a_{\varnothing,\varnothing})=\sum_{y\in X }a_{y,x}=1.\] For (2), we have \[b_{u,v}^{*}=\left(\sum_{y\in X}a_{yu,xv}\right)^{*}=\sum_{y\in X}a_{yu,xv}^{*} =\sum_{y\in X}a_{yu,xv}=b_{u,v}\] and \[b_{u,v}^{2}=\left(\sum_{y\in X}a_{yu,xv}\right)^{2}=\sum_{y,z\in X}a_{yu,xv}a_{ zu,xv}=\left(\sum_{y\in X}a_{yu,xv}\right)=b_{u,v}.\] For (3), fix \(y\in X\). Then \[\sum_{z\in X}b_{uy,vz}=\sum_{z\in X}\sum_{w\in X}a_{wuy,xvz}=\sum_{w\in X}a_{wu,xv}=b_{u,v}.\] A similar calculation shows \(\sum_{z\in X}b_{uz,vy}=b_{u,v}\). Hence there is a homomorphism \(\rho_{x}\) with the desired formula. _Remark 4.2_.: We define \(\rho_{\varnothing}\) to be the identity homomorphism \(\mathbb{A}_{X}\to\mathbb{A}_{X}\), and for \(w=w_{1}\cdots w_{n}\in X^{n}\) we define \(\rho_{w}\) to be the composition \(\rho_{w_{1}}\circ\cdots\circ\rho_{w_{n}}\). A routine calculation shows that for all \(u,v\in X^{n}\) we have \[\rho_{w}(a_{u,v})=\sum_{z\in X^{n}}a_{zu,wv}.\] _Remark 4.3_.: A similar argument to the one in the proof of Proposition 4.1 shows that for each \(x\in X\) there is a homomorphism \(\sigma_{x}\colon\mathbb{A}_{X}\to\mathbb{A}_{X}\) satisfying \[\sigma_{x}(a_{u,v})=\sum_{y\in X}a_{xu,yv}\] for all \(u,v\in X^{n}\), \(n\in\mathbb{N}\). It is straightforward to see that \(\sigma_{x}=\kappa\circ\rho_{x}\circ\kappa\), where \(\kappa\) is the coinverse. We can now state the main definition of the paper. **Definition 4.4**.: We call \(\rho_{w}\) the _restriction by \(w\)_. 
A quantum subgroup \(A\) of \(\mathbb{A}_{X}\) is _self-similar_ if for each \(x\in X\) the restriction \(\rho_{x}\) factors through the quotient map \(q:\mathbb{A}_{X}\to A\); that is, if there exists a homomorphism \(\widetilde{\rho_{x}}:A\to A\) such that \(\widetilde{\rho_{x}}\circ q=q\circ\rho_{x}\). To motivate the main result of this section, let \(G\) be a group. To construct a self-similar action of \(G\) on \(X^{*}\), it suffices to have a function \(f\colon G\times X\to X\times G\) such that \(f(e,x)=(x,e)\) for all \(x\in X\), and such that \(f\) is compatible with the multiplication in \(G\), in the sense that the corresponding diagram commutes. This data allows us to define an action of \(G\) on \(X^{*}\), which is self-similar with \(g\cdot x\) and \(g|_{x}\) the unique elements of \(X\) and \(G\) satisfying \((g\cdot x,g|_{x}):=f(g,x)\). Our next result is a compact quantum group analogue of the above result. We will be working with multiple different identity homomorphisms and units. For clarity we adopt the following notational conventions: we write \(\operatorname{id}_{A}\) for the identity homomorphism on a \(C^{*}\)-algebra \(A\), and for \(n\geq 1\) write \(\operatorname{id}_{n}\) for the identity homomorphism on the commutative \(C^{*}\)-algebra \(C(X^{n})\). Likewise, \(1_{A}\) will denote the unit of \(A\), while \(1\) and \(1_{n}\) will denote the units of \(C(X)\) and \(C(X^{n})\) respectively. **Theorem 4.5**.: _Suppose \((A,\Phi)\) is a compact quantum group equipped with a unital \(*\)-homomorphism \(\psi:C(X)\otimes A\to A\otimes C(X)\) satisfying_ \[(\Phi\otimes\operatorname{id}_{1})\psi=(\operatorname{id}_{A}\otimes\psi)(\psi\otimes\operatorname{id}_{A})(\operatorname{id}_{1}\otimes\Phi) \tag{4.2}\] _and_ \[\overline{\psi(C(X)\otimes 1_{A})(A\otimes 1)}=A\otimes C(X). \tag{4.3}\] _Then \((A,\Phi)\) acts on the homogeneous rooted tree \(X^{*}\) and moreover the image of \(\mathbb{A}_{X}\), under the homomorphism \(\pi:\mathbb{A}_{X}\to A\) from Proposition 3.6, is a self-similar compact quantum group._ Proof.: We begin by defining an action of \((A,\Phi)\) on \(X^{*}\). Identify \(C(X)\) with \(C(X)\otimes 1_{A}\subseteq C(X)\otimes A\) and let \(\alpha_{1}:=\psi|_{C(X)\otimes 1_{A}}\). Then \(\alpha_{1}\) is clearly unital and the coaction identity and Podles condition for \(\alpha_{1}\) follow from (4.2) and (4.3). Now inductively define \(\alpha_{n+1}:=(\psi\otimes\operatorname{id}_{n})(\operatorname{id}_{1}\otimes\alpha_{n}):C(X^{n+1})\to A\otimes C(X^{n+1})\) for \(n\geq 1\), where we are suppressing the canonical isomorphism \(C(X^{n+1})\cong C(X)\otimes C(X^{n})\). Again, \(\alpha_{n+1}\) is clearly unital whenever \(\alpha_{n}\) is.
If we assume \(\alpha_{n}\) satisfies the coaction identity, then \[(\Phi\otimes\operatorname{id}_{n+1})\alpha_{n+1} =(\Phi\otimes\operatorname{id}_{n+1})(\psi\otimes\operatorname{id }_{n})(\operatorname{id}_{1}\otimes\alpha_{n})\] \[=(\operatorname{id}_{A}\otimes\psi\otimes\operatorname{id}_{n}) (\psi\otimes\operatorname{id}_{A}\otimes\operatorname{id}_{n})(\operatorname {id}_{1}\otimes\Phi\otimes\operatorname{id}_{n})(\operatorname{id}_{1} \otimes\alpha_{n})\] \[=(\operatorname{id}_{A}\otimes\psi\otimes\operatorname{id}_{n})(\psi \otimes\operatorname{id}_{A}\otimes\operatorname{id}_{n})(\operatorname{id}_{1} \otimes\operatorname{id}_{A}\otimes\alpha_{n})(\operatorname{id}_{1}\otimes \alpha_{n})\] \[=(\operatorname{id}_{A}\otimes\psi\otimes\operatorname{id}_{n})( \operatorname{id}_{A}\otimes\operatorname{id}_{1}\otimes\alpha_{n})(\psi \otimes\operatorname{id}_{n})(\operatorname{id}_{1}\otimes\alpha_{n})\] \[=(\operatorname{id}_{A}\otimes\alpha_{n+1})\alpha_{n+1},\] and so \(\alpha_{n+1}\) also satisfies the coaction identity. Since \(\alpha_{1}\) is a coaction, we see that \(\alpha_{n}\) satisfies the coaction identity for any \(n\geq 1\). To see that each \(\alpha_{n}\) satisfies the Podles condition, we argue by induction. We know it is satisfied for \(n=1\). Suppose for some \(n\geq 1\) that \[\overline{\alpha_{n}(C(X^{n}))(A\otimes 1_{n})}=A\otimes C(X^{n}).\] Fix a spanning element \(a\otimes p_{u}\otimes p_{x}\in A\otimes C(X^{n+1})\) where \(u\in X^{n}\) and \(x\in X\). By the inductive hypothesis we can approximate \[a\otimes p_{u}\sim\sum_{i}\alpha_{n}(f_{i})(a_{i}\otimes 1_{n}),\] where \(f_{i}\in C(X^{n})\) and \(a_{i}\in A\). Then \[a\otimes p_{u}\otimes p_{x}\sim\sum_{i}(\alpha_{n}(f_{i})\otimes 1)(1_{A} \otimes 1_{n}\otimes p_{x})(a_{i}\otimes 1_{n+1}). 
\tag{4.4}\] By definition of \(\alpha_{n}\), for any \(f\in C(X^{n})\) we have \[\alpha_{n}(f)\otimes 1 =((\psi\otimes\operatorname{id}_{n-1})\dots(\operatorname{id}_{n -1}\otimes\psi)(f\otimes 1_{A}))\otimes 1\] \[=(\psi\otimes\operatorname{id}_{n})\dots(\operatorname{id}_{n-1} \otimes\psi\otimes\operatorname{id}_{1})(f\otimes 1_{A}\otimes 1)\] \[=(\psi\otimes\operatorname{id}_{n})\dots(\operatorname{id}_{n-1} \otimes\psi\otimes\operatorname{id}_{1})(\operatorname{id}_{n}\otimes\psi)(f \otimes 1\otimes 1_{A})\] \[=\alpha_{n+1}(f\otimes 1).\] So we can write (4.4) as \[\sum_{i}\alpha_{n+1}(f_{i}\otimes 1)(1_{A}\otimes 1_{n}\otimes p_{x})(a_{i} \otimes 1_{n+1}).\] Since \(\psi\) is unital, we have \[1_{A}\otimes 1_{n}\otimes p_{x}=(\psi\otimes\operatorname{id}_{n})(1\otimes 1_{A} \otimes 1_{n-1}\otimes p_{x}),\] which can be approximated using the induction hypothesis by \[(\psi\otimes\operatorname{id}_{n})(1\otimes 1_{A}\otimes 1_{n-1} \otimes p_{x}) \sim(\psi\otimes\operatorname{id}_{n})\left(1\otimes\sum_{j} \alpha_{n}(g_{j})(b_{j}\otimes 1_{n})\right)\] \[=(\psi\otimes\operatorname{id}_{n})\left(\sum_{j}(\operatorname{ id}_{1}\otimes\alpha_{n})(1\otimes g_{j})(1\otimes b_{j}\otimes 1_{n})\right)\] \[=\sum_{j}\alpha_{n+1}(1\otimes g_{j})(\psi\otimes\operatorname{ id}_{n})(1\otimes b_{j}\otimes 1_{n}).\] Finally, applying the Podles condition for \(\alpha_{1}\) we can approximate \[\psi(1\otimes b_{j})\sim\sum_{k}\alpha_{1}(h_{k})(c_{k}\otimes 1)=\sum_{k}\psi(h_{k} \otimes 1_{A})(c_{k}\otimes 1),\] so \[(\psi\otimes\operatorname{id}_{n})(1\otimes b_{j}\otimes 1_{n}) \sim\sum_{k}(\psi(h_{k}\otimes 1_{A})\otimes 1_{n})(c_{k}\otimes 1_{n+1})\] \[=\sum_{k}\alpha_{n+1}(h_{k}\otimes 1_{n})(c_{k}\otimes 1_{n+1}).\] Combining these approximations we can write \[a\otimes p_{u}\otimes p_{x}\sim\sum_{i,j,k}\alpha_{n+1}((f_{i}\otimes 1)(h_{k} \otimes g_{j}))(c_{k}a_{i}\otimes 1_{n+1}),\] where \(f_{i},g_{j}\in C(X^{n}),h_{k}\in C(X)\) and \(a_{i},c_{k}\in A\). Thus \(\alpha_{n+1}\) satisfies the Podles condition and so by induction \(\alpha_{n}\) satisfies the Podles condition for every \(n\geq 1\). It remains to show that \(\alpha_{n}\circ i_{m,n}=(\operatorname{id}_{A}\otimes i_{m,n})\circ\alpha_{m}\) for any \(m<n\). 
As in the proof of Proposition 3.6, for any \(n\geq 1\) and \(u,v\in X^{n}\) we will let \(b_{u,v}\in A\) be the unique elements satifsying \[\alpha_{n}(p_{u})=\sum_{v\in X^{n}}b_{u,v}\otimes p_{v}.\] We know from the same proof that for any \(n\geq 1\) and \(v\in X^{n}\) we have \[\sum_{u\in X^{n}}b_{u,v}=1_{A}.\] If \(m<n\), for any \(u\in X^{m}\) we have \[\alpha_{n}\circ i_{m,n}(p_{u}) =(\psi\otimes\operatorname{id}_{n-1})\ldots(\operatorname{id}_{m -1}\otimes\psi\otimes\operatorname{id}_{n-m})(\operatorname{id}_{m}\otimes \alpha_{n-m})(i_{m,n}(p_{u}))\] \[=\sum_{w\in X^{n-m}}(\psi\otimes\operatorname{id}_{n-1})\ldots( \operatorname{id}_{m-1}\otimes\psi\otimes\operatorname{id}_{n-m})( \operatorname{id}_{m}\otimes\alpha_{n-m})(p_{u}\otimes p_{w})\] \[=\sum_{w,z\in X^{n-m}}(\psi\otimes\operatorname{id}_{n-1})\ldots( \operatorname{id}_{m-1}\otimes\psi\otimes\operatorname{id}_{n-m})(p_{u} \otimes b_{w,z}\otimes p_{z})\] \[=\sum_{z\in X^{n-m}}(\psi\otimes\operatorname{id}_{m-1})\ldots( \operatorname{id}_{m-1}\otimes\psi)\left(p_{u}\otimes\sum_{w\in X^{n-m}}b_{w,z }\right)\otimes p_{z}\] \[=\sum_{z\in X^{n-m}}(\psi\otimes\operatorname{id}_{m-1})\ldots( \operatorname{id}_{m-1}\otimes\psi)(p_{u}\otimes 1_{A})\otimes p_{z}\] \[=\sum_{z\in X^{n-m}}\alpha_{m}(p_{u})\otimes p_{z}\] \[=\sum_{v\in X^{m}}b_{u,v}\otimes\sum_{z\in X^{n-m}}p_{v}\otimes p _{z}\] \[=\sum_{v\in X^{m}}b_{u,v}\otimes i_{m,n}(p_{v})\] \[=(\operatorname{id}_{A}\otimes i_{m,n})\circ\alpha_{m}(p_{u}).\] So we have that \((\alpha_{n})_{n=1}^{\infty}\) defines an action of \((A,\Phi)\) on \(X^{*}\). Finally, let \(\pi:\mathbb{A}_{X}\to A\) be the homomorphism from Proposition 3.6. We have \(\pi(a_{u,v})=b_{u,v}\) for any \(u,v\in X^{n}\) and \(n\geq 1\). For each \(x\in X\) define a homomorphism \(\tilde{\rho_{x}}:A\to A\) by \[\psi(1\otimes a)=\sum_{x\in X}\tilde{\rho_{x}}(a)\otimes p_{x},\] where \(a\in A\). For any \(u\in X^{n}\) we have \[\alpha_{n+1}(1\otimes p_{u})=\sum_{y\in X}\alpha_{n+1}(p_{yu})=\sum_{v\in X^{n }}\sum_{x,y\in X}b_{yu,xv}\otimes p_{x}\otimes p_{v}.\] On the other hand, we know \(\alpha_{n+1}=(\psi\otimes\mathrm{id}_{n})(\mathrm{id}_{1}\otimes\alpha_{n})\) and \[(\psi\otimes\mathrm{id}_{n})(\mathrm{id}_{1}\otimes\alpha_{n})(1\otimes p_{u} )=\sum_{v\in X^{n}}\psi(1\otimes b_{u,v})\otimes p_{v}=\sum_{v\in X^{n}}\sum_{ x\in X}\tilde{\rho_{x}}(b_{u,v})\otimes p_{x}\otimes p_{v},\] and by comparing tensor factors we see that \(\tilde{\rho_{x}}(b_{u,v})=\sum_{y\in X}b_{yu,xv}\). Hence, the diagram commutes, and so \(\pi(\mathbb{A}_{X})\subseteq A\) is a self-similar quantum group. **Proposition 4.6**.: _The following are equivalent_ 1. \((A,\Phi)\) _is a quantum self-similar group, and_ 2. \((A,\Phi)\) _is a quantum subgroup of_ \((\mathbb{A}_{X},\Delta)\) _and there is a homomorphism_ \(\psi:C(X)\otimes A\to A\otimes C(X)\) _satisfying the hypotheses of Theorem_ 4.5_._ Proof.: Theorem 4.5 is the implication \((2)\Longrightarrow(1)\). To see \((1)\Longrightarrow(2)\) suppose \((A,\Phi)\) is a quantum self-similar group. By definition there is a surjective quantum group morphism \(q:\mathbb{A}_{X}\to A\). 
It is routine to check that there is a homomorphism \(\psi:C(X)\otimes A\to A\otimes C(X)\) satisfying \[\psi(p_{x}\otimes q(a_{u,v}))=\sum_{y\in X}q(a_{xu,yv})\otimes p_{y}.\] Given \(u,v\in X^{n}\) we have \[(\Phi\otimes\mathrm{id}_{1})\psi(p_{x}\otimes q(a_{u,v})) =\sum_{y\in X}\Phi(q(a_{xu,yv}))\otimes p_{y}\] \[=\sum_{w\in X^{n}}\sum_{y,z\in X}q(a_{xu,zw})\otimes q(a_{zw,yv}) \otimes p_{y}\] \[=(\mathrm{id}_{A}\otimes\!\psi)\left(\sum_{w\in X^{n}}\sum_{z\in X }q(a_{xu,zw})\otimes p_{z}\otimes q(a_{w,v})\right)\] \[=(\mathrm{id}_{A}\otimes\!\psi)(\psi\otimes\mathrm{id}_{A})\left( \sum_{w\in X^{n}}p_{x}\otimes q(a_{u,w})\otimes q(a_{w,v})\right)\] \[=(\mathrm{id}_{A}\otimes\!\psi)(\psi\otimes\mathrm{id}_{A})( \mathrm{id}_{1}\otimes\!\Phi)(p_{x}\otimes q(a_{u,v})),\] and so \(\psi\) satisfies (4.2). For (4.3) notice that for any \(q(a)\in A\) and \(z\in X\) we have \[q(a)\otimes p_{z}=(1\otimes p_{z})(q(a)\otimes 1)\] \[=\left(\sum_{x\in X}q(a_{x,z})\otimes p_{z})\right)(q(a)\otimes 1)\] \[=\left(\sum_{x,w\in X}q(a_{x,w})\otimes p_{w})(q(a_{x,z})\otimes 1 )\right)(q(a)\otimes 1)\] \[=\sum_{x\in X}\psi(p_{x}\otimes 1)(q(a_{x,z}a)\otimes 1).\qed\] _Example 4.7_.: If \(G\) is a closed subgroup of \(\operatorname{Aut}(X^{*})\) which is self-similar, then \(C(G)\) is a commutative self-similar quantum group. The quotient map \(\mathbb{A}_{X}\to C(Aut(X^{*}))\) takes a generator \(a_{u,v}\) to the indicator function \(f_{u,v}\) defined in Remark 3.8. For a function \(f\in C(\operatorname{Aut}(X^{*}))\) and \(x\in X\) the restriction homomorphism \(\tilde{\rho_{x}}\) satisfies \(\tilde{\rho_{x}}(f)(g)=f(g|_{x})\), for any \(g\in G\). ## 5. Finitely constrained self-similar quantum groups ### Classical finitely constrained self-similar groups Fix \(d\geq 1\), and let \(X^{[d]}=\bigcup_{k\leq d}X^{k}\) be the finite subtree of \(X^{*}\) of depth \(d\). The group of automorphism \(\operatorname{Aut}(X^{[d]})\) is a quotient of \(\operatorname{Aut}(X^{*})\), and the quotient map is given by restriction to the finite subtree. We write \(r_{d}:\operatorname{Aut}(X^{*})\to\operatorname{Aut}(X^{[d]})\) for this restriction map. Fix a subgroup \(P\leq\operatorname{Aut}(X^{[d]})\). Define \[G_{P}:=\{g\in\operatorname{Aut}(X^{*}):r_{d}(g|_{w})\in P\text{ for all }w\in X^{*}\}.\] By the properties of restriction, if \(g,h\in G_{P}\), then for any \(w\in X^{*}\) \[r_{d}((gh)|_{w})=r_{d}(g|_{h\cdot w}h|_{w})=r_{d}(g|_{h\cdot w})r_{d}(h|_{w}) \in P.\] Likewise, \(r_{d}(g^{-1}|_{w})=r_{d}(g|_{g^{-1}\cdot w})^{-1}\in P\). Hence \(G_{P}\) is a self-similar group, called a _finitely constrained self-similar group_. More details for these groups can be found in [9]. ### Finitely constrained self-similar quantum groups Consider the subalgebra \(\mathbb{A}_{d}\subseteq\mathbb{A}_{X}\) generated by the elements \(\{a_{u,v}:|u|=|v|\leq d\}\). Since \(\Delta:\mathbb{A}_{d}\to\mathbb{A}_{d}\otimes\mathbb{A}_{d}\) the subalgebra \(\mathbb{A}_{d}\) is a quotient quantum group. The abelianisation of \(\mathbb{A}_{d}\) is the algebra \(C(\operatorname{Aut}(X^{[d]})\) of continuous functions on the finite group \(\operatorname{Aut}(X^{[d]})\). **Definition 5.1**.: Suppose \(\mathbb{P}\) is a quantum subgroup of \(\mathbb{A}_{d}\), where \(\mathbb{P}=\mathbb{A}_{d}/I\). Denote by \(q_{I}:\mathbb{A}_{d}\to\mathbb{P}\) the quotient map; so \(I=\ker(q_{I})\). 
We denote by \(J\) the smallest closed 2-sided ideal of \(\mathbb{A}_{X}\) generated by \(\{\rho_{w}(I):w\in X^{*}\}\), and by \(A_{\mathbb{P}}\) the quotient \(A_{\mathbb{P}}:=\mathbb{A}_{X}/J\). In the next result we prove that \(A_{\mathbb{P}}\) is a self-similar quantum group, and we call it a _finitely constrained self-similar quantum group_. **Proposition 5.2**.: _Each \(A_{\mathbb{P}}\) is a self-similar quantum group._ To prove Proposition 5.2 we need two lemmas. Recall that for \(g,h\in\operatorname{Aut}(X^{*}),w\in X^{*}\) we have \[(gh)|_{w}=g|_{h\cdot w}h|_{w}.\] In the first lemma, we establish an analogous relationship between the comultiplication \(\Delta\) on \(\mathbb{A}_{X}\) and the restriction maps \(\rho_{w}\). **Lemma 5.3**.: _For any \(n\geq 1,w\in X^{n}\) and \(a\in\mathbb{A}_{X}\) we have_ \[(\Delta\circ\rho_{w})(a)=\sum_{y\in X^{n}}(1\otimes a_{y,w})(\rho_{y}\otimes \rho_{w})(\Delta(a)).\] Proof.: Let \(a_{u,v}\) be a generator of \(\mathbb{A}_{X}\), with \(|u|=|v|=k\geq 0\). Then \[(\Delta\circ\rho_{w})(a_{u,v}) =\Delta\left(\sum_{\alpha\in X^{n}}a_{\alpha u,wv}\right)\] \[=\sum_{y\in X^{n}}\sum_{\beta\in X^{k}}\sum_{\alpha\in X^{n}}a_{ \alpha u,y\beta}\otimes a_{y\beta,wv}\] \[=\sum_{y\in X^{n}}\sum_{\beta\in X^{k}}\rho_{y}(a_{u,\beta}) \otimes a_{y\beta,wv}\] \[=\sum_{y\in X^{n}}\sum_{\beta\in X^{k}}\rho_{y}(a_{u,\beta}) \otimes a_{y,w}\rho_{w}(a_{\beta,v})\] \[=\sum_{y\in X^{n}}(1\otimes a_{y,w})(\rho_{y}\otimes\rho_{w}) \left(\sum_{\beta\in X^{k}}a_{u,\beta}\otimes a_{\beta,v}\right)\] \[=\sum_{y\in X^{n}}(1\otimes a_{y,w})(\rho_{y}\otimes\rho_{w}( \Delta(a_{u,v})).\] To see that this formula extends to \(\mathbb{A}_{X}\) it's enough to show that for any \(w\in X^{*}\) the map \[a\mapsto\sum_{y\in X^{n}}(1\otimes a_{y,w})(\rho_{y}\otimes\rho_{w})(\Delta(a))\] is linear and multiplicative. Linearity is clear, and multiplicativity follows from the orthogonality of the projections \(1\otimes a_{y,w}\) and \(1\otimes a_{z,w}\) for \(y\neq z\). **Lemma 5.4**.: _Consider the quotient maps \(q_{I}:\mathbb{A}_{d}\to\mathbb{A}_{d}/I\) and \(q_{J}:\mathbb{A}_{X}\to\mathbb{A}_{X}/J\). Then for any \(n\geq 1\) and \(y,w\in X^{n}\)_ \[\ker(q_{I}\otimes q_{I})\subseteq\ker((q_{J}\circ\rho_{y})\otimes(q_{J}\circ \rho_{w})).\] Proof.: By definition of \(J\) we have \(I\subseteq J\circ\rho_{w}\) for any \(w\in X^{*}\). Therefore there is a commuting diagram Then if \(c\in\ker(q_{I}\otimes q_{I})\) we have \[(q_{J}\circ\rho_{y})\otimes(q_{J}\circ\rho_{w})(c)=(\pi_{y}\circ q_{I})\otimes (\pi_{w}\circ q_{I})(c)=\pi_{y}\otimes\pi_{w})\circ(q_{I}\otimes q_{I})(c)=0,\] as required. Proof of Proposition 5.2.: To see that \(A_{\mathbb{P}}\) is a compact quantum group, it suffices to show that \(J\) is a Woronowicz ideal. In other words, we need to show that \(\Delta(J)\subset\ker(q_{J}\otimes q_{J})\) where \(q_{J}:\mathbb{A}_{X}\to\mathbb{A}_{X}/J=:A_{\mathbb{P}}\) is the quotient map. Since \(J\) is generated as an ideal by \(\bigcup_{w\in X^{*}}\rho_{w}(I)\) it's enough to show that \[(q_{J}\otimes q_{J})(\Delta\circ\rho_{w}(i))=0\] for any \(i\in I\) and \(w\in X^{*}\). Because \(I\) is a Woronowicz ideal we know that \(\Delta(i)\in\ker(q_{I}\otimes q_{I})\). 
Then by Lemmas 5.3 and 5.4 we have \[(q_{J}\otimes q_{J})(\Delta\circ\rho_{w}(i)) =(q_{J}\otimes q_{J})\left(\sum_{y\in X^{n}}(1\otimes a_{y,w})( \rho_{y}\otimes\rho_{w})(\Delta(i))\right)\] \[=\sum_{y\in X^{n}}(1\otimes q_{J}(a_{y,w}))(q_{J}\circ\rho_{y} \otimes q_{J}\circ\rho_{w})(\Delta(i))\] \[=0.\] Finally, \(A_{\mathbb{P}}\) is self-similar since by definition of \(J\) we have \(\rho_{w}(J)\subset J\) for any \(w\in X^{*}\). ### Free wreath products It is well known that for any \(d\geq 1\) the group \(\operatorname{Aut}(X^{[d+1]})\) is isomorphic to the wreath product \(\operatorname{Aut}(X^{[d]})\wr\operatorname{Sym}(X)\). Since \(\operatorname{Aut}(X^{*})\) is the inverse limit over \(d\) of the groups \(\operatorname{Aut}(X^{[d]})\), it can be thought as the infinitely iterated wreath product \(\ldots\wr\operatorname{Sym}(X)\wr\operatorname{Sym}(X)\). It follows that \(\operatorname{Aut}(X^{*})\cong\operatorname{Aut}(X^{*})\wr\operatorname{Sym }(X)\). More generally, it is shown in [4] that if \(P\leq\operatorname{Sym}(X)=\operatorname{Aut}(X^{[1]})\), then the finitely constrained self-similar group \(G_{P}\) is the infinitely iterated wreath product \(\ldots\wr P\wr P\). In this section we prove in Theorem 5.7 an analogue of this result for finitely constrained self-similar quantum groups. In [3], Bichon constructs a free wreath product of a compact quantum group by the quantum permutation group \(\mathbb{A}_{s}(n)\). Bichon also comments in Remark 2.4 of [3] that there is a natural analogue of this construction for free wreath products by quantum subgroups of \(\mathbb{A}_{s}(n)\). In this section we formally extend this definition to take free wreath products by any quantum subgroup of \(\mathbb{A}_{s}(n)\), and we prove that the finitely constrained self-similar quantum group \(A_{\mathbb{P}}\) induced from a quantum subgroup \(\mathbb{P}\) of \(A_{s}(n)\) is a free wreath product by \(\mathbb{P}\). We begin by recalling the definition of the free wreath product from [3]; note that we use our notation \(\mathbb{A}_{1}\) instead of \(A_{s}(|X|)\). **Definition 5.5**.: Let \(X\) be a set of at least two elements. Let \((A,\Phi)\) be a compact quantum group, and \(\mathbb{P}\) a quantum subgroup of \(\mathbb{A}_{1}\). For each \(x\in X\), we denote by \(\nu_{x}\) the inclusion of \(A\) in the free product \(C^{*}\)-algebra \((*_{x\in X}A)*\mathbb{P}\). The _free wreath product_ of \(A\) by \(\mathbb{P}\) is the quotient of \((*_{x\in X}A)*\mathbb{P}\) by the two-sided ideal generated by the elements \[\nu_{x}(a)q_{I}(a_{x,y})-q_{I}(a_{x,y})\nu_{x}(a),\ x,y\in X,a\in A.\] The resulting \(C^{*}\)-algebra is denoted by \(A*_{X,w}\mathbb{P}\), and the quotient map is denoted by \(q_{w}\). If \(X\) is understood, we typically just write \(A*_{w}\mathbb{P}\). **Theorem 5.6**.: _Let \((A,\Phi)\) be a compact quantum group, and \(\mathbb{P}\) a quantum subgroup of \(\mathbb{A}_{1}\). 
The free wreath product \(A*_{w}\mathbb{P}\) from Definition 5.5 is a compact quantum group with comultiplication \(\Phi_{w}\) satisfying_ \[\Phi_{w}(q_{w}(q_{I}(a_{x,y}))) =\sum_{z\in X}q_{w}(q_{I}(a_{x,z}))\otimes q_{w}(q_{I}(a_{z,y})) \tag{5.2}\] \[\Phi_{w}(q_{w}(\nu_{x}(a))) =\sum_{z\in X}(q_{w}\otimes q_{w})\big{(}(\nu_{x}\otimes\nu_{z} )(\Phi(a))(q_{I}(a_{x,z})\otimes 1)\big{)}, \tag{5.1}\] _for each \(x,y\in X\) and \(a\in A\)._ Proof.: Since \(I\) is a Woronowicz ideal, we have \(\Delta|_{I}\subseteq\ker(q_{I}\otimes q_{I})\), and so the map \((q_{I}\otimes q_{I})\circ\Delta_{\mathbb{A}_{1}}\) descends to a map \[\phi\colon\mathbb{P}\to\mathbb{P}\otimes\mathbb{P}\subseteq((*_{x\in X}A)* \mathbb{P})^{\otimes 2}.\] Then \((q_{w}\otimes q_{w})\circ\phi\colon\mathbb{P}\to(A*_{w}\mathbb{P})^{\otimes 2}\) satisfies \[(q_{w}\otimes q_{w})\circ\phi(q_{I}(a_{x,y}))=\sum_{z\in X}q_{w}(q_{I}(a_{x,z}) )\otimes q_{w}(q_{I}(a_{z,y}))\quad\text{for all $x,y\in X$}.\] For each \(x\in X\), consider the continuous linear map \(\phi_{x}\colon A\to(A*_{w}\mathbb{P})^{\otimes 2}\) given by \[\phi_{x}(a)=(q_{w}\otimes q_{w})\left(\sum_{z\in X}(\nu_{x}\otimes\nu_{z})( \Phi(a))(q_{I}(a_{x,z})\otimes 1)\right).\] We claim that \(\phi_{x}\) is a homomorphism. To see this, let \(\{a^{\lambda}=(a^{\lambda}_{i,j})\in M_{d_{\lambda}}(A):\lambda\in\Lambda\}\) is be a family of matrices satisfying (1)-(3) of Definition 2.2, and \(\mathcal{A}\) be the \(*\)-subalgebra of \(A\) spanned by the entries \(a^{\lambda}_{i,j}\). Let \(a,b\in\mathcal{A}\) and use Sweedler's notation to write \(\Phi(a)=a_{(1)}\otimes a_{(2)}\) and \(\Phi(b)=b_{(1)}\otimes b_{(2)}\). We have \[\phi_{x}(a)\phi_{x}(b)=\sum_{z,z^{\prime}\in X}q_{w}\left(\nu_{x}(a_{(1)})q_{ I}(a_{x,z})\nu_{x}(b_{(1)})q_{I}(a_{x,z^{\prime}})\right)\otimes q_{w}(\nu_{z}(a _{(2)})\nu_{z^{\prime}}(b_{(2)})),\] and then since \[q_{w}\left(\nu_{x}(a_{(1)})q_{I}(a_{x,z})\nu_{x}(b_{(1)})q_{I}(a _{x,z^{\prime}})\right) =q_{w}\left(\nu_{x}(a_{(1)}b_{(1)})q_{I}(a_{x,z}a_{x,z^{\prime}})\right)\] \[=\delta_{z,z^{\prime}}q_{w}\left(\nu_{x}(a_{(1)}b_{(1)})q_{I}(a_{ x,z})\right),\] we have \[\phi_{x}(a)\phi_{x}(b) =\sum_{z\in X}q_{w}\left(\nu_{x}(a_{(1)}b_{(1)})q_{I}(a_{x,z}) \right)\otimes q_{w}(\nu_{z}(a_{(2)})\nu_{z}(b_{(2)}))\] \[=(q_{w}\otimes q_{w})\left(\sum_{z\in X}(\nu_{x}\otimes\nu_{z})( a_{(1)}b_{(1)}\otimes a_{(2)}b_{(2)})(q_{I}(a_{x,z})\otimes 1)\right)\] \[=(q_{w}\otimes q_{w})\left(\sum_{z\in X}(\nu_{x}\otimes\nu_{z})( \Phi(ab))(q_{I}(a_{x,z})\otimes 1)\right)\] \[=\phi_{x}(ab).\] Since \(\mathcal{A}\) is dense in \(A\), it follows that \(\phi_{x}\) is a homomorphism on \(A\). 
The universal property of \((*_{x\in X}A)*\mathbb{P}\) now gives a homomorphism \(\widetilde{\Phi}\colon(*_{x\in X}A)*\mathbb{P}\to(A*_{w}\mathbb{P})^{\otimes 2}\) satisfying \[\widetilde{\Phi}(q_{I}(a_{x,y})) =\sum_{z\in X}q_{w}(q_{I}(a_{x,z}))\otimes q_{w}(q_{\mathbb{P}}( a_{z,y})),\] \[\widetilde{\Phi}(\nu_{x}(a)) =\sum_{z\in X}(q_{w}\otimes q_{w})\big{(}(\nu_{x}\otimes\nu_{z})( \Phi(a))(q_{I}(a_{x,z})\otimes 1)\big{)}.\] For each \(a\in\mathcal{A},x,y\in X\) we have \[\widetilde{\Phi}(\nu_{x}(a)q_{I}(a_{x,y}))=\sum_{z,z^{\prime}\in X}q_{w}\left( \nu_{x}(a_{(1)})q_{I}(a_{x,z})q_{I}(a_{x,z^{\prime}})\right)\otimes q_{w}( \nu_{z}(a_{(2)})q_{I}(a_{z^{\prime},y}))\] \[=\sum_{1\leq k\leq d_{\lambda}}\big{(}q_{w}(\nu_{x}(a_{i,k}^{ \lambda}))\otimes q_{w}(\nu_{z}(a_{k,j}^{\lambda}))\big{)}(q_{w}(q_{I}(a_{x,z})) \otimes 1)(q_{w}\circ q_{I})^{\otimes 2}(\Delta(a_{x,y}))\] \[=\sum_{1\leq k\leq d_{\lambda}}\big{(}q_{w}(\nu_{x}(a_{i,k}^{ \lambda}))\otimes q_{w}(\nu_{z}(a_{k,j}^{\lambda}))\big{)}(q_{w}(q_{I}(a_{x,z}) )\otimes q_{w}(q_{I}(a_{z,y})))\] \[=\sum_{1\leq k\leq d_{\lambda}}q_{w}(\nu_{x}(a_{i,k}^{\lambda}) \otimes q_{w}(\nu_{z}(a_{k,j}^{\lambda}))\otimes q_{w}(\nu_{z}(a_{k,j}^{ \lambda})q_{I}(a_{z,y})).\] It follows that \[\Phi_{w}(a^{(\lambda,X)}_{(i,x),(j,y)}) =\sum_{z\in X}\sum_{1\leq k\leq d_{\lambda}}q_{w}(\nu_{x}(a^{\lambda }_{i,k})q_{I}(a_{x,z}))\otimes q_{w}(\nu_{z}(a^{\lambda}_{k,j})q_{I}(a_{z,y}))\] \[=\sum_{z\in X}\sum_{1\leq k\leq d_{\lambda}}a^{(\lambda,X)}_{(i,x),(k,z)}\otimes a^{(\lambda,X)}_{(k,z),(j,y)},\] and so (1) holds for all matrices \(a^{(\lambda,X)}\). To see that \(a^{(\lambda,X)}\) is invertible, we define \(b^{(\lambda,X)}\) by \[b^{(\lambda,X)}_{(i,x),(j,y)}:=q_{w}(q_{I}(a_{y,x})\nu_{y}((a^{\lambda})_{i,j} ^{-1})).\] Then we have \[(a^{(\lambda,X)}b^{(\lambda,X)})_{(i,x),(j,y)} =\sum_{z\in X}\sum_{1\leq k\leq d_{\lambda}}a^{(\lambda,X)}_{(i,x),(k,z)}b^{(\lambda,X)}_{(k,z),(j,y)}\] \[=q_{w}\Big{(}\sum_{z\in X}\sum_{1\leq k\leq d_{\lambda}}\nu_{x}(a ^{\lambda}_{i,k})q_{I}(a_{x,z})q_{I}(a_{y,z})\nu_{y}((a^{\lambda})_{k,j}^{-1}) \Big{)}\] \[=\delta_{x,y}q_{w}\Big{(}\sum_{1\leq k\leq d_{\lambda}}\nu_{x}(a ^{\lambda}_{i,k})\Big{(}\sum_{z\in X}q_{I}(a_{x,z})\Big{)}\nu_{x}((a^{\lambda} )_{k,j}^{-1})\Big{)}\] \[=\delta_{x,y}q_{w}\Big{(}\nu_{x}\Big{(}\sum_{1\leq k\leq d_{ \lambda}}a^{\lambda}_{i,k}(a^{\lambda})_{k,j}^{-1}\Big{)}\Big{)}\] \[=\delta_{x,y}q_{w}(\nu_{x}((a^{\lambda}(a^{\lambda})^{-1})_{i,j}))\] \[=\delta_{x,y}\delta_{i,j}1.\] A similar calculation shows that \((b^{(\lambda,X)}a^{(\lambda,X)})_{(i,x),(j,y)}=\delta_{x,y}\delta_{i,j}1\), and so \(a^{(\lambda,X)}\) is invertible. Similar calculations also show that \(c^{(\lambda,X)}\) with entries \[c^{(\lambda,X)}_{(i,x),(j,y)}:=q_{w}(q_{I}(a_{x,y})\nu_{x}(((a^{\lambda})^{T}) _{i,j}^{-1}))\] is the inverse of \((a^{(\lambda,X)})^{T}\). We also have \[(a^{X}(a^{X})^{T})_{x,y}=\sum_{z\in X}a^{X}_{x,z}a^{X}_{y,z}=q_{w}\left(q_{I} \left(\sum_{z\in X}a_{x,z}a_{y,z}\right)\right)=\delta_{x,y}1.\] Similarly, \((a^{X})^{T}a^{X}\) is the identity. So \(a^{X}\) and \((a^{X})^{T}\) are mutually inverse, and (2) is satisfied. We now claim that the entries of the matrices \(\{a^{(\lambda,X)}:\lambda\in\Lambda\}\cup\{a^{X}\}\) span a dense subset of \(A*_{X,w}\mathbb{P}\). For each \(x,y\in X\) we obviously have \(q_{w}(q_{I}(a_{x,y}))\) in this span since they are the entries of \(a^{X}\). 
For each \(x\in X\), \(\lambda\in\Lambda\) and \(1\leq i,j\leq d_{\lambda}\) we have \[\sum_{y\in X}a^{(\lambda,X)}_{(i,x),(j,y)}=q_{w}\left(\nu_{x}(a^{\lambda}_{i,j} )q_{I}\left(\sum_{y\in X}a_{x,y}\right)\right)=q_{w}(\nu_{x}(a^{\lambda}_{i,j} )),\] and so each \(q_{w}(\nu_{x}(a^{\lambda}_{i,j}))\) is in the span of the entries. The claim follows, and so (3) holds. **Theorem 5.7**.: _Let \(A_{\mathbb{P}}\) be a finitely-constrained self-similar quantum group in the sense of Definition 5.1. There is a unital quantum group isomorphism \(\pi\colon A_{\mathbb{P}}\to A_{\mathbb{P}}*_{w}\mathbb{P}\) satisfying_ \[\pi(q_{J}(a_{xu,yv}))=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))\big{)} \tag{5.3}\] _for all \(x,y\in X\), \(u,v\in X^{m}\), \(m\geq 0\)._ Proof.: We define \(b_{\varnothing,\varnothing}\) to be the identity of \(A_{\mathbb{P}}*_{w}\mathbb{P}\), and for each \(x,y\in X\), \(u,v\in X^{m},m\geq 0\), \[b_{xu,yv}:=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))\big{)}.\] We claim that this gives a family of projections satisfying (1)-(3) of Definition 3.2. Condition (1) holds by definition. We have \[b_{xu,yv}^{*} =q_{w}\big{(}\nu_{x}(q_{J}(a_{u,v}^{*}))q_{I}(a_{x,y}^{*})\big{)}\] \[=q_{w}\big{(}\nu_{x}(q_{J}(a_{u,v}))q_{I}(a_{x,y})\big{)}\] \[=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))\big{)}\] \[=b_{xu,yv}\] and \[b_{xu,yv}^{2}=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))q_{I}(a_{x,y}) \nu_{x}(q_{J}(a_{u,v}))\big{)}=q_{w}\big{(}q_{I}(a_{x,y}^{2})\nu_{x}(q_{J}(a_{ u,v}^{2}))\big{)}=b_{xu,yv}.\] So (2) holds. For each \(w\in X\) we have \[\sum_{z\in X}b_{xuw,yvz} =\sum_{z\in X}q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{uw,vz})) \big{)}\] \[=q_{w}\Big{(}q_{I}(a_{x,y})\nu_{x}\Big{(}q_{J}\Big{(}\sum_{z\in X }a_{uw,vz}\Big{)}\Big{)}\Big{)}\] \[=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))\big{)}\] \[=b_{xu,yv},\] and \[\sum_{z\in X}b_{xuz,yvw} =\sum_{z\in X}q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{uz,vw}) )\big{)}\] \[=q_{w}\Big{(}q_{I}(a_{x,y})\nu_{x}\Big{(}q_{J}\Big{(}\sum_{z\in X }a_{uz,vw}\Big{)}\Big{)}\Big{)}\] \[=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))\big{)}\] \[=b_{xu,yv},\] and hence (3) holds. This proves the claim, and hence the universal property of \(\mathbb{A}_{X}\) now gives a homomorphism \(\widetilde{\pi}\colon\mathbb{A}_{X}\to A_{\mathbb{P}}*_{w}\mathbb{P}\) satisfying \[\widetilde{\pi}(a_{xu,yv})=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v})) \big{)},\] for all \(x,y\in X\), \(u,v\in X^{m}\), \(m\geq 0\). We now claim that \(J\) is contained in \(\ker\widetilde{\pi}\). To see this, fix \(w\in X^{n}\), with \(w=w_{1}w^{\prime}\) for \(w_{1}\in X,w^{\prime}\in X^{n-1}\). We first prove the claim that for each \(x_{k}:=a_{u_{1},v_{1}}\cdots a_{u_{k},v_{k}}\), where \(k\geq 1\) and each pair \(u_{i},v_{i}\in X^{m_{i}}\) for some \(m_{i}\geq 0\), we have \[\widetilde{\pi}(\rho_{w}(x_{k}))=\sum_{y\in X}q_{w}\big{(}q_{I}(a_{y,w_{1}}) \nu_{y}(q_{J}(\rho_{w^{\prime}}(x_{k})))\big{)}. \tag{5.4}\] Let \(k=1\). 
Then \[\widetilde{\pi}(\rho_{w}(a_{u_{1},v_{1}}))=\sum_{y\in X}\sum_{\alpha\in X^{n- 1}}\widetilde{\pi}(a_{y\alpha u_{1},wv_{1}})\] \[=\sum_{y\in X}\sum_{\alpha\in X^{n-1}}q_{w}\big{(}q_{I}(a_{y,w_{1}}) \nu_{y}(q_{J}(a_{\alpha u_{1},w^{\prime}v_{1}}))\big{)}\] \[=\sum_{y\in X}q_{w}\Big{(}q_{I}(a_{y,w_{1}})\nu_{y}\Big{(}q_{J} \Big{(}\sum_{\alpha\in X^{n-1}}a_{\alpha u_{1},w^{\prime}v_{1}}\Big{)}\Big{)} \Big{)}\] \[=\sum_{y\in X}q_{w}\big{(}q_{I}(a_{y,w_{1}})\nu_{y}(q_{J}(\rho_{w^ {\prime}}(a_{u_{1},v_{1}})))\big{)},\] and so (5.4) holds for \(k=1\). We now assume true for \(x_{k}\), and prove for \(x_{k+1}\). Note that for \(y,y^{\prime}\in X\) we have \(q_{I}(a_{y,w_{1}})q_{I}(a_{y^{\prime},w_{1}})=\delta_{y,y^{\prime}}q_{I}(a_{y, w_{1}})\), and hence \[q_{w}\big{(}q_{I}(a_{y,w_{1}})\nu_{y}(q_{J}(\rho_{w^{\prime}}(x_{ k})))q_{I}(a_{y^{\prime},w_{1}})\nu_{y^{\prime}}(q_{J}(\rho_{w^{\prime}}(a_{u_{ k+1},v_{k+1}})))\big{)}\] \[=q_{w}\big{(}\nu_{y}(q_{J}(\rho_{w^{\prime}}(x_{k})))q_{I}(a_{y, w_{1}})q_{I}(a_{y^{\prime},w_{1}})\nu_{y^{\prime}}(q_{J}(\rho_{w^{\prime}}(a_{u_{ k+1},v_{k+1}})))\big{)}\] \[=\delta_{y,y^{\prime}}q_{w}\big{(}\nu_{y}(q_{J}(\rho_{w^{\prime} }(x_{k})))q_{I}(a_{y,w_{1}})\nu_{y}(q_{J}(\rho_{w^{\prime}}(a_{u_{k+1},v_{k+1}} )))\big{)}\] \[=\delta_{y,y^{\prime}}q_{w}\big{(}q_{I}(a_{y^{\prime},w_{1}})\nu_ {y}(q_{J}(\rho_{w^{\prime}}(x_{k})))\nu_{y}(q_{J}(\rho_{w^{\prime}}(a_{u_{k+1},v_{k+1}})))\big{)}\] \[=\delta_{y,y^{\prime}}q_{w}\big{(}q_{I}(a_{y^{\prime},w_{1}})\nu_ {y}(q_{J}(\rho_{w^{\prime}}(x_{k}a_{u_{k+1},v_{k+1}})))\big{)}.\] It follows that \[\widetilde{\pi}(\rho_{w}(x_{k+1}))=\widetilde{\pi}(\rho_{w}(x_{k}))\widetilde {\pi}(\rho_{w}(a_{u_{k+1},v_{k+1}}))=\sum_{y\in X}q_{w}\big{(}q_{I}(a_{y^{ \prime},w_{1}})\nu_{y}(q_{J}(\rho_{w^{\prime}}(x_{k}a_{u_{k+1},v_{k+1}}))) \big{)},\] and it follows that (5.4) holds for all \(k\). Since linear combinations of products of the form \(x_{k}\) is a dense subalgebra of \(\mathbb{A}_{X}\), it follows that \[\widetilde{\pi}(\rho_{w}(a))=\sum_{y\in X}q_{w}\big{(}q_{I}(a_{y,w_{1}})\nu_{y }(q_{J}(\rho_{w^{\prime}}(a)))\big{)}\] for all \(a\in\mathbb{A}_{X}\). Now, if \(a\in I\), then \(\rho_{w^{\prime}}(a)\in J=\ker q_{J}\), and hence the above equations shows that \(\widetilde{\pi}(\rho_{w}(a))=0\). Hence \(\rho_{w}(a)\in\ker\widetilde{\pi}\) for all \(w\in X^{n}\) and \(a\in I\), and hence \(J\subseteq\ker\widetilde{\pi}\). This means \(\widetilde{\pi}\) descends to a homomorphism \(\pi\colon A_{\mathbb{P}}\to A_{\mathbb{P}}*_{w}\mathbb{P}\) satisfying \[\pi(q_{J}(a_{xu,yv}))=q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v}))\big{)}\] for all \(x,y\in X\), \(u,v\in X^{m}\), \(m\geq 0\). We now show that \(\pi\) is an isomorphism by finding an inverse. For each \(x\in X\) consider the homomorphism \(q_{J}\circ\sigma_{x}\colon\mathbb{A}_{X}\to A_{\mathbb{P}}\), where \(\sigma_{x}\) is the homomorphism from Remark 4.3. Since \(\sigma_{x}=\kappa\circ\rho_{x}\circ\kappa\), and we know from [10, Remark 2.10] that \(\kappa(J)\subseteq J\), it follows that \(q_{J}\circ\sigma_{x}\) descends to a homomorphism \(\phi_{x}\colon A_{\mathbb{P}}\to A_{\mathbb{P}}\) satisfying \[\phi_{x}(q_{J}(a_{u,v}))=q_{J}(\sigma_{x}(a_{u,v}))=\sum_{y\in X}q_{J}(a_{xu,yv}),\] for all \(u,v\in X^{m},m\geq 0\). 
Each \(\phi_{x}\), and the map \(q_{I}(a)\mapsto q_{J}(a)\) from \(\mathbb{P}\) to \(A_{\mathbb{P}}\), now allow us to apply the universal property of the free product \((*_{x\in X}A_{\mathbb{P}})*\mathbb{P}\) to get a homomorphism \(\widetilde{\phi}\colon(*_{x\in X}A_{\mathbb{P}})*\mathbb{P}\to A_{\mathbb{P}}\) satisfying \(\widetilde{\phi}\circ\nu_{x}=\phi_{x}\) for each \(x\in X\), and \(\widetilde{\phi}(q_{I}(a))=q_{J}(a)\) for all \(a\in\mathbb{A}_{1}\subseteq\mathbb{A}_{X}\). We claim that \[\widetilde{\phi}\big{(}\nu_{x}(q_{J}(a_{u,v}))q_{I}(a_{x,y})-q_{I}(a_{x,y})\nu_ {x}(q_{J}(a_{u,v})\big{)}=0,\] for each \(x\in X\), \(u,v\in X^{m}\), \(m\in\mathbb{N}\). We have \[\widetilde{\phi}\big{(}\nu_{x}(q_{J}(a_{u,v}))q_{I}(a_{x,y})-q_{I}(a_{x,y})\nu_ {x}(q_{J}(a_{u,v})\big{)}\] \[=\phi_{x}(q_{J}(a_{u,v}))q_{J}(a_{x,y})-q_{G}(a_{x,y})\phi_{x}(q_{J}(a _{u,v}))\] \[=\sum_{y\in X}q_{J}(a_{xu,yv})q_{J}(a_{x,y})-\sum_{y^{\prime}\in X}q _{J}(a_{x,y})q_{J}(a_{xu,y^{\prime}v})\] \[=q_{J}(a_{xu,yv})-q_{J}(a_{xu,yv})\] \[=0.\] It follows that \(\widetilde{\phi}\) descends to a homomorphism \(\phi\colon A_{\mathbb{P}}*_{w}\mathbb{P}\to A_{\mathbb{P}}\) satisfying \[\phi(q_{w}(\nu_{x}(q_{J}(a_{u,v}))))=q_{J}(\sigma_{x}(a_{u,v}))=\sum_{y\in X}q _{J}(a_{xu,yv})\] for all \(x\in X\), \(u,v\in X^{m},m\geq 0\), and \[\phi(q_{w}(q_{I}(a_{x,y})))=q_{J}(a_{x,y})\] for all \(x,y\in X\). We claim that \(\pi\) and \(\phi\) are mutually inverse. For \(x,y\in X\), \(u,v\in X^{m},m\geq 0\), we have \[\phi(\pi(q_{J}(a_{xu,yv})))=\phi\big{(}q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J} (a_{u,v}))\big{)}\big{)}=q_{J}(a_{x,y})\sum_{y\in X}q_{J}(a_{xu,yv})=q_{J}(a_{ xu,yv}),\] and it follows that \(\phi\circ\pi\) is the identity on \(A_{\mathbb{P}}\). For \(x\in X\), \(u,v\in X^{m},m\geq 0\), we have \[\pi(\phi(q_{w}(\nu_{x}(q_{J}(a_{u,v}))))) =\pi\Big{(}\sum_{y\in X}q_{J}(a_{xu,yv})\Big{)}\] \[=\sum_{y\in X}q_{w}\big{(}q_{I}(a_{x,y})\nu_{x}(q_{J}(a_{u,v})) \big{)}\] \[=q_{w}\Big{(}q_{I}\Big{(}\sum_{y\in X}a_{x,y}\Big{)}\nu_{x}(q_{J} (a_{u,v}))\Big{)}\] \[=q_{w}(\nu_{x}(q_{J}(a_{u,v}))),\] and for all \(x,w\in X\) we have \[\pi(\phi(q_{w}(q_{I}(a_{x,y}))))=\pi(q_{J}(a_{x,y}))=q_{w}(q_{I}(a_{x,y})).\] Hence \(\pi\circ\phi\) is the identity on \(A_{\mathbb{P}}*_{w}\mathbb{P}\), and so \(\pi\) is an isomorphism. We now need to show that \(\pi\) is a homomorphism of compact quantum groups, which means that \(\Delta_{w}\circ\pi=(\pi\otimes\pi)\circ\Delta_{J}\), where \(\Delta_{J}\) is the comultiplication on \(A_{\mathbb{P}}\). 
For \(x,y\in X\), \(u,v\in X^{m},m\geq 0\), we have \[(\pi\otimes\pi) \circ\Delta_{J}(q_{J}(a_{xu,yv}))\] \[=\sum_{\alpha^{m+1}}\pi(q_{J}(a_{xu,\alpha}))\otimes\pi(q_{J}(a_{ \alpha,yv}))\] \[=\sum_{z\in X}\sum_{\beta\in X^{m}}\pi(q_{J}(a_{xu,z\beta})) \otimes\pi(q_{J}(a_{z\beta,yv}))\] \[=\sum_{z\in X}\sum_{\beta\in X^{m}}q_{w}\big{(}q_{I}(a_{x,z})\nu_ {x}(q_{J}(a_{u,\beta}))\big{)}\otimes q_{w}\big{(}q_{I}(a_{z,y})\nu_{x}(q_{J}( a_{\beta,v}))\big{)}.\] We have \[\Delta_{w}\circ\pi(q_{J}(a_{xu,yv}))=\Delta_{w}(q_{w}(q_{I}(a_{x,y})))\Delta_{w }(q_{w}(\nu_{x}(q_{J}(a_{u,v}))),\] where \[\Delta_{w}(q_{w}(q_{I}(a_{x,y})))=\sum_{z\in X}q_{w}(q_{I}(a_{x,z}))\otimes q_{w}( q_{\mathbb{P}}(a_{z,y})), \tag{5.5}\] and \[\Delta_{w}(q_{w}(\nu_{x}(q_{J}(a_{u,v}))) =\sum_{z^{\prime}\in X}(q_{w}\otimes q_{w})\big{(}(\nu_{x}\otimes \nu_{z^{\prime}})(\Delta_{J}(q_{J}(a_{u,v})))(q_{I}(a_{x,z^{\prime}})\otimes 1 )\big{)}\] \[=\sum_{z^{\prime}\in X}\sum_{\beta\in X^{m}}q_{w}\big{(}\nu_{x}(q_ {J}(a_{u,\beta}))q_{I}(a_{x,z^{\prime}})\big{)}\otimes q_{w}\big{(}\nu_{z^{ \prime}}(q_{J}(a_{\beta,v}))\big{)}. \tag{5.6}\] A typical summand in the product of the expressions in (5.5) and (5.6) is \[q_{w}\big{(}q_{I}(a_{x,z})\nu_{x}(q_{J}(a_{u,\beta}))q_{I}(a_{x, z^{\prime}})\big{)}\otimes q_{w}\big{(}q_{\mathbb{P}}(a_{z,y})\nu_{z^{\prime}}(q_ {J}(a_{\beta,v}))\big{)}\] \[=q_{w}\big{(}\nu_{x}(q_{J}(a_{u,\beta}))q_{I}(a_{x,z})q_{I}(a_{x, z^{\prime}})\big{)}\otimes q_{w}\big{(}q_{\mathbb{P}}(a_{z,y})\nu_{z^{\prime}}(q_ {J}(a_{\beta,v}))\big{)}\] \[=\delta_{z,z^{\prime}}q_{w}\big{(}\nu_{x}(q_{J}(a_{u,\beta}))q_{I }(a_{x,z})\big{)}\otimes q_{w}\big{(}q_{\mathbb{P}}(a_{z,y})\nu_{z}(q_{J}(a_{ \beta,v}))\big{)}\] \[=\delta_{z,z^{\prime}}q_{w}\big{(}q_{I}(a_{x,z})\nu_{x}(q_{J}(a_{ u,\beta}))\big{)}\otimes q_{w}\big{(}q_{\mathbb{P}}(a_{z,y})\nu_{z}(q_{J}(a_{ \beta,v}))\big{)}.\] Hence \[\Delta_{w}\circ\pi(q_{J}(a_{xu,yv})) =\sum_{z\in X}q_{w}(q_{I}(a_{x,z}))\otimes q_{w}(q_{\mathbb{P}}(a_ {z,y}))\] \[=\sum_{z\in X}\sum_{\beta\in X^{m}}q_{w}\big{(}q_{I}(a_{x,z})\nu_ {x}(q_{J}(a_{u,\beta}))\big{)}\otimes q_{w}\big{(}q_{I}(a_{z,y})\nu_{x}(q_{J}( a_{\beta,v}))\big{)}\] \[=(\pi\otimes\pi)\circ\Delta_{J}(q_{J}(a_{xu,yv})),\] and it follows that \(\Delta_{w}\circ\pi=(\pi\otimes\pi)\circ\Delta_{J}\). _Example 5.8_.: An immediate consequence of Theorem 5.7 is that \(A_{\mathbb{P}}\) is noncommutative whenever \(\mathbb{P}\) is a noncommutative quantum subgroup of \(\mathbb{A}_{1}\). A class of such examples comes from Banica and Bichon's [1, Theorem 1.1], in which they classify all the quantum subgroups \(\mathbb{P}\) of \(\mathbb{A}_{1}\) for \(|X|=4\); the corresponding list of quantum groups \(A_{\mathbb{P}}\) gives us a list of potentially interesting self-similar quantum groups for further study.
2307.13788
Histogram Layer Time Delay Neural Networks for Passive Sonar Classification
Underwater acoustic target detection in remote marine sensing operations is challenging due to complex sound wave propagation. Despite the availability of reliable sonar systems, target recognition remains a difficult problem. Various methods address improved target recognition. However, most struggle to disentangle the high-dimensional, non-linear patterns in the observed target recordings. In this work, a novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification. The proposed method outperforms the baseline model, demonstrating the utility in incorporating statistical contexts for passive sonar target recognition. The code for this work is publicly available.
Jarin Ritu, Ethan Barnes, Riley Martell, Alexandra Van Dine, Joshua Peeples
2023-07-25T19:47:26Z
http://arxiv.org/abs/2307.13788v1
# Histogram Layer Time Delay Neural Networks for Passive Sonar Classification ###### Abstract Underwater acoustic target detection in remote marine sensing operations is challenging due to complex sound wave propagation. Despite the availability of reliable sonar systems, target recognition remains a difficult problem. Various methods address improved target recognition. However, most struggle to disentangle the high-dimensional, non-linear patterns in the observed target recordings. In this work, a novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification. The proposed method outperforms the baseline model, demonstrating the utility in incorporating statistical contexts for passive sonar target recognition. The code for this work is publicly available. Jarin Ritu\({}^{1}\), Ethan Barnes\({}^{1}\), Riley Martell\({}^{2}\), Alexandra Van Dine\({}^{2}\), Joshua Peeples\({}^{1}\)\({}^{1}\)Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA \({}^{2}\)Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, USA Deep learning, histograms, passive sonar, target classification, texture analysis ## 1 Introduction Underwater acoustic target recognition (UATR) technology plays a crucial role in a variety of domains, including biology [1], carrying out search and rescue operations, enhancing port security [2], and mapping the ocean floor [3]. One of the primary target detection techniques used by modern crafts, such as unmanned underwater vehicles, is passive sonar [4]. Passive sonar is an underwater acoustic technology that uses hydrophones to detect and analyze sound waves in the ocean [5]. Unlike active sonar, passive sonar resolves targets from the natural sounds of the ocean and the noises produced by ships and other underwater vehicles. Processing and analyzing passive sonar data can be challenging due to the high volume of data and environmental complexity [6]. Signal processing techniques are often used to analyze ship-generated noise such as low frequency analysis and recording (LOFAR) spectra [7]. The Detection of Envelope Modulation on Noise (DEMON) is an approach that has been successfully used for target detection and recognition in passive sonar [8, 9, 10]. Despite their success, these approaches use handcrafted features that can be difficult to extract without domain expertise [11]. Artificial neural networks (ANNs), such as convolutional neural networks (CNNs) and time delay neural networks (TDNNs), provide an end-to-end process for automated feature learning and follow-on tasks (_e.g._, detection and classification of signals) [12, 13, 14, 15]. The TDNN has shown success in simulating long-term temporal dependencies [16] and can be modeled as a 1D CNN [13]. Thus, the TDNN can adaptively learn the sequential hierarchies of features, but does not explicitly account for the statistics of passive sonar data. These are difficult to model for feature extraction [17, 18]. The statistics of the signals can describe the acoustic texture of the targets of interest [18]. Texture generally falls into two categories: statistical and structural [19, 20, 21, 22].Statistical context in audio analysis involves studying the amplitude information of the audio signal. One way to capture amplitude information is by using probability density functions [18]. 
However, traditional artificial neural network (ANN) approaches, like convolutional neural networks (CNNs) and time-delay neural networks (TDNNs), have shown a bias towards capturing structural textures rather than statistical textures [20, 21, 22]. This bias limits their ability to directly model the statistical information required to capture acoustic textures accurately. To overcome this shortcoming, histogram layers can be integrated into ANNs to incorporate statistical context [22]. Methods that combine both structural and statistical textures have improved performance for other tasks such as image classification and segmentation [20, 21, 22]. In this work, we propose a new TDNN architecture that integrates histogram layers for improved target classification. Our proposed workflow is summarized in Figure 1. Figure 1: Overall experimental work flow. Each signal is resampled to \(16\) kHz and binned into three-second segments. After dividing the signals and corresponding segments into training, validation, and test partitions, several time-frequency features are extracted. The features are then passed into the model and classified as one of the four vessel types. The contributions of this work are as follows: * Novel TDNN architecture with histogram layer (HLTDNN) for passive sonar target classification * In-depth qualitative and quantitative comparisons of TDNN and HLTDNN across a suite of time-frequency features. ## 2 Method ### Baseline TDNN Architecture The TDNN architecture consists of several convolution layers with the ReLU activation function and max pooling. 2D convolutional features are extracted from the time-frequency input to capture local relationships within the vessel's frequency information [23]. Padding is added to the input time-frequency feature to maintain the spatial dimensions of the resulting feature maps. After each convolution operation and ReLU activation function, the features are pooled along the time axis with a desired kernel length \(L\) (_e.g._, a max pooling kernel of size \(1\times L\)) to aggregate the feature information while maintaining the temporal dependencies, similar to other TDNNs [16, 23]. After the fourth convolutional block, the features are flattened and then passed through a final 1D convolutional layer followed by a sigmoid activation function and a global average pooling (GAP) layer. ### Proposed HLTDNN The baseline TDNN focuses on the "structural" (_e.g._, local) acoustic textures of time and frequency as well as the temporal dependencies in the data. However, the model does not directly consider the statistical aspects of the data. A histogram layer [22] can be added in parallel to the baseline TDNN model to capture statistical features and assist in improving classification performance. Given input features \(\mathbf{X}\in\mathbb{R}^{M\times N\times D}\), where \(M\) and \(N\) are the spatial (or time-frequency) dimensions and \(D\) is the feature dimensionality, the output tensor \(\mathbf{Y}\in\mathbb{R}^{R\times C\times B\times D}\) of a local histogram layer with \(B\) bins, kernel size \(S\times T\), and output spatial dimensions \(R\) and \(C\) is given in (1): \[Y_{rcbd}=\frac{1}{ST}\sum_{s=1}^{S}\sum_{t=1}^{T}e^{-\gamma_{bd}^{2}\left(x_{r+s,c+t,d}-\mu_{bd}\right)^{2}} \tag{1}\] where the bin centers (\(\mu_{bd}\)) and bin widths (\(\gamma_{bd}\)) of the histogram layer are learnable parameters.
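As a concrete illustration of (1), the soft binning can be sketched as a small PyTorch module: each (channel, bin) pair produces a radial basis function (RBF) "vote" in \([0,1]\), and average pooling realizes the \(\frac{1}{ST}\) sum over the \(S\times T\) window. This is a minimal sketch rather than the released implementation; the class name, bin initialization, and kernel size below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LocalHistogramLayer(nn.Module):
    """Soft local histogram: RBF votes per (channel, bin) pair, then average pooling."""

    def __init__(self, in_channels: int, num_bins: int = 16, kernel_size: int = 3):
        super().__init__()
        self.num_bins = num_bins
        # One learnable center and width per (channel, bin) pair.
        self.centers = nn.Parameter(torch.linspace(0.0, 1.0, num_bins).repeat(in_channels))
        self.widths = nn.Parameter(torch.ones(in_channels * num_bins))
        # Average pooling implements the (1/ST) sum over the S x T window in Eq. (1).
        self.pool = nn.AvgPool2d(kernel_size, stride=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, D, M, N) feature maps; each channel is binned independently.
        x = x.repeat_interleave(self.num_bins, dim=1)            # (batch, D*B, M, N)
        centers = self.centers.view(1, -1, 1, 1)
        widths = self.widths.view(1, -1, 1, 1)
        votes = torch.exp(-(widths ** 2) * (x - centers) ** 2)   # RBF "votes" in [0, 1]
        return self.pool(votes)                                   # (batch, D*B, R, C)
```

Initializing the centers uniformly in \([0,1]\) assumes the incoming features are roughly normalized to that range; since the centers and widths are learned, this is only a starting point.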
Each input feature dimension is treated independently, resulting in \(BD\) output histogram feature maps. The histogram layer takes input features and outputs a "vote" in the range \([0,1]\) for each bin. The histogram layer can be modeled using convolution and average pooling layers as shown in Figure 2. Figure 2: Proposed HLTDNN architecture. The histogram layer is added in parallel with the baseline TDNN model through the bin center and width convolution layers with the radial basis activation function (RBF) and an average pooling layer. Following previous work [22], the histogram layer is added after the fourth convolutional block (_i.e._, convolution, ReLU, and max pooling) and its features are concatenated with the TDNN features before the final output layer. ## 3 Experimental Procedure ### Dataset Description The DeepShip dataset [14] was used in this work. The database contained 609 records reflecting the sounds of four different ship types: cargo, passengership, tanker, and tug. Following [14], each signal is re-sampled to a frequency of \(16\) kHz and divided into segments of three seconds. Figure 3 illustrates the structure of the dataset after "binning" the signals into segments. The number of signals and segments for each class is also shown. Figure 3: DeepShip dataset structure. ### Experimental Design **Feature Extraction** Six different features are extracted: Mel Spectrogram (MS), Mel-frequency cepstral coefficients (MFCCs), Short-time Fourier transform (STFT), Gammatone-frequency cepstral coefficients (GFCC), Constant-q transform (CQT), and Variable-q transform (VQT). The window and hop length for each feature were set to \(250\) and \(64\) ms respectively [14]. The number of Mel filter banks for the Mel Spectrogram was set to \(40\). For MFCC, the number of Mel-frequency cepstral coefficients was 16. The number of frequency bins for STFT was \(48\), while GFCC, CQT, and VQT used 64 frequency bins. The feature dimensions after zero-padding were \(48\times 48\) for MS and STFT, \(16\times 48\) for MFCC, and \(64\times 48\) for GFCC, CQT, and VQT. **Data partitioning** The dataset was split into 70% training, 15% validation, and 15% test based on the signals (428 training, 90 validation, and 91 test). After "binning" the signals into three-second segments, 56,468 segments were created (38,523 training, 9,065 validation, and 8,880 test). All segments of each signal remained in the same partition to prevent data leakage (_i.e._, if one signal was selected for training, all segments of the signal were also used for training). **Experimental setup** The models (TDNN or HLTDNN) were evaluated with each individual feature across three runs of random initialization. The experimental parameters for the models were the following: * Optimizer: Adagrad * Learning rate (\(\eta\)): 0.001 * Batch size: 128 * Epochs: 100 * Dropout (\(p\)): 0.5 * Early stopping: 10 epochs * Number of bins (HLTDNN): 16 Dropout was added before the output classification layer, and early stopping was used to terminate training if the validation loss did not improve within the patience window of 10 epochs. Experiments were conducted on an NVIDIA RTX 3090. The models are implemented in PyTorch 1.13, TorchAudio 2.0, and nnAudio 0.3.1 [24]. ## 4 Results and Discussion ### Classification Performance TDNN and HLTDNN classification performances are shown in Table 1.
Classification performance was accessed using five metrics: accuracy, precision, recall, F1 score, and Matthew's correlation coefficient (MCC). Fisher's discriminant ratio (FDR) was used to access the feature quality (discussed more in Section 4.2). Confusion matrices for the TDNN and HLTDNN using best performing feature are displayed in Figures 3(a) and 3(b) respectively. For the HLTDNN, STFT achieved the best classification performance compared to other features. However, MFCC had the best for performance for TDNN across the different performance metrics. STFT performed similarly to MFCC when observing classification accuracy. Additional quantitative and qualitative analysis will use STFT to evaluate the impact of the histogram layer on the vessel classification. The TDNN model initially performed well with the Mel spectrogram, MFCC, and STFT, but significantly degraded for the other three features (Table 1). The best performance was achieved using the MFCC feature as input while the worst feature was GFCC. A \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Features & Model & Accuracy & Precision & Recall & F1 Score & MCC & FDR \\ \hline \multirow{2}{*}{MS} & TDNN & 50.31 \(\pm\) 1.41\% & 39.56 \(\pm\) 0.05\% & 47.67 \(\pm\) 0.03\% & 42.09 \(\pm\) 0.02\% & 34.22 \(\pm\) 0.02\% & 4.14 \(\pm\) 1.50 \\ \cline{2-7} & HLTDNN & 47.46 \(\pm\) 2.39\% & 45.25 \(\pm\) 0.03\% & 51.80 \(\pm\) 0.04\% & 46.00 \(\pm\) 0.03\% & 29.55 \(\pm\) 0.03\% & **20.51 \(\pm\) 1.86** \\ \hline \multirow{2}{*}{MFCC} & TDNN & 51.39 \(\pm\) 0.79\% & 50.10 \(\pm\) 0.02\% & 49.95 \(\pm\) 0.03\% & 49.48 \(\pm\) 0.02\% & 34.84 \(\pm\) 0.01\% & 5.34 \(\pm\) 1.29 \\ \cline{2-7} & HLTDNN & 54.41 \(\pm\) 0.42\% & 54.28 \(\pm\) 0.03\% & 53.91 \(\pm\) 0.03\% & **53.62 \(\pm\) 0.02\%** & 39.38 \(\pm\) 0.02\% & 15.29 \(\pm\) 1.85 \\ \hline \multirow{2}{*}{STFT} & TDNN & 51.15 \(\pm\) 0.72\% & 40.88 \(\pm\) 0.03\% & 48.49 \(\pm\) 0.01\% & 43.86 \(\pm\) 0.02\% & 24.04 \(\pm\) 0.04\% & 8.30 \(\pm\) 2.87 \\ \cline{2-7} & HLTDNN & **59.21 \(\pm\) 0.56\%** & **54.84 \(\pm\) 0.02\%** & **56.59 \(\pm\) 0.03\%** & 53.23 \(\pm\) 0.02\% & **46.05 \(\pm\) 0.01\%** & 17.75 \(\pm\) 0.58 \\ \hline \multirow{2}{*}{GFCC} & TDNN & 27.73 \(\pm\) 0.18\% & 17.45 \(\pm\) 0.00\% & 26.40 \(\pm\) 0.00\% & 17.61 \(\pm\) 0.00\% & 3.63 \(\pm\) 0.00\% & 15.26 \(\pm\) 0.44 \\ \cline{2-7} & HLTDNN & 43.42 \(\pm\) 0.61\% & 39.63 \(\pm\) 0.01\% & 41.44 \(\pm\) 0.01\% & 38.57 \(\pm\) 0.01\% & 24.24 \(\pm\) 0.01\% & 11.94 \(\pm\) 4.82 \\ \hline \multirow{2}{*}{CQT} & TDNN & 36.89 \(\pm\) 0.83\% & 23.34 \(\pm\) 0.03\% & 34.92 \(\pm\) 0.07\% & 30.85 \(\pm\) 0.02\% & 15.06 \(\pm\) 0.01\% & 16.95 \(\pm\) 0.56 \\ \cline{2-7} & HLTDNN & 50.66 \(\pm\) 1.37\% & 44.37 \(\pm\) 0.01\% & 48.04 \(\pm\) 0.02\% & 43.62 \(\pm\) 0.02\% & 34.30 \(\pm\) 0.02\% & 13.14 \(\pm\) 3.61 \\ \hline \multirow{2}{*}{VQT} & TDNN & 36.76 \(\pm\) 0.96\% & 28.14 \(\pm\) 0.02\% & 34.80 \(\pm\) 0.07\% & 30.76 \(\pm\) 0.02\% & 14.84 \(\pm\) 0.01\% & 16.82 \(\pm\) 0.94 \\ \cline{2-7} & HLTDNN & 50.12 \(\pm\) 0.27\% & 43.35 \(\pm\) 0.02\% & 47.57 \(\pm\) 0.01\% & 43.40 \(\pm\) 0.01\% & 33.44 \(\pm\) 0.00\% & 13.28 \(\pm\) 2.87 \\ \hline \end{tabular} \end{table} Table 1: Overall performance metrics for baseline TDNN and proposed HLTDNN model. The average score with \(\pm 1\sigma\) across the three experimental runs of random initialization is shown and the best average metric is bolded. The log of the Fisher Discriminant Ratio (FDR) is shown due to the magnitude of the FDR score. 
The time-frequency features in this work were Mel Spectrogram (MS), Mel-frequency cepstral coefficients (MFCC), Short-time Fourier transform (STFT), Gammatone-frequency cepstral coefficients (GFCC), Constant-q transform (CQT), and Variable-q transform (VQT). Figure 4: Average confusion matrices for the TDNN and HLTDNN on the DeepShip dataset using the STFT feature. The average overall test accuracy is shown in parentheses. A possible reason for this is that each feature used a 250 ms window and a hop length of 64 ms. The short time frame may limit the frequency-domain representation, and selecting the best frequency band greatly impacts performance [25]. However, the performance of the HLTDNN was fairly robust across the different time-frequency features. The STFT feature performed the best for this model, and the HLTDNN also improved the performance of the GFCC, CQT, and VQT features significantly in comparison to the TDNN. This demonstrates that the statistical context captured by the histogram layer is useful for improving target classification. Neither model identified the Cargo class as well as the other vessel types, as shown in Figure 4. In particular, the most common classification mistakes occurred when the model predicted Cargo as Tanker (_i.e._, a false positive for the Tanker class). Intuitively, this classification error makes sense because a Tanker is a type of cargo ship (_e.g._, an oil tanker [26]) and the sound produced by each ship may be similar. Also, the Cargo class in the DeepShip data has been noted to have high intra-class variance [27]. As a result, the Cargo class was the most difficult to classify. Feature regularization methods (_e.g._, contrastive learning) can be incorporated into the objective function to mitigate intra-class variance. ### Feature Evaluation In addition to the classification metrics, the quality of the features was assessed using Fisher's Discriminant Ratio (FDR). FDR is the ratio of the inter-class separability to the intra-class compactness. Ideally, the inter-class separability should be maximized (_i.e._, different vessel types should be "far away" from one another, or have large distances between the classes in the feature space) and the intra-class compactness should be minimized (_i.e._, samples from the same class should be "close", or have small distances between one another in the feature space). As a result, the FDR should be maximized. From Table 1, the log of the FDR shows that the histogram model achieved the best FDR scores for all six features, further demonstrating the utility of the statistical features. A deeper analysis using the best performing feature (STFT) in terms of classification performance is shown in Table 2. For all four classes, the log FDR for the HLTDNN is significantly higher (no overlapping error bars) than that of the TDNN. The main difference between the two models was the increased feature separability of the HLTDNN model in comparison with the baseline TDNN. The TDNN had a smaller denominator (_i.e._, intra-class compactness) than the HLTDNN when computing the norm of the within-scatter matrix, indicating that the TDNN performs marginally better in terms of intra-class compactness. On the other hand, the features from the HLTDNN are more separable than those from the TDNN, as evident from the norm of the between-scatter matrix, showing the HLTDNN's superiority in terms of inter-class separability. The FDR scores further elucidate the importance of the statistical texture information captured by the histogram layer.
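For reference, one common scatter-matrix formulation of the FDR described above is sketched below; the function name, use of Frobenius norms, and placeholder data are illustrative assumptions, and the exact normalization used in this work may differ.

```python
import numpy as np

def log_fisher_discriminant_ratio(features: np.ndarray, labels: np.ndarray) -> float:
    """Log of ||S_b|| / ||S_w||: between-class scatter (separability) over
    within-class scatter (compactness), both summarized by a Frobenius norm."""
    overall_mean = features.mean(axis=0)
    dim = features.shape[1]
    s_w = np.zeros((dim, dim))
    s_b = np.zeros((dim, dim))
    for c in np.unique(labels):
        cls = features[labels == c]
        diff = cls - cls.mean(axis=0)
        s_w += diff.T @ diff                                  # intra-class compactness
        mean_diff = (cls.mean(axis=0) - overall_mean)[:, None]
        s_b += len(cls) * (mean_diff @ mean_diff.T)           # inter-class separability
    return float(np.log(np.linalg.norm(s_b) / np.linalg.norm(s_w)))

# Example with 2-D placeholder features for three classes
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in range(3)])
labs = np.repeat(np.arange(3), 100)
print(f"log FDR = {log_fisher_discriminant_ratio(feats, labs):.2f}")
```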
Figure 5 shows the 2D t-SNE projection of the features from the best performing models using the STFT feature. The same random initialization for t-SNE was used for both methods to allow a fair comparison between the two models. The qualitative results of t-SNE match our quantitative analysis using FDR. The features extracted by the histogram layer act as a similarity measure for the statistics of the data, assigning higher "votes" to bins whose centers are closer to the feature values. The addition of these features to the TDNN model improved the separability of the classes, as observed in Figure 4(b). Modifying the histogram layer to help improve the intra-class compactness of the HLTDNN would be of interest in future investigations. ## 5 Conclusion In this work, a novel HLTDNN model was developed to incorporate statistical information for improved target classification in passive sonar. In comparison to the baseline TDNN, the HLTDNN not only improved classification performance but also led to improved feature representations for the vessel types. Future work will investigate combining features as opposed to using a single time-frequency representation as the input to the network. Each feature can also be tuned (_e.g._, changing the number of frequency bins) to enhance the representation of the signals. Additionally, both architectures can be improved by a) adding more depth and b) leveraging pretrained models. The training strategy could also incorporate approaches to mitigate overfitting and improve performance, such as regularization of the histogram layer (_e.g._, adding constraints to the bin centers and widths) and data augmentation.
2306.12967
On the Degree of Dynamical Packing in the Kepler Multi-planet Systems
Current planet formation theories rely on initially compact orbital configurations undergoing a (possibly extended) phase of giant impacts following the dispersal of the dissipative protoplanetary disk. The orbital architectures of observed mature exoplanet systems have likely been strongly sculpted by chaotic dynamics, instabilities, and giant impacts. One possible signature of systems continually reshaped by instabilities and mergers is their dynamical packing. Early Kepler data showed that many multi-planet systems are maximally packed - placing an additional planet between an observed pair would make the system unstable. However, this result relied on placing the inserted planet in the most optimistic configuration for stability (e.g., circular orbits). While this would be appropriate in an ordered and dissipative picture of planet formation (i.e. planets dampen into their most stable configurations), we argue that this best-case scenario for stability is rarely realized due to the strongly chaotic nature of planet formation. Consequently, the degree of dynamical packing in multi-planet systems under a realistic formation model is likely significantly higher than previously realized. We examine the full Kepler multi planet sample through this new lens, showing that ~60-95% of Kepler multi-planet systems are strongly packed and that dynamical packing increases with multiplicity. This may be a signature of dynamical sculpting or of undetected planets, showing that dynamical packing is an important metric that can be incorporated into planet formation modelling or when searching for unseen planets.
Alysa Obertas, Daniel Tamayo, Norm Murray
2023-06-22T15:25:32Z
http://arxiv.org/abs/2306.12967v1
# On the Degree of Dynamical Packing in the Kepler Multi-planet Systems ###### Abstract Current planet formation theories rely on initially compact orbital configurations undergoing a (possibly extended) phase of giant impacts following the dispersal of the dissipative protoplanetary disk. The orbital architectures of observed mature exoplanet systems have likely been strongly sculpted by chaotic dynamics, instabilities, and giant impacts. One possible signature of systems continually reshaped by instabilities and mergers is their dynamical packing. Early Kepler data showed that many multi-planet systems are maximally packed - placing an additional planet between an observed pair would make the system unstable. However, this result relied on placing the inserted planet in the most optimistic configuration for stability (e.g., circular orbits). While this would be appropriate in an ordered and dissipative picture of planet formation (i.e. planets dampen into their most stable configurations), we argue that this best-case scenario for stability is rarely realized due to the strongly chaotic nature of planet formation. Consequently, the degree of dynamical packing in multi-planet systems under a realistic formation model is likely significantly higher than previously realized. We examine the full Kepler multi-planet sample through this new lens, showing that \(\sim 60-95\%\) of Kepler multi-planet systems are strongly packed and that dynamical packing increases with multiplicity. This may be a signature of dynamical sculpting or of undetected planets, showing that dynamical packing is an important metric that can be incorporated into planet formation modelling or when searching for unseen planets. keywords: exoplanets - planets and satellites: dynamical evolution and stability ## 1 Introduction The last decade of exoplanet observations has revolutionized our understanding of planets and planetary systems. Astronomers have a wealth of data on protoplanetary disks (e.g. see Andrews, 2020) and of long-lived multi-planet systems (e.g. see Zhu and Dong, 2021) which have driven significant innovation and progress in planet formation models (e.g. see Drazkowska et al., 2022). Most relevant to our investigation is a question of nature versus nurture. Did the Kepler multi-planet systems form in the gas disk with their observed architectures, or did those architectures arise from subsequent dynamical interactions? Most planet formation theories include a final giant impact phase in which planets excite one another onto crossing orbits and collide (e.g. see Goldreich et al. (2004); Wyatt and Jackson (2016), and reviews above). Numerical (i.e. using nobody simulations, e.g. Hansen and Murray (2013); Dawson et al. (2016); Matsumoto and Kokubo (2017); Poon et al. (2020); MacDonald et al. (2020)) and analytical (e.g Tremaine, 2015) exploration of this phase can approximately reproduce the observed exoplanet population. In this picture, planetary systems repeatedly destabilize from compact configurations with many planets, leading to gravitational scatterings and planetary mergers that leave behind ever longer-lived, lower-multiplicity architectures (Laskar, 1996). This hypothesis, under which orbital architectures are dynamically sculpted throughout planetary systems' lifetimes has been advanced from different perspectives in an exoplanet context by Volk and Gladman (2015), Pu and Wu (2015), Izidoro et al. (2017). 
An extreme, but testable scenario for the outcome of this process is the packed planetary systems (PPS) hypothesis. Barnes and Raymond (2004) proposed that these dynamical instabilities result in orbital architectures with no room for additional planets between adjacent pairs, and tested this hypothesis on a sample of observed systems with multiple giant planets (see also Barnes and Quinn, 2004). The Kepler mission's discovery of hundreds of multi-planet systems allowed for the first test of this hypothesis for lower mass planets, conducted by Fang and Margot (2013). They found that at least \(31-45\%\) of multi-planet systems are dynamically packed, broadly consistent with the PPS hypothesis. Fang and Margot (2013) obtain lower limits to this dynamical packing by always placing an additional planet in the orbital configuration that one would, on average, expect to be most stable: midway between the observed pair in mutual Hill radii (Appendix A), with all orbits initially circular. Under this binary definition, the system is dynamically packed only if this favourable configuration for an inserted planet goes unstable within the integration time. A major advantage to this approach is that it limits the number of integrations to perform (consequently reducing overall computation time). However, we know that this most optimistic scenario for inserted planets does not hold. Radial velocity, transit timing, and transit duration measurements have revealed that exoplant orbits in multi-planet systems typically have small but finite eccentricities (e.g. Van Eylen & Albrecht, 2015; Xie et al., 2016; Hadden & Lithwick, 2017; Mills et al., 2019; Van Eylen et al., 2019). Additionally, even if an equidistant location for an additional planet were stable, there is no guarantee that the planet formation process would create a planet in precisely that location. Instead, we propose a more general and continuous measure of dynamical packing, which evaluates the stability of a range of configurations for the inserted planet, weighted by the probability that planet formation would create such a configuration. This more physically meaningful approach to dynamical packing requires a corresponding generative model of planet formation from which to sample configurations for the inserted planet. Since astronomers are far from such a precise model at the current time, we consider two simple end-member planet-formation scenarios. First, for a sufficiently chaotic giant-impact phase, one might expect planet formation to explore the full phase space of possible orbital architectures. Not all of these outcomes will be long-term stable, in which case collisions and scatterings yield a new configuration. By adopting a simplified stability criterion, Tremaine (2015) proposed an analytic model in which the chaotic giant impact stage populates space uniformly and planets would only be observed in regions with long-term stability. This "ergodic hypothesis" yields distributions of orbital eccentricities and interplanetary spacings that are broadly consistent with observations (Tremaine, 2015). The natural definition of dynamical packing in this framework is then the fraction of phase space between adjacent pairs in which inserting an additional planet leads to instability. It's important to note that such a strongly chaotic giant impact phase does not necessarily result in disordered systems. Lammers et al. (2023)(see also Goldberg et al. 
(2022)) have shown that numerical integrations of a giant impact phase can nevertheless reproduce the observed preference for similar radii, masses, and spacings within multi-planet systems (Weiss et al., 2018; Millholland et al., 2017). At the opposite extreme, it is possible that dissipation (e.g. from gas or planetesimal disks) plays a strong role in the early stages of planet formation, allowing planetary systems to settle into their most stable configurations and avoid later instabilities. For example, Adams (2019); Adams et al. (2020) showed that, under various constraints and simplifying the gravitational interaction between planets, minimization of energy can explain some features of the observed similarities in radii, masses, and spacings within multi-planet systems. If one directly models the interplanetary interactions, dissipative and convergent migration naturally leads to the formation of resonant chains, several of which are now known (Mills et al., 2016; Gillon et al., 2017; Luger et al., 2017), and have presumably not left these resonances since formation (Tamayo et al., 2017). However, the vast majority of multi-planet systems are _not_ in such resonant-chain configurations, so if they do form during the gas disk phase, the majority must subsequently break away from their resonances (Goldreich & Schlichting, 2014; Izidoro et al., 2017; Izidoro & Raymond, 2018). The resonance escape may or may not lead to major instabilities. If it does, the end result of such instabilities and planetary mergers might then be expected to follow the predictions of the "ergodic hypothesis" mentioned above. Nevertheless, this ordered framework for planet formation (which we refer to as the "ordered hypothesis"), provides a useful extreme scenario to compare against. In this picture, we define an adjacent pair of planets as dynamically packed if there does not exist a long-term stable configuration _anywhere_ for an additional planet placed in between the observed pair. It is therefore sufficient to examine the stability of the additional planet in its most stable configuration. This means that for the ordered hypothesis, dynamical packing reduces to a binary classification similar to that of Barnes & Raymond (2004) and Fang & Margot (2013). The ergodic definition of dynamical packing requires the exploration of a vast phase space for observed and inserted planets (i.e. planet masses and orbit configurations) and would need a prohibitive number of CPU hours to perform stability assessment using Nbody integrations. We therefore employ the SPOCK package, which incorporates analytical dynamical models into flexible machine learning architectures to speed up stability determination by a factor of up to \(10^{5}\) relative to direct N-body integrations (Tamayo et al., 2020). In addition to new computational tools available to perform stability tests, the Kepler mission discovered an order of magnitude more multi-planet systems in the time since Fang & Margot (2013)(Thompson et al., 2018). GAIA DR-2 (Gaia Collaboration et al., 2018) also allowed for more accurate stellar parameters (Berger et al., 2020), and consequently planetary parameters, of the Kepler systems (Berger et al., 2020). Furthermore, there have been substantial improvements in observational, theoretical, and modelling work of the mass-radius relationship (e.g. Chen & Kipping, 2017), eccentricities (e.g. 
Van Eylen & Albrecht, 2015; Xie et al., 2016; Hadden & Lithwick, 2017; Mills et al., 2019; Van Eylen et al., 2019; He et al., 2020), and inclinations (e.g. Zhu et al., 2018; He et al., 2020; Millholland et al., 2021) of planets in the Kepler sample, all of which strongly impact the dynamics and stability of multi-planet systems. To build up our results sequentially, we first conduct tests using only the Kepler Quarters 1-6 data in a manner similar to Fang & Margot (2013). Next, we repeat our analysis with the updated Kepler and GAIA DR-2 combined data. Finally, we move beyond a simple binary classification of dynamical packing under a dissipative, ordered model of planet formation, and evaluate a continuous measure of dynamical packing under a giant impact phase closer to current theories of planet formation. We describe our methods for generating our planet catalogues and performing our dynamical tests in Section 2 and present the results of all our tests in Section 3. Finally, we summarize our findings and discuss their implications in Section 4. ## 2 Methods To investigate whether the Kepler multi-planet systems are dynamically packed, we build on Fang & Margot (2013). We run several sets of stability tests after inserting an additional planet in between adjacent pairs in observed systems. Fang & Margot (2013) were limited by the small-number statistics of multi-planet systems detected by the _Kepler_ mission in quarters 1-6 (Q1-6). As a result, they first modelled an underlying multi-planet population and then inserted additional planets into this synthetic population. With an order-of-magnitude more planet-pairs now discovered, we inserted additional planets directly into observed multi-planet systems from the full Kepler catalogue instead. Additionally, we used improved stellar and planetary parameters informed by the second data release from the GAIA mission (Gaia Collaboration et al., 2018). We build up to our extended analysis in several steps. First, for a meaningful comparison to Fang & Margot (2013), we perform a similar style of tests and analysis as Fang & Margot (2013) using the same observed multi-planet systems in _Kepler_ data from Q1-6 (described in Sec. 2.1), except with direct insertion into observed multi-planet systems rather than imposing a particular parameterized distribution to model the underlying planet population (Sec. 2.3). We then extend the same analysis to the full Kepler-Gaia catalogue (described in Sec. 2.2). Finally, we move beyond testing for the stability of a single configuration for an additional planet, to exploring the full phase space of additional-planet configurations (Sec. 2.4). We assessed the stability of planets inserted between observed adjacent pairs using two methods. Our primary tool was SPOCK(Tamayo et al., 2020), a machine learning code which provides the probability \(p_{\texttt{SPOCK}}\) that a system is stable for \(10^{9}\) orbits. Since previous works on this topic express their results in terms of the fraction of systems that are unstable (i.e. dynamically-packed), for consistency we convert this into the probability that a system is unstable within \(10^{9}\) orbits (i.e. \(p=1-p_{\texttt{SPOCK}}\) ). Throughout the remainder of this paper, we use "probability" to refer to this instability probability. SPOCK performs short \(10^{4}\)-orbit N-body integrations with REBOUND, and uses these to generate the input features to the machine learning model in order to determine the probability (Tamayo et al., 2020). 
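As a concrete illustration of this conversion, the sketch below scores a REBOUND simulation with SPOCK's feature-based classifier and reports the instability probability \(p=1-p_{\texttt{SPOCK}}\). The planet parameters are placeholders, and the exact classifier interface should be checked against the installed SPOCK version.

```python
import rebound
from spock import FeatureClassifier  # feature-based stability classifier from the SPOCK package

model = FeatureClassifier()

def instability_probability(sim: rebound.Simulation) -> float:
    """Return p = 1 - p_SPOCK, the probability of instability within 1e9 orbits."""
    return 1.0 - float(model.predict_stable(sim))

# Placeholder three-planet configuration (masses relative to the star; periods in code units)
sim = rebound.Simulation()
sim.add(m=1.0)                       # host star
sim.add(m=1e-5, P=1.00, e=0.01)      # inner observed planet
sim.add(m=1e-5, P=1.35, e=0.01)      # hypothetical inserted planet
sim.add(m=1e-5, P=1.80, e=0.01)      # outer observed planet
sim.move_to_com()
print(f"p(unstable within 1e9 orbits) = {instability_probability(sim):.2f}")
```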
This means that SPOCK can check for stability up to \(\sim 10^{5}\) times faster than N-body (N-body becomes proportionally more competitive for configurations that survive less than \(10^{9}\) orbits); across our suite of simulations, SPOCK was about \(10^{4}\) times faster. This allows for a much broader and more rapid phase space exploration than was previously possible. We also verify a subset of our results using slower N-body integrations with the REBOUND package (Rein and Liu, 2012). ### Kepler Q1-6 Catalogue We used a subset of data from Kepler Quarters 1-6(Batalha et al., 2013), as these were the data used for the analysis in Fang and Margot (2013), using these stellar and planetary properties to generate our catalogue of observed multi-planet systems (which we refer to as the "Q1-6 catalogue"). We used this catalogue for a comparison of our methods with Fang and Margot (2013), so we adopted their same cuts, retaining only planets that satisfied all of: \[P \leq 200\text{days}\] \[1.5R_{\oplus} \leq R \leq 30R_{\oplus}\] \[\text{SNR} \geq 10\] Where \(P\) is the period of the planet's orbit, \(R_{\oplus}\) is the Earth's radius, \(R\) is the planet's radius, and the signal-to-noise ratio is denoted as SNR. Some of these cuts removed planets in multi-planet observed systems, leaving only a single planet. These observed systems were dropped from our catalogue. Our Q1-6 catalogue contains 140 planets around 60 stars. There are 43 observed systems with \(N=2\) planets, 14 with \(N=3\), and 3 with \(N=4\). Note that these are observed multiplicities after making our cuts. ### Kepler-Gaia DR2 Catalogue To generate our up-to-date catalogue of multi-planet systems (which we refer to as our "Kepler-GAIA catalogue"), we first started by generating a catalogue containing a subset of the planets from the Kepler Cumulative List on the NASA Exoplanet Archive(NASA Exoplanet Archive, 2020), filtering to include only planets with a disposition of "CANDIDATE"1 and a radius of "not NULL". This was then merged with the more accurate stellar masses from Berger et al. (2020) and planet radii from Berger et al. (2020), as informed by GAIA DR-2, only retaining stars and planets present in all three of the Kepler Cumulative List, Berger et al. (2020), and Berger et al. (2020). We note that Berger et al. (2020) explicitly excluded exoplanets whose host stars likely have a binary companion(s)2 Footnote 1: Note: the “Disposition Using Kepler Data” column does not have “CONFRMED” as a possible value. We then kept only planets with physically plausible radii (the fraction of "CANDIDATE" planets with \(R>20R_{\oplus}\) is negligible), and fractional errors in the planet radius smaller than unity: \[R < 20R_{\oplus}\] \[\sigma_{R,+}/R < 1\] \[\sigma_{R,-}/R < 1.\] After making these cuts, we kept only planets in multi-planet systems. Additionally, we removed the two planets around KIC 3245969, as their periods differ by approximately 7 seconds. Our Kepler-GAIA catalogue contains 1536 planets around 617 stars. There are 408 systems with \(N=2\), 141 with \(N=3\), 48 with \(N=4\), and 20 with \(N\geq 5\). Like the Q1-6 catalogue, these are the observed multiplicities after making our cuts. A system's observed multiplicity is not necessarily its intrinsic multiplicity. 
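Both sets of catalogue cuts above amount to simple row filters. A pandas sketch is given below; the column names (period, radius, radius_err_plus, radius_err_minus, snr, kic) are hypothetical stand-ins rather than the exact headers of the Kepler cumulative list or the Berger et al. tables.

```python
import pandas as pd

def apply_kepler_gaia_cuts(df: pd.DataFrame) -> pd.DataFrame:
    """Keep physically plausible planets with well-constrained radii,
    then retain only systems that still host more than one planet."""
    cut = df[
        (df["radius"] < 20.0)                            # R < 20 R_Earth
        & (df["radius_err_plus"] / df["radius"] < 1.0)   # fractional radius errors below unity
        & (df["radius_err_minus"] / df["radius"] < 1.0)
    ]
    counts = cut.groupby("kic")["radius"].transform("size")
    return cut[counts >= 2]                              # keep multi-planet systems only

def apply_q16_cuts(df: pd.DataFrame) -> pd.DataFrame:
    """Cuts used for the Q1-6 comparison catalogue."""
    cut = df[
        (df["period"] <= 200.0)
        & (df["radius"].between(1.5, 30.0))
        & (df["snr"] >= 10.0)
    ]
    counts = cut.groupby("kic")["radius"].transform("size")
    return cut[counts >= 2]
```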
Additional planets have been found in Kepler systems through methods other than the survey's detection pipeline, such as the non-transiting planet Kepler-20 g (Buchhave et al., 2016) (although its radial velocity signal may be caused instead by stellar activity (Nava et al., 2020)) or the low signal-to-noise planets Kepler-80 g and Kepler-90 i (Shallue and Vanderburg, 2018). These planets are not included in our study as they are not present in the cumulative Kepler catalogue, although we note that this would negatively impact the stability of inserted planets in our tests. It is likely that there are also undetected planets around the stars in our Kepler-GAIA catalogue, although the quantity is sensitive to planet inclinations (e.g. as first explored in the Kepler multi-planet systems by Lissauer et al. (2011)). Estimates of the average number of planets within \(\leq 400\) days around planet-hosting stars in the Kepler sample generally range from \(\sim 3-6\)(Traub, 2016; Zhu et al., 2018; Zink et al., 2019; Sandford et al., 2019), which means that we would expect an additional \(\sim 300-2150\) (\(\sim 20-140\)%) undetected planets in our Kepler-GAIA catalogue (with the caveat that we have excluded stars with only a single transiting planet). This could have an impact on the results of our stability tests if undetected planets are typically between adjacent observed pairs (e.g. if they are non-transiting like Kepler-20g, but don't produce strong radial velocity signals). Studies examining intrinsic multiplicity don't comment on the locations of undetected planets, so we conducted a simple test to estimate the frequency that an observed (i.e. transiting) adjacent pair has at least one undetected planet in between them. We found that \(\approx 24\)% of pairs have at least one in-between undetected planet (see Appendix B for more details). Even though this is a non-negligible proportion of pairs, we opted to proceed with our stability tests as if all observed pairs in our Kepler-GAIA catalogue are truly adjacent (rather than e.g. attempt to account for or incorporate the possible presence of in-between undetected planets). In-between undetected planets can only make a system _more_ dynamically packed, which means our analysis will provide robust lower limits on packing. Planets at longer periods have been observed in many Kepler systems (e.g. Schmitt et al. (2014); Wang et al. (2015)), meaning that some of the systems in our catalogue may also have undetected planets at larger periods affecting the stability of the inserted planets in our tests. Indeed, several studies (e.g. Zhu and Wu (2018); Bryan et al. (2019); Herman et al. (2019); Rosenthal et al. (2022)) have noted that cold giants typically have a compact inner set of planets. Given these studies, Zhu and Dong (2021) estimate that the probability of a cold Jupiter given a set of inner super Earths is \(P(CJ|SE)\approx 30\%\); however this relationship was recently questioned by Bonomo et al. (2023), who did not find evidence supporting the relationship. If outer giant planets are present, Millholland et al. (2022) recently argued that these undetected planets must typically lie significantly farther out, which means they likely would not be the predominant source of dynamical perturbations for the planets we insert between adjacent pairs in our Kepler-GAIA catalogue. In any case, their presence would only render each system _more_ unstable, similar to additional nearby planets. 
Similar to in-between undetected planets, this means our analysis will give lower limits on dynamical packing. ### Fang & Margot Style Instability Tests The goal of Fang and Margot (2013) was to test the packed planetary systems hypothesis (Barnes and Raymond, 2004) by inserting an additional planet in between adjacent pairs. The eccentricities and positions of inserted planets were chosen to maximize stability. If, under those conditions, a system could not host an additional planet, then they considered it to be dynamically packed. Our first tests were in a similar style as Fang and Margot (2013) (which we refer to as "FM13-style" tests) where we inserted a planet such that it has equal semimajor-axis separations from each neighbour, in units of the mutual Hill radius (see Appendix A). This choice tries to account for situations where the observed planets have very different masses, so one would expect the most stable position for the inserted planet to be farther away from the massive body. We note that while this placement should be beneficial for stability on average, it could also sometimes correspond to one of many strongly chaotic regions (e.g. near the edge of a strong mean motion resonance (Obertas et al., 2017)). Additionally, as a best-case scenario for stability (and without better information at hand), Fang and Margot (2013) assumed all orbits were circular. We now know from radial velocities (Mills et al., 2019), transit duration variations (Van Eylen and Albrecht, 2015; Xie et al., 2016; Van Eylen et al., 2019), and transit timing variations (Hadden and Lithwick, 2017), that Kepler systems exhibit small but finite orbital eccentricities, which strongly (and typically negatively) affect stability (Hadden and Lithwick, 2018; Tamayo et al., 2021; Yee et al., 2021). Regardless, we retain these two choices for our FM13-style tests for the sake of comparison and leave the exploration of different placements and non-zero eccentricities for our expanded parameter space tests (see 2.4 below). We ran these FM13-style tests with both the Q1-6 and the Kepler-GAIA catalogues. Although we expect poor statistics for the FM13-style tests on the Q1-6 catalogue, this allows us to compare catalogues using a similar methodology. For these tests, we created a REBOUND simulation with the star (using the mass in its respective catalogue). For the planets' orbits, following Fang and Margot (2013), we adopted an eccentricity of zero and an inclination drawn from a Rayleigh of Rayleigh distribution3 with parameter \(\sigma_{\sigma_{i}}=1\). We adopt a uniformly drawn longitude of ascending node (\(\Omega\)), argument of pericentre (\(\omega\)), and true anomaly (\(f\)). The observed planets are assigned their measured orbital periods, while the inserted planet is placed equidistant between them (in mutual Hill radii). Footnote 3: This is a Rayleigh distribution where the Rayleigh parameter itself is also a Rayleigh distribution. The masses of the observed planets in our two catalogues were calculated in two different ways based on their observed radii. For the Q1-6 catalogue, we used the same mass-radius relationship as Fang and Margot (2012) and Fang and Margot (2013) (note that our expression below is given in terms of \(R_{\oplus}\) and \(M_{\oplus}\) rather than \(R_{\rm Jupiter}\) and \(M_{\rm Jupiter}\)). 
\[\log_{10}\left(\frac{M}{M_{\oplus}}\right)=0.215689\left(\frac{R}{R_{\oplus}} \right)+0.2412,\ \text{for}\ R<11.6595R_{\oplus} \tag{1}\] \[\log_{10}\left(\frac{M}{M_{\oplus}}\right)=-0.0448137\left(\frac{R}{R_{\oplus}} \right)+3.279,\ \text{for}\ R\geq 11.6595R_{\oplus} \tag{2}\] For the Kepler-GAIA catalogue, we generated a sample of radii for each planet by using a two-sided normal distribution with mean value \(R\) and standard deviations \(\sigma_{R,+}\) and \(\sigma_{R,-}\). Then, we calculated mass distributions for each planet using Forecaster (Chen and Kipping, 2017). Finally, we used the median mass of these distributions as each planet's mass. Following Fang and Margot (2013), the mass of the inserted planet was determined by using eq. 1 for a planet with \(R=1.5R_{\oplus}\), resulting in \(M=3.671M_{\oplus}\). This mass was adopted for the inserted planet in our FM13-style tests for both the Q1-6 and Kepler-GAIA catalogues. For each system in the catalogue with an observed planet multiplicity \(N\), we generated \(N-1\) REBOUND simulations, each with only a single additional inserted planet. For example, an \(N=3\) system would have two simulations: one with a planet inserted between the inner and middle planets and a second with a planet inserted between the middle and outer planets. For the Q1-6 catalogue, we had 43 pairs belonging to a system with an observed multiplicity of \(N=2\), 28 with \(N=3\), and 9 with \(N=4\). For the Kepler-GAIA catalogue, we had 408 pairs belonging to a system with an observed multiplicity of \(N=2\), 282 with \(N=3\), 144 with \(N=4\), and 85 with \(N\geq 5\). Note that the numbers of pairs differ from the number of systems (Sec. 2.1-2.2) if \(N\geq 3\). We assessed the stability of planets inserted between observed adjacent pairs using N-body integrations and SPOCK. For the N-body integrations, we used REBOUND with WHFast (Rein and Tamayo, 2015) and a timestep of 5% of the inner planet's initial period. The REBOUND simulation was integrated for the maximum integration time (see below), until a close encounter occurred, or until a planet escaped. We set each planet's radius in the REBOUND simulation as its Hill sphere (i.e. \(R=a(M/3M_{\star})^{1/3}\)) and used a collision as our criterion for a close encounter. We set a maximum distance for the REBOUND simulation as 100 times the outer planet's initial distance and used this as our criterion for an escape. For the Q1-6 catalogue, we ran one set of integrations with a maximum integration time of \(10^{9}\) orbits of the innermost planet (using its initial period). For the Kepler-GAIA catalogue, we ran two sets of integrations: one with a maximum integration time of \(10^{9}\) orbits and a second with a maximum integration time of \(10^{8}\) years. We used SPOCK to obtain the probabilities of configurations being unstable within \(10^{9}\) orbits for a planet inserted between an adjacent pair for all sets of pairs in both catalogues. Conveniently, SPOCK calculates its probabilities by being given a REBOUND simulation. This allowed us to pass to SPOCK the exact same initial REBOUND simulations that were used for our N-body integrations. We note that because the dynamics are chaotic, N-body integrations run on two sets of initial conditions which are separated by machine precision will yield two different but equally valid instability times.
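A minimal REBOUND sketch of one such integration is shown below, mirroring the stopping criteria described above (Hill-sphere collision or escape beyond 100 times the outer planet's initial distance). The planet inputs are placeholders, and the exception names follow recent REBOUND releases.

```python
import rebound

def fm13_style_outcome(star_mass, planets, max_orbits=1e9):
    """Integrate one configuration with WHFast and report whether it survives.
    `planets` is a list of (mass, period, ecc, inc) tuples in the chosen units (placeholders)."""
    sim = rebound.Simulation()
    sim.units = ("yr", "AU", "Msun")
    sim.add(m=star_mass)
    for m, P, e, inc in planets:
        sim.add(m=m, P=P, e=e, inc=inc)
    sim.move_to_com()
    # Use each planet's Hill radius as its physical radius so a close encounter registers as a collision
    for p in sim.particles[1:]:
        p.r = p.a * (p.m / (3.0 * star_mass)) ** (1.0 / 3.0)
    sim.integrator = "whfast"
    sim.dt = 0.05 * sim.particles[1].P                # 5% of the inner planet's initial period
    sim.collision = "direct"
    sim.exit_max_distance = 100.0 * sim.particles[-1].a
    try:
        sim.integrate(max_orbits * sim.particles[1].P)
    except (rebound.Collision, rebound.Escape):
        return False                                  # unstable: close encounter or escape
    return True                                       # survived the full integration

# Example: Sun-like star with two observed planets and one inserted planet (placeholder values)
print(fm13_style_outcome(1.0, [(1e-5, 0.03, 0.0, 0.01), (1e-5, 0.042, 0.0, 0.01), (1e-5, 0.06, 0.0, 0.01)],
                         max_orbits=1e4))
```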
Nevertheless, Hussain and Tamayo (2020) find that the width of such instability time distributions is typically small compared to the mean instability time, so this is only a quantitative concern for the fraction of configurations with mean instability times within approximately one dex of the integration time (e.g., within \(\sim 10^{8.5}-10^{9.5}\) orbits for our \(10^{9}\) orbit runs). This therefore does not significantly impact our results. In Appendix C.1 we show that SPOCK compares well against N-body integrations for a representative subset of our suites of simulations. ### Expanded Parameter Space Instability Tests The significant computational savings of SPOCK vs. direct N-body integrations allows us to efficiently explore a broad parameter space of orbital configurations for inserted planets. Rather than testing a single configuration (equidistant in Hill radii, circular orbits) for an additional planet between each observed pair (Fang and Margot, 2013), we drew 10 000 orbital configurations randomly sampling planets' physical and orbital parameters. Specifically, we initialized a star with its Kepler-GAIA catalogue mass. The transiting (observed) planets were assigned their Kepler-GAIA catalogue periods. To initialize their masses, we began by drawing a radius from a two-sided normal distribution (with the catalogue values of \(R\), \(\sigma_{R,+}\), and \(\sigma_{R,-}\) as the mean and standard deviations). We then passed this radius to the Forecaster package (Chen and Kipping, 2017), which uses it to estimate a probability distribution function (pdf) for the mass, and returns a random sample from this pdf. While some of the planets in our Kepler-GAIA catalogue have measured masses from radial velocity (RV) or transit timing variation observations, our ensemble results are not sensitive (within error) to the use of measured versus Forecaster observed planet masses. We chose to use Forecaster to generate individual mass pdfs for all planets in the Kepler-GAIA catalogue in order to apply a consistent methodology for generating our REBOUND simulations. To check if our results were sensitive to the choice of mass for observed planets, we compared our two summary metrics (see Sec. 3.3.1 & Table 3 and 3.3.2 & Table 4) for a subset of planets that are present both in Table 10 of Bonomo et al. (2023) and in our Kepler-GAIA catalogue. Although the RV masses from Bonomo et al. (2023) are typically higher than the median Forecaster mass (which would only make systems _more_ unstable), our summary metrics typically differed well within our estimated errors. Using the median Forecaster mass, the 4 observed adjacent pairs (8 planets) in 4 systems with measured RV masses have mean (\(\mu\)) and standard deviation (\(\sigma\)) values of the RV-Forecaster masses ratio (i.e. \(M_{RV}/M_{f}\)) of \(\mu\sim 1.3\) and \(\sigma\sim 0.5\)4, with both summary metrics typically having a small difference (i.e. slightly less stable) within our errors (see Tables 3 and 4 plus the discussion in Appendix D for our error estimates). Expanding to the set of 28 planets (18 adjacent pairs) in 9 multi-planet systems with measured or constrained (i.e. \(M_{RV}<\) some value) masses, both summary metrics similarly differed within our errors despite much larger mean and standard deviation values of the masses ratio (\(\mu\sim 2.7\), \(\sigma\sim 2.6\))5. Footnote 4: For all 14 planets with measured RV masses (but not necessarily adjacent pairs), \(\mu\sim 1.5\) and \(\sigma\sim 0.5\). 
Footnote 5: Exclusive of Kepler-37 b, which has \(R=0.28^{+0.03}_{-0.03}R_{\oplus}\) in our Kepler-GAIA catalogue but \(M_{RV}/M_{f}<75.7\). Including it gives \(\mu\sim 5.1\), \(\sigma\sim 13.2\). For the hypothetical inserted planet, we sampled the orbital period uniformly between the two observed periods. We then randomly drew a physical radius from the observed Kepler-GAIA catalogue, and sampled a corresponding mass as described above with the Forecaster package (Chen and Kipping, 2017). The distribution of masses of all inserted planets is shown in Figure 1. The median mass is \(4.84M_{\oplus}\), and \(95\%\) have \(0.32M_{\oplus}\lesssim M\lesssim 46.41M_{\oplus}\).6 We note that our results do not strongly depend on this choice for how the mass of the inserted planet was sampled (see Fig. 3). Footnote 6: Our FM13-style tests instead use \(M=3.671M_{\oplus}\), which corresponds to the smallest radius considered by Fang and Margot (2013) of \(1.5R_{\oplus}\). This falls within the central \(68\%\) interval of our adopted mass distribution. Finally, all eccentricities were drawn from a Rayleigh distribution with parameter \(\sigma_{e}\); inclinations were drawn from a Rayleigh distribution with parameter \(\sigma_{I}\); and orbital angles \(\Omega\), \(\omega\), and \(f\) were drawn uniformly. Our eccentricity and inclination distributions were motivated both by observational work to infer their values using methods such as radial velocities, transit duration variations, and transit timing variations (e.g. Xie et al., 2016; Hadden and Lithwick, 2017; Zhu et al., 2018; Van Eylen et al., 2019; Mills et al., 2019; Millholland et al., 2021). Additionally, small inclinations have a small impact on the stability of transiting planets (Tamayo et al., 2021), so we use \(\sigma_{e}\sim\sigma_{i}\) for simplicity. To represent a "low end" and "high end" of the range from observations, we chose two sets of values for \((\sigma_{e},\sigma_{i})\): \((0.01,0.5^{\circ})\) and \((0.05,2.5^{\circ})\). We note that while we sample periods uniformly, we do not do the same in the eccentricity degrees of freedom, which would correspond to sampling uniformly in \(e^{2}\)(Tremaine, 2015) (i.e. this would be an ergodic sampling). Uniform sampling favours the largest eccentricities, while our choice of Rayleigh distributions favours particular mean eccentricity values that can be calibrated against observations. An ergodic sampling of eccentricities would thus only increase our quoted estimates of dynamical packing. As in our FM-13 style tests, for a system with \(N\) observed planets, we tested its stability \(N-1\) times with the additional planet inserted between the \(N-1\) sets of observed adjacent pairs. We thus similarly test the dynamical packing of 919 observed, adjacent planet pairs with the same multiplicity breakdown listed in the previous subsection. However, instead of testing a single configuration for the inserted planet, we generated 5000 configurations (not just sampling parameters of the inserted planet, but also of all of the system's observed planets) for each of the 919 sets of adjacent pairs using the two sets of values for \((\sigma_{e},\sigma_{i})\), giving a total of 9 190 000 tested configurations. ## 3 Results Our ultimate goal is to investigate the dynamical packing of the Kepler multi-planet systems in the Kepler-GAIA catalogue using our expanded definition of dynamical packing. Before conducting those tests, we performed a series of simplified tests to serve as stepping stones. 
This allowed us to build up to our expanded tests, but also to provide context and to see how our results compare to those of Fang and Margot (2013)'s previous study on dynamical packing in Kepler multi-planet systems. Since we are concerned with the instability (and therefore the dynamical compactness) of the ensemble of multi-planet systems rather than the instability of each individual original system, we focus on the overall results of our tests and report summary metrics of system instability from our tests. We first present the results of our FM13-style tests on the Kepler Q1-6 catalogue (the data used by Fang and Margot (2013)), followed by our FM13-style tests on the Kepler-GAIA catalogue (which contains an order of magnitude more planets than the Q1-6 catalogue). Finally, we present the results of our expanded parameter space tests with a continuous definition of dynamical packing. ### FM13-Style Tests for Kepler Q1-6 data First, we test for stability when inserting a planet equidistant (in mutual Hill radii) between adjacent pairs, which we refer to as "FM13-style tests". Our methodology is similar to Fang & Margot (2013), except that we inserted the additional planets into observed multi-planet systems rather than into generated Kepler-like systems. Table 1 shows the results for our FM13-style tests applied to adjacent pairs in the Kepler Q1-6 catalogue, broken down by the observed multiplicities of the pairs' host systems (shown in column 1). The second column shows the percentage of configurations that went unstable in N-body integrations of \(10^{9}\) orbits and the estimated statistical error. The numbers in the third column are calculated differently. SPOCK does not yield a binary result of stable or unstable, but rather a probability in the range [0, 1]. The expected total number of unstable systems is then simply the sum of those probabilities (see Tamayo et al. 2021b, for more discussion). The third column therefore lists the estimated fraction of unstable systems, which is equivalent to the mean SPOCK probability that systems go unstable within \(10^{9}\) orbits, and the estimated statistical error. The fourth column shows the corresponding values from Table 1 of Fang & Margot (2013) (which does not include error estimates), where they drew synthetic systems from their modelled planet population, and performed N-body integrations for \(10^{8}\)_years_. For many of the Kepler systems, \(10^{9}\) orbits is substantially less than \(10^{8}\) years. We show below that this discrepancy does not affect our conclusions. We see that the fraction of unstable systems from our Nbody integrations is consistent with the mean SPOCK probability for all multiplicities, within our statistical errors (specified in the Table caption). Given the small number of multi-planet systems in the Kepler Q1-6 catalogue and the resulting large error bars, our results are also broadly consistent with Fang & Margot (2013). While Fang & Margot (2013) do not comment on it (presumably because of the large statistical errors), these results all weakly suggest that the proportion of dynamically packed systems may increase with planet multiplicity. ### FM13-Style Tests for the full Kepler-GAIA catalogue The full Kepler-GAIA catalogue provides an opportunity to test this trend with multiplicity on an order-of-magnitude larger sample, which we present in Table 2. 
The columns are analogous to Table 1, with the addition of column 2 which shows the percentage of configurations that went unstable in Nbody integrations of \(10^{8}\) years. The fifth column again lists the result from Fang & Margot (2013). With a significantly larger observed planet sample, these tests confirm the trend of an increasing proportion of dynamically packed systems with increasing multiplicity. With improved planet statistics compared to Q1-6 and correspondingly smaller error bars, we can see some discrepancies between N-body and SPOCK (specifically for \(N=2\)). If a sample predominantly consists of stable configurations, SPOCK will overestimate the sample's mean probability (see Appendix C). Although our expanded parameter space tests (discussed below in Sec. 3.3) only have mean probabilities using SPOCK, this discrepancy is not an issue. Since dynamical packing increases with multiplicity, an overestimation of a highly-stable sample's mean probability (i.e. for lower multiplicities) would only strengthen this trend. Additionally, relaxing our assumption of circular orbits varies the dynamical packing fractions of tens of percent (e.g. Table 3) and our systematic errors in determining the most stable configuration are on the order of ten percent (Appendix D). Thus, the discrepancies between N-body and SPOCK for \(N=2\) are not a strong factor limiting our investigation, and the speed increase provided by SPOCK will allow us to perform much more expansive phase space investigations. \begin{table} \begin{tabular}{c c c c} \hline Multiplicity & \multicolumn{2}{c}{Kepler Q1–6} & FM13 \\ & \(10^{9}\) orbits & \multicolumn{2}{c}{SPOCK} \\ \hline N = 2 (403) & \(23.3\pm 6.4\%\) & \(30.4\pm 5.1\%\) & \(\geq\)31\% \\ N = 3 (28) & \(46.4\pm 9.4\%\) & \(47.8\pm 7.4\%\) & \(\geq\)35\% \\ N = 4 (9) & \(66.7\pm 15.7\%\) & \(62.1\pm 13.5\%\) & \(\geq\)45\% \\ \hline \end{tabular} \end{table} Table 1: Proportions of unstable configurations for the FM13-style tests applied to the Kepler Q1–6 catalogue. Column 1 shows the host system’s observed planet multiplicity, with the number of analysed adjacent pairs in parentheses. Column 2 shows the percentage of Nbody integrations that went unstable within \(10^{9}\) orbits (\(\pm\) standard error of the mean). Column 4 is from Table 1 of Fang & Margot (2013). \begin{table} \begin{tabular}{c c c c c} \hline Multiplicity & \multicolumn{2}{c}{Kepler-GAIA Catalogue} & FM13 \\ & \(10^{8}\) years & \(10^{9}\) orbits & \multicolumn{2}{c}{SPOCK} \\ \hline N = 2 (408) & \(19.6\pm 2.0\%\) & \(16.7\pm 1.8\%\) & \(26.1\pm 1.6\%\) & \(\geq\)31\% \\ N = 3 (282) & \(33.7\pm 2.8\%\) & \(29.8\pm 2.7\%\) & \(32.3\pm 2.0\%\) & \(\geq\)35\% \\ N = 4 (144) & \(46.5\pm 4.2\%\) & \(40.3\pm 4.1\%\) & \(38.8\pm 2.9\%\) & \(\geq\)45\% \\ N \(\geq\) 5 (85) & \(65.9\pm 5.1\%\) & \(60.0\pm 5.3\%\) & \(50.1\pm 4.1\%\) & \\ \hline \end{tabular} \end{table} Table 2: Proportions of unstable configurations for the FM13-style tests applied to the Kepler-GAIA. Column 1 shows the host system’s observed planet multiplicity, with the number of analysed adjacent pairs in parentheses. Columns 2 and 3 show the percentage of Nbody integrations that went unstable within \(10^{8}\) years and \(10^{9}\) orbits, respectively. Column 4 shows the mean probability of going unstable within \(10^{9}\) orbits. Column 5 is from Table 1 of Fang & Margot (2013). Errors are calculated as in Table 1. Figure 1: Histogram of the inserted planet masses for all 9 19 000 sets of adjacent pairs examined. 
We obtained masses by applying Forecaster (Chen & Kipping 2017) to a randomly drawn radius from the Kepler-GAIA catalogue. The median mass is \(4.84M_{0}\), \(68\%\) are within \(1.75M_{0}\leq M\leq 12.56M_{0}\), and \(95\%\) are within \(0.32M_{0}\leq M\leq 46.41M_{0}\). \(0.47\%\) of masses are below \(0.1M_{0}\) and \(1.45\%\) of masses are above \(100M_{0}\). Note that SPOCK yields probabilities of stability over \(10^{9}\) orbits, while Fang & Margot (2013) performed N-body integrations over \(10^{8}\) years. In the Kepler-GAIA catalogue, \(10^{9}\) orbits correspond to a timescale that is less than \(10^{8}\) years for nearly all systems (the median time corresponding to \(10^{9}\) orbits is \(1.52\times 10^{7}\) years). As a check, we also perform \(10^{8}\)-year N-body integrations and list the results in the second column. As expected, a higher proportion of systems are unstable when integrated for \(10^{8}\) years compared to \(10^{9}\) orbits. However, given that the distribution of instability times in compact planetary systems is roughly log-uniform (Volk & Gladman, 2015), there is not a substantial difference between the number of unstable systems in our two tests. ### Expanded Parameter Space Tests Instead of testing only a single point estimate for the most stable configuration of a planet inserted between each observed adjacent pair as in our FM13-style tests, our expanded parameter space tests use SPOCK to explore the vast parameter space available to the inserted planet. We generate 10000 configurations of observed systems with an inserted planet between each adjacent pair tested, drawing orbital periods for the inserted planet uniformly within the gap, and drawing the eccentricities and inclinations from Rayleigh distributions with parameters \(\sigma_{e}\) and \(\sigma_{i}\) (5000 configurations for two sets of \(\sigma_{e}\),\(\sigma_{i}\) values), respectively. Fig. 2 shows examples of the probability distributions for four different adjacent pairs in our expanded parameter space test using \(\sigma_{e}=0.01\) and \(\sigma_{i}=0.5^{\circ}\). Each panel shows the probability distribution projected onto a 25 by 25 grid of the inserted planet's mass vs. its orbital period. The colour shows the mean probability of being unstable within \(10^{9}\) orbits, marginalizing over all the parameters not visible in this projection (i.e., the eccentricities, inclinations, and orbital angles for all planets, and the masses sampled within observational uncertainties for the two detected planets on either side, see Sec. 2.4). Note that the periods are distributed uniformly, whereas the masses are distributed according to Fig. 1 (i.e. the number density of configurations within each grid cell is uniform horizontally but _not_ uniform vertically). The vertical limits correspond to 95% of the mass distribution and the horizontal dashed lines encompass 68% of the mass distribution7. Footnote 7: The vertical extents of each panel in Figs. 2 and 3 are limited to 95% of masses, but no cuts were made to compute values in Tables 3 and 4. Each panel in Fig. 2 is labelled with the KIC of the adjacent pair's host star and the location of the pair within that system (with 0 meaning that the additional planet is inserted in between the innermost observed planet and its closest neighbour, 1 corresponding to between the second and third, etc.). For some adjacent pairs, the first and last columns on some of the grids show higher probabilities of stability (visible in both Figures 2 and 3). 
This corresponds to co-orbital configurations of the inserted planet and either the inner or outer observed planet. Although these particular SPOCK probabilities are not particularly reliable (see Appendix C2), the configurations represent only a small proportion of the total parameter space. We computed quantities presented in this paper including and excluding these columns and they all differed within error (with the exception of Table D1). For simplicity, we therefore present our results without making any period cuts for the inserted planet. The systems in Figure 2 highlight four qualitatively different scenarios. The white X in each panel marks the location for the additional planet using the procedure of Fang & Margot (2013) (Sec. 2.3), with corresponding probability \(p_{FM13}\) of being unstable. As discussed in Sec. 1, one might expect an ordered, dissipative planet formation process to yield orbital configurations for additional planets that promote stability, like those adopted by Fang & Margot (2013) (equal Hill-radius spacing, circular orbits). The relevant summary metric for stability under this hypothesis is then (Sec. 3.3.2) the lowest probability of instability found in our grid, i.e., the Most Stable Configuration (MSC), with corresponding probability \(p_{MSC}\). In a strongly chaotic, "ergodic" picture of planet formation where all grid points have uniform probability, the relevant measure of dynamical packing is the mean probability of instability across the grid \(p_{mean}\) (Sec. 3.3.1). While both the FM13-style and the MSC optimize for stability, two competing effects cause deviations between \(p_{FM13}\) and \(p_{MSC}\): relaxing the assumption of circular orbits of Fang & Margot (2013) should drive up the fraction of dynamically packed systems, whereas exploring a broader parameter space for inserted planets should allow us to find more stable configurations and drive down the fraction of dynamically packed systems. The top left and bottom right panels in Fig. 2 show cases where the Fang & Margot (2013) placement roughly matches the MSC. The top left panel is a case where essentially the whole region is unstable, while the bottom right panel shows a case where the system remains stable even when a planet is inserted at almost any period and a broad range of masses. Although observed adjacent pairs typically do not have undetected planets located in between them (see Appendix B for our estimated frequency), pairs with probability grid plots visually similar to the lower right panel (i.e. they have smaller values of \(p_{mean}\) and look very yellow) are the most suitable candidates to find this type of unseen planet. The remaining two panels of Fig. 2 show cases where the simple method of Fang & Margot (2013) gives a poor estimate of the stability of the system. The bottom left shows an example with a much more massive inner observed planet, so that the equal-Hill-radii FM13-style placement puts the inserted planet much closer to the low-mass outer neighbor. We see that in this case, that placement (probability of being unstable \(p_{FM13}=1\)) puts the inserted planet outside the broad stable region (\(p_{MSC}=0.198\)). The top right panel shows a system that has high probability of being unstable in the expanded parameter space test (\(p_{MSC}=0.845\)) but low probability in the FM13-style test (\(p_{FM13}=0.157\)). 
This is because the FM13-style placement is located near the bounds of a region with _slightly_ higher stability on average and happens to be one such configuration that is more stable in that region (i.e. it is an outlier).

#### 3.3.1 Dynamical Packing Under the Ergodic Hypothesis of Planet Formation

The ergodic hypothesis posits that planet formation is sufficiently chaotic to yield orbital configurations that roughly fill phase space uniformly. Accordingly, we assess dynamical packing in Table 3 using the mean probability \(p_{mean}\) for each adjacent pair's probability distribution of being unstable within \(10^{9}\) orbits (i.e. the estimated proportion of our sampled space which is unstable for an inserted planet). The two central columns explore the effects of finite eccentricities and inclinations, with representative low (\(\sigma_{e}=0.01\), \(\sigma_{i}=0.5^{\circ}\)) and high (\(\sigma_{e}=0.05\) and \(\sigma_{i}=2.5^{\circ}\)) Rayleigh parameters for their distributions. The fourth column shows the values from Table 1 of Fang & Margot (2013). We find that under this definition, more compatible with a giant impact phase, the majority (\(\approx 60-95\%\)) of our sampled space is unstable for an inserted planet. Similar to the FM13-style tests, we also see that instability increases with multiplicity: an additional \(\sim 20\%\) of sampled space is unstable for \(N\geq 5\) compared to \(N=2\). Increasing the eccentricity and inclination Rayleigh parameters from \(\sigma_{e}=0.01\) and \(\sigma_{i}=0.5^{\circ}\) to \(\sigma_{e}=0.05\) and \(\sigma_{i}=2.5^{\circ}\) also results in an additional \(\sim 20\%\) of unstable space between observed planet pairs. A particular observational selection effect may be consistent with dynamical packing increasing with multiplicity. Although it is uncommon, we estimate that a non-negligible proportion (\(\approx 24\%\) for \(N\geq 2\)) of observed adjacent pairs have undetected planets located in between them (see Appendix B for more details). In particular, our estimated frequency of these in-between undetected planets decreases as multiplicity increases, meaning that low multiplicity systems may be more dynamically packed than suggested by Table 3. We caution that further testing is necessary to validate our estimated frequencies and their applicability to the observed Kepler multi-planet systems, however.

Figure 2: Example probability distributions of being unstable within \(10^{9}\) orbits for four different tests of adjacent pairs. An additional planet was inserted between the adjacent pair and the system was sampled 5000 times (using Rayleigh distributions with \(\sigma_{e}=0.01\), \(\sigma_{i}=0.5^{\circ}\)). Each panel is a 2D histogram on a 25 by 25 grid showing the mean probability of all sampled configurations with an inserted planet whose mass and period fall within the bounds of each grid cell (white spaces have no inserted planets in that region). The white X shows the inserted planet's mass and period for the adjacent pair's FM13-style test. Each panel's title shows its KIC, the placement of the inserted planet (0 is between the innermost planet and the second planet), the mean probability (\(\mathrm{p}_{mean}\)), the most stable configuration probability (\(\mathrm{p}_{MSC}\)), and the probability of the pair's FM13-style test (\(\mathrm{p}_{FM13}\)). Inserted planet periods are distributed uniformly and the masses according to Figure 1 (the vertical limits correspond to \(95\%\) of masses and \(68\%\) of masses fall within the region marked by the horizontal dashed lines).

We also visually examine the proportion of unstable space for tests using Rayleigh parameters \(\sigma_{e}=0.01\) and \(\sigma_{i}=0.5^{\circ}\) in Fig. 3. These are similar to the specific examples in Fig. 2, except we overlay the results for all adjacent pairs in our sample. Since each pair has its inner and outer planet at different periods, the period of the inserted planet has been scaled linearly so that the inner planet is at 0 and the outer planet is at 1 (i.e., \(P_{scaled}=(P_{inserted}-P_{1})/(P_{2}-P_{1})\)). The trend of increasing instability with multiplicity is evident. We also see, unsurprisingly, that instability increases with the mass of the inserted planet (the vertical scale is logarithmic, however, so the trend with mass is not very strong) and its proximity to either the inner or outer observed planet (with the exception of the "co-orbital" columns, as discussed in Sec. 3.3 and Appendix C2). Although Fig. 3 shows (qualitatively) that there is a significant amount of stable space for an additional planet in \(N=2\) systems (and that the amount of space decreases with multiplicity), the majority of our sampled space between adjacent pairs is unstable according to Table 3. In other words, under the ergodic hypothesis and using our chosen distributions, observed adjacent pairs are typically strongly dynamically packed.

#### 3.3.2 Dynamical Packing Under the Ordered Hypothesis of Planet Formation

An opposite end-member scenario compared to the ergodic hypothesis, the ordered hypothesis assumes planets form in their most stable configuration (MSC). Therefore, we assess dynamical packing using the probability \(p_{MSC}\) of the most stable configuration. We discuss how we select this most stable configuration from each adjacent pair's probability distribution in Appendix D to be more robust against outlier probabilities assigned by SPOCK. Table 4 reports the estimated fractions (with statistical uncertainties) of adjacent pairs that are unstable even when the additional planet is inserted in its most-stable configuration. Analogous to Table 3, the central columns explore the dependence on the assumed distributions of eccentricities and inclinations. Similar to all of our previous tests, we see that the fraction of unstable pairs increases with multiplicity: under this definition, roughly twice as many systems with \(N\geq 5\) observed planets are maximally packed as those with \(N=2\). We note that while our selection of the most stable configuration introduces uncertainty in our quoted packing fractions (which we estimate are \(\sim 10\%\), see Appendix D), the trend with multiplicity is robust regardless of how we select the MSC. Additionally, increasing the eccentricity and inclination Rayleigh parameters from \(\sigma_{e}=0.01\) and \(\sigma_{i}=0.5^{\circ}\) to \(\sigma_{e}=0.05\) and \(\sigma_{i}=2.5^{\circ}\) approximately doubles the proportion of packed systems for all multiplicities. Even under this most optimistic scenario for stability, there are still a substantial number of cases where an additional planet is not stable when inserted between an observed adjacent pair.
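For readers who want to experiment with grids like those in Figs. 2 and 3, the sketch below outlines how the instability probabilities for a single adjacent pair, together with \(p_{mean}\) and a crude grid-based stand-in for \(p_{MSC}\), could be assembled with the publicly available REBOUND and SPOCK packages. It is illustrative only and does not reproduce the full pipeline behind Tables 3 and 4: the star and planet properties are invented, the log-normal mass draw stands in for the Forecaster-based distribution of Fig. 1, the observed planets' properties are not resampled within their observational uncertainties, and the most stable configuration is taken as the minimum cell-averaged probability rather than selected by the outlier-robust procedure of Appendix D.

```python
import numpy as np
import rebound
from spock import FeatureClassifier

rng = np.random.default_rng(42)
classifier = FeatureClassifier()

# Illustrative adjacent pair (not a real KIC system): masses in stellar masses,
# periods in days; only the mass and period ratios matter for SPOCK's features.
m_star = 1.0
m1, P1 = 3.0e-5, 10.0   # inner observed planet
m2, P2 = 2.0e-5, 25.0   # outer observed planet
sigma_e, sigma_i = 0.01, np.radians(0.5)

def instability_probability(m_ins, P_ins):
    """1 - P(stable over 1e9 orbits) for the pair plus one inserted planet."""
    sim = rebound.Simulation()
    sim.add(m=m_star)
    for m, P in [(m1, P1), (m_ins, P_ins), (m2, P2)]:
        sim.add(m=m, P=P,
                e=rng.rayleigh(sigma_e), inc=rng.rayleigh(sigma_i),
                Omega=rng.uniform(0, 2 * np.pi),
                omega=rng.uniform(0, 2 * np.pi),
                M=rng.uniform(0, 2 * np.pi))
    sim.move_to_com()
    return 1.0 - classifier.predict_stable(sim)

# Inserted-planet draws: periods uniform in the gap; masses from a stand-in
# log-normal (the actual study instead draws radii and converts with Forecaster).
n = 500                                                 # the study uses 5000 per (sigma_e, sigma_i)
P_ins = rng.uniform(P1, P2, n)
m_ins = 3.0e-6 * rng.lognormal(np.log(4.84), 1.0, n)    # Earth masses -> stellar masses

p_unstable = np.array([instability_probability(m, P) for m, P in zip(m_ins, P_ins)])

p_mean = p_unstable.mean()      # summary metric under the ergodic hypothesis

# Crude stand-in for p_MSC: bin onto a 25x25 (period, log-mass) grid and take the
# smallest cell-averaged instability probability.
H, _, _ = np.histogram2d(P_ins, np.log(m_ins), bins=25, weights=p_unstable)
C, _, _ = np.histogram2d(P_ins, np.log(m_ins), bins=25)
p_msc = np.nanmin(np.where(C > 0, H / np.maximum(C, 1), np.nan))

print(f"p_mean = {p_mean:.3f}, grid-based p_MSC proxy = {p_msc:.3f}")
```

Depending on the installed SPOCK version, `predict_stable` may also accept a list of simulations, which avoids the Python-level loop; the grid-minimum proxy used here for \(p_{MSC}\) is deliberately simpler than the selection described in Appendix D.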
These unstable fractions are lower than their counterparts under the ergodic hypothesis in Table 3, but provide useful lower limits for the frequency of maximally packed planet pairs. If one wanted to use the most stable configuration to determine these lower limits for a population of multi-planet systems, it is useful to know if performing stability tests across a vast parameter space is necessary or if this can be achieved using point estimates for the MSC (e.g. if CPU hours are limited). For this, we can compare our FM13-style results (column 4 of Table 2) to our expanded parameter space results using our lowest eccentricity and inclination Rayleigh parameters (column 2 of Table 4). We see that the proportions of unstable configurations are similar across multiplicities. This similarity is predominantly because the scenarios shown in the upper left and lower right panels of Fig. 2 are typical, whereas the scenarios shown in the upper right and lower left panels are not. In other words, placing the inserted planet equidistant (in mutual Hill radii) between an adjacent pair is a good approximation of the most stable configuration when considering the lower limits of dynamical packing for a sample of multi-planet systems. There are likely other approximations which may capture different dynamical arguments for high stability (e.g. the underlying physics of the brightest yellow column shown in the lower left panel of Fig. 2) or which use a different approach to determining the most stable configuration than ours (see Appendix D), but this is well beyond the scope of our study.

## 4 Conclusions

In this work, we have re-examined the dynamical packing of the Kepler multi-planet systems through a new lens. Previous work has focused on dynamical packing as a binary property: the yes/no question of whether, given an adjacent pair of observed planets, at least one possible configuration exists for an additional planet between them that could survive over long timescales. This theoretical, binary question of whether such stable arrangements exist is only meaningful to the extent that the planet formation process is capable of generating such configurations, however. We therefore argue for a more meaningful, explicit definition of dynamical packing that is instead continuous: for a given pair of observed, adjacent planets, what is the probability that the planet formation process could have created an additional planet between them that would have survived to the present day? The obvious complication is that this new definition requires a detailed planet formation model that provides a probability distribution for new planets over all possible configurations. Since planet formation modelling is still an open area of research, choosing a detailed planet formation model is not a straightforward task.

\begin{table} \begin{tabular}{l c c c} \hline Multiplicity & \(\sigma_{e}=0.01\), \(\sigma_{i}=0.5^{\circ}\) & \(\sigma_{e}=0.05\), \(\sigma_{i}=2.5^{\circ}\) & FM13 \\ \hline N = 2 (408) & \(22.7\pm 1.6\%\) & \(47.7\pm 2.1\%\) & \(\geq\)31\% \\ N = 3 (282) & \(31.4\pm 1.9\%\) & \(60.4\pm 2.3\%\) & \(\geq\)35\% \\ N = 4 (144) & \(32.3\pm 2.3\%\) & \(70.5\pm 2.8\%\) & \(\geq\)45\% \\ N\(\geq 5\) (85) & \(43.0\pm 3.6\%\) & \(83.6\pm 2.9\%\) & \\ \hline \end{tabular} \end{table} Table 4: Proportions of unstable configurations for the expanded parameter space tests applied to the Kepler-GAIA catalogue. Column 1 shows the host system's observed planet multiplicity, with the number of analysed adjacent pairs in parentheses. Columns 2 and 3 show the mean probability (using each pair's **most stable configuration**) for the tests with \((\sigma_{e},\sigma_{i})=(0.01,0.5^{\circ})\) and \((0.05,2.5^{\circ})\), respectively. Column 4 is from Table 1 of Fang & Margot (2013). Errors are calculated as in Table 1.

\begin{table} \begin{tabular}{l c c c} \hline Multiplicity & \(\sigma_{e}=0.01\), \(\sigma_{i}=0.5^{\circ}\) & \(\sigma_{e}=0.05\), \(\sigma_{i}=2.5^{\circ}\) & FM13 \\ \hline N = 2 (408) & \(58.9\pm 1.3\%\) & \(77.7\pm 1.2\%\) & \(\geq\)31\% \\ N = 3 (282) & \(67.8\pm 1.4\%\) & \(85.5\pm 1.2\%\) & \(\geq\)35\% \\ N = 4 (144) & \(74.8\pm 1.6\%\) & \(92.9\pm 0.9\%\) & \(\geq\)45\% \\ N\(\geq 5\) (85) & \(81.4\pm 1.9\%\) & \(95.6\pm 1.0\%\) & \\ \hline \end{tabular} \end{table} Table 3: Proportions of unstable configurations for the expanded parameter space tests applied to the Kepler-GAIA catalogue. Column 1 shows the host system's observed planet multiplicity, with the number of analysed adjacent pairs in parentheses. Columns 2 and 3 show the mean probability (using each pair's **mean** probability, i.e. the proportion of parameter space which is unstable) for the tests with \((\sigma_{e},\sigma_{i})=(0.01,0.5^{\circ})\) and \((0.05,2.5^{\circ})\), respectively. Column 4 is from Table 1 of Fang & Margot (2013). Errors are calculated as in Table 1.

Instead, we simplify matters by considering two frameworks of planet formation. One possibility is that the formation process is sufficiently dissipative and gradual that planets would naturally fall into the lowest-energy, most stable configurations available (which we refer to as the "ordered hypothesis"). In this limit, our continuous definition reduces to the simple binary classification considered in previous studies. Perhaps for that reason, as well as the enormous computational benefit of only testing a single point estimate of the most stable configuration, Fang & Margot (2013) used this as their measure for the dynamical packing of the Kepler multi-planet systems. It is not clear that this framework of planet formation occurs, or that the system remains in this state, however. For example, one scenario where planets form in their most stable configuration is settling into resonant chains. Such lowest-energy resonant chains are rarely observed for Kepler multi-planet systems, though (Fabrycky et al., 2014). Indeed, several studies have argued that if planets settle into resonant chains during the dissipative gas disk phase, then chaotic dynamics must subsequently destabilize these resonant chains following disk dissipation (e.g. Izidoro et al., 2017; Matsumoto & Ogihara, 2020; Pichierri & Morbidelli, 2020; Izidoro et al., 2021; Esteves et al., 2022; Goldberg et al., 2022). If so, chaotic dynamics could also remove planets from a different type of most stable configuration. As long as we broaden 'planet formation' to include this violent final stage of planetary growth through collisions (the so-called giant impacts phase), we would expect planet formation to rarely populate small pockets of stability in a vast phase space.

Figure 3: Similar instability "grid plots" as Figure 2, except for all sets of adjacent pairs (i.e. "stacked grid plots") with Rayleigh distributions \(\sigma_{e}=0.01\), \(\sigma_{i}=0.5^{\circ}\). Periods have been linearly scaled with the inner planet at 0 and the outer planet at 1. Each subplot shows probabilities according to observed multiplicities of the adjacent pairs' systems. The inserted planets' scaled periods are distributed uniformly and the masses according to Figure 1 (the vertical limits correspond to 95% of masses and 68% of masses fall within the region marked by the horizontal dashed lines).

Unlike an ordered picture of planet formation in which the existence of small stability pockets matters, the relevant metric in this chaotic picture of planet formation (including giant impacts) is the fraction of the vast phase space between the pair that would allow an additional planet to form and survive to the present day. Tremaine (2015) has argued that for a sufficiently chaotic giant impact phase, one would expect planet formation to approximately fill the phase space of orbital configurations uniformly. Subsequently, the distribution of orbital architectures for observed systems corresponds to the subset of phase space that is dynamically stable on timescales comparable to the systems' ages. Assuming a simplified stability criterion, Tremaine (2015) shows that this "ergodic hypothesis" predicts distributions of orbital eccentricities and interplanetary spacings that are broadly consistent with observations. Both the ordered and ergodic hypotheses are idealized pictures of planet formation and we expect that observed planetary systems have formation histories falling somewhere in between these two limits. Since these two hypotheses represent opposing scenarios, they provide intuition for the possible range of maximally packed planetary systems. Using the ergodic definition and our chosen distributions, observed pairs of planets in compact multi-planet systems are typically strongly packed: \(\sim 60-95\%\) (Table 3). Under the more restrictive ordered definition of maximally packed, one might expect that a smaller proportion of systems would be maximally packed. This is in fact seen in a comparison between Table 3 and Table 4. Nevertheless, our results show that a significant portion of adjacent pairs, \(\sim 20-80\%\), are maximally packed using the more stringent definition. Regardless of our definition of dynamical packing, we see a clear trend of increasing packing with increasing multiplicity, assuming that eccentricities are similar across multiplicities. This is consistent with the observation that the interplanetary spacing (as measured by e.g. period ratio or Hill-radius separation) between observed adjacent pairs decreases as the multiplicity increases (e.g. Weiss et al., 2018; Zhu and Dong, 2021). As the separation between a pair decreases, more mean motion resonances can overlap to drive chaos and instabilities in multi-planet systems (Deck et al., 2013), which may be responsible for interplanetary spacing constraints in very compact multi-planet systems (Obertas et al., 2017). One possible source of this trend of increased packing with multiplicity is an observational selection effect: systems with low observed multiplicity may be more likely to have undetected planets located between observed adjacent pairs. When using \(\text{SysSimExClusters}\) (He et al., 2019) to generate a set of "Kepler-like" multi-planet systems and to simulate observing them with the Kepler telescope and its detection pipeline, we saw that lower multiplicity systems had a higher proportion of in-between undetected planets (see Appendix B for more details). In other words, low multiplicity systems could be more dynamically packed than they appear. As noted in Sec.
3.3, observed adjacent pairs with low \(p_{mean}\) (i.e. their grid plots, like those in Fig. 2, look very yellow) are suitable candidates to search for these in-between undetected planets. Grid plots and summary statistics for all observed adjacent pairs in our Kepler-GAIA catalogue are openly available online. While such a trend with multiplicity could simply be an imprint of formation in the protoplanetary disk, it could also naturally arise through later dynamical instabilities causing giant impacts. A giant impact phase, likely with a tail of collisions spanning the Gyr lifetimes of typical Kepler stars, would generate larger interplanetary gaps as planets merge. This process results in progressively longer-lived, lower-multiplicity systems. Though we have not specifically investigated the question here, hypotheses involving this sculpting of multi-planet systems (Pu and Wu, 2015; Volk and Gladman, 2015) might naturally predict more space for additional planets (i.e., lower dynamical packing) for low-multiplicity systems that have had several planets removed. Alternatively, our assumption of a constant eccentricity distribution with multiplicity may not hold. There is some observational evidence suggesting that systems with \(N=2\) have higher eccentricities than those with \(N\geq 3\)(Xie et al., 2016). Detailed modelling of Kepler multi-planet system architectures similarly shows higher eccentricity in low multiplicity systems (He et al., 2020). Furthermore, observational modelling suggests that orbital inclinations decrease significantly with increasing multiplicity (Zhu et al., 2018; Millholland et al., 2021). Given that eccentricities are comparable to inclinations in most astrophysical disks (and indeed, this was seen in the modelling by He et al. (2020)), a decreasing trend of eccentricity with inclination therefore seems plausible. While we do not consider systems with a single transiting planet, eccentricities extracted from transit durations in single-transiting systems are roughly four times higher than those in multi-planet systems (Mills et al., 2019; Van Eylen et al., 2019), consistent with the above picture. Comparing our \(N=2\) results using a higher eccentricity Rayleigh parameter to our \(N\geq 5\) results using a lower eccentricity parameter in Tables 3 and 4, we see similar values. It may in fact be that systems with different multiplicities have comparable packing which could be caused by the same processes generating the eccentricity differences. Dawson et al. (2016) have argued that a giant impact phase might establish an equilibrium between scatterings that excite eccentricities and collisions that dampen them. We expect that the relationships between the physical processes of planet formation and evolution (including giant impacts), present-day eccentricities in mature systems, the degeneracy between observed vs. intrinsic multiplicity and mutual inclinations (including in-between undetected planets), where observed systems fall between the ordered and ergodic hypotheses, and dynamical packing are complex and very intertwined. This paper is a first attempt at examining dynamical packing in an expanded context and searching for clues to help disentangle these relationships. In the past, this was not feasible due to practical constraints: limited computation time for running Nbody integrations of the various configurations needed to explore such a large parameter space. 
With SPOCK, that constraint is eased substantially, which opens up the option, as we explored, of using a broader definition of dynamical packing that is physically tied to specific models of planet formation. We certainly do not expect that the methods and definitions we utilized are perfect, but they have illuminated a strong need for the exoplanet community to come together and consider dynamical packing explicitly linked to planet formation.

## Acknowledgements

We are grateful for Gwendolyn Eadie's and Joshua Speagle's expertise, insight, and patience in discussions of our error estimates. We thank Hanno Rein and Kristen Menou for their thoughtful discussion on several occasions as we worked on this project. Additionally, we thank the anonymous reviewer for their extraordinarily valuable comments, questions, and suggestions. AO was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) during this project (application ID 504349-2017). NM and AO were partially supported by NSERC during this project (application ID RGPIN-2017-06459). This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has made extensive use of the pandas (Wes McKinney, 2010), NumPy (Harris et al., 2020), Astropy8 (Astropy Collaboration et al., 2013, 2018, 2022) and Matplotlib (Hunter, 2007) packages for python9. Footnote 8: [http://www.astropy.org](http://www.astropy.org) Footnote 9: [https://www.python.org](https://www.python.org) The University of Toronto operates on the traditional land of the Huron-Wendat, the Seneca, and the Mississaugas of the Credit. Harvey Mudd College operates on Torojotangna, one of many villages in the traditional lands of the Tongva-Gabrieleño/Gabrielino/Kizh peoples. We encourage readers to learn whose lands they occupy and how to support these Indigenous Peoples.

## Data Availability

We obtained the Kepler quarters 1-6 data (Batalha et al., 2013) used in this paper from the files table3.dat, table4.dat, and table5.dat located at [http://cdsarc.unistra.fr/viz-bin/cat/J/ApJS/204/24](http://cdsarc.unistra.fr/viz-bin/cat/J/ApJS/204/24). We obtained the cumulative Kepler data from the NASA Exoplanet Archive (catalog 2020), which we accessed on 2020-09-21 at 15:02 EDT. After applying the two filters described in Sec. 2.2, 4612 rows were returned. Details about the cumulative catalogue's generation are available at [https://exoplanetarchive.ipac.caltech.edu/docs/PurposeOfKOITable.html](https://exoplanetarchive.ipac.caltech.edu/docs/PurposeOfKOITable.html). We obtained the revised Kepler/Gaia stellar parameters (Berger et al., 2020a) from the file table2.dat located at [https://cdsarc.cds.unistra.fr/viz-bin/cat/J/AJ/159/280](https://cdsarc.cds.unistra.fr/viz-bin/cat/J/AJ/159/280) and the planetary parameters (Berger et al., 2020b) from the file table1.dat located at [https://cdsarc.cds.unistra.fr/viz-bin/cat/J/AJ/160/108](https://cdsarc.cds.unistra.fr/viz-bin/cat/J/AJ/160/108). Our code used to combine the various catalogues, generate and integrate our REBOUND simulations, and obtain SPOCK probabilities is available at github.com/aobertas/dynamical-packing-kepler-multis. Summary metrics and grid plots (similar to Fig. 2) for all observed adjacent pairs in our Kepler-GAIA catalogue are also available online via the GitHub repository.
2303.06114
Optimal Design of Validation Experiments for the Prediction of Quantities of Interest
Numerical predictions of quantities of interest measured within physical systems rely on the use of mathematical models that should be validated, or at best, not invalidated. Model validation usually involves the comparison of experimental data (outputs from the system of interest) and model predictions, both obtained at a specific validation scenario. The design of this validation experiment should be directly relevant to the objective of the model, that of predicting a quantity of interest at a prediction scenario. In this paper, we address two specific issues arising when designing validation experiments. The first issue consists in determining an appropriate validation scenario in cases where the prediction scenario cannot be carried out in a controlled environment. The second issue concerns the selection of observations when the quantity of interest cannot be readily observed. The proposed methodology involves the computation of influence matrices that characterize the response surface of given model functionals. Minimization of the distance between influence matrices allows one to select a validation experiment most representative of the prediction scenario. We illustrate our approach on two numerical examples. The first example considers the validation of a simple model based on an ordinary differential equation governing an object in free fall to highlight the importance of the choice of the validation experiment. The second numerical experiment focuses on the transport of a pollutant and demonstrates the impact that the choice of the quantity of interest has on the validation experiment to be performed.
Antonin Paquette-Rufiange, Serge Prudhomme, Marc Laforest
2023-03-10T18:01:56Z
http://arxiv.org/abs/2303.06114v1
# Optimal Design of Validation Experiments for the Prediction of Quantities of Interest

###### Abstract

Numerical predictions of quantities of interest measured within physical systems rely on the use of mathematical models that should be validated, or at best, not invalidated. Model validation usually involves the comparison of experimental data (outputs from the system of interest) and model predictions, both obtained at a specific validation scenario. The design of this validation experiment should be directly relevant to the objective of the model, that of predicting a quantity of interest at a prediction scenario. In this paper, we address two specific issues arising when designing validation experiments. The first issue consists in determining an appropriate validation scenario in cases where the prediction scenario cannot be carried out in a controlled environment. The second issue concerns the selection of observations when the quantity of interest cannot be readily observed. The proposed methodology involves the computation of influence matrices that characterize the response surface of given model functionals. Minimization of the distance between influence matrices allows one to select a validation experiment most representative of the prediction scenario. We illustrate our approach on two numerical examples. The first example considers the validation of a simple model based on an ordinary differential equation governing an object in free fall to highlight the importance of the choice of the validation experiment. The second numerical experiment focuses on the transport of a pollutant and demonstrates the impact that the choice of the quantity of interest has on the validation experiment to be performed.

keywords: Validation, Quantity of Interest, Optimal Design of Experiments, Sensitivity Analysis, Uncertainty Quantification

## 1 Introduction

The development of models to describe a system of interest (be it physical, biological, or societal) is central in understanding and predicting its behavior. Mathematical models may stem from physical laws and/or empirical considerations for either some parts of the system or the whole system itself. In order to assess the potential of a model to explain and predict a given quantity of interest (QoI), it must be validated, or at best, not invalidated. In other words, the error between the model and the reality it is supposed to describe must be precisely quantified with respect to the QoI. The validation process implies comparing outputs of the model with experimental data acquired by testing the system of interest. Over the past decades, several procedures for model validation have been proposed in the literature. For instance, Oberkampf and Trucano [29] present a detailed overview of verification and validation processes while Roy and Oberkampf [35] provide a thorough discussion on these two processes with a focus on the treatment of both aleatory and epistemic uncertainty. A recent review article by Riedmaier et al. [34] discusses in detail the scope and shortfalls of existing approaches for validation and prediction. The paper of Lee et al. [23] provides an extensive review of the literature addressing the problems of uncertainty propagation, parameter calibration, and model validation.
Model validation has become increasingly relevant and primordial in the field of computational sciences and engineering due to the complexity of scientific models currently employed and the demand for predictive and quantitative results. However, a critical aspect remains the selection of suitable validation experiments, since they provide the experimental data against which the model prediction is compared and thus directly influence the decision whether the model is deemed valid or not. This issue is even more important when the number of validation experiments or amount of data are limited. Yet, the precise design of such validation experiments is still a question largely overlooked in the literature, with the exception of a few contributions. Oliver et al. [30] stress the importance of the validation process for predictive modeling, even more so in the case of models made of several coupled sub-models. They essentially identify two fundamental issues when tackling model validation: 1) that of validating a model if one cannot reproduce the conditions under which the predictions are made; 2) that of validating a model if the QoI one wishes to predict cannot be observed in practice. To answer these two questions, the authors estimate the modeling errors affecting the sub-models and propagate them to the QoI at the prediction scenario. They then propose a series of guidelines to ensure that the calibration and validation experiments are relevant for prediction purposes [30, Section 3.3]. These guidelines can be roughly summarized as follows: if the QoI is sensitive to certain model parameters and/or certain modeling errors, then the calibration and validation experiments should reflect these sensitivities. However, the discussion remains essentially qualitative and the analysis is performed a posteriori, that is, once calibration and validation experiments have been performed, they subsequently verify that these are actually relevant. The paper of Hamilton and Hills [17] and the subsequent report of Hills [18] also address these aforementioned issues. The authors develop a meta-model allowing the extrapolation of the modeling error to the QoI at prediction scenario. More precisely, they identify a linear relationship between the sensitivities (some kind of elementary effects) of the QoI at prediction scenario to the sensitivities of the observables at validation scenario. Weights could then be attributed to more relevant validation observations with respect to the objective of the prediction. However, the assessment is again made a posteriori. Moreover, the authors do not address the design of validation experiments. We nevertheless note that Hills [18, Section 1.3] presents several cases where a validation experiment may fail to provide a useful assessment on whether the model can be employed to predict a QoI at a prediction scenario. Other works have also studied the problem of validating mathematical models made of multiple sub-models. Among those, we mention the work of Hills and Leslie [19], who employ local derivative-based indices to weight the importance of the validation experiments for the sub-models with respect to the numerical prediction using the full model. In the same vein, the work of Li and Mahadevan [24] proposes an integrated approach to take into account the validity of the lower-level models and their relevance for the system-level model. 
More precisely, they employ a model reliability metric to assess the validity of lower-level models and perform a sensitivity analysis based on Sobol indices [39] to quantify the relation of these same lower-level models to the model for the full system. Farrell et al. [14] propose the so-called Occam-Plausibility algorithm that allows one for identifying the simplest and most plausible model among a given family of competing models that they rank in terms of the posterior model plausibility. The paper of Tan et al. [42] enriches the Occam-Plausibility algorithm by providing two criteria about the selection of the validation experiment to be performed. The first criterion stipulates that the sensitivity indices of the validation scenario must be close to the sensitivity indices of the prediction scenario. They employ Sobol indices [39] as their sensitivity indices, thus considering the decomposition of the variance as a notion of sensitivity. This criterion is also verified a posteriori, once a validation scenario has been proposed. The second criterion checks if the validation scenario provides an information gain on the posterior distribution of the model parameters compared to the calibration scenarios. Ao et al. [3] specifically study the optimal design of validation experiments in the context of life prediction models. Their approach is actually similar to the Bayesian optimal design approach described in Section 2.6. However, their methodology relies on a fine model representing the system of interest, thus necessitating additional a priori knowledge on the system of interest, which is not always available. We also mention the paper of Sunseri et al. [41] that investigates the influence of auxiliary parameters on the solution of inverse problems. Although their study does not address the problem of validation, their methodology is relevant to the present work. They determine some pointwise and global sensitivity indices using the gradient of the solution to the inverse problem with respect to the auxiliary variables. Whenever an auxiliary parameter possesses a high sensitivity index, it indicates that a small perturbation in its value significantly impacts the solution to the inverse problem and that some effort should be spent in order to decrease the uncertainty of this auxiliary parameter. The present work aims at refining some of the concepts laid out in the aforementioned papers, in particular, the use of sensitivity indices to assess the relevance of validation experiments. The main objective here is therefore to describe a novel methodology for the optimal design of validation experiments. The proposed approach essentially relies on the formulation of two distinct optimization problems, which are formulated in such a way that the behavior of the model under the validation conditions resembles as much as possible that of the same model, but under the prediction conditions. In this manner, one can gain confidence in the model predictions, in the sense that the choice of the validation experiment is tailored so as to produce a validation scenario in which the observables express the same behavior as that of the QoI in the prediction scenario. In other words, the modeling errors observed in the validation scenario will be similar to the modeling errors that one would expect in the prediction scenario. The paper is organized as follows. Section 2 introduces all the necessary concepts and notations to precisely describe the validation process. 
A detailed account of the different types of parameters and sources of uncertainty involved in a model will be provided. We also describe methodologies for calibration and validation and give a brief overview of methods for optimal design of calibration experiments. Section 3 presents our novel approach for the design of validation experiments tailored toward the prediction of a quantity of interest that cannot potentially be observed. We briefly review the Active Subspace method as the chosen method to perform the sensitivity analysis. Two optimal design problems will be formulated in order to compute the appropriate control and sensor parameters for the validation scenario. We then present some numerical examples in Section 4. We first consider a toy problem, that of an object in free fall, to illustrate the validation process, with an emphasis on the importance of designing optimal validation experiments. We then study a second example, consisting of a simplified pollutant transport problem, to highlight the importance of the definition of the QoI in the design of optimal validation experiments. Section 5 provides some concluding remarks along with a series of open questions regarding the proposed methodology and how it relates to the larger process of uncertainty quantification and model validation. ## 2 Terminology and Preliminary Notions about Validation and Predictive Modeling The purpose of this section is to introduce some preliminary notions and notations in order to precisely describe the validation process. We recall that the primary objective of predictive modeling is to be able to obtain some quantitative predictions regarding a system of interest. At first sight, the task may seem rather straightforward to achieve as it simply aims at evaluating the output of a model. However, as mentioned earlier, for numerical predictions to be of any usefulness, one should be confident that the model remains valid for its intended use and that the predictions obtained with the model are in fact accurate descriptions of the reality. It should be clear by now that the validity of the model should be assessed with respect to the quantities of interest that one would like to predict using the model at specific conditions/regimes. In the presentation below, we will strive to use the same terminology as that commonly introduced in the literature, with a few exceptions in order to emphasize the different roles of some of the model parameters and model outputs. ### Abstract Model We define a generic _deterministic model_ as the triplet \((r,p,u)\), where \(p=(p_{1},\ldots,p_{d})\in\mathcal{P}\) are the _parameters_, \(u=(u_{1},\ldots,u_{s})\in\mathcal{U}\) are the _state variables_, and \(r\) include the _equilibrium relation_ and initial and/or boundary conditions \[r(p,u)=0. \tag{1}\] For the sake of simplicity, the parameter space \(\mathcal{P}\subseteq\mathbb{R}^{d}\) is assumed to be finite dimensional and the state space \(\mathcal{U}\) to be either a Banach or Hilbert space. The equilibrium relation \(r\) may involve ordinary or partial differential equations, algebraic equations, or any combination thereof. The equilibrium relation encapsulates the set of hypotheses and scientific understanding that hopefully lead to a sufficiently accurate model of the system of interest. An important property that the equilibrium relation \(r\) must possess is well-posedness: for each \(p\in\mathcal{P}\), there exists a unique and stable solution \(u\in\mathcal{U}\) satisfying (1). 
In other words, there exists a function \(g\) such that \(u=g(p)\). As an example, we consider a simple model that will be used in Section 4.1 to illustrate our methodology. The model describes the motion of a spherical projectile launched vertically in ambient air at sea level. The evolution of the physical system can be described by the linear ordinary differential equation and initial conditions \[m\frac{\mathrm{d}^{2}u}{\mathrm{d}t^{2}}(t)+3\pi\ell\exp(\mu) \frac{\mathrm{d}u}{\mathrm{d}t}(t)=-mg, \forall t\in(0,T) \tag{2a}\] \[u(0)=u_{0},\] (2b) \[\frac{\mathrm{d}u}{\mathrm{d}t}(0)=v_{0}, \tag{2c}\] where \(u(t)\) is the altitude reached by the projectile at time \(t\), \(m\) and \(\ell\) are its mass and diameter, \(u_{0}\) and \(v_{0}\) are its initial position and velocity, respectively, \(g\) is the gravitational constant, and \(\mu\) is a viscosity parameter for a spherical object travelling in air [43, p.488]. In this example, the parameters of the problem are given by \(p=(m,\ell,u_{0},v_{0},g,\exp(\mu))\in\mathcal{P}=(\mathbb{R}^{+})^{6}\) and the state variable consists in the altitude \(u\in\mathcal{U}=C^{1}(\mathbb{R})\) equipped with the uniform norm on \(u\) and \(\mathrm{d}u/\mathrm{d}t\). The set of equilibrium relation and initial conditions can easily be obtained from the initial-value problem (2). The model is constructed by making the following hypotheses regarding the physical system: 1. The trajectory of the projectile is supposed to be one dimensional, so only forces acting vertically are taken into account. 2. The friction force is assumed to be proportional to the velocity of the projectile. 3. The viscosity of the air and the gravitational constant remain constant with respect to the altitude. ### Classes of Parameters We split the parameters \(p\) of a problem into two distinct subsets, namely the _control parameters_\(x\) and the _model parameters_\(\theta\). We also consider an additional set of parameters that we call the _sensor parameters_\(z\in\Omega\), where \(\Omega\) is the domain of definition of the state variable \(u\). Each type of parameters will play some specific roles, as described below. Similar definitions can be found in the papers cited in the introduction, even if no consensus has yet been reached in the literature on a common terminology. The _control parameters_\(x\in\mathcal{X}\) represent the parameters that one can control when performing an experiment on the system of interest. These parameters can be viewed as the set of inputs which uniquely determine the experimental scenario. Examples of such parameters are the parameters appearing in the initial conditions and boundary conditions, the geometrical data, and the physical properties directly measurable by an experimental apparatus (mass, length, temperature, etc.). For the example of Section 2.1, the control parameters are given by the mass and diameter of the projectile as well as its initial altitude and velocity, hence \(x=(m,\ell,u_{0},v_{0})\). The _model parameters_\(\theta\in\mathcal{T}\) are the remaining parameters necessary for the full description of the model (\(\theta=p\setminus x\)). They are coined this way due to the fact that they explicitly stem from the assumptions and simplifications that one makes in defining a model to describe a particular phenomenon. Other hypotheses could potentially lead to a new model with different model parameters, but will not change the control parameters. 
Naturally, the definition of the parameters \(p\), and hence of the model parameters \(\theta\), depends directly on what a parameter would be deemed as such by the modeler and/or user. For example, numerical constants like \(\pi\) in (2) clearly do not constitute parameters, but the picture could be far less clear for other quantities. As a rule of thumb, if the value of a quantity may vary or if it is of any interest in the analysis of the model, then it can be considered as a model parameter. Examples of model parameters can be those that characterize the properties of a material (e.g. Young modulus, Poisson coefficient) or some dimensionless quantities (e.g. Reynolds number, Prandlt number). Model parameters are specified either by prior knowledge or by a calibration process. A more detailed discussion about the latter point will be presented in Section 2.5. In the case of the spherical projectile, the model parameters are \(\theta=(g,\mu)\) since they result from assumptions in the modeling process (\(g\) is independent of the altitude and \(\mu\) is related to a sub-model for air friction). A final type of parameters are the so-called _sensor parameters_\(z\in\Omega\). They often represent location defined in terms of the independent variables (e.g. space or time) upon which the state variable \(u\) would be observed. In the example of the projectile, the sensor parameters will be the time at which we wish to evaluate the altitude, so \(z=t\). ### Classes of Scenarios The control and model parameters allow one to define the regime under which both the system and the model operate. A specific regime will be referred to as a _scenario_. For the sake of simplicity here, we shall restrict the model parameters \(\theta\) to be the same for the validation and prediction scenarios. For the projectile example, this implies that the prediction and experimental apparatus be at sea level and in the ambient atmosphere (same gravitational constant \(g\) and same viscosity \(\mu\)). In other words, only the control parameters \(x\) are needed to fully characterize a scenario. We shall study three classes of scenarios, with a particular emphasis on the first two. 1. **Prediction Scenario:** Defines the conditions, described by \(x_{\mathrm{pred}}\), under which predictions on the system of interest will be made. 2. **Validation Scenario:** Defines the conditions, described by \(x_{\mathrm{val}}\), under which the validation experiment will be carried out. 3. **Calibration Scenario:** Defines the conditions, described by \(x_{\mathrm{cal}}\), under which the calibration experiment will be performed. We note here that model calibration essentially aims at estimating the uncertainty of the model parameters \(\theta\). On the one hand, the validation and calibration experiments are performed within a _set_\(\mathcal{X}_{\mathrm{lab}}\subset\mathcal{X}\) of _controlled environments_. On the other hand, the prediction scenario \(x_{\mathrm{pred}}\) may or may not be producible experimentally, so that \(x_{\mathrm{pred}}\) may not belong to \(\mathcal{X}_{\mathrm{lab}}\). Figure 1 provides a conceptual illustration of the various scenarios. Figure 1: Illustration of the various scenarios. The prediction scenario may be producible (black circle) or may not be producible experimentally (black square). The validation and calibration scenarios are elements of the set \(\mathcal{X}_{\mathrm{lab}}\) of controlled environments. 
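To make the roles of the different parameter classes concrete, here is a minimal sketch (illustrative code with invented names and values, not part of the original paper) that organizes the free-fall model (2) around the control parameters \(x=(m,\ell,u_{0},v_{0})\), the model parameters \(\theta=(g,\mu)\), and a sensor parameter \(z\) at which the altitude is evaluated; the validation and prediction scenarios then differ only through \(x\).

```python
import numpy as np
from dataclasses import dataclass
from scipy.integrate import solve_ivp

@dataclass
class ControlParams:      # x = (m, l, u0, v0): fully determines a scenario
    m: float              # mass [kg]
    l: float              # diameter [m]
    u0: float             # initial altitude [m]
    v0: float             # initial velocity [m/s]

@dataclass
class ModelParams:        # theta = (g, mu): stems from the modelling hypotheses
    g: float              # gravitational constant [m/s^2]
    mu: float             # log-viscosity; the drag coefficient is 3*pi*l*exp(mu)

def solve_state(x: ControlParams, theta: ModelParams, t_end: float):
    """Solve the equilibrium relation (2): m u'' + 3 pi l exp(mu) u' = -m g."""
    c = 3.0 * np.pi * x.l * np.exp(theta.mu)
    rhs = lambda t, y: [y[1], -theta.g - (c / x.m) * y[1]]
    return solve_ivp(rhs, (0.0, t_end), [x.u0, x.v0], dense_output=True, rtol=1e-9)

def altitude(x, theta, z):
    """State variable u evaluated at the sensor parameter z (a time)."""
    return solve_state(x, theta, t_end=z).sol(z)[0]

theta = ModelParams(g=9.81, mu=np.log(1.8e-5))               # same theta in all scenarios
x_val = ControlParams(m=1.0e-6, l=2.0e-3, u0=0.0, v0=5.0)    # a scenario in X_lab
x_pred = ControlParams(m=5.0e-7, l=1.5e-3, u0=0.0, v0=20.0)  # prediction scenario

print(altitude(x_val, theta, z=0.5), altitude(x_pred, theta, z=0.5))
```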
### Observables and Quantities of Interest We reiterate that a mathematical model constitutes only an abstraction of a physical system of interest. In fact, one has access to limited information about the system of interest. This information will be collectively referred to as the _experimental observables_\(y_{\mathrm{exp}}\). The experimental observables \(y_{\mathrm{exp}}\) can be acquired through a variety of experimental devices and/or statistical surveys. Importantly, they are obtained for a particular scenario and for given sensor parameters, so that they are functions of \(x\) and \(z\), i.e. \(y_{\mathrm{exp}}=y_{\mathrm{exp}}(x,z)\). We also assume that the model \((r,p,u)\) is sufficiently rich so that each experimental observable can be abstracted by a functional of the parameters \(p\) and the sensor parameters \(z\) (the state variable \(u\) is implicitly given through the function \(g(p)\)). These observables will be referred to as the _model observations_\(y\coloneqq h_{\mathrm{obs}}(p,u,z)=h_{\mathrm{obs}}(p,z)\). In the following, the term _observables_ will be used to refer to both experimental and model observations. In the example dealing with the launch of a spherical projectile, the experimental observables may consist in the altitude of the projectile, acquired by an optical apparatus or an altimeter, or the acceleration of the projectile obtained with an accelerometer. Whatever the quantity considered, it will be measured at a given discrete time \(z=t_{\mathrm{exp}}\). The corresponding model observations are thus defined as \(y(t_{\mathrm{exp}})\coloneqq u(t_{\mathrm{exp}})\) or \(y(t_{\mathrm{exp}})\coloneqq\mathrm{d}^{2}u/\mathrm{d}t^{2}(t_{\mathrm{exp}})\). Quantities of interest extend the notion of model observables, in the sense that they refer to quantities that can or cannot be experimentally observed. In other words, QoIs may not depend on sensor positions, although they could depend implicitly on the independent (spatial or temporal) variables through an average or a maximum. The QoIs will be denoted as \(q\coloneqq h_{\mathrm{qoi}}(p,u)=h_{\mathrm{qoi}}(p)\), since they are evaluated for specific scenarios. We list below some examples of such possible QoIs: * Mean quantities over a subregion of \(\Omega\) (e.g. mean displacement or stress in a solid); * Minimal or maximal quantities and their location (e.g. maximal stress); * Statistical quantities (e.g. probability of leakage of nuclear waste). An important notion is that the QoIs provide an abstraction of the key features of the system of interest upon which a decision making process will be applied. Figure 2 synthesizes how the observations and QoIs are obtained in terms of the various parameters. In the case of the projectile, an example of QoI could be the maximal altitude that it reaches over time, i.e. \(q\coloneqq\max_{t}u(t)\). We note that the quantity is not directly observable since we would need to know beforehand the exact time at which the maximal altitude of the real projectile is reached. A precise description of the prediction problem can now be stated as follows: one wishes to predict a given QoI \(h_{\mathrm{qoi}}\) for a given prediction scenario \(x_{\mathrm{pred}}\). The definition of observable functionals \(h_{\mathrm{obs}}\) will play a crucial role in the calibration and validation problems in order to obtain accurate predictions. Figure 2: Diagram of the interactions between the various parameters and the system of interest as well as the model. 
It is important to observe that the model parameters \(\theta\) do not affect the system of interest, but only the observation and QoI functional \(h_{\mathrm{obs}}\) and \(h_{\mathrm{qoi}}\). For the sake of simplicity, the QoI functional \(h_{\mathrm{qoi}}\) does not depend explicitly on the sensor parameters \(z\). The accuracy of the model outputs (observables and QoI) also depends upon the aggregation of various sources of uncertainties and errors, as explained in the following subsection. ### Uncertainties and Modeling Errors Several approaches can be adopted to represent uncertainties in quantities, such as parameters, observables, or QoI, as outlined in [22, 34]. We consider here the probabilistic approach, where an uncertain quantity is assimilated as a random variable \(A\) with probability distribution \(F_{A}\) and density function \(f_{A}\). The random variable \(A\) can be either discrete or continuous. We adopt the usual convention that a lowercase variable \(a\) indicates one realization of the random variable \(A\), noted in uppercase. In addition to random variables, we may also encounter random processes \(A=A(b)\), usually consisting of a family of random variables, each defined for an instance of \(b\). Uncertainties are sometimes classified as _aleatory_ uncertainty and _epistemic_ uncertainty [35]. We will assume in this work that all uncertainties are aleatory. More precisely, this means that one is able to prescribe the distribution \(F_{A}\) of any random variable \(A\). In contrast, if the uncertainty were epistemic, we would not be able to characterize \(F_{A}\), see e.g. [35, Section 2]. Epistemic uncertainty can thus be viewed here as _uncertainty on the uncertainty_. We note that some authors define epistemic uncertainty as the uncertainty that could be eventually reduced if more information and data were available [18, 27]. However, such subtlety in the definition of epistemic uncertainty is not relevant to the current discussion and will be here passed over. Since we suppose that all uncertainties will be aleatory, it is reasonable to offer some guidance on how the distributions can be determined, as we do below. #### 2.5.1 Uncertainty in the Parameters As a first approximation, we suppose that the control parameters \(x\) and the sensor parameters \(z\) do not possess any uncertainty, an assumption that can be far from the truth depending on the nature of the prediction scenario and of the actual experimental apparatus. However, this hypothesis will simplify the definition of the optimal validation experiment as well as the overall exposition of the present work. Therefore, the only parameters that we will be considered uncertain here are the model parameters \(\theta\). The quantification of the uncertainty in the model parameters \(\theta\) is a non-trivial task since this type of parameters arises from the various hypotheses of the model. One can determine the distribution \(F_{\Theta}\) by taking into account prior knowledge concerning each hypothesis. In the example of the projectile, we possess relatively extensive prior knowledge on the gravitational constant \(g\) and the air viscosity \(\mu\), so one can assign them a distribution with small variance. However, it is often the case that we have a rather vague idea about the values of the model parameters. The probability distribution of each parameter need then be identified through calibration processes. This particular topic is the subject of an exhaustive literature. 
For the sake of completeness and to emphasize the difference between the problem of calibration and validation, we further detail the calibration of model parameters in Section 2.6. #### 2.5.2 Uncertainty in the Observables and QoIs Uncertainty in experimental observations may come from various sources and may depend on the scenarios \(x\) and sensor parameters \(z\). Experimental observations are thus assimilated as random processes \(Y_{\exp}(x,z)\). Being able to characterize the uncertainty in the observables is crucial for the validation process, as further explained in Section 2.7. We list two possible, although not exhaustive, sources of uncertainty. The first one comes from the inherent and inevitable variability of the measurement apparatus. However, it is sometimes possible to reduce the amount of uncertainty in the observables and thus get experimental observations closer to the _true_ value by using a more precise apparatus. The second source of uncertainty comes from the inherent stochastic nature of the system of interest. For fixed control and sensor parameters \(x\) and \(z\), repetitions of the same experiment may yield different, yet accurate, values for the experimental observations. This may happen for instance if the phenomena of interest suffer from instabilities with respect to infinitesimal perturbations of the control parameters. The aggregation of these two sources of uncertainty allows one to determine the distribution \(F_{Y_{\text{exp}}(x,z)}\), but the task is not necessarily straightforward. We take the point of view that we can repeat a certain number of times the same experiments (in the case of non-destructive experiments) so that one can determine the empirical distribution \(F_{Y_{\text{exp}}(x,z)}\), which aggregates both sources of uncertainty. Uncertainty in the model observation \(Y\) and quantity of interest \(Q\) are obtained by propagating the uncertainty in the model parameters \(\Theta\) through their respective functional, that is, \[Y(x,z)\coloneqq h_{\text{obs}}(x,\Theta,z) \tag{3}\] \[Q(x)\coloneqq h_{\text{qoi}}(x,\Theta) \tag{4}\] Computing the exact distributions \(F_{Y}\) and \(F_{Q}\) is only possible for very simple functionals and for a restricted family of distributions \(F_{\Theta}\) (e.g. Gaussian distributions). Most of the time, one needs to approximate the distributions \(F_{Y}\) and \(F_{Q}\) by sampling the distribution \(F_{\Theta}\) using methods such as the ubiquitous Monte-Carlo method, the Latin Hypercube method, or Quasi Monte-Carlo methods [40]. It is often enlightening to perform an uncertainty propagation prior to any validation step. If the prediction of the \(Q\) possesses too much uncertainty, then the prediction may be useless, independently of whether the model is deemed valid or not. In this case, it may be necessary to refine the calibration of the model parameters \(\Theta\) in order to reduce their uncertainty, thereby potentially reducing as well the uncertainty in the QoI. The use of sensitivity analysis methods [12, 37] can help identify the model parameters \(\Theta\) contributing the most or least to the uncertainty in \(Y\) and \(Q\). We summarize the various quantities introduced so far as well as the source of their uncertainty in Table 1. 
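For illustration, the following sketch carries out the Monte-Carlo propagation of Eqs. (3)-(4) for the projectile example, pushing samples of \(\Theta=(g,\mu)\) through the model to obtain empirical distributions for an observable \(Y(x,z)=u(z)\) and for the QoI \(Q(x)=\max_{t}u(t)\). The closed-form solution of the linear ODE (2) is used for speed, and the distributions assigned to \(g\) and \(\mu\), as well as the control-parameter values, are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Control parameters x (assumed exactly known) and sensor parameter z (illustrative values)
m, l, u0, v0 = 1.0e-6, 2.0e-3, 0.0, 5.0
z = 0.5  # sensor time [s]

def altitude(t, g, mu):
    """Closed-form solution of m u'' + 3 pi l exp(mu) u' = -m g."""
    k = 3.0 * np.pi * l * np.exp(mu) / m
    return u0 + (v0 + g / k) * (1.0 - np.exp(-k * t)) / k - (g / k) * t

# Assumed distribution F_Theta: g is well known, mu is only vaguely known
n = 20000
g_s = rng.normal(9.81, 0.01, n)
mu_s = rng.normal(np.log(1.8e-5), 0.3, n)

# Monte-Carlo propagation: Y(x, z) = h_obs(x, Theta, z) and Q(x) = h_qoi(x, Theta)
Y = altitude(z, g_s, mu_s)
t_grid = np.linspace(0.0, 2.0, 2001)[:, None]
Q = altitude(t_grid, g_s, mu_s).max(axis=0)   # QoI: maximal altitude over time

for name, S in [("Y", Y), ("Q", Q)]:
    lo, hi = np.percentile(S, [2.5, 97.5])
    print(f"{name}: mean = {S.mean():.3f} m, 95% interval = [{lo:.3f}, {hi:.3f}] m")
```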
#### 2.5.3 Modeling Errors The authors adhere to the common idea that no model, as sophisticated as it may be, would be able to capture the exact behavior of a system of interest, since a model is merely an abstraction of the latter and often relies on several hypotheses. Therefore any model comes with some modeling errors that should be estimated in one way or another. Furthermore, the modeling error should account for the inevitable sources of uncertainty in both the experimental and model observations. One way to combine parameter uncertainties and modeling errors is in terms of the discrepancy \(E\): \[E(x,z)=Y_{\text{exp}}(x,z)-Y(x,z)=Y_{\text{exp}}(x,z)-h_{\text{obs}}(x,\Theta,z). \tag{5}\] The discrepancy \(E(x,z)\) can be viewed as a random process in the sense that it consists of a random variable for each instance of the control parameters \(x\) and of the sensor parameters \(z\). This discrepancy \(E\) can be viewed as the aggregation of the experimental errors and the model inadequacy in the approach taken by Kennedy and O'Hagan [4, 21]. An important property of the discrepancy (5) is that it is defined independently of the QoI. Other approaches can be adopted to quantify the modeling errors [18, 30]. For the sake of simplicity in the presentation, and since our focus is rather on the design of validation experiments, we do not describe these methods here. We refer the interested reader to [28, 34] for a summary of these methods. \begin{table} \begin{tabular}{l c c} \hline \hline & **Definition** & **Uncertainty** \\ \hline **Control Parameters**\(x\) & Parameters that one can control & None \\ & to perform an experiment & \\ **Model Parameters**\(\Theta\) & Remaining parameters necessary & Prior knowledge or \\ & to describe a model & posterior density \\ **Sensor Parameters**\(z\) & Spatio-temporal parameters & None \\ \hline **Experimental Observation** & Observations at control parameters \(x\) & Empirical density \\ \(Y_{\text{exp}}(x,z)\) & and sensor parameters \(z\) & \\ **Model Observation**\(Y(x,z)\) & \(Y(x,z)\coloneqq h_{\text{obs}}(x,\Theta,z)\) & Uncertainty propagation \\ **Quantity of Interest**\(Q(x)\) & \(Q(x)\coloneqq h_{\text{qoi}}(x,\Theta)\) & Uncertainty propagation \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the parameters, observables, and QoI as well as their source of uncertainty ### Calibration Process and Optimal Design of Calibration Experiments We provide here a brief overview of the calibration process and of the related problem dealing with the optimal design of calibration experiments. Although our proposed approach to perform optimal design of validation experiments does not directly involve a calibration process _per se_, a brief discussion on the topic is deemed insightful in order to differentiate our methodology from the optimal design problem for calibration experiments. The calibration of the model parameters \(\Theta\), a process usually referred to as model calibration or parameter identification, has as its main objective to provide a characterization of the probability distribution \(F_{\Theta}\), which encapsulates the uncertainty in the model parameters, using actual experimental observables \(y_{\rm exp}\), as realizations of \(Y_{\rm exp}\). In other words, the goal is to update our knowledge about the model parameters. 
There exist several methods to perform a calibration, for example, the least squares approach (and its regularized version), the maximum likelihood estimation (MLE) approach, or the maximum a posteriori (MAP) approach. Under various hypotheses, one can show equivalences between these approaches [38]. A common feature of these methods is that they provide a point estimator \(\hat{\theta}\) of the model parameters \(\theta\). These approaches present two drawbacks that are closely related. First, it is relatively difficult to quantify the uncertainty of \(\hat{\Theta}\) and only asymptotic results may be applicable [13, Section 16.4]. Second, if the model is not _identifiable_, then these asymptotic results cannot be invoked and the point estimator \(\hat{\theta}\) may not be "stable". This identifiability condition is a restrictive condition to fulfil and especially tedious to verify [9]. An alternative approach consists in updating the uncertainty of the model parameters with the posterior of a Bayesian analysis. As for the MLE and MAP approaches, the method requires defining a likelihood function \(L_{Y_{\rm exp}=y_{\rm exp},x_{\rm cal},z_{\rm cal}}(\theta)\), which describes the probability of obtaining the experimental observables \(Y_{\rm exp}=y_{\rm exp}\) given the control, sensor, and model parameters, \(x_{\rm cal}\), \(z_{\rm cal}\), and \(\theta\), respectively. The definition of the likelihood function is not unique. We show below how to construct the likelihood function assuming an additive error \[y_{\rm exp}(x_{\rm cal},z_{\rm cal})=y(x_{\rm cal},z_{\rm cal})+e(x_{\rm cal},z_{\rm cal})=h_{\rm obs}(x_{\rm cal},\theta,z_{\rm cal})+e(x_{\rm cal},z_{\rm cal}), \tag{6}\] where \(y_{\rm exp}(x_{\rm cal},z_{\rm cal})\) is a realization of the random variable \(Y_{\rm exp}(x_{\rm cal},z_{\rm cal})\) and \(e(x_{\rm cal},z_{\rm cal})\) is a realization of the random variable \(E(x_{\rm cal},z_{\rm cal})\) that describes the error. We note that \(E\) is the same as in (5). Since the error \(E(x_{\rm cal},z_{\rm cal})\) is actually unknown, one can choose a reasonable approximation so that the likelihood remains tractable. A typical choice is a Gaussian noise with zero mean and standard deviation \(\sigma>0\): \[E\sim\mathrm{N}(0,\sigma^{2}). \tag{7}\] In other words, it is assumed that the noise is identical whatever the calibration scenario \(x_{\rm cal}\) and sensor parameters \(z_{\rm cal}\). The likelihood function resulting from this choice is defined as \[L_{Y_{\rm exp}=y_{\rm exp},x_{\rm cal},z_{\rm cal}}(\theta)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(h_{\rm obs}(x_{\rm cal},\theta,z_{\rm cal})-y_{\rm exp}(x_{\rm cal},z_{\rm cal}))^{2}}{2\sigma^{2}}\right]. \tag{8}\] Kennedy and O'Hagan [21] propose to represent the discrepancy as the sum of a model error term (represented as a Gaussian process) and the experimental error. However, this approach is computationally involved and may lack identifiability since both the model parameters and the model errors need to be calibrated [4]. In addition to the likelihood, one must provide a prior density, denoted by \(f_{\Theta_{0}}\), encoding the current knowledge that one possesses about the model parameters \(\Theta\). 
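As a small illustration of the additive-error construction above, the sketch below evaluates the logarithm of the Gaussian likelihood (8) for a generic observation functional; `h_obs`, the measured value `y_exp`, and the noise level `sigma` are assumptions to be supplied by the user, and working in log-space is only a numerical convenience.

```python
import numpy as np

def gaussian_log_likelihood(theta, h_obs, y_exp, x_cal, z_cal, sigma):
    # Log of the likelihood (8) under the additive-error model
    # y_exp = h_obs(x_cal, theta, z_cal) + e, with e ~ N(0, sigma^2).
    resid = h_obs(x_cal, theta, z_cal) - y_exp
    return -0.5 * np.log(2.0 * np.pi * sigma**2) - resid**2 / (2.0 * sigma**2)

def total_log_likelihood(theta, h_obs, y_exp_list, x_cal, z_cal, sigma):
    # With independent repetitions of the same calibration experiment,
    # the individual log-likelihoods simply add up.
    return sum(gaussian_log_likelihood(theta, h_obs, y, x_cal, z_cal, sigma)
               for y in y_exp_list)
```

Combined with the prior density \(f_{\Theta_{0}}\), this log-likelihood is the ingredient entering Bayes' theorem below, and the resulting unnormalized posterior can then be sampled with any standard MCMC method.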
With the likelihood function and the prior density, Bayes' theorem allows one to obtain the posterior probability density function \(f_{\Theta|Y_{\rm exp}=y_{\rm exp},x_{\rm cal},z_{\rm cal}}\) as \[f_{\Theta|Y_{\rm exp}=y_{\rm exp},x_{\rm cal},z_{\rm cal}}(\theta)=\frac{L_{Y_{\rm exp}=y_{\rm exp},x_{\rm cal},z_{\rm cal}}(\theta)\,f_{\Theta_{0}}(\theta)}{\int L_{Y_{\rm exp}=y_{\rm exp},x_{\rm cal},z_{\rm cal}}(\theta)\,f_{\Theta_{0}}(\theta)\,\mathrm{d}\theta}. \tag{9}\] The posterior density represents the updated knowledge about the model parameters \(\varTheta\) given the experimental observations \(y_{\exp}(x_{\mathrm{cal}},z_{\mathrm{cal}})\). The analytical expression of the posterior density is available only in a few cases, for example, when the model is linear with respect to the model parameters and one chooses a likelihood of the form (8) and a Gaussian prior. Otherwise, it is generally approximated using sampling methods such as the Markov Chain Monte-Carlo method or its variants [16, 40]. Since all calibration approaches require the use of experimental observations \(y_{\exp}(x_{\mathrm{cal}},z_{\mathrm{cal}})\), the calibration scenario \(x_{\mathrm{cal}}\) and sensor parameters \(z_{\mathrm{cal}}\) should be chosen appropriately in order to better determine the uncertainty in the model parameters \(\varTheta\). To the best of our knowledge, there essentially exist two methods for optimal design of calibration scenarios, one based on the Fisher information matrix and the other based on a Bayesian approach. 1. The first method consists in optimizing some functional of the Fisher information matrix associated with the model [5, 6]. This approach is widely used for polynomial models (linear with respect to the model parameters and possibly non-linear with respect to the control and sensor parameters) since the Fisher information matrix depends only upon the control and sensor parameters [5]. Different definitions of the Fisher information matrix functional lead to different optimal calibration experiments. For instance, the D-optimal design minimizes the variance of the estimator of the model parameters \(\theta\) and the G-optimal design minimizes the maximum of the variance in the prediction. The D-optimal problem for a linear model reads \[(x_{\mathrm{cal}},z_{\mathrm{cal}})=\operatorname*{argmax}_{(x,z)\,\in\,\mathcal{X}_{\mathrm{lab}}\times\mathcal{Z}_{\mathrm{lab}}}|\mathcal{I}(x,z)|,\] (10) where \(\mathcal{I}\) denotes the Fisher information matrix. Equivalence theorems show that these different optimal designs are in fact related. For a thorough discussion of this approach and some extensions to non-linear models, we refer the interested reader to the book of Atkinson et al. [6]. 2. The second method is based on the Bayesian approach [8, 25, 36] and focuses on minimizing some functional of the posterior \(f_{\varTheta|Y=y_{exp},x_{\mathrm{cal}},z_{\mathrm{cal}}}\) (or sometimes the likelihood function) over the possible observations \(Y\) and the possible values of the model parameters \(\varTheta\). Definitions of the functional akin to the one employed with the Fisher information matrix give rise to similar optimal designs. 
For example, we can seek the control parameters \(x_{\mathrm{cal}}\) and sensor parameters \(z_{\mathrm{cal}}\) that minimize the determinant of the covariance of the posterior \[(x_{\mathrm{cal}},z_{\mathrm{cal}})=\operatorname*{argmax}_{(x,z)\,\in\,\mathcal{X}_{\mathrm{lab}}\times\mathcal{Z}_{\mathrm{lab}}}\,\iint\big{(}\det(\mathrm{Cov}(\varTheta|Y=y_{exp},x,z))\big{)}^{-1}\,\mathrm{d}F_{Y_{\exp}}\,\mathrm{d}F_{\varTheta_{0}}.\] (11) We remark that solving these types of optimization problems can be very challenging because they usually require a nested integration. Several simplifications, such as a Laplace approximation of the posterior, can make these optimization problems tractable [8, 25]. A very important feature of the two approaches is that the determination of the optimal calibration experiment does not rely on any experimental data, as it is an a priori analysis. We will retain this feature in our methodology to design optimal validation scenarios. The whole calibration process is summarized in Algorithm 1. ### Validation Process Everything is now in place to discuss the problem of interest, which is the validation of a model. Again, the goal is to assess the accuracy in the prediction of a quantity of interest \(Q=h_{\mathrm{qoi}}(x_{\mathrm{pred}},\varTheta)\) at the prediction scenario \(x_{\mathrm{pred}}\). We reiterate that the discrepancy \(E\) given in (5) is a useful way to encode the accuracy of our model observations. Both experimental and model observations should be performed at a specific scenario, the so-called validation scenario \(x_{\mathrm{val}}\), and for given sensor parameters \(z_{\mathrm{val}}\). For now, let us suppose that we have determined this validation scenario \(x_{\mathrm{val}}\), the type of observable (i.e. the choice of the functional \(h_{\mathrm{obs}}\)) and the value of the sensor parameters \(z_{\mathrm{val}}\). Multiple approaches to validate a model have been proposed, as mentioned in the Introduction. Several of these validation processes require the choice of a validation metric. For example, Roy and Oberkampf [35] utilize the area validation metric introduced by Ferson et al. [15] \[d(F_{Y},F_{Y_{\text{exp}}})=\int_{-\infty}^{\infty}\bigl{|}F_{Y}(s)-F_{Y_{\text{exp}}}(s)\bigr{|}\,\mathrm{d}s, \tag{12}\] where \(F_{Y}\) and \(F_{Y_{\text{exp}}}\) are the model and experimental observation distributions, respectively. They provide several rationales for this choice of validation metric and several ways to employ it, especially in the presence of different validation scenarios. Another validation metric employs the discrepancy \(E\) with a tolerance \(\varepsilon\) on the error \[\gamma=F_{|E|}(\varepsilon)=\Pr(|E|<\varepsilon)=\Pr(|Y_{\text{exp}}(x_{\text{val}},z)-h_{\text{obs}}(x_{\text{val}},\varTheta,z)|<\varepsilon). \tag{13}\] If the value of \(\gamma\) is above a certain threshold \(\eta\), then the model is deemed valid. This particular measure was introduced by Rebba and Mahadevan [33] and coined _model reliability metric_. In their paper, the authors compare the model reliability metric with point and interval hypothesis testing under both the frequentist and Bayesian perspectives. They found that adopting the hypothesis testing approach may lead to some inconsistencies as to whether the model is deemed valid or not, whereas the model reliability metric does not present these inconsistencies. Li and Mahadevan [24] expand this model reliability metric to take into account multivariate outputs. Mullins et al. 
[27] further investigate the use of the model reliability metric for validation. In this paper, the authors argue that the use of the aforementioned area validation metric can be misleading. Indeed, increasing the uncertainty of the experimental observation so that \(F_{Y_{\text{exp}}}\approx F_{Y}\) leads to a smaller area validation metric (a valid model), whereas one should have less confidence in the validity of our model, since the experimental data are of lesser quality. For these reasons, we will select the reliability metric (13) as the validation metric for the remainder of the paper. To compute the probability \(\gamma\), one needs to know the distribution of the discrepancy \(E\). We assume here that the random variables \(Y_{\text{exp}}(x_{\text{val}},z)\) and \(\varTheta\) are independent. This assumption is reasonable since the first quantity relates to the experimental observations while the second quantity is related to the various hypotheses on the model. We can then estimate \(\gamma=F_{|E|}(\varepsilon)\) with any of the previously mentioned sampling methods by performing the sampling of the empirical distribution of \(Y_{\text{exp}}(x_{\text{val}},z)\) and of the model parameters \(\varTheta\) independently. #### 2.7.1 Validation Scenario The choice of the validation scenario \(x_{\text{val}}\), the type of observable, and the value of the sensor parameters \(z_{\text{val}}\) have a significant impact on the actual outcome of the validation process, as will be illustrated in Section 4.1. It is seldom the case that we can replicate in a controlled environment the conditions under which we would like to perform the prediction, in which case the prediction and validation scenarios would be the same. If it is not the case, then we must come up with an _informative_ and _relevant_ validation scenario with respect to the prediction scenario. Moreover, if the QoI is not observable, then we need to determine which type of observables and which sensor parameters \(z\) should be employed for the validation. These are the main issues addressed in this research work, which will be further explored in Section 3. #### 2.7.2 Validation Workflow A complementary objective of our work is to eliminate arbitrary decisions in the validation process as much as possible. Indeed, by providing a way to determine the validation scenario \(x_{\text{val}}\), the type of observables, and the sensor parameters \(z_{\text{val}}\), we are closer to _automating_ the validation process. Our perspective of what a validation workflow should look like is summarized by Algorithm 2. 
```
Input: Model \((r,p,u)\), quantity of interest \(h_{\text{qoi}}\), prediction scenario \(x_{\text{pred}}\), choice of observation functional \(h_{\text{obs}}\), choice of the error \(E(x,z)\), prior density \(f_{\Theta_{0}}\), constrained sets \(\mathcal{X}_{\text{lab}}\) and \(\mathcal{Z}_{\text{lab}}\), error tolerance \(\varepsilon\), threshold \(\eta\)
1  Set \(f_{\Theta}=f_{\Theta_{0}}\);
2  if Calibration experiment available then
3      Calibration of the model parameters \(\Theta\) (Algorithm 1);
4      Set \(f_{\Theta}=f_{\Theta|Y_{\text{exp}}=y_{\text{exp}},x_{\text{cal}},z_{\text{cal}}}\);
5  end if
6  Computation of \(Q=h_{\text{qoi}}(x_{\text{pred}},\Theta)\);
7  if Uncertainty of \(Q=h_{\text{qoi}}(x_{\text{pred}},\Theta)\) is too large then
8      Exit validation;
9  else
10     Proceed to validation;
11 end if
12 Computation of the validation scenario \(x_{\text{val}}\), sensor parameters \(z_{\text{val}}\), and type of observation functional \(h_{\text{obs},\text{val}}\) (Algorithm 3);
13 Realization of the experiment to obtain \(y_{\text{exp}}(x_{\text{val}},z_{\text{val}})\);
14 Computation of the validation metric \(\gamma\) (13);
15 if \(\gamma\geq\eta\) then
16     Model is not invalidated
17 else
18     Model is invalidated
19 end if
```
**Algorithm 2** Validation Process The first step of the validation process consists in defining the prior density \(f_{\Theta_{0}}\) of the model parameters \(\Theta\) using prior knowledge or via a calibration process, such as the one described in Algorithm 1. The second step consists in a _sanity check_, that of checking whether the prediction of \(Q=h_{\text{qoi}}(x_{\text{pred}},\Theta)\) has too much uncertainty given the uncertainty of the model parameters \(\Theta\). If it does, then it is pointless to continue the validation process since the prediction would not be useful, independently of whether or not the model is valid. If the uncertainty in the QoI is deemed acceptable, then one needs to design an appropriate validation experiment. One would then carry out the validation experiment and estimate the validation metric to verify that its prediction indeed reflects reality. As a final remark, we do not address in this work the propagation of the modeling errors to the QoI at the prediction scenario, nor the decision making about the model validity. Our objective is mainly to search for the best validation experiments so that one can better assess the capability of the model to predict the QoI. That said, we conjecture that our approach for designing optimal validation experiments, as described in the next section, complements the validation workflow presented in Algorithm 2 to provide useful assessments of the model validity. ## 3 Optimal Design of Validation Experiments A validation scenario to obtain validation data must be _representative_ of the prediction scenario, since the objective is to assess whether or not the model is valid for predictive purposes, more specifically, for the prediction of the quantity of interest \(Q=h_{\text{qoi}}(x_{\text{pred}},\Theta)\). By _representative_, we mean that the various hypotheses on the model should be similarly satisfied in both the prediction and validation scenarios. In many applications, it is customary to design the validation experiment based on dimensionless numbers. In fluid mechanics for instance, the Reynolds number is often used as a criterion to select validation conditions that reflect the prediction setting. 
However, this simple choice does not necessarily provide a complete framework for the design of validation experiments. First, a model usually involves more parameters than units and several dimensionless numbers need to be considered simultaneously. Moreover, the relative influence of each of the dimensionless numbers cannot be solely provided by dimensional analysis. It is in fact important to quantify the influence of a dimensionless number on the QoI for the following two reasons: 1. the uncertainty in the model parameters \(\Theta\) is propagated to the dimensionless parameters; 2. the dimensionless numbers in the validation and prediction settings rarely match exactly. Although dimensional analysis is a useful tool to better understand a model, it seems insufficient and somewhat ill-defined for validation purposes. Our view is that the behavior of the model with respect to the parameters and their uncertainty should be as similar as possible in the validation and prediction scenarios. In other words, an analysis of the parameter sensitivity should guide the optimal design of validation experiments. ### Sensitivity Analysis Sensitivity analysis is a vast subject with multiple objectives and methodologies, see e.g. [12, 32, 37] for thorough overviews on the topic. Examples of problems addressed by sensitivity analysis are: * Characterization of the relationship between the parameters and the model outputs. * Identification of non-influential parameters, a topic often referred to in the literature as _factor fixing_. * Identification of the parameters that influence the model outputs the most, usually referred to as _factor prioritization_. One objective here is to further reduce the uncertainty affecting these parameters. Sensitivity methods are usually classified as either local or global. On one hand, local methods compute the sensitivity (e.g. derivatives) around a specific point in the parameter space and do not take into account the uncertainty affecting the parameters. On the other hand, global methods incorporate the uncertainty in the parameters. The Sobol'-based sensitivity analysis is a global approach that computes the so-called Sobol' sensitivity indices, which quantify how much of the variance in a model output can be attributed to a single parameter or a subset of parameters [39]. We consider in our approach that the model parameters \(\Theta\) are uncertain so that we need to employ a global sensitivity analysis method. Moreover, our goal being to assess the influence of the various parameters on the model outputs, either the observables or the QoI (the first objective aforementioned), we seek a description of the response surface of the observation and QoI functionals with respect to the parameters. Our objective is thus different from that of allocating the variance of \(Y=h_{\mathrm{obs}}(x,\Theta,z)\) or \(Q=h_{\mathrm{qoi}}(x,\Theta)\) to parameters (or subsets of parameters) such as in the Sobol' approach. ### Active Subspace Method One sensitivity method that allows the characterization of a response surface while taking into account the uncertainty of the parameters is the Active Subspace method [10, 11]. The method computes the gradient of a model functional \(h\) with respect to the control and model parameters to quantify their influence. 
The expectation of the outer product of the gradient with respect to the distribution \(F_{\Theta}\) is performed to assemble the so-called _influence matrix_ \(M_{h}\) \[\begin{split} M_{h}(x,z)&=\mathbb{E}\left(\nabla_{(x,\theta)}h(x,\Theta,z)\,\nabla_{(x,\theta)}h(x,\Theta,z)^{T}\right)\\ &=\mathrm{Cov}\left(\nabla_{(x,\theta)}h(x,\Theta,z)\right)+\mathbb{E}\left(\nabla_{(x,\theta)}h(x,\Theta,z)\right)\mathbb{E}\left(\nabla_{(x,\theta)}h(x,\Theta,z)\right)^{T}.\end{split} \tag{14}\] One proceeds with the eigenvalue decomposition of \(M_{h}(x,z)\) to identify the principal directions influencing the outputs of the model. This influence matrix is characterized by large variations in the gradient and/or large expected gradients. From now on, the influence matrix \(M_{h}\) will constitute our quantitative description of the response surface of the functional \(h\). Given two influence matrices \(M_{1}\) and \(M_{2}\) characterized by different functionals \(h\), different scenarios \(x\), and different sensor parameters \(z\), we define the distance between the two influence matrices as \[d(M_{1},M_{2})=\left\|M_{1}-M_{2}\right\|_{2}, \tag{15}\] where \(\left\|M\right\|_{2}\) denotes the spectral norm of \(M\) (other norms could have been considered as well). The distance essentially measures the error between an influence matrix \(M_{1}\) and a target influence matrix \(M_{2}\). We note that the use of the Active Subspace method presumes enough regularity of the observation and QoI functionals. Having defined a quantification of the response surface of a functional as well as a means for comparison, we turn our attention to the problem at hand, that of defining a validation experiment specifically tailored toward the prediction of the QoI at a given prediction scenario. ### Optimal Choice of the Validation Scenario We describe the set of controlled experiments as the set of scenarios \(\mathcal{X}_{\mathrm{lab}}\subseteq\mathcal{X}\) for which one is able to perform a laboratory experiment to interrogate the system of interest. We recall that the prediction scenario \(x_{\mathrm{pred}}\) may or may not belong to \(\mathcal{X}_{\mathrm{lab}}\). We thus formulate two requirements regarding an optimal validation scenario: 1. If \(x_{\mathrm{pred}}\in\mathcal{X}_{\mathrm{lab}}\), then the validation scenario should be given by \(x_{\mathrm{val}}=x_{\mathrm{pred}}\). 2. If the QoI functional \(h_{\mathrm{qoi}}\) is in fact an observable \(h_{\mathrm{obs}}\), then the observable and the sensor parameters employed for the validation should correspond to those of the QoI. These two requirements will guide the definition of the two optimal design problems that will be set up to specify the goal-oriented validation experiment. The first step in specifying the validation experiment consists in determining the validation scenario \(x_{\mathrm{val}}\). We recall here that the QoI functional \(h_{\mathrm{qoi}}\) is only a function of the control parameters \(x\) and of the model parameters \(\Theta\). Moreover, the model parameters \(\Theta\) are the same for the validation and prediction scenarios, so we have two random quantities of interest \(Q_{\mathrm{val}}=h_{\mathrm{qoi}}(x_{\mathrm{val}},\Theta)\) and \(Q_{\mathrm{pred}}=h_{\mathrm{qoi}}(x_{\mathrm{pred}},\Theta)\). 
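Before stating the optimization problems, it may help to sketch how the influence matrix (14) and the distance (15) could be approximated in practice. The sketch below uses a plain Monte-Carlo average of finite-difference gradients; the functional `h` and the array of \(\Theta\)-samples are placeholders, and an adjoint or automatic differentiation would normally replace the finite differences.

```python
import numpy as np

def fd_gradient(h, x, theta, z=None, eps=1e-6):
    # Central finite-difference gradient of h with respect to (x, theta).
    p = np.concatenate([np.asarray(x, float), np.asarray(theta, float)])
    nx = len(np.asarray(x, float))
    def h_of(pvec):
        xx, tt = pvec[:nx], pvec[nx:]
        return h(xx, tt) if z is None else h(xx, tt, z)
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (h_of(p + dp) - h_of(p - dp)) / (2.0 * eps)
    return g

def influence_matrix(h, x, theta_samples, z=None):
    # Monte-Carlo estimate of M_h(x, z) = E[ grad h  grad h^T ], Eq. (14).
    grads = np.array([fd_gradient(h, x, th, z) for th in theta_samples])
    return grads.T @ grads / len(grads)

def influence_distance(M1, M2):
    # Spectral-norm distance (15) between two influence matrices.
    return np.linalg.norm(M1 - M2, ord=2)
```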
We seek the validation scenario \(x_{\mathrm{val}}\) that minimizes the distance (15), with the constraint that the scenario be realizable in a controlled environment \[x_{\mathrm{val}}\in\operatorname*{argmin}_{x\in\mathcal{X}_{\mathrm{lab}}}d(M_{h_{\mathrm{qoi}}}(x),M_{h_{\mathrm{qoi}}}(x_{\mathrm{pred}})). \tag{16}\] In other words, we look for the validation scenario such that the response surface of \(Q_{\mathrm{val}}\) (encoded by the influence matrix \(M_{h_{\mathrm{qoi}}}(x_{\mathrm{val}})\)) most closely resembles the response surface of \(Q_{\mathrm{pred}}\) at the prediction scenario. This is another way of saying that the various hypotheses and simplifications leading to the model have comparable impact in the validation and predictive settings. We observe that if \(x_{\mathrm{pred}}\in\mathcal{X}_{\mathrm{lab}}\), then a global minimum with zero value is attained at \(x_{\mathrm{val}}=x_{\mathrm{pred}}\), since the objective function in (16) is non-negative. However, the global minimum may not be unique. Indeed, a functional \(h_{\mathrm{qoi}}\) that is linear with respect to the parameters features the same influence matrix \(M_{h_{\mathrm{qoi}}}(x)\) whatever the value of these parameters. Hence, the first requirement stated above is partially fulfilled, since the optimization problem (16) may not possess a unique minimum. ### Optimal Choice of Observables and Sensor Parameters The goal in this section is to define the problem whose solution will provide an observation functional \(h_{\mathrm{obs}}\) and sensor parameters \(z\) in order to fully characterize the optimal validation experiment. We recall that QoIs are not necessarily observable either in the prediction or the validation scenarios. We nevertheless want to validate the model for the prediction of the QoI. With an optimal validation scenario obtained by solving (16), we now need to determine the functional \(h_{\mathrm{obs}}\) and sensor parameters \(z\) that best mimic the influence matrix of \(Q_{\mathrm{val}}=h_{\mathrm{qoi}}(x_{\mathrm{val}},\Theta)\). Again, the rationale is that the influence matrix reflects the impact of the various hypotheses and assumptions of the model. The result of the validation process obtained with this particular functional \(h_{\mathrm{obs}}\) and sensor parameters \(z\) could then be reasonably extrapolated to the QoI. Since the choices of the observation functional and of the sensor parameters \(z\) are constrained by the available experimental capabilities, we introduce here the set \(\mathcal{H}_{\mathrm{lab}}\) of possible observation functionals and the set \(\mathcal{Z}_{\mathrm{lab}}\) of possible sensor parameters. The optimal design problem to be solved is then given by \[(h_{\mathrm{obs,val}},z_{\mathrm{val}})\in\operatorname*{argmin}_{(h_{\mathrm{obs}},z)\in\mathcal{H}_{\mathrm{lab}}\times\mathcal{Z}_{\mathrm{lab}}}d\bigg{(}\frac{M_{h_{\mathrm{obs}}}(x_{\mathrm{val}},z)}{\mathrm{Tr}(M_{h_{\mathrm{obs}}}(x_{\mathrm{val}},z))},\frac{M_{h_{\mathrm{qoi}}}(x_{\mathrm{val}})}{\mathrm{Tr}(M_{h_{\mathrm{qoi}}}(x_{\mathrm{val}}))}\bigg{)}. \tag{17}\] The normalization of the influence matrices for the observation and QoI functionals is required since both functionals may represent different physical quantities with different scaling and units. The validation experiment tailored toward the prediction of the QoI is now fully specified. 
It consists in interrogating the system of interest for the experimental observation corresponding to the observation functional \(h_{\mathrm{obs,val}}\) at the sensor parameters \(z_{\mathrm{val}}\) under the scenario \(x_{\mathrm{val}}\). The whole process to search for the optimal experiment is described in Algorithm 3. ``` Input: Model \((r,p,u)\), quantity of interest \(h_{\mathrm{qoi}}\), prediction scenario \(x_{\mathrm{pred}}\), model parameter \(\Theta\), set of controlled scenarios \(\mathcal{X}_{\mathrm{lab}}\), set of observation functionals \(\mathcal{H}_{\mathrm{lab}}\), set of controlled sensor parameters \(\mathcal{Z}_{\mathrm{lab}}\) 1 Computation of the validation scenario \(x_{\mathrm{val}}\) by solving (16); 2 Computation of the validation observation functional \(h_{\mathrm{obs,val}}\) and validation sensor parameter \(z_{\mathrm{val}}\) by solving (17); 3 Validation scenario \(x_{\mathrm{val}}\), validation observation functional \(h_{\mathrm{obs,val}}\), validation sensor parameters \(z_{\mathrm{val}}\) ``` **Algorithm 3**Optimal Design of Validation Experiment The values of the objective functionals of Problems (16) and (17) both provide useful information regarding the quality of the validation experiment. If both objective functionals are small, then the influence matrix of the observables \(h_{\mathrm{obs}}\) at the validation scenario mimics well the influence matrix of the QoI at the prediction scenario. The validation experiment is thus related to the predictive setting. If the objective functional of problem (16) is large, then one may need to consider expanding the set of controlled scenarios \(\mathcal{X}_{\mathrm{lab}}\). Also, if the objective functional of Problem (17) is large, then one may seek other types of experimental measurements. In case of one or both situations, one must exercise caution in extrapolating the result of the validation process to the prediction scenario. The solution of the optimization problems (16) and (17) is performed with the solver NOMAD [7] and is further detailed in C. ## 4 Numerical Examples We illustrate our methodology on two examples, the first one consisting in the projectile problem introduced in Section 2.1 and the second one consisting in a pollutant transport problem. The first problem is used to illustrate the entire validation process and to emphasize the importance of the design of the validation experiment. In particular, we will provide an example of a poorly chosen validation experiment that leads to a false positive, that is, the model is not deemed invalid when it is in fact invalid. By contrast, we will show that the application of our methodology to design an optimal validation experiment produces a true negative. The second problem will show the importance of the definition of the QoI on the validation experiment. ### The Projectile Problem The first example considers the system of interest described in Section 2.1, where we seek the maximum altitude reached by a spherical projectile when launched vertically. The model is given by the linear ordinary differential equation (2) for which one can analytically compute the value of the QoI as \[q(x,\theta)=\max_{t}u(t)=u_{0}+\frac{mv_{0}}{3\pi\exp(\mu)\ell}-\frac{m^{2}g}{(3 \pi\exp(\mu)\ell)^{2}}\ln\left(1+\frac{3\pi\exp(\mu)\ell v_{0}}{mg}\right). \tag{18}\] We recall that the control parameters are given here by \(x=(m,\ell,u_{0},v_{0})\) while the model parameters are \(\theta=(g,\mu)\). 
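For reference, the closed-form QoI (18) translates directly into code; the argument ordering follows \(x=(m,\ell,u_{0},v_{0})\) and \(\theta=(g,\mu)\), and the printed evaluation is only an illustration at nominal parameter values.

```python
import numpy as np

def q_projectile(x, theta):
    # Maximum altitude (18) of the linear-drag model, with x = (m, l, u0, v0),
    # theta = (g, mu), and drag coefficient c = 3*pi*exp(mu)*l.
    m, l, u0, v0 = x
    g, mu = theta
    c = 3.0 * np.pi * np.exp(mu) * l
    return u0 + m * v0 / c - (m**2 * g / c**2) * np.log(1.0 + c * v0 / (m * g))

# Prediction scenario of Table 2, evaluated at nominal parameter values.
x_pred = (0.05, 0.01, 1.0, 100.0)
theta_nominal = (9.81, -5.0 * np.log(10.0))
print(q_projectile(x_pred, theta_nominal))
```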
Since the proposed QoI is non-linear with respect to some of the control parameters, the associated influence matrix (14) depends on the scenario \(x\). Moreover, we set the model parameters to \(\Theta=(G,U)\sim(\mathrm{N}(9.81,0.01^{2}),\mathrm{N}(-5\ln(10),0.5^{2}))\), where N denotes normal distributions. The values were chosen based on the fact that we know relatively well the value of the gravitational constant \(g\) but not as well the air viscosity \(\mu\). #### 4.1.1 Optimal Validation Scenario To illustrate the possibility that a poorly designed validation experiment may yield a false positive, we need to consider a specific prediction scenario \(x_{\mathrm{pred}}\) and a specific controlled environment \(\mathcal{X}_{\mathrm{lab}}\) that describes the possible validation scenarios \(x_{\mathrm{val}}\). The prediction scenario and a tentative validation scenario, as well as the controlled environment, are described in Table 2. The tentative validation scenario is chosen such that the Reynolds number \(\mathrm{Re}\) taken at the initial time, \[\mathrm{Re}(t)=\frac{\rho\ell}{\exp(\mu)}\bigg{|}\frac{\mathrm{d}u}{\mathrm{ d}t}(t)\bigg{|}, \tag{19}\] where \(\rho\) is the density of the ambient air, matches the one of the prediction scenario \(x_{\mathrm{pred}}\). The rationale echoes the discussion at the beginning of Section 3 and, in this regard, may be considered as a viable validation experiment. However, as shown in Figure 3, the influence matrix of the QoI at \(x_{\mathrm{val,tent}}\) is fairly different from the influence matrix of the QoI at \(x_{\mathrm{pred}}\). The controlled environment \(\mathcal{X}_{\mathrm{lab}}\) consists in the Cartesian product of the intervals for the parameters \(x\), as defined in Table 2. We now solve problem (16) to obtain the optimal validation scenario for this predictive setting and obtain \(x_{\mathrm{val}}=(1.000,0.100,0.705,100.446)\). Figure 3 compares the influence matrices \(M_{h}\) (14) of the QoI (18) for the prediction scenario \(x_{\mathrm{pred}}\), the tentative validation scenario \(x_{\mathrm{val,tent}}\), and the aforementioned optimal validation scenario \(x_{\mathrm{val}}\). We observe that the first eigenvector \(v_{1}\), associated with the largest eigenvalue \(\lambda_{1}\), has significantly changed between \(x_{\mathrm{val,tent}}\) and \(x_{\mathrm{val}}\) in order to resemble more closely the one of the prediction scenario (especially for the \(u_{0}\), \(v_{0}\), and \(g\) components). The second eigenvector also changed notably between the tentative and optimal validation scenarios. Compared to the prediction scenario, the second and third eigenvectors seem to have switched, but this result can be explained by the fact that the second and third eigenvalues of the optimal validation scenario are almost identical. Apart from this, we actually observe a good agreement between the eigenvectors. In other words, the influence matrix of the QoI at \(x_{\mathrm{val}}\) better mimics the influence matrix of the QoI at \(x_{\mathrm{pred}}\) than the tentative validation scenario. 
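The paper solves problem (16) with the NOMAD solver; as a rough stand-in, the sketch below runs a generic box-constrained global optimizer from SciPy over \(\mathcal{X}_{\mathrm{lab}}\), reusing the `influence_matrix`, `influence_distance`, and `q_projectile` helpers sketched earlier. The sample size is deliberately kept small so the example stays cheap, which of course degrades the accuracy of the influence-matrix estimates.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
# Samples of Theta ~ (N(9.81, 0.01^2), N(-5 ln 10, 0.5^2)); only 500 samples
# to keep the finite-difference influence matrices affordable in this sketch.
theta_samples = np.column_stack([rng.normal(9.81, 0.01, 500),
                                 rng.normal(-5.0 * np.log(10.0), 0.5, 500)])

x_pred = np.array([0.05, 0.01, 1.0, 100.0])
M_target = influence_matrix(q_projectile, x_pred, theta_samples)

# Controlled environment X_lab of Table 2 as box constraints.
bounds = [(1.0, 5.0), (0.05, 0.1), (0.0, 2.0), (10.0, 120.0)]

def objective(x):
    # Objective of problem (16): distance between influence matrices.
    M_x = influence_matrix(q_projectile, x, theta_samples)
    return influence_distance(M_x, M_target)

result = differential_evolution(objective, bounds, seed=0, maxiter=50)
print(result.x)  # candidate validation scenario x_val
```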
\begin{table} \begin{tabular}{c c c c c} \hline \hline & **Mass \(m\)** & **Diameter \(\ell\)** & **Initial Position \(u_{0}\)** & **Initial Velocity \(v_{0}\)** \\ \hline **Prediction** & 0.05 & 0.01 & 1 & 100 \\ **Scenario \(x_{\mathrm{pred}}\)** & & & & \\ \hline **Controlled** & [1,5] & [0.05,0.1] & [0,2] & [10,120] \\ **Environment \(\mathcal{X}_{\mathrm{lab}}\)** & & & & \\ \hline **Tentative Validation** & & & & \\ **Scenario \(x_{\mathrm{val,tent}}\)** & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Prediction scenario \(x_{\mathrm{pred}}\), controlled environment \(\mathcal{X}_{\mathrm{lab}}\), and tentative validation scenario \(x_{\mathrm{val,tent}}\) #### 4.1.2 Optimal Sensors and Observation Functional We now search for the best observation functional \(h_{\mathrm{obs}}\) between the functionals \(h_{\mathrm{obs}}(x_{\mathrm{val}},\varTheta,z)\coloneqq u\) and \(h_{\mathrm{obs}}(x_{\mathrm{val}},\varTheta,z)\coloneqq\mathrm{d}^{2}u/\mathrm{d}t^{2}\), as functions of time \(t\), and the best sensor parameters \(z\coloneqq t\) that should be employed in the validation experiment. In order to do so, we evaluate the objective functional of Problem (17) for both observation functionals and for various times \(t\) under the validation scenario \(x_{\mathrm{val}}\) computed previously. The results are shown in Figure 4. As expected, there is a specific time \(t\) for which the influence matrix of \(h_{\mathrm{obs}}(x_{\mathrm{val}},\varTheta,z)\coloneqq u(t)\) is exactly the same as the influence matrix of \(h_{\mathrm{qoi}}(x_{\mathrm{val}},\varTheta)\). This particular time \(t\) corresponds to the instant at which the projectile has reached the maximal altitude (as shown in Figure 5). It is interesting to observe in Figure 4 that the objective functionals for the position \(u\) and acceleration \(\mathrm{d}^{2}u/\mathrm{d}t^{2}\) evolve quite differently in time. On one hand, the influence matrix for the acceleration remains quite different from the influence matrix of the QoI at the validation scenario for all times \(t\). On the other hand, the objective functional (17) for the position \(u\) at small times \(t\) is quite large. This is explained by the fact that the position right after the launch is mainly influenced by the initial position \(u_{0}\) and initial velocity \(v_{0}\), which is not the case for the QoI at the validation scenario, see rightmost plot on bottom row in Figure 3. However, a rapid decrease in the objective functional of Problem (17) for the position indicates that the influence of the initial position and velocity (alongside the other parameters) quickly tends to that of the QoI. For the sake of presenting the whole validation process, and to illustrate the possibility of obtaining a false validation result, we need some experimental observations. We shall use manufactured or synthetic data provided by what we will refer to here as a _fine model_ that serves as a surrogate to the physical system of interest. Indeed, the precise source of experimental observations is not directly relevant here, since they do not play any role in the design of the optimal validation experiment. Figure 3: Eigenvalues (top row) and eigenvectors (bottom row) associated with the influence matrices of the QoI (18) for the prediction scenario \(x_{\mathrm{pred}}\) (left), the tentative validation scenario \(x_{\mathrm{val,tent}}\) (center), and the optimal validation scenario \(x_{\mathrm{val}}\) (right). 
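A sketch of the time scan behind this comparison is given below for the position observable only. It assumes that the linear model (2) has the Stokes-drag form \(m\,\mathrm{d}^{2}u/\mathrm{d}t^{2}+3\pi\exp(\mu)\ell\,\mathrm{d}u/\mathrm{d}t=-mg\) (the \(c_{D}=24/\mathrm{Re}\) limit of the fine model introduced next), takes the optimal validation scenario reported above, and reuses the helpers and `theta_samples` from the previous sketches; the sensor-time grid and sample sizes are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def position(x, theta, t):
    # Position u(t) of the assumed linear-drag model, integrated numerically.
    m, l, u0, v0 = x
    g, mu = theta
    c = 3.0 * np.pi * np.exp(mu) * l
    sol = solve_ivp(lambda s, y: [y[1], -g - (c / m) * y[1]],
                    (0.0, t), [u0, v0], t_eval=[t], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1]

x_val = np.array([1.0, 0.1, 0.705, 100.446])
M_qoi = influence_matrix(q_projectile, x_val, theta_samples)
M_qoi /= np.trace(M_qoi)

for t in np.linspace(0.5, 15.0, 30):
    M_obs = influence_matrix(lambda x, th: position(x, th, t), x_val, theta_samples)
    M_obs /= np.trace(M_obs)
    # Normalized objective of problem (17) as a function of the sensor time t.
    print(t, influence_distance(M_obs, M_qoi))
```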
#### 4.1.3 Manufactured Data The fine model is provided by the following non-linear ODE (43, Section 7.6) and initial conditions \[m\frac{\mathrm{d}^{2}\tilde{u}}{\mathrm{d}t^{2}}(t)+\frac{\rho\pi \ell^{2}c_{D}}{8}\frac{\mathrm{d}\tilde{u}}{\mathrm{d}t}(t)\bigg{|}\frac{ \mathrm{d}\tilde{u}}{\mathrm{d}t}(t)\bigg{|} =-mg,\qquad\text{for }t\in(0,T), \tag{20a}\] \[\tilde{u}(0) =u_{0},\] (20b) \[\frac{\mathrm{d}\tilde{u}}{\mathrm{d}t}(0) =v_{0}. \tag{20c}\] Following [44], the definition of the friction coefficient \(c_{D}\) depends on the Reynolds number \(\mathrm{Re}\) (19) and is defined for laminar regimes, i.e. \(\mathrm{Re}<2\times 10^{5}\), as: \[c_{D}=\frac{24}{\mathrm{Re}}\left(1+0.15\times\mathrm{Re}^{0.681}\right)+\frac {0.407}{1+8710\times\mathrm{Re}^{-1}}. \tag{21}\] In case of very low Reynolds numbers (\(\mathrm{Re}\ll 1\)), the coefficient is well approximated by \(c_{D}=24/\mathrm{Re}\). With this approximation, one actually recovers the linear model (2). The fine model allows us to obtain measurements of the position \(\tilde{u}\) at any time \(t\). Realizations \(y_{\mathrm{exp}}(x,z)\) of these pseudo-experimental observations \(Y_{\mathrm{exp}}(x,z)\) are obtained by sampling \[Y_{\mathrm{exp}}(x,z)\sim\tilde{u}(x,z)\,\mathrm{N}(1,0.05^{2}). \tag{22}\] This choice for the noise is arbitrary and corresponds to a multiplicative Gaussian noise with standard deviation \(5\%\). We suppose here that the discretization errors are precisely controlled while solving the ODE (20). Finally, for this fine model, we fix the values of the gravitational constant, the air viscosity, and the air density to \(g=9.81\), \(\exp(\mu)=1.8\times 10^{-5}\), and \(\rho=1.2\), respectively. #### 4.1.4 Validation Analysis We now compare the trajectories obtained with Model (2) and Model (20) for the prediction scenario \(x_{\mathrm{pred}}\), the tentative validation scenario \(x_{\mathrm{val,tent}}\), and the optimal validation scenario \(x_{\mathrm{val}}\). Moreover, we compute the discrepancy (5) for the three scenarios at their respective optimal sensor parameter \(z_{\mathrm{val}}\), which corresponds to the time the projectile reaches the maximal altitude. The trajectories and distributions \(F_{|E|}\) of the discrepancy \(E\) are shown in Figure 5. It is important to mention that the distribution of the discrepancy \(E\) and the trajectory of the physical system (red curve) cannot in general be accessed for the prediction scenario since experimental data are not available in that case. We can produce those plots here only because the measurements are obtained with the help of the model (20). We observe that our model (2) poorly captures the maximum altitude reached by the projectile at the prediction scenario, since there is a significant difference between the red and blue curves, as shown in the top left plot of Figure 5. The friction forces in the model are in fact underestimated. This mismatch between the model and the system of interest is reflected in the distribution \(F_{|E|}\) of the discrepancy: the probability that the absolute error on the maximal position be less than 80 is close to zero. In other words, one is almost sure that the absolute error on the maximal position will be between 80 and 110. The model for the prediction of the maximal altitude under the prediction scenario \(x_{\text{pred}}\) can thus be considered invalid. Although this information is not available in practice, the validation process should inform us beforehand on the accuracy of the model predictions. 
Under the tentative validation scenario \(x_{\text{val,tent}}\), we observe a good agreement between the position of the projectile for the model and for the system of interest. This can be explained by the fact that the friction force is in this case negligible with respect to the inertial and gravitational forces. Since the only difference between the model and the system of interest is due to the modeling of the friction force, the discrepancy \(E\), computed at the optimal sensor point \(z=t\approx 2\), remains small. Its distribution \(F_{|E|}\) indicates that the observational error is almost surely less than 0.5. Depending on the tolerance \(\varepsilon\) and threshold \(\eta\) on the validation metric, the model would likely have been deemed valid. However, the model is not appropriate for the prediction of the QoI at the prediction scenario, so the validation process using this tentative validation scenario would have given a false positive. In the case of the optimal validation scenario \(x_{\text{val}}\), we observe a significant departure between the model and the system of interest. The optimal validation scenario seems to capture the fact that the friction forces in our model are underestimated. With the optimal sensor point previously computed, the discrepancy indicates that the error on the position is almost surely between 280 and 310. The model would likely have failed the validation test, which better reflects the real predictive capabilities of the model under the prediction scenario. Figure 5: Position \(u\) (top row) and discrepancies (bottom row) for the prediction scenario (left column), the tentative validation scenario (center column), and the optimal validation scenario (right column). The experimental observations are in red and the model observations are in blue. The shaded areas represent the 95% confidence interval for both the experimental and model observations. The absolute values of the discrepancies (5) are computed at the sensor locations indicated by the black dashed lines. We would like to emphasize here that the proposed methodology to design an optimal validation experiment does not guarantee that the outcome of the validation process will always provide one with a correct answer. It only ensures that the influence matrix of the observation at the validation scenario is as comparable as possible to the influence matrix of the QoI at the prediction scenario. In this example, it happens that the conditions of the optimal validation experiment allow the model to capture the relative importance of the friction forces with respect to the other forces, as in the prediction scenario. In other words, since the friction forces are not correctly modeled in the validation setting, one may conclude that it could also be the case for the prediction scenario, which is indeed true here. ### Contaminant Transport Problem The second example consists in a problem of pollutant transport in a fictitious river where we wish to analyze the impact of the design of a new factory upstream. We consider two QoIs, both representing the mean concentration of contaminant in a specific region of the domain. 
The model we wish to validate is a 2D linear steady-state diffusion-advection equation that governs the concentration \(\phi=\phi(z)\) of the contaminant in the river: \[-\nabla\cdot(\exp(k)\nabla\phi)+\nabla\cdot(v\phi)=0,\quad\text{in }\Omega, \tag{23a}\] \[\phi=\phi_{D},\quad\text{on }\Gamma_{\text{west}}, \tag{23b}\] \[-n\cdot\exp(k)\nabla\phi=0,\quad\text{on }\partial\Omega\setminus\Gamma_{\text{west}}, \tag{23c}\] where \(n\) denotes the outward unit normal to the domain boundary, \(\exp(k)\) is the diffusivity coefficient, and \(v\) is the advection velocity. For the sake of simplicity, we consider the velocity field \(v\) to be given and known, so that it is not considered a parameter in this study (see A). The control parameters \(x\) consist in the parameters that characterize the Dirichlet condition \(\phi_{D}\), to be described later, and the model parameter is only \(\theta=k\). We suppose here that the diffusivity parameter \(k\) is well-known so that \(\Theta\sim\delta(k_{0})\), with \(\delta\) the Dirac measure and \(k_{0}=-2\ln(10)\). The sensor parameters are the Cartesian coordinates \(z=(z_{1},z_{2})\). A sketch of the geometry is provided in Figure 6. Figure 6: Domain \(\Omega\) and its boundaries for the contaminant transport model. The regions for which we seek the mean pollutant concentration are indicated by \(\Omega_{1}\) and \(\Omega_{2}\) (respectively employed for the definition of the QoIs). The grid is used to indicate the regions \(\Omega_{\text{obs}}\) in which the mean concentration of pollutant can be observed. The domain consists in an inlet and an outlet on the west and east boundaries, respectively. Two docks hinder the flow around the first region of interest, denoted by \(\Omega_{1}\). The second region of interest \(\Omega_{2}\) is shown in Figure 6. We note that the particular dimensions of the domain \(\Omega\) and the particular value of the model parameter \(k\) are arbitrarily chosen, as the example is for illustrative purposes only. To validate the model, we imagine one can perform an experiment consisting in injecting a small and controlled amount of contaminant upstream. The Dirichlet condition \(\phi_{D}\) is therefore used to describe both the pollutant release from the factory and the validation experiment. We parametrize \(\phi_{D}\) in terms of the mollifier centered at \(z_{2}=z_{0}\), of length \(L\) and intensity \(c\), as follows: \[\phi_{D}(z_{0},L,c,z_{2})=\begin{cases}c\exp\Big{(}\big{(}\big{|}\frac{z_{2}-z_{0}}{L}\big{|}-1\big{)}^{-1}\Big{)},&\text{if }\big{|}\frac{z_{2}-z_{0}}{L}\big{|}<1,\\ 0,&\text{otherwise}.\end{cases} \tag{24}\] The control parameters in this application are given by \(x=(z_{0},L,c)\). This parametrization is sufficiently rich to describe both the pollutant release from the factory and a wide range of validation scenarios, while involving a limited number of parameters. Moreover, the use of the above mollifier is attractive thanks to its regularity properties, since we need to compute the gradient of functionals with respect to the control parameter \(x\) (as explained in B). We can now specify both the prediction scenario \(x_{\text{pred}}\) and the controlled environment \(\mathcal{X}_{\text{lab}}\). The prediction scenario consists in a description of the possible pollutant profile from the factory to be built upstream and is provided in Table 3. The controlled environment \(\mathcal{X}_{\text{lab}}\) includes all validation scenarios that can actually be performed. 
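The mollifier profile (24) is straightforward to implement; a small sketch is given below, where the sampled range of \(z_{2}\) along the west boundary is an arbitrary, illustrative choice since the actual dimensions of \(\Omega\) are not essential here.

```python
import numpy as np

def phi_D(z0, L, c, z2):
    # Mollifier-shaped Dirichlet profile (24): a smooth bump of intensity c,
    # centered at z2 = z0 and supported on |z2 - z0| < L.
    s = np.atleast_1d(np.abs((np.asarray(z2, dtype=float) - z0) / L))
    out = np.zeros_like(s)
    inside = s < 1.0
    out[inside] = c * np.exp(1.0 / (s[inside] - 1.0))
    return out

# Prediction-scenario profile of Table 3, sampled along an assumed west boundary.
z2 = np.linspace(0.0, 2.0, 201)
profile = phi_D(z0=0.75, L=0.6, c=2.0, z2=z2)
```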
We suppose here that the pollutant can be injected in a controlled manner anywhere along the boundary \(\varGamma_{\text{west}}\), while the length \(L\) and intensity \(c\) can only take limited values. The intervals for the three control parameters \(z_{0}\), \(L\), and \(c\) are also provided in Table 3. We also consider the two QoIs given by \[h_{\text{qoi},i}(x,\theta)=\frac{1}{|\varOmega_{i}|}\int_{\varOmega_{i}}\phi(x,\theta,z)\,\mathrm{d}z,\qquad\text{for }i=1,2,\] where \(\varOmega_{i}\), \(i=1,2\), are the two sub-regions shown in Figure 6. #### 4.2.1 Optimal Validation Scenarios Now that the prediction scenario \(x_{\text{pred}}\), quantities of interest \(h_{\text{qoi},i}\), and controlled environment \(\mathcal{X}_{\text{lab}}\) are prescribed, we can compute the optimal validation scenarios with respect to the two QoIs. We thus solve Problem (16) for each QoI. The concentration of the pollutant for the prediction scenario \(x_{\text{pred}}\) and the optimal validation scenarios with respect to the first and second QoI are all shown in Figure 7. We observe that the optimal validation scenarios are highly sensitive to the particular QoI we wish to predict since the location \(z_{0}\) of the pollutant varies considerably between the two QoIs. This result is important as it highlights the fact that it is the combination of the prediction scenario and the QoI one wants to predict that should dictate the validation scenario. From the prediction scenario, we observe that the pollutant enters the domain \(\varOmega_{1}\) associated with the first QoI, while very little pollutant reaches the second domain \(\varOmega_{2}\) for the second QoI. We reiterate here that the objective is not to find the validation scenario \(x_{\text{val}}\) such that \(h_{\text{qoi}}(x_{\text{val}},\varTheta)\approx h_{\text{qoi}}(x_{\text{pred}},\varTheta)\), but the scenario such that \(M_{h_{\text{qoi}}}(x_{\text{val}})\approx M_{h_{\text{qoi}}}(x_{\text{pred}})\). The influence matrices \(M_{h_{\text{qoi}}}(x_{\text{val}})\) and \(M_{h_{\text{qoi}}}(x_{\text{pred}})\) are characterized by their eigenvalues and eigenvectors shown in Figure 8. The absence of uncertainty in the model parameter \(\theta=k\) implies that the influence matrices \(M_{h_{\text{qoi}}}\) should be of rank one, since the covariance in Equation (14) is null in this case. This is clearly reflected in the numerical results, where the first eigenvalue of each influence matrix is several orders of magnitude higher than the others. \begin{table} \begin{tabular}{c c c c} \hline \hline & **Position \(z_{0}\)** & **Length \(L\)** & **Intensity \(c\)** \\ \hline **Prediction** & 0.75 & 0.6 & 2 \\ **Scenario \(x_{\text{pred}}\)** & & & \\ \hline **Controlled** & & & \\ **Environment \(\mathcal{X}_{\text{lab}}\)** & & & \\ \hline \hline \end{tabular} \end{table} Table 3: Description of the prediction scenario \(x_{\text{pred}}\) and of the controlled environment \(\mathcal{X}_{\text{lab}}\) for the pollutant transport model. Concerning the first QoI, we observe that the magnitudes of the eigenvalues are slightly smaller for the validation than those for the prediction. This can be explained by the fact that the overall magnitude of the pollutant is smaller in the validation setting because of the constraint on the controlled environment \(\mathcal{X}_{\text{lab}}\). We also observe that the eigenvector for the optimal validation scenario tends to match as closely as possible the eigenvector of the prediction scenario. 
One can explain this difference by remarking that the size \(L\) of the region in which the pollutant is injected should have more influence in the validation scenario since the quantity of injected pollutant is smaller than for the prediction scenario. The optimal validation scenario actually succeeds in capturing the fact that the intensity \(c\) of the source of pollutant and the diffusivity \(k\) are not very influential in determining the mean pollutant concentration in \(\Omega_{1}\). Concerning the second QoI, we observe a very good agreement between the eigenvalues of the influence matrix at the validation scenario and those at the prediction scenario. As with the first QoI, the eigenvector for the validation scenario tends to match as closely as possible the eigenvector for the prediction scenario. #### 4.2.2 Positioning of the Sensors Once the optimal validation scenario has been computed for each quantity of interest considered, we tackle the last step of our methodology, namely the optimal positioning of a sensor. Figure 7: Concentration of the pollutant for the prediction and optimal validation scenarios: (top) prediction scenario, (center) optimal validation scenario associated with \(h_{\text{qoi},1}\), (bottom) optimal validation scenario associated with \(h_{\text{qoi},2}\). For the pollutant transport problem, we consider only one type of observation functional \(h_{\rm obs}\) \[h_{\rm obs}(x,\theta,z)=\frac{1}{|\Omega_{\rm obs}(z)|}\int_{\Omega_{\rm obs}(z)}\phi(x,\theta,w)\,\mathrm{d}w, \tag{25}\] where \(\Omega_{\rm obs}(z)\) represents a square region centered at \(z\) of length and width equal to \(0.1\). The objective is now to find the location \(z\) of the square region \(\Omega_{\rm obs}(z)\) inside the domain \(\Omega\) such that the influence matrix of \(h_{\rm obs}\) matches as closely as possible the influence matrix of the QoIs at the validation scenario. Instead of solving directly the optimization problem (17), we plot the objective function in (17) for various positions of the square region inside \(\Omega\). We actually consider for the square regions the grid cells shown in Figure 6. Figure 9 presents the results for both QoIs under their respective optimal validation scenario. The best positions for the sensor are indicated in this figure by the squares in which the value of the objective function is the closest to zero. For the first QoI, we clearly note that it is best to position the sensor within \(\Omega_{1}\), which is an obvious result since we ultimately seek to predict the mean concentration of pollutant in \(\Omega_{1}\). The optimal region of observation seems to extend to the region where recirculation occurs, as shown in Figure 10. To some extent, it also spreads upstream above the first dock and downstream right above the second dock. Moreover, we observe that positioning the sensor in the upper-half of the domain would be useless for the prediction of the first QoI. This result is in fact quite intuitive since, under the optimal validation scenario shown in Figure 7, very little pollutant flows into those regions while a certain amount of pollutant indeed reaches \(\Omega_{1}\). Finally, the influence matrices associated with the mean concentration in the regions located after the second dock share some similarities with the influence matrix \(M_{h_{\rm qoi,1}}(x_{\rm val})\), but not as much as the region between the two docks. 
We also remark that the fluctuations in the objective function in the plume located downstream could be explained by the accumulation of numerical errors resulting from the computation of the gradient (see B). For the second QoI, it is again optimal to position the sensor inside the region of interest, \(\Omega_{2}\) in this case. The pollutant sensor can also be placed optimally right upstream and right downstream of \(\Omega_{2}\). Interestingly, it does not seem optimal to put the sensor right inside the plume of the pollutant. One should rather probe the boundary region of the plume next to the QoI since variations in the concentration are larger there and the measurements would thus be more sensitive to the location of the pollutant release. Figure 8: Eigenvalues (top row) and eigenvectors (bottom row) associated with the influence matrices for the prediction scenario (black circles) and for the validation scenario (red squares). The left and right columns display the sensitivity indices related to the first and second QoIs, respectively. For the sake of clarity, only the first (and only relevant) eigenvector is displayed for each influence matrix. Figure 9: Value of the objective function in (17) for the first QoI (top) and second QoI (bottom). ## 5 Conclusion We have addressed in this paper the issue of designing optimal validation experiments tailored toward the prediction of a QoI. To the best of our knowledge, this problem has been rarely considered in the literature. We thus provide a careful description of a mathematical model, the role of the various input parameters, and the objectives when performing predictions using the model. We have also carried out a consistent treatment of the aleatory uncertainty affecting the model parameters, the experimental and computed observations, and the QoI. Within this framework, we have introduced the influence matrix, computed using the Active Subspace method, as a means to provide a quantitative description of the response surfaces associated with the various model functionals. We have then proposed a methodology to design an optimal validation scenario and identify optimal observations based on the comparison of influence matrices. The methodology essentially consists in the solution of two optimization problems: 1) the first optimal design problem (16) allows the computation of the optimal validation scenario; 2) the second optimal design problem (17) finds the best measurements to be performed on the system of interest. The methodology was tested on two numerical examples. The results for a simple projectile problem demonstrated that the optimal validation scenario was able to recover the essential features of the influence matrix associated with the QoI at the prediction scenario. The influence matrices obtained with two different observable functionals were compared over a given time period and allowed us to identify an optimal observation functional and sensor parameter. The results for the pollutant transport problem highlighted the fact that the optimal validation experiment does indeed depend on the QoI we wish to predict. They also confirmed that, in the case of a non-observable QoI, one can still identify observation functionals sharing similar influence matrices. This numerical example also shows that the proposed methodology can be viably used for applications of engineering interest. Several hypotheses and simplifications have been made throughout this work, limiting to some extent the scope of the predictive problem. 
Alleviating the hypothesis that the control parameters and sensor parameters are not uncertain could lead to more robust validation experiments, especially when these control parameters consist of boundary and/or initial conditions. Also, the use of the Active Subspace method to define the influence matrix necessitates some regularity on the observation and QoI functionals. For models lacking regularity, one could envision the use of the _variogram analysis of response surfaces_[31] as a means to compute the influence matrix. On another note, the study of the projectile problem hints that a sole dimensional analysis is insufficient to select an optimal validation experiment. However, transforming the parameter space with a dimensional analysis may yield a different representation of the influence matrix of a model functional. This new representation may provide additional information about the model. For example, we may be able to identify the important dimensionless quantities that guide the design of validation experiments. This particular question is the subject of ongoing research. Moreover, in addition to employing the sensitivity analysis to design validation experiments, it could be used for the analysis of existing data sets. By describing the influence matrix of a model under a data set, one could for instance find specific data within this set that could be utilized for validation purposes. Adversarial validation scenarios aimed at testing a wide range of prediction settings may also be found. In the same vein, it could be insightful to apply the influence matrix to cross-validation approaches. Finally, methods to quantify the model errors rely on some sort of comparison between the reality and the model observables. It could be interesting to analyze the impact of the choice of the experiment on the modeling error for the QoIs at the prediction scenario. The proposed methodology could perhaps better inform the modeling error, which we will investigate in a future research. ## Acknowledgements APR is grateful for the financial support of the _Fonds de recherche du Quebec - Nature et technologies_. SP and ML are grateful for the support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants [grant numbers RGPIN-2019-7154, RGPIN-2018-06592].
2303.09227
MROS: A framework for robot self-adaptation
Self-adaptation can be used in robotics to increase system robustness and reliability. This work describes the Metacontrol method for self-adaptation in robotics. Particularly, it details how the MROS (Metacontrol for ROS Systems) framework implements and packages Metacontrol, and it demonstrates how MROS can be applied in a navigation scenario where a mobile robot navigates on a factory floor. Video: https://www.youtube.com/watch?v=ISe9aMskJuE
Gustavo Rezende Silva, Darko Bozhinoski, Mario Garzon Oviedo, Mariano Ramírez Montero, Nadia Hammoudeh Garcia, Harshavardhan Deshpande, Andrzej Wasowski, Carlos Hernandez Corbato
2023-03-16T11:05:25Z
http://arxiv.org/abs/2303.09227v1
# MROS: A framework for robot self-adaptation

###### Abstract.

Self-adaptation can be used in robotics to increase system robustness and reliability. This work describes the Metacontrol method for self-adaptation in robotics. Particularly, it details how the MROS (Metacontrol for ROS Systems) framework implements and packages Metacontrol, and it demonstrates how MROS can be applied in a navigation scenario where a mobile robot navigates on a factory floor. Video: [https://www.youtube.com/watch?v=ISe9aMskJuE](https://www.youtube.com/watch?v=ISe9aMskJuE)

Self-adaptive systems, Self-adaptation, Metacontrol, MROS, Robotics
Metacontrol structures self-adaptation according to the MAPE-K feedback loop (Figure 1): the Monitor step keeps the Knowledge Base (KB) up to date with the state of the Managed system, and the Analyze and Plan steps use it to determine when adaptation is needed and to select a new configuration for the Managed system that satisfies its requirements. The Execute step is responsible for reconfiguring the Managed system. The KB is the main difference between Metacontrol and other self-adaptation frameworks. It consists of a runtime model based on the TOMASys metamodel. TOMASys contains concepts to describe the functional and physical architecture of a system, and its variants, both at design time and at runtime. For the robotic developer using MROS, only TOMASys design-time concepts are needed to create the Metacontrol KB, while the runtime elements are used by an automatic reasoner in the Metacontroller. A more detailed description of TOMASys can be found in (Bartos et al., 2016; Bartos et al., 2016). The _Function_ element represents an abstract functionality of the system, such as navigating from point A to B. A _Function Design_ is an engineering design solution that solves a specific _Function_ with an expected performance. _Quality Attribute Type_ and _Quality Attribute Value_ are used to capture the systems engineering meaning of QAs. A _Quality Attribute Type_ represents a characteristic of the system that shall be observed, such as energy. A _Quality Attribute Value_ represents an amount of a _Quality Attribute Type_, e.g., 1 Joule.

## 3. MROS Framework

MROS is a ROS-based implementation of Metacontrol that enables architectural self-adaptation in ROS-based robotic systems. MROS monitors the state of the ROS system and updates its KB according to it, then it uses ontological reasoning to Analyze and Plan the required adaptations, which are then executed by reconfiguring the ROS node graph. This section first describes how the MROS library implements the MAPE-K loop and the Metacontrol KB, and then it presents the methodology that robot developers can follow to use it to implement self-adaptation in ROS 1 applications (a beta version of MROS supporting ROS 2 already exists, but it requires a different method).
### MROS library

MROS follows the MAPE-K model for runtime adaptation, as shown in Figure 3.

**Monitor:** This component is realized with the ROS node _rosgraph_monitor_; it provides system observer templates. The observer nodes operate at a fixed frequency and publish the monitored values to the Metacontrol reasoner using the standard ROS diagnostic mechanism (i.e., a specific channel and data structure in ROS systems). This means they easily integrate into an existing ROS system and might use or provide services already needed in the system, regardless of the presence of a Metacontroller.

**Knowledge Base:** The Knowledge Base consists of **(1)** the TOMASys metamodel that allows modeling the functional architecture of autonomous systems; **(2)** a TOMASys model of the Managed subsystem that defines its unique _Functions_, _Function Designs_ and its expected _Quality Attributes_, in other words, all possible configurations for the Managed subsystem. The KB is implemented as an ontology using the Ontology Web Language (OWL) in combination with the Semantic Web Rule Language (SWRL).

**Analyze and Plan:** These components are realized with the ROS node _mros1_reasoner_, which integrates the OWL ontology, i.e. the KB, with ROS and reasons over it to decide when and how to adapt the Managed subsystem. It consists of a ROS node implemented in Python that makes use of the library OwlReady2 to bridge the ontology with Python and ROS. The reasoning is performed with the off-the-shelf ontological reasoner Pellet.

**Execute:** This step is realized with the ROS node _mc_rosgraph_manipulator_, a ROS node responsible for killing and starting ROS nodes and changing the necessary ROS parameters to fulfill the desired configuration.

### MROS methodology

To use MROS to add self-adaptation to a ROS system, the activities in Figure 4 may be followed.

**Step 1:** Define the possible adaptations, i.e. the set of architectural variants relevant for the robot to be able to perform a mission. For each architectural variant, the developer creates a ROS launch file for the automatic deployment of the corresponding configuration of ROS nodes.

**Step 2:** Create the KB by modeling the ROS system architectural variants with TOMASys. MROS provides an implementation of TOMASys in OWL; the developer needs to create the individuals for the application's _Functions_, _Function Designs_ and its expected _Quality Attribute Values_ and _Quality Attribute Types_.

**Step 3:** Create observers using the templates provided in the ROS package _rosgraph_monitor_ to monitor the status of the active ROS nodes and to measure the relevant _Quality Attributes_.

**Step 4:** Configure the _mros1_reasoner_ and _mc_rosgraph_manipulator_ nodes by linking the system architectural variants (step 1) with TOMASys _Function Designs_ (step 2), and connecting them to application-specific reconfiguration actions (e.g. to store required node states). This is done by simply editing a template Metacontroller configuration file in YAML.

## 4. Case Study

This section describes how MROS can be set up for a robot navigating on a factory floor, following the MROS methodology described in Section 3.2. This case study comes from the previous work of Bozhinoski et al. (2018).

Figure 1. Metacontrol

Figure 2. TOMASys
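Before turning to the concrete case study, the following is a minimal sketch, not taken from the MROS repository, of the publishing side of the Monitor step described above: an observer node that reports a quality-attribute value on the standard ROS 1 /diagnostics channel. The node name, rate, and the fixed "energy" value are illustrative assumptions; in MROS this plumbing is provided by the rosgraph_monitor templates.

```python
#!/usr/bin/env python
# Minimal sketch of a quality-attribute observer publishing on /diagnostics,
# in the spirit of the rosgraph_monitor observers described above.
# Node name, rate, and the dummy "energy" value are illustrative assumptions.
import rospy
from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus, KeyValue

def main():
    rospy.init_node("example_qa_observer")
    pub = rospy.Publisher("/diagnostics", DiagnosticArray, queue_size=1)
    rate = rospy.Rate(1.0)  # observers run at a fixed frequency
    while not rospy.is_shutdown():
        status = DiagnosticStatus(level=DiagnosticStatus.OK,
                                  name="example_qa_observer",
                                  message="QA status")
        # Publish the measured quality attribute as a key-value pair,
        # e.g. a normalized energy level.
        status.values.append(KeyValue(key="energy", value=str(0.9)))
        msg = DiagnosticArray()
        msg.header.stamp = rospy.Time.now()
        msg.status.append(status)
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```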
The experiment code can be found at [https://github.com/rosin-project/metacontrol_sim](https://github.com/rosin-project/metacontrol_sim).

_Case study description._ The case study consists of a Clearpath Ridgeback mobile robot that navigates on a factory floor. The robot is equipped with two laser sensors, one IMU, and an odometry system. The navigation system is realized with the ROS1 navigation stack. As the robot navigates, unexpected obstacles may appear in its path and it may get closer to or further from objects, which causes its safety quality attribute level to change. When safety levels are high (lower risk of collision), the robot can navigate with higher speed and acceleration. When safety levels are low (higher risk of collision), it needs to use lower speeds. In addition, throughout the mission, the battery level diminishes, and with this the robot must navigate with lower speed and acceleration to save energy. MROS is used to adapt at runtime the navigation parameters, such as maximum speed and acceleration, to satisfy safety and energy constraints.

### Application of MROS to the case study

**Step 1:** Identify the architectural variants corresponding to different configurations (parameter values) of the main node of the ROS1 navigation stack, and create the corresponding launch files. In total, 27 different function designs were defined. A snippet of the ROS launch file of one _Function Design_ is shown in listing 1. The parameters that have been specifically defined for this _Function Design_ are: _max_vel_x_, _max_vel_y_, _acc_lim_x_, _acc_lim_y_, _qa_safety_, and _qa_energy_. The QA values are specified as ROS parameters to enable changing them at runtime if necessary. The QA values defined in the launch files override the ones defined in the ontology.

```
<launch>
  <param name="qa_safety" value="0.7"/>
  <param name="qa_energy" value="0.3"/>
  <node pkg="move_base" type="move_base" name="move_base"
        respawn="false" output="screen">
    <param name="TrajectoryPlannerROS/max_vel_x" value="0.3"/>
    <param name="TrajectoryPlannerROS/max_vel_y" value="0.3"/>
    <param name="TrajectoryPlannerROS/acc_lim_x" value="3.6"/>
    <param name="TrajectoryPlannerROS/acc_lim_y" value="3.6"/>
  </node>
</launch>
```

**Step 2:** Model the architectural variants with the MROS TOMASys ontology. To create the OWL file for the KB, the graphical tool Protege can be used to simplify the process. The TOMASys metamodel is available in the package mc_mdl_tomasys. It is only necessary to set up the application-specific ontology by creating individuals of the design-time TOMASys classes (Figure 2). For this case study, the following individuals are created:

* A _Function_ individual for the navigation capability;
* _Quality Attribute Type_ individuals for both safety and energy quality attributes;
* _Function Design_ individuals for each variant that solves navigation, including object property individuals of _Quality Attribute Value_ with their _Quality Attribute Type_ and a data field with the expected QA value.

One way to script this step rather than performing it interactively is sketched below, after Step 3.

**Step 3:** Create observers with the templates provided in the ROS package rosgraph_monitor. These templates are two Python classes called _TopicObserver_ and _ServiceObserver_ that implement the general functionalities needed to monitor ROS topics and services, respectively. For each specific quality attribute that needs to be monitored, it is necessary to implement a new class that inherits from one of them.

Figure 3. MROS framework

Figure 4. MROS Design time activities
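Relating to Step 2 above, the KB individuals can also be created programmatically with the OwlReady2 library that the MROS reasoner itself relies on. The following is a hedged sketch only: the file names, ontology IRI, and the TOMASys class and property identifiers (Function, QualityAttributeType, FunctionDesign, solvesF, isQAtype, hasValue, hasQAestimation) are illustrative assumptions, since the actual identifiers are defined in the mc_mdl_tomasys package.

```python
# Hedged sketch of creating application-specific KB individuals with OwlReady2.
# File names and the TOMASys class/property names below are assumptions for
# illustration; the real identifiers come from the mc_mdl_tomasys ontology.
from owlready2 import get_ontology

tomasys = get_ontology("file://tomasys.owl").load()        # TOMASys metamodel (assumed path)
kb = get_ontology("http://example.org/navigation_kb#")     # application-specific KB (assumed IRI)
kb.imported_ontologies.append(tomasys)

with kb:
    f_navigate = tomasys.Function("f_navigate")             # the navigation Function
    qa_safety = tomasys.QualityAttributeType("qa_safety")   # safety QA type
    qa_energy = tomasys.QualityAttributeType("qa_energy")   # energy QA type

    fd = tomasys.FunctionDesign("f_v1_r1")                  # one of the 27 variants
    fd.solvesF = [f_navigate]                                # assumed object property name
    qa_val = tomasys.QAvalue("f_v1_r1_energy")               # expected QA value of this design
    qa_val.isQAtype = [qa_energy]
    qa_val.hasValue = [0.3]
    fd.hasQAestimation = [qa_val]

kb.save(file="navigation_kb.owl")
```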
For the use case, two Observers are implemented: _SafetyQualityObserver_ and _EnergyQualityObserver_. A snippet of the implementation of the latter can be seen in listing 2. The class _EnergyQualityObserver_ inherits from _TopicObserver_. In the initialization of the class, the topic to which the observer needs to subscribe and its message type are defined. In line 4, it is defined that it needs to subscribe to the topic _/power_load_ to retrieve information about the battery, and that its message type is a float. For all observers, the method _calculate_attribute_ must be overloaded. It is responsible for performing any necessary calculation with the data received via the topic defined in the initialization, and it must return the final data as a key-value pair structured as a _diagnostic_msgs/DiagnosticArray_ message. In the _EnergyQualityObserver_, in line 9, the method calculates the battery level as a normalized value and returns it, for example, as \(\{energy,0.9\}\). The output of the observers is published in the topic _diagnostics_.

```
1  class EnergyQualityObserver(TopicObserver):
2      def __init__(self, name):
3          # topic to observe and its message type
4          topics = [("/power_load", Float32)]
5          super(EnergyQualityObserver, self).__init__(name, 10, topics)
6
7      def calculate_attr(self, msgs):
8          # normalized calculation for the energy level
9          attr = (msgs[0].data - 0.2) / (5.8 - 0.2)
10         print("normalized energy: {}".format(attr))
11
12         status_msg = DiagnosticStatus()
13         status_msg.level = DiagnosticStatus.OK
14         status_msg.name = self._id
15         status_msg.values.append(KeyValue("energy", str(attr)))
16         status_msg.message = "QA status"
17         return status_msg
```

**Step 4:** Configure Metacontrol through a YAML file. This file is used to map each _Function Design_ defined in the ontology (step 2) to its respective launch file (step 1). Additionally, it indicates which ROS nodes are killed and spawned during the reconfiguration process, as well as what actions and goals must be saved to be reset when the reconfiguration is performed. A snippet of the configuration file of this use case can be seen in listing 3.

```
reconfiguration_action_name: 'rosgraph_manipulator_action_server'
configurations:
  f_v1_r1:
    command: roslaunch f_v1_r1 f_v1_r1.launch
  f_v1_r2:
    command: roslaunch f_v1_r2 f_v1_r2.launch
  f_v1_r3:
    command: roslaunch f_v1_r3 f_v1_r3.launch
kill_nodes: ['move_base']
save_action: 'move_base'
goal_msg_type: move_base_msgs.MoveBaseAction
```

### Results

Bozhinoski et al. (2018) show that by adding self-adaptation with MROS to this use case, the robot's performance improves with respect to the amount of time it violates its required safety and energy quality attributes, and with respect to the overall mission success. On average, safety violations decrease from 2.5% to 0.96%, energy violations from 2.98% to 1.86%, and mission success increases from 65.20% to 78.50%.

## 5. Related Work

Aldrich et al. leverage predictive data models to enable automated robot adaptation to changes in the environment at run-time (Bozhinoski et al., 2018). While the approach depicts the benefits of using models by capturing high-level artifacts, it makes it extremely challenging for a ROS developer to make use of them in robotic scenarios because: (1) it does not introduce models that can be reused for a different application; (2) it does not give insights on how to build similar models; (3) it does not provide infrastructure to leverage those models. Cheng et al.
propose a framework that uses GSN assurance case models to manage run-time adaptations for ROS systems (Cheng et al., 2018). The framework integrates assurance information from GSN models to ROS specific information to guide runtime monitoring and adaptation. It uses custom-developed libraries specific to the approach, rather than standard libraries in ROS (such as ROS Diagnostics) raising the entry barrier for ROS developers to effectively use it. ## 6. Discussion and Future Works This paper describes MROS - a tool that enables robots to perform self-adaptation at runtime based on ontological reasoning. MROS establishes generic self-adaptation mechanisms that drive self-adaptation through the MAPE-K reference feedback loop. This eases the process of designing self-adaptation for robots since it only requires users to define the proper observers, the Managed system ontological model conforming to TOMASys, a few configuration files, and the launch files for each architectural variant. Due to its reusability and extensibility, MROS has been used to handle different adaptation concerns in different robotic applications, such as reliable propulsion and motion control in underwater robots (Bozhinoski et al., 2018), contingency handling in mobile manipulators (Bozhinoski et al., 2018), and enhanced safety and energy saving in the navigation of a mobile robot (Bozhinoski et al., 2018). #### Acknowledgments. This work was supported by the European Union's Horizon 2020 Framework Programme through the MSCA network REMARO (Grant Agreement No 956200), the grant RobMoS-ITP-MROS (Grant Agreement No. 732410) and the ROSIN project (Grant Agreement No. 732287).
2308.10404
Fractal Sumset Properties
In this paper we introduce two notions of fractal sumset properties. A compact set $K\subset\mathbb{R}^d$ is said to have the Hausdorff sumset property (HSP) if for any $\ell\in\mathbb{N}_{\ge 2}$ there exist compact sets $K_1, K_2,\ldots, K_\ell$ such that $K_1+K_2+\cdots+K_\ell\subset K$ and $\dim_H K_i=\dim_H K$ for all $1\le i\le \ell$. Analogously, if we replace the Hausdorff dimension by the packing dimension in the definition of HSP, then the compact set $K\subset\mathbb{R}^d$ is said to have the packing sumset property (PSP). We show that the HSP fails for certain homogeneous self-similar sets satisfying the strong separation condition, while the PSP holds for all homogeneous self-similar sets in $\mathbb{R}^d$.
Derong Kong, Zhiqiang Wang
2023-08-21T00:44:48Z
http://arxiv.org/abs/2308.10404v3
# Fractal sumset properties

###### Abstract.

In this paper we introduce two notions of fractal sumset properties. A set \(K\subset\mathbb{R}^{d}\) is said to have the _Hausdorff sumset property_ (HSP) if for any \(\ell\in\mathbb{N}_{\geq 2}\) there exist sets \(K_{1},K_{2},\ldots,K_{\ell}\) such that \(K_{1}+K_{2}+\cdots+K_{\ell}\subset K\) and \(\dim_{H}K_{i}=\dim_{H}K\) for all \(1\leq i\leq\ell\). Analogously, if we replace the Hausdorff dimension by the packing dimension in the definition of HSP, then the set \(K\subset\mathbb{R}^{d}\) is said to have the _packing sumset property_ (PSP). We show that the HSP and the PSP hold for a large class of sets in \(\mathbb{R}^{d}\) with any given dimension \(\alpha\in(0,d)\). On the other hand, the HSP fails for certain homogeneous self-similar sets satisfying the strong separation condition, while the PSP holds for all homogeneous self-similar sets in \(\mathbb{R}^{d}\).

Key words and phrases: Sumset, Hausdorff dimension, packing dimension, HSP, PSP. 2020 Mathematics Subject Classification: Primary: 28A80; Secondary: 11B13, 28A78.

## 1. Introduction

Let \(\mathbb{N}\) be the set of natural numbers. The famous Erdos sumset conjecture states that if a set \(A\subset\mathbb{N}\) has positive upper Banach density, then there exist two infinite sets \(B,C\subset\mathbb{N}\) such that \(A\) contains the _sumset_ \(B+C:=\{b+c:b\in B,c\in C\}\) (see [2]). This conjecture was fully proven by Moreira et al. [16] in a more general setting including countable amenable groups. A short proof of this conjecture was later given by Host [9]. Recently, Kra et al. [13] extended this sumset result and showed that if \(A\subset\mathbb{N}\) has positive upper Banach density, then for any \(\ell\in\mathbb{N}_{\geq 2}\) there exist infinite sets \(B_{1},B_{2},\ldots,B_{\ell}\subset\mathbb{N}\) such that \(A\) contains the sumset \(B_{1}+B_{2}+\cdots+B_{\ell}:=\{b_{1}+b_{2}+\cdots+b_{\ell}:b_{i}\in B_{i},\ 1\leq i\leq\ell\}\), where \(\mathbb{N}_{\geq n}:=\{\ell\in\mathbb{N}:\ell\geq n\}\) for any \(n\in\mathbb{N}\).

In the literature there is a great interest in the study of sumsets such as iterated sumsets \(nE:=\underbrace{E+\cdots+E}_{n}\) and inhomogeneous sumsets \(E_{1}+E_{2}+\cdots+E_{n}\). The study of sumsets is closely related to the famous Marstrand's Projection Theorem (cf. [15]). Fraser et al. [8] showed that if \(E\subset\mathbb{R}\) is a closed set with positive lower dimension (see [7] for its definition) then \(\dim_{H}nE\) tends to \(1\) as \(n\to\infty\). Lindenstrauss et al. [14] considered the dimension growth for the inhomogeneous sumsets \(E_{1}+E_{2}+\cdots+E_{n}\), where each \(E_{i}\) is a compact \(\times p\) invariant subset of the unit circle with positive Hausdorff dimension. Recently, Feng and Wu [6] studied for which set \(F\subset\mathbb{R}^{d}\) its iterated sumset \(nF\) has non-empty interior for sufficiently large \(n\in\mathbb{N}\). For more information on sumsets we refer to the papers [20, 18] and the references therein.

Inspired by the Erdos sumset conjecture and recent progress, we consider a fractal analogue. Given a set \(K\subset\mathbb{R}^{d}\) with positive Hausdorff dimension \(\alpha=\dim_{H}K>0\), it is natural to ask whether \(K\) contains a sumset \(K_{1}+K_{2}+\cdots+K_{\ell}\) of sets \(K_{i}\) each having the same Hausdorff dimension as \(K\). Our first result shows that sets of any prescribed dimension with this property exist.
**Theorem 1.1**. _For any \(0<\alpha<d\) there exists a compact set \(K\subset\mathbb{R}^{d}\) with \(\dim_{H}K=\alpha\) such that for any \(\ell\in\mathbb{N}_{\geq 2}\) we can find compact sets \(K_{1},K_{2},\ldots,K_{\ell}\subset\mathbb{R}^{d}\) satisfying_ \[K_{1}+K_{2}+\cdots+K_{\ell}\subset K\qquad\text{and}\qquad\dim_{H}K_{i}=\dim_{H}K=\alpha\ \forall 1\leq i\leq\ell. \tag{1.1}\]

A set \(K\) satisfying (1.1) for some sets \(K_{1},K_{2},\ldots,K_{\ell}\) is said to have the _Hausdorff \(\ell\)-sumset property_ (simply called, HSP-\(\ell\)). If the set \(K\) has the HSP-\(\ell\) for all \(\ell\in\mathbb{N}_{\geq 2}\), then we say that \(K\) has the _Hausdorff sumset property_ (for short, HSP). Thus, Theorem 1.1 states that in \(\mathbb{R}^{d}\) there exist compact sets of any prescribed dimension \(\alpha\in(0,d)\) having the HSP.

On the other hand, the HSP is not universal. Recall that \(K\subset\mathbb{R}^{d}\) is a _homogeneous self-similar set_ if it is generated by an iterated function system (IFS) of the form \(\{f_{b}(x)=\rho Ox+b:b\in D\}\), where \(0<\rho<1\), \(O\) is a \(d\times d\) orthogonal real matrix, and \(D\subset\mathbb{R}^{d}\) is a finite set with \(\#D\geq 2\).

**Theorem 1.2**. _Let \(K\subset\mathbb{R}^{d}\) be a homogeneous self-similar set generated by the IFS \(\{f_{b}(x)=\rho Ox+b:b\in D\}\) with \(\#D\geq 2\), and suppose that the difference set \(K-K\) satisfies the strong separation condition. Then there exists \(\beta<\dim_{H}K\) such that for any nonempty sets \(K_{1},K_{2}\subset\mathbb{R}^{d}\) with \(K_{1}+K_{2}\subset K\) we have \(\dim_{H}K_{1}+\dim_{H}K_{2}\leq 2\beta\). In particular, such a set \(K\) does not have the HSP-2, and hence fails the HSP._

_Remark 1.2_. Fraser et al. [8, Theorem 2.1] obtained a related result for subsets of \(\mathbb{R}\): under a dimension assumption involving the upper Box dimension and the Assouad dimension, if \(K_{1}+K_{2}\subset K\) and \(\dim_{H}K_{1}=\dim_{H}K\), then we have \(\dim_{H}K_{2}=0\). For the definition of upper Box dimension \(\overline{\dim}_{B}\) and Assouad dimension \(\dim_{A}\), we refer to the book of Fraser [7].
So, [8, Theorem 2.1] implies that any self-similar set in \(\mathbb{R}\) with non-integer dimension does not have the HSP-2, and hence fails the HSP. However, we don't know whether the HSP fails for all non-integer dimensional self-similar sets in \(\mathbb{R}^{d}\) with \(d\geq 2\). If we replace the Hausdorff dimension by the packing dimension in (1.1), then the sumset property described in Theorem 1.1 holds in a great generality. **Theorem 1.3**.: _If \(K\subset\mathbb{R}^{d}\) is a homogeneous self-similar set, then for any \(\ell\in\mathbb{N}_{\geq 2}\) there exist compact subsets \(K_{1},K_{2},\dots,K_{\ell}\subset\mathbb{R}^{d}\) such that_ \[K_{1}+K_{2}+\dots+K_{\ell}\subset K\qquad\text{and}\qquad\dim_{P}K_{i}=\dim_{ P}K\ \forall 1\leq i\leq\ell. \tag{1.2}\] Analogously, a set \(K\) satisfying (1.2) is said to have the _packing \(\ell\)-sumset property_ (simply called, PSP-\(\ell\)). If the set \(K\) has PSP-\(\ell\) for all \(\ell\in\mathbb{N}_{\geq 2}\), then we say that \(K\) has the _packing sumset property_ (for short, PSP). Thus, Theorem 1.3 states that the PSP holds for all homogeneous self-similar sets. The rest of the paper is organized as follows. In Section 2 we prove the existence of sets having the HSP and the PSP (Theorem 1.1). In Section 3 we prove that the HSP fails for a class of self-similar sets (Theorem 1.2). On the other hand, the PSP always holds for homogeneous self-similar sets (Theorem 1.3). This will be proven in Section 4. Finally, in Section 5 we consider the asymptotic packing sumset property for self-similar sets. ## 2. Existence of the HSP and the PSP In this section we will prove Theorem 1.1. In fact, we prove a stronger result: for any \(\alpha>0\) there exists a compact set \(K\) with \(\dim_{H}K=\dim_{P}K=\alpha\) such that \(K\) has the HSP and the PSP. Our proof is based on the following dimension formulae due to Feng et al. [5]. **Lemma 2.1**.: _Let \(\{N_{k}\}_{k=1}^{\infty}\) and \(\{m_{k}\}_{k=1}^{\infty}\) be two sequences of positive integers with \(N_{k}\geq m_{k}\geq 2\). Set_ \[E=\bigg{\{}\sum_{k=1}^{\infty}\frac{d_{k}}{N_{1}N_{2}\cdots N_{k}}:\ d_{k}\in \{0,1,\cdots,m_{k}-1\}\ \forall k\geq 1\bigg{\}}.\] _Then_ \[\dim_{H}E=\liminf_{k\to\infty}\frac{\log(m_{1}m_{2}\cdots m_{k})}{\log(N_{1} N_{2}\cdots N_{k+1})-\log m_{k+1}},\quad\dim_{P}E=\limsup_{k\to\infty}\frac{\log(m_{1}m _{2}\cdots m_{k})}{\log(N_{1}N_{2}\cdots N_{k})}.\] First we prove Theorem 1.1 for \(\alpha\in(0,1)\). **Proposition 2.2**.: _For any \(0<\alpha<1\), there exists a compact subset \(K\subseteq[0,1]\) such that \(\dim_{H}K=\dim_{P}K=\alpha\), and \(K\) has the HSP and the PSP._ Proof.: Fix \(\alpha\in(0,1)\). For \(k\geq 1\), let \(m_{k}=(k+1)!=(k+1)\times k\times(k-1)\times\cdots\times 2\times 1\), and let \(N_{k}=\lfloor m_{k}^{1/\alpha}\rfloor\), where \(\lfloor x\rfloor\) denotes the integer part of \(x\). Define \[K:=\bigg{\{}\sum_{k=1}^{\infty}\frac{d_{k}}{N_{1}N_{2}\cdots N_{k}}:\ d_{k}\in \{0,1,\cdots,m_{k}-1\}\ \forall k\geq 1\bigg{\}}. \tag{2.1}\] Note that \(m_{k}\nearrow\infty,N_{k}\nearrow\infty\) as \(k\to\infty\). 
By Stolz Theorem it follows that \[\lim_{k\to\infty}\frac{\log(m_{1}m_{2}\cdots m_{k})}{\log(N_{1}N_{2}\cdots N_{k} )}=\lim_{k\to\infty}\frac{\log m_{k}}{\log N_{k}}=\alpha.\] Observe that \[\lim_{k\to\infty}\frac{\log N_{k+1}-\log m_{k+1}}{\log(N_{1}N_{2}\cdots N_{k})} =\lim_{k\to\infty}\frac{(\frac{1}{\alpha}-1)\log m_{k+1}}{\log(N_{1}N_{2}\cdots N _{k})}=\lim_{k\to\infty}\frac{(1-\alpha)\log m_{k+1}}{\log(m_{1}m_{2}\cdots m_ {k})}=0.\] Thus, by Lemma 2.1, we conclude that \[\dim_{H}K=\liminf_{k\to\infty}\frac{\log(m_{1}m_{2}\cdots m_{k})}{\log(N_{1}N_ {2}\cdots N_{k+1})-\log m_{k+1}}=\alpha.\] Next we take \(\ell\in\mathbb{N}_{\geq 2}\). Note that \(\ell\mid m_{k}\) for all \(k>\ell\). Let \(m_{k}^{\prime}:=m_{k}/\ell=(k+1)!/\ell\) for \(k>\ell\). Set \[B_{\ell}:=\bigg{\{}\sum_{k=\ell+1}^{\infty}\frac{d_{k}}{N_{1}N_{2}\cdots N_{k }}:\ d_{k}\in\{0,1,\cdots,m_{k}^{\prime}-1\}\ \forall k>\ell\bigg{\}}. \tag{2.2}\] By Lemma 2.1, we have \[\dim_{H}B_{\ell} =\liminf_{k\to\infty}\frac{\log(m_{\ell+1}^{\prime}m_{\ell+2}^{ \prime}\cdots m_{k}^{\prime})}{\log(N_{1}N_{2}\cdots N_{k+1})-\log m_{k+1}^{ \prime}}\] \[=\liminf_{k\to\infty}\frac{\log(m_{\ell+1}m_{\ell+2}\cdots m_{k}) -(k-\ell)\log\ell}{\log(N_{1}N_{2}\cdots N_{k+1})-\log m_{k+1}+\log\ell}\] \[=\liminf_{k\to\infty}\frac{\log(m_{1}m_{2}\cdots m_{k})-k\log\ell }{\log(N_{1}N_{2}\cdots N_{k+1})-\log m_{k+1}}.\] Note that \[\lim_{k\to\infty}\frac{k\log\ell}{\log(m_{1}m_{2}\cdots m_{k})}=0.\] Therefore, \[\dim_{H}B_{\ell}=\liminf_{k\to\infty}\frac{\log(m_{1}m_{2}\cdots m_{k})}{\log (N_{1}N_{2}\cdots N_{k+1})-\log m_{k+1}}=\alpha.\] Take \(K_{1}=K_{2}=\cdots=K_{\ell}=B_{\ell}\) and it is straightforward to check \[K_{1}+K_{2}+\cdots+K_{\ell}=\bigg{\{}\sum_{k=\ell+1}^{\infty}\frac{d_{k}}{N_{ 1}N_{2}\cdots N_{k}}:\ d_{k}\in\{0,1,\cdots,\ell(m_{k}^{\prime}-1)\}\ \forall k>\ell\bigg{\}}\subset K.\] This implies that \(K\) has the HSP-\(\ell\). Since \(\ell\in\mathbb{N}_{\geq 2}\) was taken arbitrarily, we conclude that \(K\) has the HSP. Similarly, by Lemma 2.1 one can verify that \(\dim_{P}B_{\ell}=\dim_{P}K=\alpha\). Thus, the set \(K\) also has the PSP. Proof of Theorem 1.1.: Let \(\alpha\in(0,d)\). Then \(\alpha/d\in(0,1)\). Take \(\ell\in\mathbb{N}_{\geq 2}\). Let \(K\) and \(K_{1},K_{2},\ldots,K_{\ell}\) be constructed as in the proof of Proposition 2.2 such that \(\dim_{H}K_{i}=\dim_{H}K=\alpha/d\). Note that \(\dim_{P}K_{i}=\dim_{P}K=\alpha/d\). Then by Marstrand's product theorem (cf. [1, Theorem 3.2.1]) it follows that the product set \(K^{d}\) and the product sets \(K_{1}^{d},K_{2}^{d},\ldots,K_{\ell}^{d}\) are as required. Note from our construction that \(K\) is a compact subset of \([0,1]^{d}\) with \(\dim_{H}K=\alpha\) satisfying the HSP and the PSP. Observe that the HSP and the PSP are stable under translations and scalings. So, for any nonempty compact subset \(E\subset\mathbb{R}^{d}\) we can find a sequence of compact subsets \(K_{n}\) with \(\dim_{H}K_{n}=\alpha\) satisfying the HSP and the PSP, such that \(K_{n}\) converges to \(E\) in the Hausdorff metric. ## 3. The HSP is not universal In the previous section we show that the HSP holds for a large class of subsets in \(\mathbb{R}^{d}\). In this section we will show that the HSP is not universal, and prove Theorem 1.2. First, we need the following combinatorial lemma. **Lemma 3.1**.: _Let \(A\subset\mathbb{R}^{d}\) be a finite subset with \(\#A\geq 2\). 
Then for any distinct points \(t_{1},t_{2},\ldots,t_{\ell}\in\mathbb{R}^{d}\) with \(1\leq\ell\leq\#A+1\), we have_ \[\#\bigg{(}\bigcap_{j=1}^{\ell}(A+t_{j})\bigg{)}\leq\#A+1-\ell.\] Proof.: The inequality is clear for \(\ell=1\). In the following we assume \(\ell\geq 2\). We first define a lexicographical ordering for points in \(\mathbb{R}^{d}\). For \(x=(x_{1},\ldots,x_{d}),y=(y_{1},\ldots,y_{d})\in\mathbb{R}^{d}\), we define \(x\prec y\) if \(x_{1}<y_{1}\), or there exists \(1\leq j\leq d-1\) such that \(x_{1}=y_{1}\), \(\cdots\), \(x_{j}=y_{j}\), and \(x_{j+1}<y_{j+1}\). It is easy to verify that if \(x\prec y\) and \(x^{\prime}\prec y^{\prime}\) then we have \(x+x^{\prime}\prec y+y^{\prime}\). We write \(m:=\#A\) and \(A=\{a_{1},a_{2},\ldots,a_{m}\}\subset\mathbb{R}^{d}\) with \(a_{1}\prec a_{2}\prec\cdots\prec a_{m}\). Without loss of generality, we can assume that \(t_{1}\prec t_{2}\prec\cdots\prec t_{\ell}\). It suffices to prove that \[\{t_{1}+a_{1},\ldots,t_{1}+a_{\ell-1}\}\cap\bigcap_{j=1}^{\ell}(A+t_{j})=\emptyset.\] Suppose on the contrary that \(t_{1}+a_{k}\in\bigcap_{j=1}^{\ell}(A+t_{j})\) for some \(1\leq k\leq\ell-1\). Then there exist \(k_{2},\ldots,k_{\ell}\in\{1,2,\ldots,m\}\) such that \[t_{1}+a_{k}=t_{2}+a_{k_{2}}=\cdots=t_{\ell}+a_{k_{\ell}}.\] Since \(t_{1}\prec t_{2}\prec\cdots\prec t_{\ell}\), we have \(a_{k_{\ell}}\prec\cdots\prec a_{k_{2}}\prec a_{k}\). It follows that \(k\geq\ell\), a contradiction. Let \(K\subset\mathbb{R}^{d}\) be a homogeneous self-similar set in \(\mathbb{R}^{d}\) generated by the IFS \(\Phi=\{f_{b}(x)=\rho Ox+b:b\in D\}\), where \(D\subset\mathbb{R}^{d}\) is a finite subset with \(\#D\geq 2\). Then the set \(K\) can be written as \[K=\bigg{\{}\sum_{k=1}^{\infty}(\rho O)^{k-1}b_{k}:b_{k}\in D\;\forall k\geq 1 \bigg{\}}.\] This implies that the difference set \(K-K\) is also a self-similar set generated by the IFS \(\Psi=\{g_{b}(x)=\rho Ox+b:b\in D-D\}\), and it can be written as \[K-K=\bigg{\{}\sum_{k=1}^{\infty}(\rho O)^{k-1}t_{k}:t_{k}\in D-D\;\forall k\geq 1 \bigg{\}}.\] Suppose the IFS \(\Psi\) satisfies the strong separation condition (SSC). Then for each \(t\in K-K\) there exists a unique sequence \((t_{k})\in(D-D)^{\mathbb{N}}\) such that \[t=\sum_{k=1}^{\infty}(\rho O)^{k-1}t_{k},\] and the unique sequence \((t_{k})\) is called a coding of \(t\) with respect to the digit set \(D-D\). In this case, we have \[K\cap(K+t)=\bigg{\{}\sum_{k=1}^{\infty}(\rho O)^{k-1}b_{k}:b_{k}\in D\cap(D+t_{ k})\ \forall k\geq 1\bigg{\}}. \tag{3.1}\] Proof of Theorem 1.2.: Without loss of generality, we assume that \(\mathbf{0}=(0,0,\ldots,0)\in D\). Let \(K_{1}\) and \(K_{2}\) be two non-empty subsets of \(\mathbb{R}^{d}\) satisfying \(K_{1}+K_{2}\subset K\). Take \(x_{0}\in K_{1}\), and then we have \[(K_{1}-x_{0})+(K_{2}+x_{0})\subset K.\] Note that \(\mathbf{0}\in K_{1}-x_{0}\), and Hausdorff dimension is stable under translations. Then we can assume without loss of generality that \(\mathbf{0}\in K_{1}\). So, by using \(K_{1}+K_{2}\subset K\) it follows that \(K_{2}\subset K\), and hence, \[K_{1}\subset K-K,\qquad K_{2}\subset\bigcap_{t=-K_{1}}\big{(}K\cap(K+t) \big{)}. \tag{3.2}\] Note that \(t\in-K_{1}\subset K-K\). Since \(K-K\) satisfies the SSC, each \(t\in-K_{1}\) has a unique \(D-D\) coding. Let \(\Lambda\) denote the set of all unique codings \((t_{k})\in(D-D)^{\mathbb{N}}\) of points in \(-K_{1}\). 
For \(k\geq 1\), let \(\Lambda_{k}\subset D-D\) be the set of all possible digits occurring in the \(k\)-th position of sequences in \(\Lambda\). Then \(\Lambda\subset\prod_{k=1}^{\infty}\Lambda_{k}\). Since \(\mathbf{0}\in K_{1}\) and \(K-K\) satisfies the SSC, we have \(0\in\Lambda_{k}\) for all \(k\geq 1\). By (3.1) and (3.2), we have \[K_{2}\subset\widetilde{K_{2}}:=\bigg{\{}\sum_{k=1}^{\infty}(\rho O)^{k-1}b_{k }:b_{k}\in\bigcap_{b\in\Lambda_{k}}(D+b)\ \forall k\geq 1\bigg{\}}.\] For \(k\geq 1\), we write \[m_{k}=\#\bigg{(}\bigcap_{b\in\Lambda_{k}}(D+b)\bigg{)}.\] Then we obtain \[\dim_{H}K_{2}\leq\dim_{H}\widetilde{K_{2}}\leq\liminf_{k\to\infty}\frac{\log (m_{1}m_{2}\ldots m_{k})}{-k\log\rho}=\liminf_{k\to\infty}\frac{\sum_{j=1}^{k }\log m_{j}}{-k\log\rho}. \tag{3.3}\] On the other hand, since each point in \(-K_{1}\) has a unique coding in \(\Lambda\subset\prod_{k=1}^{\infty}\Lambda_{k}\), we have \[\dim_{H}K_{1}=\dim_{H}(-K_{1})\leq\liminf_{k\to\infty}\frac{\sum_{j=1}^{k} \log(\#\Lambda_{j})}{-k\log\rho}. \tag{3.4}\] Note by Lemma 3.1 that \(m_{j}+\#\Lambda_{j}\leq\#D+1\) for all \(j\geq 1\). Therefore, by (3.3) and (3.4) it follows that \[\dim_{H}K_{1}+\dim_{H}K_{2} \leq\liminf_{k\to\infty}\frac{\sum_{j=1}^{k}(\log m_{j}+\log(\# \Lambda_{j}))}{-k\log\rho}\] \[\leq\liminf_{k\to\infty}\frac{\sum_{j=1}^{k}(\log m_{j}+\log(\#D+ 1-m_{j}))}{-k\log\rho}\] \[\leq\frac{\gamma}{-\log\rho},\] where \[\gamma:=\max\big{\{}\log m+\log(\#D+1-m):m\in\{1,2,\ldots,\#D\}\big{\}}. \tag{3.5}\] Observe by the concavity of the function \(\log x\) that \[\gamma\leq 2\log\left(\frac{\#D+1}{2}\right)<2\log(\#D),\] where the second inequality follows by \(\#D\geq 2\). Hence, \[\dim_{H}K_{1}+\dim_{H}K_{2}\leq\frac{\gamma}{-\log\rho}<2\frac{\log\#D}{-\log \rho}=2\dim_{H}K.\] This completes the proof by setting \(\beta=\frac{\gamma}{-2\log\rho}\). _Remark 3.1_.: When \(\#D=2\), the number \(\gamma\) defined in (3.5) is indeed \(\log\#D\), and then we can conclude that \[\dim_{H}K_{1}+\dim_{H}K_{2}\leq\dim_{H}K\] for any two nonempty subsets \(K_{1},K_{2}\subset\mathbb{R}^{d}\) satisfying \(K_{1}+K_{2}\subset K\). At the end of this section we point out that Theorem 1.2 can be applied to homogeneous self-similar sets in \(\mathbb{R}\). For a positive integer \(N\) and a real number \(0<\rho<1/(N+1)\), we write \[E_{\rho,N}=\bigg{\{}\frac{1-\rho}{N}\sum_{k=1}^{\infty}x_{k}\rho^{k-1}:x_{k} \in\{0,1,\cdots,N\}\ \forall k\geq 1\bigg{\}}.\] By Theorem 1.2, for \(0<\rho<1/(2N+1)\) the set \(E_{\rho,N}\) does not have the HSP-2, and hence fails the HSP. ## 4. Homogeneous self-similar set has the PSP In contrast with Theorem 1.2 we show in this section that the PSP always holds for homogeneous self-similar sets, and prove Theorem 1.3. Recall that \(K\) is a homogeneous self-similar set in \(\mathbb{R}^{d}\) if it can be generated by an IFS \(\{f_{j}(x)=\rho Ox+b_{j}\}_{j=1}^{m}\), where \(0<\rho<1\), \(O\) is a \(d\times d\) orthogonal real matrix, and each \(b_{j}\) is a vector in \(\mathbb{R}^{d}\). Without loss of generality we may assume that \(b_{1}=\mathbf{0}\). Let \(\Omega:=\{1,2,\cdots,m\}^{\mathbb{N}}\) be the set of all infinite sequences \((i_{k})\) with each digit \(i_{k}\in\{1,2,\ldots,m\}\). Equipped with the product topology of the discrete topology on \(\{1,2,\ldots,m\}\), \(\Omega\) becomes a compact metric space. 
We define the coding map \(\pi:\Omega\to K\) by \[\pi((i_{k})):=\lim_{k\to\infty}f_{i_{1}}\circ f_{i_{2}}\circ\cdots\circ f_{i_ {k}}(\mathbf{0})=\sum_{k=1}^{\infty}(\rho O)^{k-1}b_{i_{k}}.\] Then \(\pi\) is continuous and surjective. For \(S\subset\mathbb{N}\), we define \[\Omega_{S}:=\big{\{}(i_{k})\in\Omega:\;i_{k}=1\text{ for }k\notin S\big{\}} \quad\text{ and }\quad K_{S}:=\pi\big{(}\Omega_{S}\big{)}.\] Then \(K_{S}\) is a subset of \(K=\pi(\Omega)\). **Proposition 4.1**.: _If \(S\subset\mathbb{N}\) satisfies_ \[\limsup_{n\to\infty}\frac{\#(S\cap[1,n])}{n}=1,\] _then we have_ \[\dim_{P}K_{S}=\overline{\dim}_{B}K_{S}=\dim_{P}K.\] Proof.: For the first equality, let \(V\) be an open subset that intersects \(K_{S}\). Then we can find \((j_{n})\in\Omega_{S}\) such that \(\pi((j_{n}))\in V.\) Since \(V\) is open and \(\pi\) is continuous, there exists \(n_{0}\in\mathbb{N}\) such that \(\pi([j_{1}j_{2}\cdots j_{n_{0}}])\subseteq V,\) where \([j_{1}j_{2}\cdots j_{n_{0}}]:=\{(i_{n})\in\Omega:i_{k}=j_{k}\text{ for }1\leq k\leq n_{0}\}\) is a cylinder set. It follows that \[\pi\big{(}[j_{1}j_{2}\cdots j_{n_{0}}]\cap\Omega_{S}\big{)}\subseteq V\cap K_ {S}.\] Note that the set \(K_{S}\) is the union of finitely many translations of \(\pi\big{(}[j_{1}j_{2}\cdots j_{n_{0}}]\cap\Omega_{S}\big{)}\), and the upper box-counting dimension is finitely stable. Therefore, we conclude that \[\overline{\dim}_{B}(V\cap K_{S})=\overline{\dim}_{B}K_{S}.\] Note that \(K_{S}\) is compact. Thus, by [4, Corollary 3.10] (see also [1, Corollary 2.8.2]) it follows that \[\dim_{P}K_{S}=\overline{\dim}_{B}K_{S}\] as desired. For the second equality, we first observe that \(\overline{\dim}_{B}K_{S}\leq\overline{\dim}_{B}K=\dim_{P}K\). So it suffices to prove the inverse inequality. Without loss of generality we assume that \(\operatorname{diam}K=1\). Write \(\alpha:=\dim_{P}K\). Suppose on the contrary that \(\overline{\dim}_{B}K_{S}<\alpha.\) Then there exists \(\varepsilon_{0}\in(0,\alpha)\) such that for all sufficiently large \(n\), we have \[\frac{\log N_{\rho^{n}}(K_{S})}{-n\log\rho}\leq\alpha-\varepsilon_{0}, \tag{4.1}\] where \(N_{\delta}(F)\) denotes the smallest number of closed balls of radius \(\delta\) that cover \(F\). For \(n\geq 1\), define \[\Omega_{S,n}:=\big{\{}(i_{k})\in\Omega:\;i_{k}=1\text{ for }k\notin S\cup \mathbb{N}_{\geq n+1}\big{\}}\quad\text{ and }\quad K_{S,n}:=\pi\big{(}\Omega_{S,n}\big{)}.\] That is, \(\Omega_{S,n}\) is the union of all \(n\)-level cylinder sets that intersects \(\Omega_{S}\). Note by \(\operatorname{diam}K=1\) that the diameter of all \(n\)-level sets of \(K\) is \(\rho^{n}\). Thus we have \[N_{3\rho^{n}}(K_{S,n})\leq N_{\rho^{n}}(K_{S}).\] Note that the set \(K\) is covered by the union of at most \(m^{n-\#(S\cap[1,n])}\) many translations of \(K_{S,n}\). It follows that \[N_{3\rho^{n}}(K)\leq m^{n-\#(S\cap[1,n])}\cdot N_{3\rho^{n}}(K_{S,n})\leq m^{n -\#(S\cap[1,n])}\cdot N_{\rho^{n}}(K_{S}). \tag{4.2}\] Thus, by (4.1) and (4.2) we have \[\underline{\dim}_{B}K \leq\liminf_{n\to\infty}\frac{\log N_{3\rho^{n}}(K)}{-\log(3\rho^{n})}\] \[\leq\liminf_{n\to\infty}\frac{\log\left(m^{n-\#(S\cap[1,n])}\cdot N _{\rho^{n}}(K_{S})\right)}{-\log\rho^{n}}\] \[\leq\alpha-\varepsilon_{0}+\frac{\log m}{-\log\rho}\cdot\liminf_{ n\to\infty}\left(1-\frac{\#(S\cap[1,n])}{n}\right)\] \[=\alpha-\varepsilon_{0}.\] This contradicts the fact that \(\underline{\dim}_{B}K=\overline{\dim}_{B}K=\dim_{P}K=\alpha\) by the self-similarity of \(K\) (cf. [3, Corollary 3.3]). 
Thus, we obtain \(\overline{\dim}_{B}K_{S}\geq\alpha\) as desired. 

**Lemma 4.2**.: _Let \(S\subset\mathbb{N}\) satisfy_ \[\beta=\limsup_{n\to\infty}\frac{\#(S\cap[1,n])}{n}>0. \tag{4.3}\] _Then for any \(\ell\in\mathbb{N}_{\geq 2}\) we can divide \(S\) into pairwise disjoint subsets \(S_{1},S_{2},\cdots,S_{\ell}\) such that_ \[\limsup_{n\to\infty}\frac{\#(S_{j}\cap[1,n])}{n}=\beta\quad\forall 1\leq j\leq\ell.\] 

Proof.: Note that for any given \(m\geq 0\), we have \[\limsup_{n\to\infty}\frac{\#(S\cap[m+1,n])}{n}=\beta.\] Thus, we can define an increasing sequence \(\{n_{k}\}\) of integers such that \(n_{0}=0\) and \[\frac{\#(S\cap[n_{k-1}+1,n_{k}])}{n_{k}}>\beta-\frac{1}{k}\quad\forall k\geq 1. \tag{4.4}\] Fix \(\ell\in\mathbb{N}_{\geq 2}\). For \(1\leq j\leq\ell\), we define \[S_{j}=\bigcup_{k=0}^{\infty}\big{(}S\cap[n_{k\ell+j-1}+1,n_{k\ell+j}]\big{)}.\] Clearly, \(S_{1},S_{2},\cdots,S_{\ell}\) are pairwise disjoint, and \(S=\bigcup_{j=1}^{\ell}S_{j}.\) Note by (4.4) that for each \(j\in\{1,2,\ldots,\ell\}\), \[\frac{\#(S_{j}\cap[1,n_{k\ell+j}])}{n_{k\ell+j}}>\beta-\frac{1}{k\ell+j}\quad\forall k\in\mathbb{N}.\] From this and (4.3) we conclude that \[\limsup_{n\to\infty}\frac{\#(S_{j}\cap[1,n])}{n}=\beta\quad\forall 1\leq j\leq\ell.\] 

Proof of Theorem 1.3.: Fix \(\ell\in\mathbb{N}_{\geq 2}\). By Lemma 4.2, we can divide \(\mathbb{N}\) into pairwise disjoint subsets \(S_{1},S_{2},\cdots,S_{\ell}\) such that \[\limsup_{n\to\infty}\frac{\#(S_{j}\cap[1,n])}{n}=1\quad\forall 1\leq j\leq\ell.\] By Proposition 4.1 this implies that \[\dim_{P}K_{S_{j}}=\dim_{P}K\quad\forall 1\leq j\leq\ell.\] Note that \(b_{1}=\mathbf{0}\). By using the pairwise disjointness of \(S_{1},S_{2},\ldots,S_{\ell}\) it is easy to check that \[K_{S_{1}}+K_{S_{2}}+\cdots+K_{S_{\ell}}=K.\] Thus, the set \(K\) has the PSP-\(\ell\). Since \(\ell\) was arbitrary, we conclude that the homogeneous self-similar set \(K\) has the PSP. 

## 5. Asymptotic packing sumset properties 

A set \(K\subset\mathbb{R}^{d}\) is said to satisfy the _asymptotic packing sumset property_ (simply called APSP) if for any \(\varepsilon>0\) and any \(\ell\in\mathbb{N}_{\geq 2}\) there exist sets \(K_{1},K_{2},\ldots,K_{\ell}\) such that \[K_{1}+K_{2}+\cdots+K_{\ell}\subset K\quad\text{and}\quad\dim_{P}K_{i}>\dim_{P}K-\varepsilon\ \forall 1\leq i\leq\ell.\] 

**Theorem 5.1**.: _For \(d\in\{1,2\}\), any self-similar set in \(\mathbb{R}^{d}\) has the APSP._ 

However, we do not know whether Theorem 5.1 holds for \(d\geq 3\). To prove Theorem 5.1 we need the following lemma, which is essentially due to [17, Lemma 2.4]. Note that in [17] it was only proved for \(\mathbb{R}^{2}\), but its proof can be easily adapted to \(\mathbb{R}^{d}\). 

**Lemma 5.2** (Orponen, [17]).: _Let \(E\) be a self-similar set in \(\mathbb{R}^{d}\) with \(d\in\mathbb{N}\). Then for any \(\varepsilon>0\) there exists a self-similar set \(E_{\varepsilon}\subset E\) satisfying the strong separation condition such that_ \[\dim_{H}E_{\varepsilon}>\dim_{H}E-\varepsilon.\] 

We also need the following approximation result. 

**Lemma 5.3** (Peres and Shmerkin, [19]).: _Let \(E\) be a self-similar set in \(\mathbb{R}\) or \(\mathbb{R}^{2}\) that satisfies the open set condition. Then for any \(\varepsilon>0\) there exists a homogeneous self-similar set \(E_{\varepsilon}\subset E\) such that_ \[\dim_{H}E_{\varepsilon}>\dim_{H}E-\varepsilon.\] 

Proof of Theorem 5.1.: Note that for a self-similar set \(E\) in \(\mathbb{R}^{d}\), we have \(\dim_{H}E=\dim_{P}E\).
Then the theorem follows by using Theorem 1.3, Lemma 5.2 and Lemma 5.3. ## Acknowledgements The first author wants to thank Tuomas Orponen for many useful discussions during the Fractal Geometry conference in Edinburgh 2023, especially for Remark 1.2 and many useful references. The first author was supported by NSFC No. 11971079. The second author was supported by NSFC No. 12071148, and Science and Technology Commission of Shanghai Municipality (STCSM) No. 22DZ2229014, and Fundamental Research Funds for the Central Universities No. YBNLTS2023-016.
2307.09908
Tracer dynamics in the active random average process
We investigate the dynamics of tracer particles in the random average process (RAP), a single-file system in one dimension. In addition to the position, every particle possesses an internal spin variable $\sigma (t)$ that can alternate between two values, $\pm 1$, at a constant rate $\gamma$. Physically, the value of $\sigma (t)$ dictates the direction of motion of the corresponding particle and for finite $\gamma$, every particle performs a non-Markovian active dynamics. Herein, we study the effect of this non-Markovianity in the fluctuations and correlations of the positions of tracer particles. We analytically show that the variance of the position of a tagged particle grows sub-diffusively as $\sim \zeta_{\text{q}} \sqrt{t}$ at large times for the quenched uniform initial condition. While this sub-diffusive growth is identical to that of the Markovian/non-persistent RAP, the coefficient $\zeta_{\text{q}} $ is rather different and bears the signature of the persistent motion of active particles through higher point correlations (unlike in the Markovian case). Similarly, for the annealed (steady state) initial condition, we find that the variance scales as $\sim \zeta_{\text{a}} \sqrt{t}$ at large times with coefficient $\zeta_{\text{a}} $ once again different from the non-persistent case. Although $\zeta_{\text{q}}$ and $\zeta_{\text{a}} $ both individually depart from their Markov counterparts, their ratio $\zeta_{\text{a}} / \zeta_{\text{q}}$ is still equal to $\sqrt{2}$, a condition observed for other diffusive single-file systems. This condition turns out to be true even in the strongly active regimes as corroborated by extensive simulations and calculations. Finally, we study the correlation between the positions of two tagged particles in both quenched uniform and annealed initial conditions. We verify all our analytic results by extensive numerical simulations.
Saikat Santra, Prashant Singh, Anupam Kundu
2023-07-19T11:21:51Z
http://arxiv.org/abs/2307.09908v3
**Tracer dynamics in active random average process** ## Abstract **We investigate the dynamics of tracer particles in the random average process (RAP), a single-file system in one dimension. In addition to the position, every particle possesses an internal spin variable \(\sigma(t)\) that can alternate between two values, \(\pm 1\), at a constant rate \(\gamma\). Physically, the value of \(\sigma(t)\) dictates the direction of motion of the corresponding particle and for finite \(\gamma\), every particle performs a non-Markovian active dynamics. Herein, we study the effect of this non-Markovianity in the fluctuations and correlations of the positions of tracer particles. We analytically show that the variance of the position of a tagged particle grows sub-diffusively as \(\sim\zeta_{\mathrm{q}}\sqrt{t}\) at large times for the quenched uniform initial condition. While this sub-diffusive growth is identical to that of the Markovian/non-persistent RAP, the coefficient \(\zeta_{\mathrm{q}}\) is rather different and bears the signature of the persistent motion of active particles through higher point correlations (unlike in the Markovian case). Similarly, for the annealed (steady state) initial condition, we find that the variance scales as \(\sim\zeta_{\mathrm{a}}\sqrt{t}\) at large times with coefficient \(\zeta_{\mathrm{a}}\) once again different from the non-persistent case. Although \(\zeta_{\mathrm{q}}\) and \(\zeta_{\mathrm{a}}\) both individually depart from their Markov counterparts, their ratio \(\zeta_{\mathrm{a}}/\zeta_{\mathrm{q}}\) is still equal to \(\sqrt{2}\), a condition observed for other diffusive single-file systems. This condition turns out to be true even in the strongly active regimes as corroborated by extensive simulations and calculations. Finally, we study the correlation between the positions of two tagged particles in both quenched uniform and annealed initial conditions. We verify all our analytic results by extensive numerical simulations.** ###### Contents * 1 Introduction * 2 Model and summary of our main results * 2.1 Summary of the main results * 3 Correlation \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle\) * 4 Mean squared displacement and correlations in the quenched initial condition * 4.1 Mean-squared displacement \(g_{0}(t)=\langle z_{0}^{2}(t)\rangle\) * 4.2 Correlation \(g_{i}(t)=\langle z_{0}(t)z_{i}(t)\rangle\) * 5 Mean squared displacement and correlations in the annealed initial condition * 5.1 Mean squared displacement \(l_{0}(t)\) * 5.2 Correlation \(l_{i}(t)\) * 6 Effect of small \(\gamma\) on the MSD * 7 Conclusion * A Failure of mean field approximation to specify \(T_{i}(t)\) in Eq. (27) * B Expression of \(C_{i}(t)\) in Eq. (23) * C Sub-leading term in \(g_{0}(t)\) in Eq. (46) * D Details about numerical simulations * E Expressions of \(Y_{i}(s)\) and \(W_{i}(s)\) as \(s\to 0\) * F Computation of \(C_{i}(t_{0},t_{0}+t)=\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)\rangle\) * G Computation of \(S_{i}(t_{0},t_{0}+t)\) in Eq. (63) * G.1 Calculation of first term * G.2 Calculation of second term * G.3 \(\tilde{\mathcal{S}}_{i}(s,t)\) in Eq. (112) ## 1 Introduction The dynamics of a tracer particle in a collection of non-overtaking particles in one dimension is a prototypical example of strongly correlated system in statistical physics. This non-overtaking constraint, called single-file constraint, drastically changes the dynamical behaviour of a tracer particle [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. 
For example, in single-file diffusion, the mean squared displacement (MSD) of a tagged particle grows sub-diffusively as \(\sim\sqrt{t}\) at late times in contrast to the linear growth of the MSD of a free diffusive particle. The coefficient of the sub-diffusive growth depends on the particle number density and bare diffusion of the particles [2, 4, 5]. For Hamiltonian systems, the single-file constraint, slows down the motion of a tagged particle [11, 12, 13, 14]. This slowing down of the dynamics is a common effect in the single-file motion and occurs due to the hindrance in the motion faced by one particle due to the presence of other particles. It has been found that the coefficient of the late time growth of the MSD of a tagged particle crucially depends, in addition to the density of the particles, on the microscopic dynamics of individual particles, interactions among themselves and on the statistical properties of the initial state [9, 10, 12, 13, 15, 16]. In this paper, we study how the motion of a tagged particle gets modified if all particles in the single-file system are active. Active matter is a class of driven out-of-equilibrium systems where every individual unit consumes energy from the environment and converts it into a systematic movement via some internal mechanisms [17, 18]. At the collective level, these particles exhibit interesting phenomena such as motility induced phase separation, absence of equation of state for pressure etc [19, 20]. Non-interacting active particles also show behaviours which are different than their passive counterpart as exemplified by clustering inside bounded domain, climbing against potential hill, non-Boltzmann stationary state and probability distribution of atypical fluctuations and survival probability which are different than the thermal particles [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]. At the interacting front, the distribution of two active particles with mutual exclusion has been studied and shown to display jamming features [40]. Going beyond two particles, there have also been attempts to derive fluctuating hydrodynamic descriptions for active lattice gases that turn out to be useful to study density and current fluctuations and entropy production [41, 42, 43]. With the growing interest in active matter in the last decade, people naturally got interested in knowing how broken detailed balance at the microscopic dynamics manifests itself in the tracer dynamics in single-file motion. Several numerical studies in this direction have pointed out that the temporal growth of the MSD of a tagged particle, at late times, remains same as in the absence of activity. However, the coefficient associated with this temporal growth gets non-trivially modified due to the persistent nature of these particles [44, 45, 46, 47, 48, 49]. Attempts to derive this with harmonic chain of active particles reproduce only the passive result at late times and do not shed light on the role of activity in the tracer dynamics [50, 51]. It is, however, not difficult to realize that the presence of activity increases the correlations among particles and this, in addition to the single-file constraint, should affect the motion of a tagged particle. Naturally one may ask how such enhanced correlations affect the motion of a tagged particle? While the above mentioned studies discuss the overall effect of the presence of activity, the contribution from the enhanced correlation is not very clear and transparent. 
Moreover, how the fluctuations in the initial conditions affect the motion of tracer particles in active single-file systems is also not explored. In absence of a general formulation to investigate these questions, it is crucial to study specific model systems (that are amenable to analytical calculations). In this paper, we consider a version of the random average process (RAP) in which individual particles are subjected to active noises. For this model, we provide systematic answers to these questions. Originally, the RAP model was first studied for non-persistent particles (without active noises) by Fontes and Ferrari as a generalisation of the smoothing process and the voter model [52]. It has also appeared in several other physical problems like force fluctuations in bead packs [53], in mass transport models [54, 55], in models of wealth distribution and traffic [56, 57] and generalized Hammersley process [58]. In this model, motion of the particles is restricted as each particle can jump, with a fixed rate \(p\), on either side only by a random fraction \(\eta\) of the space available till the next neighbouring particle [52, 59]. The random fraction \(\eta\) is chosen from some distribution \(R(\eta)\). Since particles cannot overtake their neighbouring particles, their initial order remains preserved throughout the time evolution which gives rise to the single-file motion. Meanwhile, the jump that a particle makes at a given instant is independent of what it does in the previous step. Therefore, we will refer to this model as the Markovian RAP (MRAP). Later, we will contrast this with the active case where every particle possesses an spin variable \(\sigma(t)\) that dictates the direction of its motion and has non-vanishing correlations at two different times. Being a paradigmatic model for interacting multi-particle system, the motion of tagged particles in MRAP was studied previously and several results were obtained both analytically and numerically. If \(x_{i}(t)\) denotes the position of the \(i\)-th particle at time \(t\), then the MSD \(\langle z_{i}^{2}(t)\rangle\) (with \(z_{i}(t)=x_{i}(t)-x_{i}(0)\)) and the correlation between two tagged particles \(\langle z_{i}(t)z_{j}(t)\rangle\) were computed using microscopic calculations [59, 55, 60, 61], as well as using hydrodynamic approach [62]. At late times, these quantities are explicitly given by \[\left.\begin{array}{l}\langle z_{0}^{2}(t)\rangle\simeq\frac{\rho^{-2}\mu_ {2}\sqrt{\mu_{1}p}}{(\mu_{1}-\mu_{2})\sqrt{\pi}}\ \sqrt{t},\\ \langle z_{0}(t)z_{i}(t)\rangle\simeq\frac{\rho^{-2}\mu_{2}\sqrt{\mu_{1}p}}{( \mu_{1}-\mu_{2})\sqrt{\pi}}\ \sqrt{t}\ f\left(\frac{|i|}{\sqrt{4\mu_{1}pt}}\right),\end{array}\right\}\ \text{(MRAP)} \tag{1}\] where \(\mu_{i}\) is the \(i^{\text{th}}\) moment of the jump distribution \(R(\eta)\) and \(\rho\) is the stationary density of the particles. Explicit expression of the scaling function \(f(y)\) is given by [59, 62] \[f(y)=e^{-y^{2}}-\sqrt{\pi}y\ \text{Erfc}(y). \tag{2}\] The same scaling function also appears in many single-file systems that possess diffusive hydrodynamics at the macroscopic scales [62, 63, 64, 65, 3]. It is, however, important to note that for MRAP, the equations for two-point correlation functions close onto themselves. Therefore, two-point correlations are enough to decide the pre-factor in the expressions of MSD. Contrarily, this is found not be true when the particles are subjected to active noises. 
Then, the two-point correlations would depend on three-point correlations and so on. Consequently, higher-point correlations start to contribute to the growth of the MSD through the two-point correlations. For example, such dependence on higher-order correlations was also observed for gap statistics in hardcore run and tumble particles [66]. The question then arises: does activity facilitate the growth of the MSD? If so, how? In this paper, we present an example of a model where these questions can be thoroughly addressed through microscopic analytic computations aided by numerical simulations. Our paper is organised as follows: In Section 2, we introduce the model, fix notation and summarize the main results of the paper. We then study the properties of tracer particles with the fixed initial condition in Section 4. More specifically, we look at the MSD of the position of a tagged particle in Section 4.1 and the position correlation of two tagged particles in Section 4.2. Section 5 discusses these quantities in the annealed case, with Section 5.1 devoted to the MSD and Section 5.2 to the correlation function. We discuss the validity of our results for the strongly active regime (small \(\gamma\)) in Section 6. Finally, we conclude in Section 7. 

## 2 Model and summary of our main results 

We consider active particles moving on an infinite line, distributed with density \(\rho\). We denote the position of the \(i\)-th particle at time \(t\) by \(x_{i}(t)\), where \(i\in\mathbb{Z}\) and \(x_{i}(t)\in\mathbb{R}\). In addition, every particle has an internal variable \(\sigma_{i}(t)\) (called spin) which can alternate between \(\pm 1\) at a rate \(\gamma>0\). The variable \(\sigma_{i}(t)\) represents the usual dichotomous noise widely studied for run-and-tumble particles [21]. Initially, the positions of these particles are fixed and are kept at a fixed distance \(a=1/\rho\) apart. However, the initial value of \(\sigma_{i}(0)\) can be random, which, for simplicity, we choose to be \(\pm 1\) with equal probability \(1/2\). This implies that for all \(i\in\mathbb{Z}\), we have \[x_{i}(0) =ia=i/\rho, \tag{3}\] \[\sigma_{i}(0) =\ \ 1\ \ \text{with probability }1/2, \tag{4}\] \[=-1\ \ \text{with probability }1/2. \tag{5}\] In a small time interval \([t,t+dt]\), the direction of motion of the \(i\)-th particle depends on its spin variable \(\sigma_{i}(t)\). If \(\sigma_{i}(t)=1\), then the particle jumps to the right with probability \(pdt\) and does not jump with probability \((1-pdt)\). On the other hand, if \(\sigma_{i}(t)=-1\), then the particle jumps to its left with probability \(pdt\) and, with probability \((1-pdt)\), stays at \(x_{i}(t)\). The jump, either to the left or to the right, is by a random fraction \(\eta_{i}\) of the space available between the particle and its neighbour. This means that when the particle jumps to the right, it jumps by an amount \(\eta_{i}\left[x_{i+1}(t)-x_{i}(t)\right]\), whereas a jump to the left takes place by an amount \(\eta_{i}\left[x_{i-1}(t)-x_{i}(t)\right]\). The jump fraction \(\eta_{i}\in[0,1)\) is a random variable drawn independently from the distribution \(R(\eta)\) and characterized by the moments \(\mu_{k}=\langle\eta^{k}\rangle\). In contrast to the original MRAP, we see that the motion of a particle in the time interval \([t,t+dt]\) depends on the value of \(\sigma(t)\), which itself depends on its previous history. Thus, every particle performs a non-Markovian active dynamics. A schematic illustration of this model is shown in Figure (1).
We refer to this model as the active random average process (ARAP). Figure 1: Schematic illustration of the random average process with active particles. Each particle has an internal spin denoted by the arrows inside the circles, and the direction of the arrow represents the state of the spin. The spin variable of individual particles changes direction independently with rate \(\gamma\). In a small time interval \(dt\) a particle chosen at random either makes a jump in the direction of the spin with probability \(pdt\) or does not jump with the remaining probability \((1-pdt)\). Every successful jump of a particle, say the \(i^{\text{th}}\) with \(\sigma_{i}=1\), takes place by a random fraction \(\eta_{i}\) of the space available in the jump direction, _i.e._ by an amount \(\eta_{i}\left(x_{i+1}(t)-x_{i}(t)\right)\), where \(\eta_{i}\in[0,1)\) is a random variable drawn from the distribution \(R(\eta_{i})\). The time evolution equation for the position \(x_{i}(t)\) and spin \(\sigma_{i}(t)\) can be written as \[x_{i}(t+dt) =x_{i}(t)+\Gamma_{i}(t), \tag{6}\] \[\sigma_{i}(t+dt) =\begin{cases}-\sigma_{i}(t),&\text{with probability}\quad\gamma dt,\\ \sigma_{i}(t),&\text{with probability}\quad(1-\gamma dt),\end{cases} \tag{7}\] where the increment \(\Gamma_{i}(t)\) reads \[\Gamma_{i}(t)=\begin{cases}\eta_{i}\left[x_{i+1}(t)-x_{i}(t)\right],&\text{with probability}\quad\left(\frac{1+\sigma_{i}(t)}{2}\right)pdt,\\ \eta_{i}\left[x_{i-1}(t)-x_{i}(t)\right],&\text{with probability}\quad\left(\frac{1-\sigma_{i}(t)}{2}\right)pdt,\\ 0,&\text{with probability}\quad(1-pdt).\end{cases} \tag{8}\]
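The update rules (6)-(8) translate directly into a Monte Carlo scheme. Purely as an illustration (this is a minimal sketch, not the simulation code used for the figures; a random sequential update in the spirit of Appendix D is assumed, with \(R(\eta)\) taken uniform on \([0,1)\)), one can write:

```python
import numpy as np

rng = np.random.default_rng(1)

def arap_simulate(N=200, a=1.0, p=1.0, gamma=1.0, t_max=100.0, dt=0.01):
    """Minimal sketch of the ARAP dynamics of Eqs. (6)-(8) on a ring of N
    particles (periodic boundaries mimic the infinite line), with R(eta)
    uniform on [0,1) purely for illustration. Returns displacements z_i(t_max)."""
    x = a * np.arange(N)                    # quenched initial condition, Eq. (3)
    sigma = rng.choice([-1, 1], size=N)     # random initial spins, Eqs. (4)-(5)
    L = N * a                               # ring length
    z = np.zeros(N)                         # displacements z_i = x_i(t) - x_i(0)
    for _ in range(int(t_max / dt)):
        # spin flips at rate gamma, Eq. (7)
        flip = rng.random(N) < gamma * dt
        sigma[flip] *= -1
        # random-sequential jump attempts at rate p, Eq. (8)
        for i in rng.permutation(N):
            if rng.random() < p * dt:
                j = (i + sigma[i]) % N                    # neighbour in the spin direction
                gap = (sigma[i] * (x[j] - x[i])) % L      # available space (>= 0)
                step = rng.random() * gap * sigma[i]      # eta uniform on [0,1)
                x[i] += step
                z[i] += step
    return z
```

Averaging \(z_i^2\) over particles and over independent realisations then gives a numerical estimate of the tagged-particle MSD discussed below.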
* We next study the temporal behaviour of the MSD \(\langle z_{0}^{2}(t)\rangle\) of a tagged particle. Similar to the MRAP, we find a crossover of \(\langle z_{0}^{2}(t)\rangle\) from linear growth at small times to sub-diffusive growth (\(\sqrt{t}\)) at large times as \[\langle z_{0}^{2}(t)\rangle\simeq\begin{cases}\mu_{2}a^{2}t,&\text{for small }t,\\ \zeta_{1}\sqrt{t}+\zeta_{2},&\text{for large }t.\end{cases} \tag{13}\] However, in contrast to the Markov case, the constants \(\zeta_{1}\) and \(\zeta_{2}\) are found to depend on the higher-order correlation functions for the ARAP as \[\zeta_{1}=\sqrt{\frac{\mu_{1}}{\pi}}\left[2aC_{I}+T_{I}+\frac{a\mu_{2}}{\mu_{1}-\mu_{2}}\left\{a+2C_{1}(t\rightarrow\infty)\right\}\right], \tag{14}\] \[\text{where}\ \ C_{I}=\sum_{i=-\infty}^{\infty}C_{i}(t\rightarrow\infty),\quad T_{I}=\sum_{i=-\infty}^{\infty}T_{i}(t\rightarrow\infty), \tag{15}\] \[\text{and}\ \ \zeta_{2}=-\frac{\mu_{2}\zeta_{1}}{4(\mu_{1}-\mu_{2})}\ \sqrt{\frac{\pi}{\mu_{1}}}. \tag{16}\] Here \(C_{I}\) and \(T_{I}\) are constants that depend on the large-time saturation values of the two-point correlation \(C_{i}(t)\) and the three-point correlation \(T_{i}(t)\) defined in Eq. (27). In the Markovian limit (\(\gamma\rightarrow\infty\)), both of them vanish and we recover Eq. (1) for \(\langle z_{0}^{2}(t)\rangle\). However, for finite \(\gamma\), these higher-order correlations are non-vanishing and we observe a substantial enhancement of the MSD in comparison to the non-persistent case (see Figure (6)). 
* We mentioned earlier that the hierarchy of the correlation functions does not close. This is also seen in the time evolution equation of the three-point correlations \(T_{i}(t)\), which reveals that they depend on the four-point correlations which, in turn, depend on still higher-point correlations. We numerically demonstrate that any decoupling approximation to break this hierarchy, such as decomposing the four-point functions into lower-point correlation functions, does not provide a good approximation (see Appendix A). 
* We also compute the position correlation \(g_{i}(t)=\langle z_{0}(t)z_{i}(t)\rangle\) and find the following scaling behaviour at large times: \[g_{i}(t)\simeq\zeta_{1}\sqrt{t}\ f\left(\frac{|i|}{\sqrt{4\mu_{1}t}}\right), \tag{17}\] where \(f(y)\) is the same scaling function given in Eq. (2). Once again we notice that while the scaling function \(f(y)\) is the same as in the Markov case [59], the pre-factor \(\zeta_{1}\) in Eq. (17) is different and therefore carries the effect of the persistent dynamics of the active particles.
* The previous results are derived for the quenched uniform initial condition where the initial positions of the particles remain fixed for different realisations. We also investigate the variance and the correlations with the steady state initial condition. For this case, we first evolve the system till time \(t_{0}\) and then start measuring the position till further time \((t_{0}+t)\). Observe that the position of the particle at the onset of the measurement is different for different realisations. Taking \(t_{0}\to\infty\), we obtain the MSD and correlation in the steady state to be \[l_{0}(t) =\lim_{t_{0}\to\infty}\langle[x_{0}(t_{0}+t)-x_{0}(t_{0})]^{2} \rangle\simeq\zeta_{1}\sqrt{2t},\] (18) \[l_{i}(t) =\lim_{t_{0}\to\infty}\langle[x_{i}(t_{0}+t)-x_{i}(t_{0})]\,[x_{0 }(t_{0}+t)-x_{0}(t_{0})]\rangle,\] \[\simeq\zeta_{1}\sqrt{2t}\;f\left(\frac{|i|}{\sqrt{2\mu_{1}t}} \right).\] (19) Both these results are valid only for large \(t\). Once again, compared to the MRAP, the persistent nature only affects the coefficient \(\zeta_{1}\) but does not change the sub-diffusive exponent. Another interesting observation is that for ARAP also, the ratio of the MSDs in the annealed and quenched initial settings \(l_{0}(t)/g_{0}(t)\) is equal to \(\sqrt{2}\) at large times, a condition valid for the Markov case [59]. This means while both \(l_{0}(t)\) and \(g_{0}(t)\) individually change due to the persistent dynamics, their ratio is still fixed to the value \(\sqrt{2}\), same as the Markov case, even at finite \(\gamma\). ## 3 Correlation \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle\) Let us begin by computing the spin-position correlation function \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle\) which will be useful in calculating the correlations and fluctuations of the tagged particles later. First note that due to the symmetry of our model, one also has \(C_{i}(t)=\langle z_{0}(t)\sigma_{i}(t)\rangle\). In order to evaluate the changes in \(C_{i}(t)\) in small time \(dt\), let us look at the different contributions to \(C_{i}(t+dt)=\langle z_{i}(t+dt)\sigma_{0}(t+dt)\rangle\). Following the update rule for \(z_{i}(t)\) in (9) and for \(\sigma_{i}(t)\) in (7), it is easy to see that \(C_{i}(t+dt)\) up to linear order in \(dt\) is given by \[C_{i}(t+dt) \simeq-\gamma dt\;\langle z_{i}(t)\sigma_{0}(t)\rangle+(1-\gamma dt )\;\langle z_{i}(t+dt)\sigma_{0}(t)\rangle,\] \[\simeq C_{i}(t)-2\gamma C_{i}(t)\Delta t+\frac{\mu_{1}dt}{2}\left[C _{i+1}(t)+C_{i-1}(t)-2C_{i}(t)+2a\delta_{i,0}\right]\] \[\qquad\qquad+\frac{\mu_{1}dt}{2}\left[\langle\sigma_{0}(t)\sigma _{i}(t)z_{i+1}(t)\rangle-\langle\sigma_{0}(t)\sigma_{i}(t)z_{i-1}(t)\rangle \right], \tag{20}\] where \(\mu_{k}=\langle\eta^{k}\rangle=\int_{0}^{1}\eta^{k}R(\eta)d\eta\). Taking \(dt\to 0\) limit, one arrives at the following equation for \(C_{i}(t)\): \[\frac{dC_{i}(t)}{dt} =-2\gamma C_{i}(t)+\frac{\mu_{1}}{2}\left[C_{i+1}(t)+C_{i-1}(t)-2C _{i}(t)+2a\delta_{i,0}\right]\] \[\qquad\qquad+\frac{\mu_{1}}{2}\left[\langle\sigma_{0}(t)\sigma_{i }(t)z_{i+1}(t)\rangle-\langle\sigma_{0}(t)\sigma_{i}(t)z_{i-1}(t)\rangle \right]. \tag{21}\] While this is an exact time evolution equation, it is not closed due to the presence of higher point correlations. In fact, this turns out to be a general property of the persistent case that the dynamics of any correlation function requires knowledge of higher order correlation functions. This makes the problem analytically challenging. 
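Since the hierarchy does not close, the analytical treatment that follows is compared against direct simulation (Figures 2 and 3). For completeness, here is a sketch of how \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle\) could be estimated from Monte Carlo data; the arrays are assumed to come from runs of the dynamics (e.g. a variant of the sketch after Eq. (8) that also records the spins), and the helper name is ours:

```python
import numpy as np

def estimate_C(z: np.ndarray, sigma: np.ndarray, i_max: int = 10) -> np.ndarray:
    """Estimate C_i(t) = <z_i(t) sigma_0(t)> for i = 0..i_max.

    z, sigma : arrays of shape (n_realisations, N) holding the displacements
    and spins of all particles at time t (periodic labelling assumed).
    Translation invariance is used to average over the reference particle."""
    C = np.empty(i_max + 1)
    for i in range(i_max + 1):
        # <z_{j+i} sigma_j>, averaged over realisations and over all reference j
        C[i] = np.mean(np.roll(z, -i, axis=1) * sigma)
    return C
```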
However, as often done, one can make progress by performing the mean field approximation under which we break the three point correlation in Eq. (21) into a product of lower order correlations. The validity of this approximation will be discussed later. Under this approximation, Eq. (21) simplifies to \[\frac{dC_{i}(t)}{dt}=-2\gamma C_{i}(t)+\frac{\mu_{1}}{2}\left[C_{i+1}(t)+C_{i- 1}(t)-2C_{i}(t)+2a\delta_{i,0}\right]. \tag{22}\] One needs to solve this equation with the initial condition \(C_{i}(0)=0\) because \(z_{i}(0)=0\) by definition. In Appendix B, we have explicitly solved this equation and obtained the expression of \(C_{i}(t)\) as \[C_{i}(t)=\mu_{1}a\int_{0}^{t}e^{-(2\gamma+\mu_{1})\tau}\;I_{[i]}(\mu_{1}\tau)\;d\tau. \tag{23}\] where \(I_{i}(y)\) denotes the modified Bessel function. We also see \(C_{i}(t)=C_{-i}(t)\) since the dynamics of the \(0^{\rm th}\) particle experiences (statistically) same contributions from particles on its either sides. It turns out that for later calculations, one needs to specify \(C_{i}(t\to\infty)\) which can be easily computed from Eq. (23). Once again we refer to Appendix B for details on this calculation and quote only the final result here as \[C_{i}(t\to\infty)=\frac{\mu_{1}a}{2\sqrt{\gamma^{2}+\mu_{1} \gamma}}\,\exp[-|i|/\xi],\qquad\mbox{with} \tag{24}\] \[\xi^{-1}=\log\left[\frac{2\gamma+\mu_{1}+2\sqrt{\gamma^{2}+\mu_{ 1}\gamma}}{\mu_{1}}\right]. \tag{25}\] In Figures (2) and (3), we have compared our analytical results based on the mean field approximations with numerical simulations for different values of \(\gamma\). From this comparison, we find that Eq. (24) matches with the numerics only for moderate and large values of \(\gamma\) [see Figure (3)]. However, for small \(\gamma\), our results deviate significantly as seen for \(\gamma=0.1\). This is because, at smaller values of \(\gamma\), the effect of activity is so strong that the mean field (decoupling) approximation fails and one cannot really neglect the three-point (connected) correlation. In what follows, we show that the knowledge of the spin-position correlation \(C_{i}(t)\) is essential to compute the fluctuations and correlations of the displacements of the tagged particles for active random average process. Figure 2: Comparison of the theoretical expression of the correlation \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle\) in Eq. (23) with the numerical simulation for (a) \(i=0,\;\gamma=1\) (left panel) and (b) \(i=0,\;\gamma=2\) (right panel). In both panels, insets show same comparison for \(i=1\) with same set of parameters. Simulation is conducted with \(N=201\) particles. ## 4 Mean squared displacement and correlations in the quenched initial condition We now look at the mean squared displacement and equal time correlations of the positions of the tagged particles when their initial positions are fixed as given in Eq. (3). However, the initial spin \(\sigma_{i}(0)\) can still fluctuate and take values \(\sigma_{i}(0)=\pm 1\) with equal probability \(1/2\) independently for individual particles. First notice that due to the translational symmetry in our model, the correlation \(\langle z_{i}(t)z_{j}(t)\rangle\) will depend only on the separation \(|i-j|\). Therefore, without any loss of generality, we put \(j=0\) and denote the correlation \(\langle z_{0}(t)z_{i}(t)\rangle\) by \(g_{i}(t)\). At a small time interval \([t,t+dt]\), we evaluate \(g_{i}(t+dt)=\langle z_{0}(t+dt)z_{i}(t+dt)\rangle\) using the update rule in Eq.(9). 
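Before continuing with the evolution equation for \(g_{i}(t)\) (taken up just below), we note that Eqs. (23)-(24) are straightforward to check numerically; a short sketch (Python with SciPy, parameter values chosen only for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def C_mean_field(i: int, t: float, gamma: float, mu1: float, a: float) -> float:
    """Numerical evaluation of the mean-field result (23):
    C_i(t) = mu1 * a * int_0^t exp(-(2*gamma + mu1)*tau) * I_|i|(mu1*tau) dtau."""
    integrand = lambda tau: np.exp(-(2 * gamma + mu1) * tau) * iv(abs(i), mu1 * tau)
    val, _ = quad(integrand, 0.0, t)
    return mu1 * a * val

def C_saturation(i: int, gamma: float, mu1: float, a: float) -> float:
    """Large-time limit, Eqs. (24)-(25)."""
    xi_inv = np.log((2 * gamma + mu1 + 2 * np.sqrt(gamma**2 + mu1 * gamma)) / mu1)
    return mu1 * a / (2 * np.sqrt(gamma**2 + mu1 * gamma)) * np.exp(-abs(i) * xi_inv)

# sanity check: the time integral approaches the saturation value
# (illustrative values gamma = 1, mu1 = 1/2, a = 1)
for i in (0, 1, 2):
    print(i, C_mean_field(i, 50.0, 1.0, 0.5, 1.0), C_saturation(i, 1.0, 0.5, 1.0))
```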
Keeping all terms up to linear order in \(dt\), one finds the following evolution equation for \(i\neq 0\) \[g_{i}(t+dt)\simeq g_{i}(t)+\mu_{1}dt\Big{[}g_{i+1}(t)+g_{i-1}(t)-2g_{i}(t)+2aC _{i}(t)+T_{i}(t)\Big{]}, \tag{26}\] where \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle=\langle z_{0}(t)\sigma_{i}(t)\rangle\), and \(T_{i}(t)\) is a combination of the three-point correlations defined as \[T_{i}(t)=\frac{1}{2}\Big{[}\langle z_{0}(t)z_{i+1}(t)\sigma_{i}(t)\rangle- \langle z_{-1}(t)z_{i}(t)\sigma_{0}(t)\rangle+\langle z_{1}(t)z_{i}(t)\sigma_ {0}(t)\rangle-\langle z_{0}(t)z_{i-1}(t)\sigma_{i}(t)\rangle\Big{]}. \tag{27}\] On the other hand, the same procedure for \(i=0\) yields \[g_{0}(t+dt)\simeq g_{0}(t)+\mu_{1}dt\Big{[}g_{1}(t)+g_{-1}(t)-2g_{0}(t)+2aC_{0}(t)+T_ {0}(t)\Big{]}\] \[+\mu_{2}dt\left[a^{2}+2g_{0}(t)-2g_{1}(t)+2aC_{1}(t)-2aC_{0}(t)- T_{0}(t)\right]. \tag{28}\] Figure 3: Numerical verification of the correlation \(C_{i}(t)=\langle z_{i}(t)\sigma_{0}(t)\rangle\) at \(t\to\infty\) for different values of \(\gamma\). Solid lines represent the analytical expression in Eq. (24) and symbols are the simulation data. For very small \(\gamma\), we observe deviation of the numerical data from the theoretical expression because the mean-field approximation (used in the analytical calculation) becomes less and less valid as \(\gamma\) becomes small. Combining Eqs. (26) and (28) and taking \(dt\to 0\) limit, we find \[\frac{dg_{i}(t)}{dt}= \mu_{1}\Big{[}g_{i+1}(t)+g_{i-1}(t)-2g_{i}(t)+2aC_{i}(t)+T_{i}(t) \Big{]}\] \[+\delta_{i,0}\ \mu_{2}\left[a^{2}+2g_{0}(t)-2g_{1}(t)+2aC_{1}(t)-2aC_{0}( t)-T_{0}(t)\right], \tag{29}\] for all \(i=-\infty,...,-1,0,1,...\infty\). For the original MRAP, the corresponding equation was derived in [59]. Unlike in the Markov case, once again we find that Eq. (29) does not satisfy the closure property and involves higher order correlations in the form of \(T_{i}(t)\). Also the function \(C_{i}(t)\) needs the knowledge of higher order correlations as illustrated in Eq. (21). Overall this makes the computation of \(g_{i}(t)\) for the persistent case rather challenging. To proceed, we perform the joint Fourier-Laplace transformation \[\bar{g}(q,t)=\sum_{i=-\infty}^{\infty}e^{itiq}\ g_{i}(t), \mathcal{G}(q,s)=\int_{0}^{\infty}dt\ e^{-st}\ \bar{g}(q,t), \tag{30}\] \[\bar{C}(q,t)=\sum_{i=-\infty}^{\infty}\ e^{itiq}C_{i}(t), \mathcal{C}(q,s)=\int_{0}^{\infty}dt\ e^{-st}\ \bar{c}(q,t),\] (31) \[\bar{T}(q,t)=\sum_{i=-\infty}^{\infty}e^{itiq}\ T_{i}(t), \mathcal{T}(q,s)=\int_{0}^{\infty}dt\ e^{-st}\ \bar{T}(q,t), \tag{32}\] where \(\iota^{2}=-1\) and plug them in Eq. (29) to obtain \[\mathcal{G}(q,s)=\frac{\mu_{2}}{s+\beta(q)}\left[\frac{a^{2}}{s} +2\left(\tilde{g}_{0}(s)-\tilde{g}_{1}(s)\right)+2a\left(\tilde{C} _{1}(s)-\tilde{C}_{0}(s)\right)-\tilde{T}_{0}(s)\right]\] \[+\frac{\mu_{1}}{s+\beta(q)}\left[2aC(q,s)+\mathcal{T}(q,s) \right], \tag{33}\] Figure 4: (a) Comparison of the mean squared displacement \(\langle z_{0}^{2}(t)\rangle\) with the numerical simulation for small values of \(t\). Solid lines represent the analytical expression in Eq. (39) and symbols are from simulation. (b) Numerical verification of the crossover behaviour of \(\langle z_{0}^{2}(t)\rangle\) from linear growth at small times to sub-diffusive \(\sim\sqrt{t}\) growth at late times. The corresponding expressions are given in Eq. (39) for small \(t\) and in Eq. (46) for large \(t\). 
where \(\beta(q)=2\mu_{1}(1-\cos(q))\) and the functions \(\tilde{g}_{i}(s)\), \(\tilde{C}_{i}(s)\) and \(\tilde{T}_{i}(s)\) denote the Laplace transformations of \(g_{i}(t)\), \(C_{i}(t)\) and \(T_{i}(t)\) respectively. For given \(\tilde{C}_{i}(s)\) and \(\tilde{T}_{i}(s)\), Eq. (33) has two unknowns, namely \(\tilde{g}_{0}(s)\) and \(\tilde{g}_{1}(s)\). To get rid of \(\tilde{g}_{1}(s)\), we take the Laplace transformation of Eq. (29) for \(i=0\) and get \[\tilde{g}_{0}(s)-\tilde{g}_{1}(s)=\frac{a\mu_{2}\tilde{C}_{1}(s)}{\mu_{1}-\mu _{2}}+a\tilde{C}_{0}(s)+\frac{\tilde{T}_{0}(s)}{2}+\frac{\mu_{2}a^{2}}{2s(\mu_ {1}-\mu_{2})}-\frac{s\;\tilde{g}_{0}(s)}{2(\mu_{1}-\mu_{2})}. \tag{34}\] Inserting this in Eq. (33), we obtain \[\mathcal{G}(q,s)=\frac{\mu_{2}(\mu_{1}-\mu_{2})^{-1}}{s+\beta(q)}\left[\frac {\mu_{1}a^{2}}{s}+2a\mu_{1}\tilde{C}_{1}(s)-\frac{s\tilde{g}_{0}(s)}{p}\right] +\frac{\mu_{1}}{s+\beta(q)}\left[2a\mathcal{C}(q,s)+\mathcal{T}(q,s)\right]. \tag{35}\] We now have to specify only \(\tilde{g}_{0}(s)\) to calculate \(\mathcal{G}(q,s)\). However, the Laplace transform \(\tilde{g}_{0}(s)\) can be obtained self-consistently from Eq. (35) by integrating \(\mathcal{G}(q,s)\) with respect to \(q\). Then plugging back \(\tilde{g}_{0}(s)\) in Eq. (35) gives the function \(\mathcal{G}(q,s)\) exactly. Below, we analyse this equation first to calculate the mean squared displacement \(g_{0}(t)\) and then the equal time correlation. Figure 5: Numerical plot of \(\sum_{i}T_{i}(t)\) for \(\gamma=1\), where \(T_{i}(t)\) is defined in Eq. (27). At late times, we find that this sum saturates to a non-zero value \(\sum_{i}T_{i}(t)\simeq 0.438\) which is used in Eq. (14) while evaluating \(g_{0}(t)\). ### Mean-squared displacement \(g_{0}(t)=\langle z_{0}^{2}(t)\rangle\) The MSD can be obtained by taking the inverse Fourier transform \(\tilde{g}_{0}(s)=\int_{-\pi}^{\pi}\frac{dq}{2\pi}\ \mathcal{G}(q,s)\) with the expression of \(\mathcal{G}(q,s)\) is given in Eq. (35). We then obtain \[\tilde{g}_{0}(s)\left[1+\frac{\mu_{2}sY(s)}{\mu_{1}-\mu_{2}}\right]= \mu_{1}W(s)+\frac{\mu_{1}\mu_{2}\ Y(s)}{\mu_{1}-\mu_{2}}\left[ \frac{a^{2}}{s}+2a\tilde{C}_{1}(s)\right], \tag{36}\] \[\text{where}\quad\quad Y(s)= \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{dq}{s+\beta(q)}=\frac{1}{ \sqrt{s^{2}+4\mu_{1}s}},\] (37) \[\text{and}\quad\quad W(s)= \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ \frac{2a\mathcal{C}(q,s)+\mathcal{T}(q,s)}{s+\beta(q)}. \tag{38}\] This equation formally gives the exact MSD of the position of the tagged particle in the Laplace domain given the two and three point correlations \(C_{i}(t)\) and \(T_{i}(t)\) are known. Though the functions \(\mathcal{C}(q,s)\) and \(\mathcal{T}(q,s)\) are not known exactly, one can still derive some scaling behaviours of \(g_{0}(t)\) for different values of \(t\). For small \(t\), one has \(C_{i}(t\to 0)=0\) and \(T_{i}(t\to 0)=0\) because at very small times, the displacement is negligibly small. In the Laplace domain, this implies \(\left[s\tilde{C}_{i}(s)\right]_{s\to\infty}=0\) and \(\left[s\tilde{T}_{i}(s)\right]_{s\to\infty}=0\). Hence, the function \(W(s)\) in Eq. (38) decays faster than \(\sim 1/s\) as \(s\to\infty\). On the other hand, from Eq. (37), we see \(Y(s\to\infty)\simeq 1/s\). Using these approximations in Eq. (36), we obtain \(\tilde{g}_{0}\left(s\to\infty\right)\simeq\mu_{2}a^{2}/s^{2}\) which in the time domain gives \[g_{0}(t)=\langle z_{0}^{2}(t)\rangle\simeq\mu_{2}a^{2}t,\quad\text{ as }t\to 0. 
\tag{39}\] This linear growth of the MSD at small times has been numerically verified in Figure (4) (left panel). It is easy to understand the small \(t\) asymptotic from the following physical reasoning. At small times, the leading order contribution to the MSD comes from those realisations where the tagged particle has made one jump while the other particles have not moved at all. The probability of observing such event is \(t\) (\(pt\) in term of the unscaled time). Now the particle jumps by a random amount \(\pm\eta_{0}a\) depending on its spin \(\sigma_{0}(0)\). Since \(\sigma_{0}(0)=\pm 1\) with equal probability \(1/2\), we obtain the MSD \(\langle z_{0}^{2}(t)\rangle=\mu_{2}a^{2}\times t=\mu_{2}a^{2}t\) as given in Eq. (39). Next we focus on the large-\(t\) behaviour of the MSD from Eq. (36). Recall that for the MRAP, the MSD scales sub-diffusively as \(\sim\sqrt{t}\) with a prefactor that depends on the model parameters [see Eq. (1)]. To see this for the non-Markov active case, we perform the small-\(s\) expansion of \(\tilde{g}_{0}(s)\) in Eq. (36) for which we need to specify \(Y(s\to 0)\), \(\tilde{C}_{1}(s\to 0)\) and \(W(s\to 0)\). Expression of \(Y(s)\) for small \(s\) follows easily from Eq. (37) as \(Y(s\to 0)\simeq 1/\sqrt{4\mu_{1}s}\). As mentioned earlier, due to the hierarchical dependence of the correlations it is difficult to find the small-\(s\) behaviour of \(\tilde{C}_{1}(s\to 0)\) and \(W(s\to 0)\). We, however, numerically observe that the correlations \(C_{i}(t)\) and \(\sum_{i}T_{i}(t)\) at late time saturate to an \(i\)-dependent value and become time independent. This is numerically illustrated in Figures (3) and (5). Hence for small \(s\), one gets \(\tilde{C}_{1}(s\to 0)\simeq C_{1}(t\to\infty)/s\) and \[W(s\to 0)\simeq\frac{Y(s)}{s}\ \left[2aC_{I}+T_{I}\right], \tag{40}\] where we identify \[\begin{split}\bar{C}(q\to 0,t\to\infty)=\sum_{i}C_{i}(t\to \infty)=C_{I},\\ \bar{T}(q\to 0,t\to\infty)=\sum_{i}T_{i}(t\to\infty)=T_{I}\end{split} \tag{41}\] as defined in Eq. (15). These constants can be obtained from the saturation values of \(C_{i}(t)\) and \(\sum_{i}T_{i}(t)\) at large \(t\) [see Figures. (3) and (5)]. Furthermore from Eq. (37), it is easy to see that \(Y(s)\simeq\frac{1}{\sqrt{4\mu_{1}s}}\) for small \(s\). Plugging these small \(s\)-asymptotics in Eq. (36), we find that the Laplace transform \(\tilde{g}_{0}(s)\) reads \[\tilde{g}_{0}(s)\simeq\frac{\sqrt{\pi}\zeta_{1}}{2s^{3/2}},\quad\text{ as }s\to 0, \tag{42}\] where the constant \(\zeta_{1}\) is given explicitly in Eq. (14). We emphasize that this expression is exact at late times and does not involve any approximation. However, it can be simplified further by using the approximate expressions of \(C_{i}(t)\) at large \(t\) given in Eq. (24). Under this approximation, it is easy to compute \(C_{I}=\bar{C}(q\to 0,t\to\infty)\) by taking \(t\to\infty\) limit of Eq. (75) at \(q=0\) and one finds \[C_{I}\simeq\frac{\mu_{1}a}{2\gamma}, \tag{43}\] plugging which in Eq. (14), one finds the following simpler expression for \(\zeta_{1}\) \[\zeta_{1}\simeq\sqrt{\frac{\mu_{1}}{\pi}}\left[\frac{a^{2}\mu_{1}}{\gamma}+T_ {I}+\frac{a^{2}\mu_{2}}{\mu_{1}-\mu_{2}}\left\{1+\frac{\mu_{1}}{\sqrt{\gamma^{ 2}+\mu_{1}\gamma}}\,\exp[-1/\xi]\right\}\right], \tag{44}\] where \(\xi^{-1}\) is given in Eq.(12). It is now straightforward to perform the inverse Laplace transformation of Eq. (42) and obtain \[g_{0}(t)=\langle z_{0}^{2}(t)\rangle\simeq\zeta_{1}\sqrt{t}\quad\text{ as }t\to\infty. 
\tag{45}\] This gives the leading order contribution to the MSD at large times. One can also obtain the sub-leading term which just turns out to be a constant. To maintain continuity of our presentation, we relegate this calculation to Appendix C and present only the final result as \[g_{0}(t)=\langle z_{0}^{2}(t)\rangle\simeq\zeta_{1}\sqrt{t}+\zeta_{2}\quad \text{ as }t\to\infty, \tag{46}\] with \(\zeta_{1}\) and \(\zeta_{2}\) given in Eqs. (14) and (16), respectively. In conjunction to the Markov case, we find that the MSD for the persistent active case also scales sub-diffusively as \(\sim\sqrt{t}\) at large times. However the coefficient \(\zeta_{1}\) is different than the corresponding expression for the non-persistent case in Eq. (1). While for \(\gamma\to\infty\), the two coefficients converge, we see a clear difference between them for finite \(\gamma\). This difference is also illustrated in Figure (6) where we have also compared with the numerical simulations. This implies that at large times, the tracer particle performs sub-diffusion with exponent \(1/2\) for both Markov as well as the active RAP. But the persistent nature of the active particles enhances the coefficient of the MSD. For the dynamics of a tracer particle in an active single-file system, such a difference in the MSD with respect to the Markov case was also numerically observed recently in [49]. Herein, we are able to establish this analytically based on the mean field approximations. Expectedly, this approximation breaks down for small values of \(\gamma\) and one then needs to consider the exact form of \(\zeta_{1}\). Later, we show that Eq. (42) still remains valid for small \(\gamma\) and a number of results can still be derived. To summarize, we have shown that the MSD \(\langle z_{0}^{2}(t)\rangle\) exhibits a crossover from the diffusive scaling at small times to the sub-diffusive (\(\simeq\zeta_{1}\sqrt{t}\)) scaling at large times with the coefficient \(\zeta_{1}\) containing the effect of the persistent motion of active particles. This crossover behaviour has been shown in Figure (4) (right panel). Before we end this section, we remark that we have used the random sequential update scheme to perform the numerical simulation. For completeness, we have provided the details of this scheme in Appendix D. We use this numerical strategy to verify other analytical results throughout the paper. ### Correlation \(g_{i}(t)=\langle z_{0}(t)z_{i}(t)\rangle\) We now look at the equal time correlation of the positions of two tagged particles. For this, we have to perform the inverse Fourier transformation \(\tilde{g}_{i}(s)=\int_{-\pi}^{\pi}\frac{dq}{2\pi}\ e^{-iq}\ \mathcal{G}(q,s)\) for arbitrary \(i\) and use \(\mathcal{G}(q,s)\) from Eq. (35). We then obtain \[\tilde{g}_{i}(s)= \mu_{1}\ W_{i}(s)+\frac{\mu_{2}\ Y_{i}(s)}{\mu_{1}-\mu_{2}}\left[ \frac{\mu_{1}a^{2}}{s}+2a\mu_{1}\tilde{C}_{1}(s)-s\tilde{g}_{0}(s)\right], \tag{47}\] \[\text{where}\quad\quad Y_{i}(s)= \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ \frac{e^{-iiq}}{s+\beta(q)}\] (48) \[\text{and}\quad\quad W_{i}(s)= \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ e^{-iq}\ \left[\frac{2a \mathcal{C}(q,s)+\mathcal{T}(q,s)}{s+\beta(q)}\right], \tag{49}\] with \(\mathcal{C}(q,s)\) and \(\mathcal{T}(q,s)\) defined in Eq. (31) and Eq. (32), respectively. As discussed for the MSD, carrying out the integration over \(q\) in these expressions turns out to be difficult. 
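While closed forms for these \(q\)-integrals are not attempted here, the integrals themselves are easy to evaluate numerically, which is useful for checking the asymptotics that follow; a minimal sketch (Python/SciPy, illustrative parameter values):

```python
import numpy as np
from scipy.integrate import quad

def Y_i(i: int, s: float, mu1: float) -> float:
    # Eq. (48): Y_i(s) = (1/2pi) * int_{-pi}^{pi} e^{-iota*i*q} / (s + beta(q)) dq,
    # with beta(q) = 2*mu1*(1 - cos q); by symmetry only the cosine part survives
    integrand = lambda q: np.cos(i * q) / (s + 2.0 * mu1 * (1.0 - np.cos(q)))
    val, _ = quad(integrand, -np.pi, np.pi, limit=200)
    return val / (2.0 * np.pi)

mu1, s = 0.5, 1e-3
print(Y_i(0, s, mu1), 1.0 / np.sqrt(s**2 + 4.0 * mu1 * s))  # should match Eq. (37)
print([round(Y_i(i, s, mu1), 3) for i in (1, 2, 5)])         # decay with |i|
```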
However, for small \(s\) (which corresponds to large \(t\) in the time domain), one can still perform this integration approximately which then substantially simplifies the expression of \(\tilde{g}_{i}(s)\) in Eq. (47). In Appendix E, we have shown that for \(s\to 0\), the functions \(Y_{i}(s)\) and \(W_{i}(s)\) behave as \[Y_{i}(s) \simeq\frac{1}{\sqrt{4\mu_{1}s}}\,\exp\left[-|i|\sqrt{\frac{s}{ \mu_{1}}}\right], \tag{50}\] \[W_{i}(s) \simeq\frac{1}{\sqrt{4\mu_{1}s^{3}}}\ \left[2a\bar{C}(q\to 0,t \rightarrow\infty)+\bar{T}(q\to 0,t\rightarrow\infty)\right]. \tag{51}\] In addition to these quantities, we also need \(\tilde{g}_{0}(s)\) and \(\tilde{C}_{1}(s)\) for small \(s\) to evaluate \(\tilde{g}_{i}(s)\) in Eq. (47). For this, we use Eq. (42) to get \(\tilde{g}_{0}(s\to 0)\sim s^{-3/2}\) and Eq. (24) to get Figure 6: Comparison of theoretical expression of the mean squared displacement \(g_{0}(t)=\langle z_{0}^{2}(t)\rangle\) given in Eq. (46) with the same obtained from numerical simulation for \(\gamma=1\) (left panel) and \(\gamma=2\) (right panel). To contrast our result, we have also plotted \(\langle z_{0}^{2}(t)\rangle\) for the Markov case whose expression is given in Eq. (1) [solid black line labeled by MRAP]. For comparison, we have taken \(T_{I}\) is \(0.438\) for left panel and \(0.216\) for right panel. The steady-state value of the correlator \(C_{i}(t\rightarrow\infty)\) is approximated by the formula as given in Eq. (24). For both plots, simulation is performed with \(N=501\) particles. \(\tilde{C}_{1}(s)\simeq C_{1}(t\to\infty)/s\). Using these approximations in Eq. (47), we get the leading order behaviour of \(\tilde{g}_{i}(s)\) as \[\tilde{g}_{i}(s)\simeq\frac{\sqrt{\pi}\zeta_{1}}{2s^{3/2}}\,\exp\left[-|i|\sqrt{ \frac{s}{\mu_{1}}}\right],\ \ \ \ \ \text{as}\ s\to 0. \tag{52}\] To get the correlation \(g_{i}(t)\), we now use the following standard Laplace transformation [59]: \[\int_{0}^{\infty}dt\ e^{-st}\sqrt{\frac{t}{\pi}}\left[e^{-\frac{b^{2}}{4t}}- \frac{b\sqrt{\pi}}{2\sqrt{t}}\ \text{Erfc}\left(\frac{b}{2\sqrt{t}}\right)\right]=\frac{e^{-b\sqrt{s}}}{2s^{3 /2}},\ \ \ \ \ \text{with}\ b\geq 0. \tag{53}\] Plugging this in Eq. (52), we find that \(g_{i}(t)\) satisfies the scaling relation \[g_{i}(t)=\langle z_{0}(t)z_{i}(t)\rangle\simeq\zeta_{1}\sqrt{t}\ f\left(\frac {|i|}{\sqrt{4\mu_{1}t}}\right), \tag{54}\] with the scaling function \(f(y)\) given in Eq. (2). Once again, we observe that the correlation \(g_{i}(t)\) is characterised by the same scaling function \(f(y)\) as the MRAP in Eq. (1). However, in conjunction to the MSD, here also the signature of activity is found in the coefficient \(\zeta_{1}\) that arises in the scaling relation. This means while the scaling function \(f(y)\) is same for the two cases, the overall scaling form is slightly different for any finite \(\gamma\). In Figure (7), we have numerically illustrated this scaling behaviour for three different values of \(t\) and for two different values of \(\gamma\). For all cases, the simulation data converge to Eq. (2) under appropriate scaling. ## 5 Mean squared displacement and correlations in the annealed initial condition In the previous sections, we calculated the MSD and the correlations of the positions of tagged particles in the quenched case during which their initial positions are fixed for all realisations Figure 7: Illustration of the scaling behaviour of the correlation \(g_{i}(t)=\langle z_{0}(t)z_{i}(t)\rangle\) in Eq. (54) for \(\gamma=1\) and \(\gamma=2\). 
Symbols are the simulation data for three different times, which converge to the theoretical scaling function \(f(y)\) in Eq. (2). For both plots, the simulation has been done with \(N=501\) particles. For this case of the quenched initial condition [see Eq. (3)], we saw that the persistent nature of the active particles affects the dynamics of a tagged particle even at large times. In this section, we are interested in carrying out this analysis for the annealed case, where we assume that the initial positions are chosen from the stationary state of the system. Consequently, the initial positions of the particles fluctuate across different realisations. Various studies of the single-file motion of passive particles have shown that the fluctuations in the initial positions have a long-term effect on the dynamics of a tracer particle [9, 10]. In particular, the MSD in the annealed initial condition at late times is \(\sqrt{2}\) times that in the quenched initial condition. In the remainder of our paper, we address two main questions: (a) How does the MSD of the persistent particles behave in the annealed case? (b) Are the MSDs for the two cases related in the same way as for non-persistent single-file systems? In order to study the annealed case, we follow the standard technique, which we briefly discuss here [9]. Starting from the quenched initial condition, we evolve the system up to time \(t_{0}\) and then measure the position up to a further time \((t_{0}+t)\). We then define the following two-time correlation function of two tagged particles: \[h_{i}\left(t_{0},t_{0}+t\right) =\langle\left[x_{i}(t_{0}+t)-x_{i}(t_{0})\right]\left[x_{0}(t_{0}+t)-x_{0}(t_{0})\right]\rangle, \tag{55}\] \[=\langle\left[z_{i}(t_{0}+t)-z_{i}(t_{0})\right]\left[z_{0}(t_{0}+t)-z_{0}(t_{0})\right]\rangle, \tag{56}\] \[=g_{i}(t_{0}+t)+g_{i}(t_{0})-2S_{i}(t_{0},t_{0}+t), \tag{57}\] where \(S_{i}(t_{0},t_{0}+t)=\langle z_{0}(t_{0})z_{i}(t_{0}+t)\rangle=\langle z_{i}(t_{0})z_{0}(t_{0}+t)\rangle\). The latter equality can be easily proved by appropriately translating and reflecting the index \(i\). In the limit \(t_{0}\rightarrow\infty\), the two-time correlation function \(h_{i}\left(t_{0},t_{0}+t\right)\) reduces to the position correlation \(l_{i}(t)\) in the annealed initial condition, i.e. \[l_{i}(t) =\lim_{t_{0}\rightarrow\infty}h_{i}\left(t_{0},t_{0}+t\right), \tag{58}\] \[=\lim_{t_{0}\rightarrow\infty}\left[g_{i}(t_{0}+t)+g_{i}(t_{0})-2S_{i}(t_{0},t_{0}+t)\right]. \tag{59}\] This means that we do not need to perform the averaging over the initial positions and can simply use the results for the quenched case. Since, by performing the time shift by \(t_{0}\) and taking \(t_{0}\rightarrow\infty\), we are effectively putting the system into its steady state, we anticipate the two methods to be equivalent. In what follows, we use the relation (59) and compute the correlation and fluctuation in the annealed setting. As is clear from this relation, this reduces to calculating the correlation \(S_{i}(t_{0},t_{0}+t)\), which we carry out below. For this, we again start with the evolution of \(S_{i}(t_{0},t_{0}+t+dt)=\langle z_{0}(t_{0})z_{i}(t_{0}+t+dt)\rangle\) in a small time interval \([t,t+dt]\) and use Eq. (9) to plug in \(z_{i}(t_{0}+t+dt)\).
Following same steps as before, the dynamics of \(S_{i}(t_{0},t_{0}+t)\) can be shown to be \[\frac{\partial S_{i}(t_{0},t_{0}+t)}{\partial t} =\frac{\mu_{1}}{2}\left[S_{i+1}(t_{0},t_{0}+t)+S_{i-1}(t_{0},t_{0} +t)-2S_{i}(t_{0},t_{0}+t)+2aC_{i}(t_{0},t_{0}+t)\right]\] \[+\frac{\mu_{1}}{2}\left[\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)z_ {i+1}(t_{0}+t)\rangle-\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)z_{i-1}(t_{0}+t) \rangle\right]. \tag{60}\] where we have defined \(C_{i}(t_{0},t_{0}+t)=\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)\rangle\). Again, we see that this equation is not closed and involves higher order correlations. Under the mean field approximation, we break these higher correlations as a product of lower order correlations. Unlike in the equal time case, this approximation turns out to be quite good here since at large \(t\), these three point correlations rapidly decay to zero. Thus within this approximation, Eq. (60) can be simplified as \[\frac{\partial S_{i}(t_{0},t_{0}+t)}{\partial t}\simeq\frac{\mu_{1}}{2}\left[S_{i +1}(t_{0},t_{0}+t)+S_{i-1}(t_{0},t_{0}+t)-2S_{i}(t_{0},t_{0}+t)+2aC_{i}(t_{0},t _{0}+t)\right]. \tag{61}\] Now to solve this equation, we have to find the source term \(C_{i}(t_{0},t_{0}+t)\) on the right hand side. Once again this term can be easily calculated using the update rule for \(\sigma_{i}(t)\) in Eq. (7). Since this derivation is exactly same as the previous ones, we have presented it in Appendix F and quote only the final result as \[C_{i}(t_{0},t+t_{0})=C_{i}(t_{0})\ e^{-2\gamma t}, \tag{62}\] where \(C_{i}(t_{0})=\langle z_{0}(t_{0})\sigma_{i}(t_{0})\rangle\) is given in Eq. (23). We now have all terms in the right hand side of Eq. (61) which can now be straightforwardly solved by taking the joint Fourier-Laplace transformation. As shown in Appendix G, we obtain the expression of \(S_{i}(t_{0},t_{0}+t)\) as \[S_{i}(t_{0},t_{0}+t)\simeq\zeta_{1}\sqrt{t_{0}}-\frac{\zeta_{1}\sqrt{t}}{ \sqrt{2}}\ \mathcal{W}\left(\frac{|i|}{\sqrt{2\mu_{1}t}}\right), \tag{63}\] where the constant \(\zeta_{1}\) is given in Eq. (14) and the function \(\mathcal{W}(y)\) is defined as \[\mathcal{W}(y)=e^{-y^{2}}+\sqrt{\pi}y\ \mathrm{Erf(y)}. \tag{64}\] We emphasize that the expression of \(S_{0}(t_{0},t_{0}+t)\) in Eq. (63) is valid only for large \(t_{0}\) and large \(t\) but with their ratio \(t/t_{0}\) fixed to a value much smaller than 1. Below we use this form in Eq. (59) to compute the asymptotic behaviours of the MSD and correlation in the annealed initial setting Figure 8: Numerical verification of the MSD \(l_{0}(t)\) with steady state initial condition for two values of \(\gamma\). The theoretical expression is given in Eq. (65). For simulation, we have first allowed the system to evolve till time \(t_{0}=3000\) and then start measuring the position with \(N=501\) particles. ### Mean squared displacement \(l_{0}(t)\) Using the result in Eq. (63), we get \(S_{0}(t_{0},t_{0}+t)\simeq\zeta_{1}\sqrt{t_{0}}-\zeta_{1}\sqrt{t}/\sqrt{2}\). Plugging this in Eq. (59) along with \(g_{0}(t_{0}\to\infty)\) from Eq. (45), we obtain the MSD \(l_{0}(t)\) as \[l_{0}(t)\simeq\zeta_{1}\sqrt{2t},\qquad\text{for t}\gg 1. \tag{65}\] We have verified this analytic expression against simulations in Figure (8). Few remarks are in order. First, the MSD of a tracer in ARAP again scales sub-diffusively as \(\sim\sqrt{t}\) at large times (reminiscent of the Markov case). 
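The steady-state measurement protocol described above (and used for Figure 8) is simple to reproduce in simulation. The sketch below only illustrates the bookkeeping; `run_until` is a hypothetical helper that advances an ARAP configuration (positions and spins) by a given time, e.g. a wrapper around the dynamics sketched after Eq. (8):

```python
import numpy as np

def annealed_msd(run_until, t0: float, t: float, n_real: int = 200) -> float:
    """Sketch of the annealed (steady-state) measurement behind Eq. (65):
    evolve to t0, record the positions, evolve further by t and average the
    squared displacement. `run_until(state, duration)` is an assumed helper
    returning the updated configuration dict with key "x" (positions)."""
    msd = []
    for _ in range(n_real):
        state = run_until(None, t0)      # fresh realisation evolved up to t0
        x_t0 = state["x"].copy()
        state = run_until(state, t)      # measurement window of duration t
        msd.append(np.mean((state["x"] - x_t0) ** 2))
    return float(np.mean(msd))

# at large t one expects annealed_msd ~ zeta_1 * sqrt(2 t), i.e. sqrt(2) times
# the quenched result g_0(t), cf. Eqs. (45) and (65)
```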
However, due to the persistent motion of active particles, once again we see that the coefficient accompanying this sub-diffusive growth is different than the Markov case. Only for \(\gamma\to\infty\), the coefficient converges to the non-persistent value. Second interesting observation is that while MSDs in the annealed and quenched initial settings change from their Markov counterparts, their ratio is still equal to \(\sqrt{2}\), a result seen for many single-file systems [9, 10, 59]. This implies that even though \(l_{0}(t)\) and \(g_{0}(t)\) individually carry the signature of activity, their ratio however is still fixed to the value \(\sqrt{2}\) even for finite \(\gamma\). Figure 9: Numerical verification of the scaling function \(f(y)\) in Eq. (66) for the correlation \(l_{i}(t)\) in the steady state. In both panels, we have performed the comparison for three different times shown in different colours. We observe that the data for different times (symbols) converge to the theoretical curve (solid line) under scaling with respect to time. For simulation, we have first evolved the system till time \(t_{0}=3000\) with \(N=501\) and then start measuring the position. ### Correlation \(l_{i}(t)\) We next look at the expression of \(l_{i}(t)\) in Eq. (59) for general \(i\) and insert \(S_{i}(t_{0},t_{0}+t)\) from Eq. (63) and \(g_{i}(t_{0})\) from Eq. (54). The correlation then turns out to be \[l_{i}(t) \simeq 2\zeta_{1}\sqrt{t_{0}}\ f\left(\frac{|i|}{\sqrt{4\mu_{1}t_{0}} }\right)-2\zeta_{1}\sqrt{t_{0}}+\zeta_{1}\sqrt{2t}\ \mathcal{W}\left(\frac{|i|}{\sqrt{2\mu_{1}t}}\right), \tag{66}\] \[\simeq-\frac{\sqrt{\pi}\zeta_{1}|i|}{\sqrt{\mu_{1}}}\ \mathrm{Erfc}\left(\frac{|i|}{ \sqrt{4\mu_{1}t_{0}}}\right)+\zeta_{1}\sqrt{2t}\ \mathcal{W}\left(\frac{|i|}{\sqrt{2\mu_{1}t}}\right),\] \[\simeq-\frac{\sqrt{\pi}\zeta_{1}|i|}{\sqrt{\mu_{1}}}+\zeta_{1} \sqrt{2t}\ \mathcal{W}\left(\frac{|i|}{\sqrt{2\mu_{1}t}}\right),\] \[\simeq\zeta_{1}\sqrt{2t}\ f\left(\frac{|i|}{\sqrt{2\mu_{1}t}} \right),\] where the scaling function \(f(y)\) is given in Eq. (2). Also, in going from second line to the third line, we have used the asymptotic behaviour of complementary error function as \(\mathrm{Erfc}\left(\frac{|i|}{\sqrt{4\mu_{1}t_{0}}}\right)\to 1\) as \(t_{0}\rightarrow\infty\) for finite \(i\). In Figure (9), we have compared the scaling behaviour of \(l_{i}(t)\) with the numerical simulation for \(\gamma=1\) in left panel and \(\gamma=2\) in right panel. For both panels, we have carried out the comparison for three different values of \(t\). We observe excellent match of our analytical results with the simulation for all cases. Compared to the Markov case, once again we see that the persistence only changes the pre-factor in the scaling relation (66) but keeps the form of the scaling function same. Figure 10: Numerical verification of the pre-factor \(\zeta_{1}\) associated with the variance \(g_{0}(t)\simeq\zeta_{1}\sqrt{t}\). We have also performed a comparison with the theoretical mean-field expression in Eq. (13) (shown in blue) and the exact expression in Eq. (14) (shown in green). Based on simulation, we have obtained (a) \(C_{I}=0.485\) and \(T_{I}=0.91\) (for left panel) and (b) \(C_{I}=0.91\) and \(T_{I}=2.08\) (for right panel). Simulation for both plots has been conducted with \(N=1001\) particles. ## 6 Effect of small \(\gamma\) on the MSD In the previous sections, we looked at the fluctuations and correlations of the positions of tagged particles and studied their dependence on the initial condition. 
Moreover, based on a mean field approximation, we provided semi-analytic expressions of these quantities. However, it turns out that this approximation is valid only for large and intermediate values of the flipping rate \(\gamma\) and breaks down for its smaller values. To illustrate this, we have plotted the simulation data of the MSD \(g_{0}(t)/\sqrt{t}\) for \(\gamma=0.5\) and \(\gamma=0.25\) in Figure (10). Furthermore, we compare our numerical result with its analytic form in Eq. (13) by computing the pre-factor \(\zeta_{1}\) in two ways: (i) first we obtain \(C_{i}(t)\) from numerics and plug it in Eq. (13) to get a complete numerical estimate of \(\zeta_{1}\), (ii) second we use the approximated theoretical form of \(\zeta_{1}\) in Eq. (44). As seen in Figure (10), while \(\zeta_{1}\) for case (i) matches the simulation data, there is a clear departure of \(\zeta_{1}\) obtained for case (ii). This departure becomes more pronounced as we go to smaller and smaller values of \(\gamma\). Hence, we still find that \(g_{0}(t)\) scales sub-diffusively as \(\sim\sqrt{t}\) at late times even for small \(\gamma\), with \(\zeta_{1}\) given by its exact form in Eq. (14). We next look at the MSD \(l_{0}(t)\) in the annealed case where the initial positions are drawn from the steady state. For large and intermediate \(\gamma\), we proved before that the ratio \(l_{0}(t)/g_{0}(t)\), at late times, is still equal to \(\sqrt{2}\), a relation true for many single-file systems. In the remaining part of this section, we analyse how this ratio changes for smaller values of \(\gamma\). Extensive numerical simulations suggest that this ratio is still equal to the factor \(\sqrt{2}\) even for small \(\gamma\). For example, in Figure (11), we have shown the simulation data for the ratio \(l_{0}(t)/g_{0}(t)\) for \(\gamma=0.5\) and \(\gamma=0.25\). For both cases, we find that the ratio approaches the value \(\sqrt{2}\) at late times. Figure 11: Numerical comparison of the ratio of the MSD \(l_{0}(t)\) measured in the steady state and the MSD \(g_{0}(t)\) with the quenched uniform initial state for two small values of \(\gamma\). We observe that even for small \(\gamma\), where the predictions from the mean field approximation do not hold, the ratio \(l_{0}(t)/g_{0}(t)\) at large \(t\) approaches \(\sqrt{2}\), implying that the relation in Eq. (70) is valid for all non-zero \(\gamma\). This means that while both \(l_{0}(t)\) and \(g_{0}(t)\) deviate from their mean field forms, their ratio is still fixed to \(\sqrt{2}\). To prove this, we first have to evaluate the behaviour of \(S_{0}(t_{0},t_{0}+t)\) [see Eq. (59)]. Rewriting the time evolution equation for \(S_{i}(t_{0},t_{0}+t)\) in Eq. (60), we get \[\frac{\partial S_{i}(t_{0},t_{0}+t)}{\partial t}=\frac{\mu_{1}}{2}\left[S_{i+1}(t_{0},t_{0}+t)+S_{i-1}(t_{0},t_{0}+t)-2S_{i}(t_{0},t_{0}+t)+2aU_{i}(t_{0},t_{0}+t)\right],\] where the function \(U_{i}(t_{0},t_{0}+t)\) denotes \[U_{i}(t_{0},t_{0}+t)=\frac{1}{2a}\left[\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)z_{i+1}(t_{0}+t)\rangle-\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)z_{i-1}(t_{0}+t)\rangle\right]+C_{i}(t_{0},t_{0}+t). \tag{67}\] Due to the presence of the \(\sigma\)-variable, we anticipate \(U_{i}(t_{0},t_{0}+t)\) to decay, at late times, as \[U_{i}(t_{0},t_{0}+t)\stackrel{t_{0}\to\infty}{\sim}\psi_{i}(t)\ e^{-2\gamma t},\quad\text{for }t\gg 1, \tag{68}\] where \(\psi_{i}(t)\) is some function of time \(t\). For the mean-field case, we could prove this explicitly in Eq. 
(62) where \(\psi_{i}(t)=C_{i}(t_{0}\to\infty)\). On the other hand, for exact case, one can see this by writing down the time evolution equation for \(U_{i}(t_{0},t_{0}+t)\) which gives decaying term like \(\sim-2\gamma U_{i}(t_{0},t_{0}+t)\). In Figure (12), we have numerically checked this ansatz for \(U_{0}(t_{0},t_{0}+t)\) and found it to be true. Proceeding with this form, one can then follow the mathematical steps exactly as done in Appendix G and obtain \[S_{0}\left(t_{0},t_{0}+t\right)\simeq\zeta_{1}\sqrt{t_{0}}-\zeta_{1}\sqrt{ \frac{t}{2}}, \tag{69}\] for both \(t_{0}\) and \(t\) large. We emphasize that this expression is valid for all non-zero values of \(\gamma\) with \(\zeta_{1}\) given exactly in Eq. (14). Plugging this in Eq. (59) yields \[l_{0}(t)\simeq\sqrt{2}\ g_{0}(t), \tag{70}\] Figure 12: Numerical plot of the correlation \(U_{0}(t_{0},t_{0}+t)\) in Eq. (67) for \(t_{0}=10000\). We find that the simulation data can be fitted by \(U_{0}(t_{0},t_{0}+t)\simeq 3e^{-0.5t}\) establishing the validity of the ansatz in Eq. (68). with \(g_{0}(t)=\zeta_{1}\sqrt{t}\). This means that while both \(l_{0}(t)\) and \(g_{0}(t)\) depart individually from their mean field forms, the ratio is still given by \(\sqrt{2}\). We have numerically verified this result in Figure (11) for \(\gamma=0.5\) and \(\gamma=0.25\). Demonstrating this result numerically for very small values of \(\gamma\) turns out to be computationally expensive. Recall from Eq. (59) that one needs to go to very large \(t_{0}\) to measure \(l_{0}(t)\). Numerically, we see that smaller the value of \(\gamma\), larger is the value of \(t_{0}\) that one has to consider. On the other hand at very large time, the boundary effects start to become important which alter the MSD. In order to avoid boundary effects, we have to increase the number of particles in the simulation, which makes the computation highly expensive. In our study, we have fixed the smallest value as \(\gamma=0.25\). Already for this value, we observe departure of the simulation data from mean field results. ## 7 Conclusion In this paper, we have investigated the motion of tracer particles in the random average process of persistent active particles in an infinite line. Using mean field approximation, we calculated the mean squared displacement and correlation of the positions of tracer particles both in the quenched initial condition and in the steady state. In particular, for the quenched case, we showed that the MSD exhibits a crossover from diffusive scaling at small times to the sub-diffusive (\(\sim\sqrt{t}\)) scaling at late times. Interestingly we find that the coefficient associated with this sub-diffusive growth is different than the corresponding non-persistent case and the two converge only in the limit \(\gamma\to\infty\). For finite \(\gamma\), we see a clear difference between them as illustrated in Figure (6). Similarly, for the position correlation, we find slight difference in Eq. (54) compared to the Markov case. While the overall scaling function \(f(y)\) in Eq. (54) is same as the MRAP, the pre-factor \(\zeta_{1}\) is different and therefore carries the effect of the activity. Next, we studied these quantities in the steady state where we first evolve the system till time \(t_{0}\to\infty\) and then start measuring the positions. Unlike in the quenched case, here the positions at the onset of the measurement fluctuate for different realisations. 
For this case, we analytically showed that the MSD at late times grows sub-diffusively as \(\sim\sqrt{t}\), with the associated coefficient once again different from the Markov case. Only for \(\gamma\to\infty\) do the two become equal. Quite remarkably, while both MSDs, in the quenched initial condition and in the steady state, individually change due to the persistent nature of the particles, their ratio is still equal to \(\sqrt{2}\) at large times. This is a common result known to be true for many single-file systems [10, 59, 9]. Our study here reveals that it also holds for the ARAP for all non-zero values of \(\gamma\). Finally, we calculated the correlation between the positions of two tagged particles in the steady state in Eq. (66). Solving single-file motion for active particles is a notoriously challenging problem. Here, we showcased an example of active single-file motion for which we could derive many results analytically. Carrying out this study for active particles with hardcore exclusions is an interesting and important direction to explore. Recent numerical studies in this direction have pointed out some interesting qualitative differences from the usual single-file diffusion [49]. Proving this analytically is still an open problem. For systems obeying diffusive hydrodynamics, the coefficient of the sub-diffusive growth of the MSD of a tracer particle is specified by the diffusivity and mobility of the system [9]. In short-range interacting systems, such transport coefficients are usually determined by the two-point correlations [63], as in the Markov RAP case where only \(\mu_{1}\) and \(\mu_{2}\) appear in the expression of the MSD [see Eq. (1)]. In contrast, for our active RAP system we observe that the MSD also receives contributions from higher-order correlations. It would be interesting to investigate if it is possible to derive the same MSD form as in Eq. (13) from a hydrodynamic evolution for the density, more precisely for the inter-particle separation field, similar to the Markov RAP case [62]. Also, in this work, we have only looked at the MSD and the two-point correlation functions. Obtaining higher moments and the distribution of the position of a tagged particle are interesting problems even for the Markov RAP. Finally, it would be interesting to study the effect of local biases in the single-file model of active particles in the same spirit as in [14, 62]. ## Acknowledgements The authors thank Tirthankar Banerjee for stimulating discussions on the paper. A. K. acknowledges the support of the core research grant no. CRG/2021/002455 and MATRICS grant MTR/2021/000350 from the Science and Engineering Research Board (SERB), Department of Science and Technology, Government of India. P. S. acknowledges the support of Novo Nordisk Foundation grant NNF21OC0071284. The authors also acknowledge the support from the Department of Atomic Energy, Government of India, under project no. 19P1112R&D. ## Appendix A Failure of mean field approximation to specify \(T_{i}(t)\) in Eq. (27) In this appendix, we will explain why the mean field approximation does not correctly characterize the correlation \(T_{i}(t)\) in Eq. (27). Looking at this expression, we see that we have to calculate three-point correlation functions like \(\langle z_{0}(t)z_{i+1}(t)\sigma_{i}(t)\rangle\) and \(\langle z_{0}(t)z_{i-1}(t)\sigma_{i}(t)\rangle\). 
To see if the mean field approach works for this case, we define a general correlation \(\mathcal{T}_{ij}(t)\equiv\langle z_{0}(t)z_{i+1}(t)\sigma_{j}(t)\rangle\) and write its time evolution equations as \[\frac{d\mathcal{T}_{ij}(t)}{dt}= -2\gamma\mathcal{T}_{ij}(t)+\frac{\mu_{1}}{2}\Big{[}\mathcal{T}_{ i+1,j}(t)+\mathcal{T}_{i-1,j}(t)+\mathcal{T}_{i+1,j+1}(t)+\mathcal{T}_{i-1,j-1}(t) -4\mathcal{T}_{i,j}(t)\] \[+2a\{\langle\sigma_{0}z_{i+1}(t)\sigma_{j}(t)\rangle+\langle \sigma_{i+1}(t)z_{0}(t)\sigma_{j}(t)\rangle\Big{]}+\frac{\mu_{1}p}{2}\Big{[} \langle z_{0}(t)\sigma_{i+1}(t)z_{i+2}(t)\sigma_{j}(t)\rangle\] \[-\langle z_{0}(t)\sigma_{i+1}(t)z_{i}(t)\sigma_{j}(t)\rangle+ \langle z_{1}(t)\sigma_{0}(t)z_{i+1}(t)\sigma_{j}(t)\rangle-\langle z_{-1}(t) \sigma_{0}(t)z_{i+1}(t)\sigma_{j}(t)\rangle\Big{]}\] \[+\mu_{2}\Big{[}2\langle z_{0}(t)z_{0}(t)\sigma_{j}(t)\rangle- \langle z_{1}(t)z_{1}(t)\sigma_{j}(t)\rangle-\langle z_{-1}(t)z_{-1}(t)\sigma _{j}(t)\rangle\] \[-\langle z_{0}(t)z_{1}(t)\sigma_{0}(t)\sigma_{j}(t)\rangle+ \langle z_{0}(t)z_{-1}(t)\sigma_{0}(t)\sigma_{j}(t)\rangle\Big{]}. \tag{71}\] Note that we are interested in calculating \(T_{i}(t)\) which is obtained by putting \(i=j\) in \(\mathcal{T}_{ij}(t)\). Coming to Eq. (71), we observe that it does not satisfy the closure property as it contains four-point correlation functions. Once again, we use mean field approximations to break the four-point correlation in terms of lower point correlations as \[\langle z_{0}(t)\sigma_{i+1}(t)z_{i+2}(t)\sigma_{j}(t)\rangle\simeq \langle z_{0}(t)\sigma_{i+1}(t)\rangle\langle z_{i+2}(t)\sigma_{j} (t)\rangle+\langle z_{0}(t)\sigma_{j}(t)\rangle\langle z_{i+2}(t)\sigma_{i+1} (t)\rangle\] \[+\langle z_{0}(t)z_{i+2}(t)\rangle\langle\sigma_{i+1}(t)\sigma_{ j}(t)\rangle. \tag{72}\] With this approximation, the following four-point point correlation function appearing in Eq. (71) becomes \[P_{0}(t) =\Big{[}\langle z_{0}(t)\sigma_{1}(t)z_{2}(t)\sigma_{0}(t)\rangle- \langle z_{0}(t)\sigma_{1}(t)z_{0}(t)\sigma_{0}(t)\rangle+\langle z_{1}(t) \sigma_{0}(t)z_{1}(t)\sigma_{0}(t)\rangle\] \[\qquad\qquad\qquad-\langle z_{-1}(t)\sigma_{0}(t)z_{1}(t)\sigma_ {0}(t)\rangle\Big{]},\] \[\simeq C_{1}(t)\big{[}C_{2}(t)-C_{0}(t)\big{]}. \tag{73}\] We now test the validity of this approximation. For this, we measure both \(P_{0}(t)\) and \(C_{1}(t)\big{[}C_{2}(t)-C_{0}(t)\big{]}\) from the numerical simulations and compare them in Figure (13) for \(\gamma=1\) and \(\gamma=2\). For both cases, we observe that \(P_{0}(t)\) has a large positive value whereas \(C_{1}(t)\big{[}C_{2}(t)-C_{0}(t)\big{]}\) has a small negative value. Clearly, this implies \(P_{0}(t)\neq C_{1}(t)\big{[}C_{2}(t)-C_{0}(t)\big{]}\). Hence we numerically find that the decoupling approximation of breaking four-point correlation function in terms of two-point correlation functions in Eq. (72) is not valid and thus the analytical calculation of obtaining three point correlation functions seems difficult. ## Appendix B Expression of \(C_{i}(t)\) in Eq. (23) In this appendix, we will derive the expression of \(C_{i}(t)\) quoted in Eq. (23) of the main text. For this, we take the Fourier transform \(\bar{C}(q,t)=\Sigma_{i=-\infty}^{\infty}e^{iq}C_{i}(t)\) (where \(\iota^{2}=-1\)) and insert this in Eq. (22) to obtain \[\frac{d\bar{C}(q,t)}{dt}=-\alpha(q)\bar{C}(q,t)+\mu_{1}a, \tag{74}\] where \(\alpha(q)=\mu_{1}(1-\cos(q))+2\gamma\). Solving this equation, we get \[\bar{C}(q,t)=\mu_{1}a\left(\frac{1-e^{-\alpha(q)t}}{\alpha(q)}\right). 
\tag{75}\] Figure 13: Comparison of the numerically obtained four-point correlation function \(P_{0}(t)\) as defined in Eq. (73) with its approximated value \(C_{1}(t)(C_{2}(t)-C_{0}(t))\) using mean field. For both \(\gamma=1\) and \(\gamma=2\), the two deviate substantially indicating that the mean field approximation does not work for \(P_{0}(t)\). Performing the inverse Fourier transformation yields \[C_{i}(t)=\frac{\mu_{1}a}{2\pi}\int_{-\pi}^{\pi}e^{-i{{q}}}\left(\frac{1-e^{-\alpha (q)t}}{\alpha(q)}\right)dq. \tag{76}\] Performing the integration over \(q\) analytically in this equation is difficult. However, one can get a simplified expression by proceeding as follows. Differentiating on both sides of Eq. (76), we get \[\frac{dC_{i}(t)}{dt} =\frac{\mu_{1}a}{2\pi}\int_{-\pi}^{\pi}e^{-i{{q}}}e^{-\alpha(q)t}dq,\] \[=\frac{\mu_{1}a}{2\pi}e^{-(2\gamma+\mu_{1})t}\int_{-\pi}^{\pi} \cos({{q}})e^{\mu_{1}t\cos(q)}dq,\] \[=\mu_{1}ae^{-(2\gamma+\mu_{1})t}I_{|i|}(\mu_{1}t). \tag{77}\] Next, we integrate both sides with respect to \(t\) and use the initial condition \(C_{i}(0)=0\) to obtain \[C_{i}(t)=\mu_{1}a\int_{0}^{t}e^{-(2\gamma+\mu_{1})\tau}I_{|i|}(\mu_{1}\tau)\;d\tau. \tag{78}\] In the limit \(t\to\infty\), one can exactly carry out the integration over \(\tau\) to get \[C_{i}(t\to\infty) =\mu_{1}a\int_{0}^{\infty}e^{-2\gamma\tau-\mu_{1}\tau}I_{|i|}(\mu _{1}\tau)\;d\tau,\] \[=a\int_{0}^{\infty}e^{-\left(\frac{2\gamma+\mu_{1}}{\mu_{1}} \right)w}I_{|i|}(w)\;dw,\] \[=\frac{\mu_{1}a}{\sqrt{4\gamma^{2}+4\mu_{1}\gamma}}\left[\left( \frac{2\gamma+\mu_{1}+\sqrt{4\gamma^{2}+4\mu_{1}\gamma}}{\mu_{1}}\right) \right]^{-|i|}. \tag{79}\] This result has been used in Eq. (24) in the main text. ## Appendix C Sub-leading term in \(g_{0}(t)\) in Eq. (46) In this appendix, we derive the expression of the sub-leading term in the variance \(g_{0}(t)\) for large \(t\). As seen in Eq. (46), for large \(t\), the variance scales sub-diffusively as \(g_{0}(t)\simeq\zeta_{1}\sqrt{t}\) with prefactor \(\zeta_{1}\) given in Eq. (14). Here, we are interested in calculating the next order correction which turns out to be a constant. To derive this, we begin with the Laplace transform \(\tilde{g}_{0}(s)\) in Eq. (36) and rewrite it as \[\tilde{g}_{0}(s)\left[1+\frac{\mu_{2}sY(s)}{\mu_{1}-\mu_{2}} \right]= \mu_{1}W(s)+\frac{\mu_{1}\mu_{2}\;Y(s)}{\mu_{1}-\mu_{2}}\left[ \frac{a^{2}}{s}+2a\tilde{C}_{1}(s)\right], \tag{80}\] where \(Y(s)\) and \(W(s)\) are defined in Eqs. (37) and (38) respectively. Note that \(Y(s\to 0)\simeq 1/\sqrt{4\mu_{1}s}\) inserting which in Eq. (80) gives \[\tilde{g}_{0}(s)\simeq\underbrace{\frac{\mu_{1}W(s)}{1+\phi\sqrt{s}}}_{\text {first term}}+\underbrace{\frac{\mu_{2}\sqrt{\mu_{1}}}{2\sqrt{s}\;\left(\mu_{ 1}-\mu_{2}\right)\left(1+\phi\sqrt{s}\right)}\;\left[\frac{a^{2}}{s}+2a \tilde{C}_{1}(s)\right]}_{\text{second term}}, \tag{81}\] where \(\phi=\frac{\mu_{2}}{\sqrt{4\mu_{1}}\ (\mu_{1}-\mu_{2})}\). For computational ease, we have written the two terms separately in the right hand side. As evident, for the first term, we have to specify the function \(W(s)\). For small \(s\), the integrand in Eq. (38) is dominated by smaller values of \(q\). We therefore anticipate the major contribution to the integration to come from smaller values of \(q\). With this approximation, the expression of \(W(s)\) reduces to \[W(s)\simeq[2a\mathcal{C}\left(q\to 0,s\right)+\mathcal{T}\left(q\to 0,s \right)]\times\frac{1}{\sqrt{4\mu_{1}s}}, \tag{82}\] and the first term in Eq. 
(81) for \(s\to 0\) becomes \[\text{first term}\simeq\sqrt{\frac{\mu_{1}}{4}}\ \left(\frac{1}{\sqrt{s}}- \phi\right)\ \left[2a\mathcal{C}\left(q\to 0,s\right)+\mathcal{T}\left(q\to 0,s \right)\right]. \tag{83}\] We now have to compute the asymptotic forms of \(\mathcal{C}\left(q\to 0,s\right)\) and \(\mathcal{T}\left(q\to 0,s\right)\) as \(s\to 0\). To do this, we first recall that both \(C_{i}(t)\) and \(T_{i}(t)\) involve the \(\sigma\)-variables in their definitions. Due to this, both of these functions \(\mathcal{C}\left(q\to 0,s\right)\) and \(\mathcal{T}\left(q\to 0,s\right)\) relax exponentially to their steady values with relaxation time scale \(\sim 1/\gamma\). For \(\mathcal{C}\left(q\to 0,s\right)\), one can show this from Eq. (75) whereas for \(\mathcal{T}\left(q\to 0,s\right)\), one can see this numerically. In the Laplace domain, this implies \[\mathcal{C}(q\to 0,s\to 0)\simeq \ \frac{\bar{C}(q\to 0,t\to\infty)}{s}+\text{constant term}, \tag{84}\] \[\mathcal{T}(q\to 0,s\to 0)\simeq \ \frac{\bar{T}(q\to 0,t\to\infty)}{s}+\text{constant term}, \tag{85}\] where the constant terms do not involve \(s\). Inserting these forms in Eq. (83) and using the identification in Eq. (41), we obtain \[\text{first term}\simeq\frac{\sqrt{\mu_{1}}}{2}\ \left[2aC_{I}+T_{I}\right]\ \left(\frac{1}{s^{3/2}}-\frac{\phi}{s}\right), \tag{86}\] which can be simplified further as \[\text{first term}\simeq\frac{1}{2}\left[\frac{a^{2}(\mu_{1})^{3/2}}{\gamma}+ \sqrt{\mu_{1}}\ T_{I}\right]\ \left(\frac{1}{s^{3/2}}-\frac{\phi}{s}\right), \tag{87}\] using the approximation \(C_{I}=\frac{\mu_{1}a}{2\gamma}\) as derived in Eq. (43). So far, we have obtained the approximate form of the first term in Eq. (81) for \(s\to 0\). Next, we carry out the same analysis for the second term. Looking at Eq. (81), we observe that one needs to calculate the Laplace transform \(\tilde{C}_{1}(s)\) for small \(s\). To obtain this, we consider the expression of \(C_{1}(t)\) in Eq. (23) and rewrite it as \[C_{1}(t)=C_{1}(t\to\infty)-\mu_{1}a\int_{t}^{\infty}e^{-(2\gamma+\mu_{1})\tau }\ I_{1}(\mu_{1}\tau)\ d\tau. \tag{88}\] For large \(t\), the argument of the Bessel function is also large which enables us to use the approximation \(I_{1}(\mu_{1}\tau)\simeq e^{\mu_{1}\tau}/\sqrt{2\pi\mu_{1}\tau}\) for \(\mu_{1}\tau\gg 1\). With this approximation, the integration over \(\tau\) can be easily carried out and we get \[C_{1}(t)\simeq C_{1}(t\to\infty)-a\sqrt{\frac{\mu_{1}}{4\gamma}}\ \text{Erfc}\left(\sqrt{2\gamma t}\right). \tag{89}\] Taking the Laplace transformation of this equation gives \[\tilde{C}_{1}(s) \simeq\frac{C_{1}(t\rightarrow\infty)}{s}-a\sqrt{\frac{\mu_{1}}{4 \gamma}}\ \frac{1}{s+2\gamma+\sqrt{2\gamma s+4\gamma^{2}}}, \tag{90}\] \[\simeq\frac{C_{1}(t\rightarrow\infty)}{s}-\frac{a}{4\gamma} \sqrt{\frac{\mu_{1}}{4\gamma}},\ \ \ \ \text{as}\ s\to 0. \tag{91}\] Plugging this approximate form in Eq. (81), we obtain the second term as \[\text{second term}=\frac{a\mu_{2}\sqrt{\mu_{1}}}{2(\mu_{1}-\mu_{2})}\left(a+2 C_{1}(t\rightarrow\infty)\right)\ \left(\frac{1}{s^{3/2}}-\frac{\phi}{s}\right). \tag{92}\] Therefore, we have computed the approximate forms of both terms in Eq. (81) for small \(s\). Using these forms, the expression of the Laplace transform \(\tilde{g}_{0}(s)\) simplifies to \[\tilde{g}_{0}(s)\simeq\frac{\sqrt{\pi}}{2}\ \zeta_{1}\left(\frac{1}{s^{3/2}}- \frac{\phi}{s}\right). \tag{93}\] Finally performing the inverse Laplace transformation, we obtain the result written in Eq. (46) in the main text. 
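The last inversion is simple enough to verify symbolically. The following sketch (using sympy, with \(\zeta_{1}\) and \(\phi\) treated as positive constants) inverts the small-\(s\) form of Eq. (93) term by term and recovers the \(\sqrt{t}\) leading growth together with a constant sub-leading correction, consistent with the statement above.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
zeta1, phi = sp.symbols('zeta_1 phi', positive=True)

# Small-s form of the Laplace transform, Eq. (93):
#   g0_tilde(s) ~ (sqrt(pi)/2) * zeta_1 * (s**(-3/2) - phi/s)
g0_tilde = sp.sqrt(sp.pi) / 2 * zeta1 * (s**sp.Rational(-3, 2) - phi / s)

g0 = sp.inverse_laplace_transform(g0_tilde, s, t)
print(sp.simplify(g0))
# Expected (up to a Heaviside(t) factor, i.e. for t > 0):
#   zeta_1*sqrt(t) - sqrt(pi)*phi*zeta_1/2
# i.e. a sqrt(t) leading term plus a constant sub-leading correction.
```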
## Appendix D Details about numerical simulations This appendix provides details about the numerical simulation adopted to verify various analytical results in the paper. To begin with, we have \(N(=2n+1)\) number of particles initially placed at a uniform distance apart as \[x_{i}(0)=ia, \tag{94}\] where \(i\) is an integer that lies between \(-n\leq i\leq n\). On the other hand, we choose initial \(\sigma_{i}(0)\) from \(\pm 1\) with equal probability \(1/2\) independently for all particles. This means while the initial positions are fixed for different realisations, the initial \(\sigma_{i}(0)\) still fluctuate. For a given realisation, we then implement random sequential rule to update the position of the particles. During a small time interval \([t,t+dt]\), we choose a random integer \(m\) uniformly between \([-n,n]\) and update the position \(x_{m}(t)\) and spin variable \(\sigma_{m}(t)\) according to Eqs. (6) and (7). We then repeat this step for \(N\) times. This implies that in the time interval \(dt\), we perform the update rule randomly for \(N\) times. Finally, we iterate this process till the observation time \(t\) is reached. For all figures, we chose \(dt=0.01\) except for the Figures (2) and (3) for which we chose \(dt=0.002\). The random variables \(\eta_{i}\) in Eq. (8) are chosen uniformly from \([0,1)\), hence \(R(\eta)=1\) for \(0\leq\eta<1\) and zero otherwise. ## Appendix E Expressions of \(Y_{i}(s)\) and \(W_{i}(s)\) as \(s\to 0\) Here, we derive the approximate expressions of \(Y_{i}(s)\) and \(W_{i}(s)\) in Eqs. (48) and (49) for small values of \(s\). Let us first present the calculation for \(Y_{i}(s)\) which reads \[Y_{i}(s)= \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ \frac{e^{-i\eta}}{s+\beta(q)}, \tag{95}\] where \(\beta(q)=2\mu_{1}(1-\cos(q))\). For \(s\to 0\), the integrand in Eq. (95) diverges as \(q\to 0\). Therefore, we expect the major contribution to the integration should come from the small values of \(q\). Taking the \(q\to 0\) limit, we get \(\beta(q)\simeq\mu_{1}q^{2}\) plugging which in Eq. (95), we get \[Y_{i}(s)\simeq \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ \frac{e^{-iq}}{s+\mu_{1}q^{2}}. \tag{96}\] Changing the variable \(q=\sqrt{s/\mu_{1}}\ w\) and taking \(s\to 0\), we get \[Y_{i}(s) \simeq\frac{1}{2\pi\sqrt{s\mu_{1}}}\ \int_{-\pi}^{\pi}\frac{dw}{1+w^ {2}}\ \exp\left[-\iota iw\sqrt{\frac{s}{\mu_{1}}}\right], \tag{97}\] \[\simeq\frac{1}{\sqrt{4\mu_{1}s}}\ \exp\left[-|i|\sqrt{\frac{s}{\mu_{1}}} \right]. \tag{98}\] This result has been quoted in Eq. (50) which was instrumental in obtaining the asymptotic behaviour of the MSD and correlation for the positions of the particles. It turns out that for \(W_{i}(s)\) also, one can proceed similarly to get its small \(s\) behaviour. To see this, let us first rewrite its expression from Eq. (49) \[W_{i}(s)= \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ e^{-iiq}\ \left[\frac{2a{\cal C}(q,s)+{\cal T}(q,s)}{s+\beta(q)}\right], \tag{99}\] where \({\cal C}(q,s)\) and \({\cal T}(q,s)\) are the joint Fourier-Laplace transforms given in Eqs. (31) and (32) respectively. As discussed before for small \(s\), the integration in Eq. (99) will be dominated by small values of \(q\) which gives \[W_{i}(s)\simeq \frac{1}{2\pi}\int_{-\pi}^{\pi}dq\ e^{-iiq}\ \left[\frac{2a{\cal C}(q \to 0,s\to 0)+{\cal T}(q\to 0,s\to 0)}{s+\mu_{1}q^{2}}\right]. \tag{100}\] Numerically, we see that both \(\bar{C}(q,t)\) and \(\bar{T}(q,t)\) in Eqs. (31) and (32) attain stationary values as \(t\to\infty\). 
This implies, as mentioned previously, that in the Laplace domain, one gets \[{\cal C}(q\to 0,s\to 0)\simeq \ \frac{\bar{C}(q\to 0,t\to\infty)}{s}, \tag{101}\] \[{\cal T}(q\to 0,s\to 0)\simeq \ \frac{\bar{T}(q\to 0,t\to\infty)}{s}, \tag{102}\] Plugging these forms in Eq. (100) gives \[W_{i}(s)\simeq\frac{Y_{i}(s)}{s}\ \left[2aC_{I}+T_{I}\right]. \tag{103}\] where we have used Eq. (41) and \(Y_{i}(s)\) is given in Eq. (96). Inserting the form of \(Y_{i}(s)\) for small \(s\) from Eq. (98) and performing inverse Laplace transform we finally get \[W_{i}(s)=\frac{Y_{i}(s)}{s}\ \left[2aC_{I}+T_{I}\right]\ \exp\left[-|i|\sqrt{\frac{s}{\mu_{1}}} \right]. \tag{104}\] To summarize, in this appendix, we have derived the forms of \(Y_{i}(s)\) and \(W_{i}(s)\) as \(s\to 0\). Observe that for \(i=0\), the function \(Y_{i}(s)\) from Eq. (95) reduces to \(Y(s)\) in Eq. (37). Consequently, the Eq. (104) with \(i=0\) provides the small \(s\) behaviour of \(W(s)\) as quoted in Eq. (40). ## Appendix F Computation of \(C_{i}(t_{0},t_{0}+t)=\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)\rangle\) This appendix presents a derivation of the expression of \(C_{i}(t_{0},t_{0}+t)\) quoted in Eq. (62). Let us begin by writing the dynamics of \(C_{i}(t_{0},t_{0}+t+dt)\) in small time interval \(dt\). Using the time evolution of \(\sigma_{i}(t)\) in Eq. (7), we have \[C_{i}(t_{0},t_{0}+t+dt) =\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t+dt)\rangle, \tag{105}\] \[\simeq-\gamma dt\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)\rangle+( 1-\gamma dt)\langle z_{0}(t_{0})\sigma_{i}(t_{0}+t)\rangle,\] (106) \[\simeq C_{i}(t_{0},t_{0}+t)-2\gamma dt\ C_{i}(t_{0},t_{0}+t). \tag{107}\] Taking the \(dt\to 0\) limit, we get the dynamics of \(C_{i}(t_{0},t_{0}+t)\) as \[\frac{\partial C_{i}(t_{0},t_{0}+t)}{\partial t}=-2\gamma C_{i}(t_{0},t_{0}+t). \tag{108}\] In order to solve this equation, we need to specify appropriate initial condition. Recall that as \(t\to 0\), one has \(C_{i}(t_{0},t_{0})=\langle z_{0}(t_{0})\sigma_{i}(t_{0})\rangle\) which is simply \(C_{i}(t_{0})\) defined in Section 3. The solution of \(C_{i}(t_{0})\) has been obtained in Eq. (23). Solving Eq. (108) with this initial condition, we obtain \[C_{i}(t_{0},t+t_{0})=C_{i}(t_{0})\ e^{-2\gamma t}, \tag{109}\] which has also been quoted in Eq. (62). ## Appendix G Computation of \(S_{i}(t_{0},t_{0}+t)\) in Eq. (63) In this appendix, we derive the expression of \(S_{i}(t_{0},t_{0}+t)\) in Eq. (63) which was used to calculate the MSD and correlation for the active RAP with annealed initial condition. To this aim, we take the joint Fourier-Laplace transformation of \(S_{i}(t_{0},t_{0}+t)\) as \[\mathcal{S}(q,s,t)=\sum_{i=-\infty}^{\infty}e^{iq}\ \tilde{\mathcal{S}}_{i}(s,t), \quad\text{with}\ \tilde{\mathcal{S}}_{i}(s,t)=\int_{0}^{\infty}dt_{0}\ e^{- st_{0}}\ S_{i}(t_{0},t_{0}+t), \tag{110}\] and insert it in Eq. (61) to obtain \[\mathcal{S}(q,s,t)=\mathcal{G}(q,s)e^{-\frac{\beta(q)t}{2}}+\frac{\mu_{1}a\ \mathcal{C}(q,s)}{\left(\frac{\beta(q)}{2}-2\gamma\right)}\left(e^{-2\gamma t}- e^{-\frac{\beta(q)}{2}t}\right), \tag{111}\] where \(\mathcal{G}(q,s)\) and \(\mathcal{C}(q,s)\) denote, respectively, the joint Fourier-Laplace transformations of \(g_{i}(t_{0})\) and \(C_{i}(t_{0})\) given in Eqs. (35) and (75). Also, we have defined \(\beta(q)=2\mu_{1}(1-\cos(q))\). Now to get \(S_{i}(t_{0},t_{0}+t)\) from Eq. (111), one needs to perform two inversions: one is the inverse Fourier transformation with respect to \(q\) and the other is the inverse Laplace transformation with \(s\). 
Let us first write the inversion with respect to \(q\) as \[\tilde{\mathcal{S}}_{i}(s,t)=\underbrace{\int_{-\pi}^{\pi}\frac{dq}{2\pi}\ e^{-iiq}\ e^{-\frac{\beta(q)t}{2}}\ \tilde{\mathcal{G}}(q,s)}_{\text{first term}}+\underbrace{\mu_{1}a\over 2 \pi}\int_{-\pi}^{\pi}\frac{dq\ e^{-iq}}{\left[\frac{\beta(q)}{2}-2\gamma \right]}\ \mathcal{C}(q,s)\left(e^{-2\gamma t}-e^{-\frac{\beta(q)}{2}t}\right), \tag{112}\] where we have broken the right hand side into two parts for computational convenience. We now proceed to evaluate these two terms separately. ### Calculation of first term Since we are interested in the \(t_{0}\rightarrow\infty\) limit, we evaluate these terms for small values of \(s\). Using the approximation \(\tilde{\mathcal{G}}(q,s)\simeq\zeta_{1}\sqrt{\mu_{1}\pi}/s(s+\mu_{1}q^{2})\) for \(s\to 0\) from Eq. (52), the first term in Eq. (112) becomes \[\text{first term}\simeq\frac{\sqrt{\mu_{1}}\;\zeta_{1}}{2\sqrt{\pi}s}\int_{- \pi}^{\pi}\frac{dq}{(s+\mu_{1}q^{2})}\;\;\exp\Bigl{[}-\mu_{1}t(1-\cos(q))-i \iota q\Bigr{]}. \tag{113}\] Since the integrand is exponentially decaying in \(t\), the leading contribution to the integral comes from the smaller values of \(q\) which enables us to approximate it as \[\text{first term}\simeq\frac{\sqrt{\mu_{1}}\;\zeta_{1}}{2\sqrt{\pi}s}\int_{- \pi}^{\pi}\frac{dq}{(s+\mu_{1}q^{2})}\;\;\exp\Bigl{[}-\frac{\mu_{1}t}{2}q^{2}- \iota iq\Bigr{]}. \tag{114}\] Under the variable transformation \(q=\sqrt{\frac{s}{\mu_{1}}}w\), it simplifies to \[\text{first term} \simeq\frac{\zeta_{1}}{2\sqrt{\pi}s^{3/2}}\int_{-\infty}^{\infty }dw\;\;\frac{\exp\Bigl{[}-\frac{stw^{2}}{2}-\iota i\sqrt{\frac{s}{\mu_{1}}}w \Bigr{]}}{1+w^{2}}, \tag{115}\] \[\simeq\frac{\sqrt{\pi}\zeta_{1}}{4s^{3/2}}\left[e^{\frac{st}{2}-| i|}\sqrt{\frac{s}{\mu_{1}}}\;\text{Erfc}\left(\sqrt{\frac{st}{2}}-\frac{|i|}{ \sqrt{2\mu_{1}t}}\right)+e^{\frac{st}{2}+|i|}\sqrt{\frac{s}{\mu_{1}}}\right.\] \[\times\;\left.\text{Erfc}\left(\sqrt{\frac{st}{2}}+\frac{|i|}{ \sqrt{2\mu_{1}t}}\right)\right]. \tag{116}\] Once again we take the \(s\to 0\) limit to recast this equation as \[\text{first term}\simeq\frac{\sqrt{\pi}\zeta_{1}}{2}\Bigl{[}\frac{1}{s^{3/2} }-\frac{1}{s}\Bigl{\{}\frac{\sqrt{2t}}{\sqrt{\pi}}e^{-\frac{i^{2}}{2\mu_{1}t} }+\frac{i}{\sqrt{\mu_{1}}}\text{erf}\left(\frac{i}{\sqrt{2\mu_{1}t}}\right) \Bigr{\}}\Bigr{]}. \tag{117}\] Now it is straightforward to perform the inverse Laplace transformation with respect to \(s\). However, before that, we calculate the second term in Eq. (112). ### Calculation of second term Observe that the second term in Eq. (112) depends on the function \(\mathcal{C}(q,s)\) whose expression is given in Eq. (75) in the time domain. For small \(s\), this function simplifies to \[\mathcal{C}(q,s)\simeq\frac{\mu_{1}a}{s\left[\mu_{1}(1-\cos(q))+2\gamma\right]} \tag{118}\] Plugging this in Eq. (112) gives \[\text{second term}\simeq\frac{(\mu_{1}a)^{2}}{2\pi s}\int_{-\pi}^{\pi}\frac{ dq\;e^{-iq}}{\left[\frac{\beta(q)}{2}-2\gamma\right]\left[\frac{\beta(q)}{2}+2 \gamma\right]}\;\left(e^{-2\gamma t}-e^{-\frac{\beta(q)}{2}t}\right). \tag{119}\] Performing integration over \(q\) at this stage turns out to be difficult. 
However, we can carry out this by taking an additional Laplace transform with respect to \(t\rightarrow\lambda\) under which the second term becomes second term \[\simeq\frac{(\mu_{1}a)^{2}}{2\pi s(\lambda+2\gamma)}\int_{-\pi}^{\pi} \frac{dq\ e^{-itiq}}{\left[2\gamma+\mu_{1}(1-\cos(q))\right]\left[\lambda+\mu_{ 1}(1-\cos(q))\right]},\] (120) \[\simeq\frac{(\mu_{1}a)^{2}}{8\pi s\gamma^{2}}\ \int_{-\pi}^{\pi}\ dq\ \frac{e^{-itiq}}{\lambda+\mu_{1}q^{2}/2}.\] (121) In going to the second line, we have used the approximation that for small \(\lambda\) (which corresponds to large \(t\)), the integral is dominated by small values of \(q\) which then allows us to write \(1-\cos(q)\simeq q^{2}/2\). Next, we change the variable \(q=\sqrt{\frac{2\lambda}{\mu_{1}}}\ w\) and take \(\lambda\to 0\) to rewrite Eq. (121) as second term \[\simeq\frac{a^{2}\ \mu_{1}^{3/2}}{4\pi s\gamma^{2}\sqrt{2\lambda}} \ \int_{-\infty}^{\infty}dw\ \frac{e^{-iiw\sqrt{\frac{2\lambda}{\mu_{1}}}}}{1+w^{2}},\] (122) \[\simeq\frac{a^{2}(\mu_{1})^{3/2}}{4s\gamma^{2}\sqrt{2\lambda}}e^ {-|i|\sqrt{\frac{2\lambda}{\mu_{1}}}}.\] (123) Finally performing the inverse Laplace transformation from \(\lambda\to t\) yields second term \[\simeq\frac{a^{2}\ \mu_{1}^{3/2}}{4s\gamma^{2}\sqrt{2\pi t}}\exp\left(-|i|^ {2}/2\mu_{1}t\right).\] (124) ### \(\tilde{\mathcal{S}}_{i}(s,t)\) in Eq. (112) Comparing the first term and the second term in Eqs. (117) and (124) respectively, we find that the leading order contribution to \(\tilde{\mathcal{S}}_{i}(s,t)\) in Eq. (112) at large \(t\) comes only from the first term. This allows us to write \[\tilde{\mathcal{S}}_{i}(s,t)\simeq\frac{\sqrt{\pi}\zeta_{1}}{2}\Big{[}\frac{1 }{s^{3/2}}-\frac{1}{s}\Big{\{}\frac{\sqrt{2t}}{\sqrt{\pi}}e^{-\frac{i^{2}}{2 \mu_{1}t}}+\frac{i}{\sqrt{\mu_{1}}}\mathrm{erf}\left(\frac{i}{\sqrt{2\mu_{1} \mathrm{t}}}\right)\Big{\}}\Big{]}. \tag{125}\] Notice that our result is valid only for large \(t_{0}\) and large \(t\) but with the ratio \(t/t_{0}\ll 1\). Finally taking the inverse Laplace transformation, we obtain the expression of \(S_{i}(t_{0},t_{0}+t)\) presented in Eq. (63).
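As a simple numerical consistency check of this last step, one can invert Eq. (125) term by term, using \(\mathcal{L}^{-1}[s^{-3/2}]=2\sqrt{t_{0}/\pi}\) and \(\mathcal{L}^{-1}[s^{-1}]=1\), and compare the outcome with the compact form of Eq. (63). The sketch below does this for a few illustrative parameter values (arbitrary numbers, not taken from the simulations); agreement of the two columns only confirms the algebra relating Eqs. (63), (64) and (125).

```python
import numpy as np
from scipy.special import erf

def W(y):
    """Scaling function of Eq. (64): W(y) = exp(-y^2) + sqrt(pi)*y*erf(y)."""
    return np.exp(-y**2) + np.sqrt(np.pi) * y * erf(y)

def S_eq63(i, t0, t, zeta1, mu1):
    """Compact form of Eq. (63)."""
    y = abs(i) / np.sqrt(2.0 * mu1 * t)
    return zeta1 * np.sqrt(t0) - zeta1 * np.sqrt(t) / np.sqrt(2.0) * W(y)

def S_from_eq125(i, t0, t, zeta1, mu1):
    """Term-by-term inverse Laplace transform of Eq. (125):
       L^{-1}[s^{-3/2}] = 2*sqrt(t0/pi),  L^{-1}[s^{-1}] = 1  (for t0 > 0)."""
    y = abs(i) / np.sqrt(2.0 * mu1 * t)
    lead = np.sqrt(np.pi) * zeta1 / 2.0 * 2.0 * np.sqrt(t0 / np.pi)
    sub = np.sqrt(np.pi) * zeta1 / 2.0 * (
        np.sqrt(2.0 * t) / np.sqrt(np.pi) * np.exp(-y**2)
        + abs(i) / np.sqrt(mu1) * erf(y)
    )
    return lead - sub

# Illustrative parameter values (not taken from the paper's simulations):
for i in (0, 3, 10):
    a = S_eq63(i, t0=3000.0, t=100.0, zeta1=0.7, mu1=0.5)
    b = S_from_eq125(i, t0=3000.0, t=100.0, zeta1=0.7, mu1=0.5)
    print(i, a, b)   # the two columns should agree to machine precision
```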
2310.05525
Physical Layer Security in a Private 5G Network for Industrial and Mobility Application
Cellular communication technologies such as 5G are deployed on a large scale around the world. Compared to other communication technologies such as WiFi, Bluetooth, or Ultra Wideband, the 5G communication standard describes support for a large variety of use cases, e.g., Internet of Things, vehicular, industrial, and campus-wide communications. An organization can operate a Private 5G network to provide connectivity to devices in their manufacturing environment. Physical Layer Key Generation (PLKG) is a method to generate a symmetric secret on two nodes despite the presence of a potential passive eavesdropper. To the best of our knowledge, this work is one of the first to implement PLKG in a real Private 5G network. Therefore, it highlights the possibility of integrating PLKG in the communication technology highly relevant for industrial applications. This paper exemplifies the establishment of a long-term symmetric key between an aerial vehicle and IT infrastructure both located in a manufacturing environment and communicating via the radio interface of the Private 5G network.
Shivraj Hanumant Gonde, Christoph Frisch, Svetoslav Duhovnikov, Martin Kubisch, Thomas Meyerhoff, Dominic Schupke
2023-10-09T08:45:00Z
http://arxiv.org/abs/2310.05525v1
# Physical Layer Security in a Private 5G Network for Industrial and Mobility Application ###### Abstract Cellular communication technologies such as 5G are deployed on a large scale around the world. Compared to other communication technologies such as WiFi, Bluetooth, or Ultra Wideband (UWB), the 5G communication standard describes support for a large variety of use cases, e.g., Internet of Things (IoT), vehicular, industrial, and campus-wide communications. An organization can operate a Private 5G network to provide connectivity to devices in their manufacturing environment. Physical Layer Key Generation (PLKG) is a method to generate a symmetric secret or two nodes despite the presence of a potential passive eavesdropper. To the best of our knowledge, this work is one of the first to implement PLKG in a real Private 5G network. Therefore, it highlights the possibility of integrating PLKG in the communication technology highly relevant for industrial applications. This paper exemplifies the establishment of a long-term symmetric key between an aerial vehicle and IT infrastructure both located in a manufacturing environment and communicating via the radio interface of the Private 5G network. Physical Layer Security, Wireless Communication, 5G ## I Introduction The promise of solving mathematically complex problems more efficiently has led to major developments in the field of quantum computing. A quantum computer reduces the time needed to solve some complex problems, which threatens existing systems relying on traditional symmetric and asymmetric cryptography algorithms such as Advanced Encryption Standard (AES), Diffie-Hellman (DH), and Rivest-Shamir-Adleman (RSA). Asymmetric cryptography algorithms, which are based on mathematical problems such as discrete logarithm and prime factorization, are not solvable on a classical computer in polynomial time but can be solved on a quantum computer using Shor's algorithm [1]. Increasing the key length used for symmetric cryptography algorithms such as AES from 128 bits to 256 bits is one solution to safeguard against attacks from a quantum computer [2]. Quantum Key Distribution (QKD) and Post Quantum Cryptography (PQC) are popular solutions to overcome threats posed by quantum computers. Ranging from sensors to aerial vehicles, devices intended for mobile use cases rely on wireless communication for connectivity with varying levels of criticality and Quality of Service (QoS) requirements. Out of the wide variety of options available for establishing wireless connectivity, cellular technology in comparison is one of the most widely adopted with large-scale deployments around the world. This makes it a suitable candidate for many applications such as IoT, vehicular networks, or industrial networks. Due to the broadcast nature of wireless channels, securing communication between wireless nodes is important. Physical Layer Security (PLS), as explored in this study, is a solution which could be integrated into many use cases. PLKG, a subset of PLS, generates a secret bit stream on two wireless nodes even in the presence of an eavesdropper. This method of generating bits can be implemented with low overhead as part of the channel estimation process typically carried out using the pilot signals. 
Recent developments in the area of cellular technology, electric vertical take-off and landing (eVTOL) aircraft, and quantum computing motivate this work; it explores a different approach to establish a secure link between communicating nodes against quantum threats. _Contribution:_ This work aims to establish a secure communication link between an eVTOL outside the manufacturing environment (e.g. when handed over to the customer) and the IT infrastructure present in the manufacturing environment using a symmetric key pair, which is generated while the eVTOL was being manufactured. During the final stages of manufacturing, while the eVTOL is still in the trusted manufacturing environment, PLKG is used to generate a symmetric key pair between the eVTOL and the IT infrastructure. The generated key is used to secure communications between the eVTOL and IT infrastructure at a later stage, i.e., after the eVTOL moves out of the manufacturing environment, where key generation is more cumbersome. Beyond this use case, PLKG finds application in many more cases in the mobility domain and other industrial domains, where secure wireless connectivity is required. Hereby, the contribution of the work is an implementation of PLKG in a real Private 5G network and an evaluation of the implementation to assess the feasibility of using such a method for generating long-term keys of sufficient length and entropy. _Outline:_ Section II discusses the features of wireless channels which enable PLKG, the steps involved in PLKG, and the contributions of this work and previous works. Section III explains the use case and measures taken to implement PLKG in a Private 5G network. Details of the implementation and results are presented in Section IV. Section V concludes this paper. ## II Background PLKG in a wireless channel is possible due to its frequency selective fading nature in environments where there is motion around the wireless nodes or in situations where one or both the nodes are moving. The bits generated in such channels are similar on both nodes as frequency-selective fading is reciprocal in nature within a period of time, referred to as coherence time, if no interferers are present. Hence, the frequency-selective nature of the wireless channel, if estimated by two communicating nodes within the coherence time, will be highly correlated in a non-interfered situation. Important features of a wireless channel that enable PLKG are as follows: 1. _Frequency-selective fading_: For generating dissimilar bits using PLKG, the wireless channel should affect each frequency component of the signal differently, i.e., it has to be frequency-selective. In a flat fading channel, repeating bits would occur, and the entropy of the generated bit stream would be low. 2. _Time-varying nature_: The frequency-selective nature of a channel should vary in time so that every channel estimate results in a different set of bits. A frequency-selective wireless channel static in time would produce highly correlated bit streams from consecutive channel probes. An important factor to consider is the presence of an eavesdropper. When two nodes, Alice and Bob, generate secret bits using PLKG, a third node Eve, who is aware of the algorithm used by Alice and Bob, cannot generate the same bit stream if it is located at least half a wavelength away from both Alice and Bob. This is due to spatial de-correlation in wireless channels as discussed in [3]. 
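The half-wavelength spatial-decorrelation distance invoked here, roughly 4 cm in the 5G n78 band, follows from a one-line calculation. A small sketch, assuming a representative n78 carrier frequency of 3.5 GHz (an illustrative value, not one fixed by this paper):

```python
# Half-wavelength spatial decorrelation distance for a 5G n78 carrier.
# 3.5 GHz is used here as a representative n78 frequency (an assumption,
# not a value stated in this paper).
c = 299_792_458          # speed of light in m/s
f = 3.5e9                # carrier frequency in Hz
half_wavelength = c / (2 * f)
print(f"{half_wavelength * 100:.1f} cm")   # about 4.3 cm
```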
### _Physical Layer Key Generation_

To generate a symmetric secret bit stream on two communicating wireless nodes, the steps illustrated in Fig. 1 have to be performed by both nodes (Alice and Bob), described as follows:

1. _Channel Probing_: Alice and Bob exchange a signal with each other within the coherence time and capture the fading it undergoes due to the wireless channel, i.e., channel state information (CSI) estimation. Two ways to implement this step in practice are to measure (i) the Channel Frequency Response [8] and (ii) the Received Signal Strength [9]. The former is better regarding the amount of information extracted from the channel, and the latter is relatively easier to retrieve, e.g., from commercial WiFi Network Interface Controllers (NICs) [10].
2. _Quantization_: To convert the CSI estimates into bits, a quantization algorithm is used. It can be a single-level crossing-based or a multi-level quantization, where the entire range of possible values of the CSI is divided into multiple regions, each corresponding to a predefined bit or sequence of bits.
3. _Information Reconciliation_: A mismatch in the CSI estimates at Alice and Bob is expected due to asymmetric hardware defects and non-simultaneous measurements. This leads to mismatches in the bit streams generated by Alice and Bob. In PLKG and QKD, where the goal is to establish a symmetric key pair, an information reconciliation step is performed to correct mismatches or errors in the bit stream. [10] summarizes information reconciliation for PLKG in the existing literature.
4. _Privacy Amplification_: Due to the information reconciliation step, which often includes a public discussion, knowledge of the generated bit stream is leaked. A final step strengthens the generated bit stream to compensate for this and generate a cryptographic key.

Fig. 1: Sequence of steps involved in Physical Layer Key Generation.

In this study, the channel frequency response is measured and used as CSI. The measured CSI is quantized to generate the bit stream. Information Reconciliation and Privacy Amplification are out of scope for this work. We focus on the Channel Probing and Quantization steps depicted in Fig. 1, as they depend on the nature of the wireless channel and the features of the Private 5G network. Information reconciliation is circumvented by choosing suitable parameters for the quantization algorithm, i.e., larger sampling intervals in quantization are used to reduce bit mismatch at the 5G Base Station (gNB) and User Equipment (UE). For Privacy Amplification, we use a hash function to compress the quantized bits. It should be noted that Private 5G enables a trusted environment for key generation (here the manufacturing site), since the spectrum is licensed locally for exclusive use. The access to this local spectrum as well as interferers can be managed by the spectrum owner. Therefore, reciprocity can be ensured. Practical implementations of PLKG have been performed for WiFi, Bluetooth, and UWB [4, 5, 6, 7, 14]. Compared with these communication technologies, 5G by design supports a wider range of applications due to its advantages, including a larger coverage area and increased bandwidth and capacity. Given the flexible nature of the 5G New Radio (5G-NR) physical layer, which includes multiple options for subcarrier spacing, density of reference signals in time and frequency, and flexible time slot configuration for uplink and downlink, a PLKG implementation in a Private 5G network can be refined according to the environment. 
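A compact way to see how the four steps fit together is the following skeleton of the key-generation pipeline on one node. It is only an illustrative outline: the probing, quantization, and reconciliation routines are placeholders to be supplied by the system at hand (in this work, channel frequency response probing, multi-level quantization, and no explicit reconciliation), and `hashlib.sha256` is used merely as an example of a compressing hash for the privacy amplification step.

```python
import hashlib

def generate_key(probe_channel, quantize, n_probes, reconcile=None):
    """Sketch of the PLKG pipeline on one node (Alice or Bob).

    probe_channel : callable returning one CSI estimate          (step 1)
    quantize      : callable mapping a CSI estimate to bits      (step 2)
    n_probes      : number of channel probes to accumulate
    reconcile     : optional bit-mismatch correction             (step 3);
                    skipped in this paper via coarse quantization
    """
    bits = []
    for _ in range(n_probes):
        csi = probe_channel()              # step 1: channel probing
        bits.extend(quantize(csi))         # step 2: quantization
    if reconcile is not None:
        bits = reconcile(bits)             # step 3: information reconciliation
    bitstring = "".join(str(b) for b in bits)
    # step 4: privacy amplification -- compress the agreed bits; a hash is
    # used here purely as an illustration of the compression step.
    return hashlib.sha256(bitstring.encode()).digest()
```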
A simulation study for PLS implementation in Long-Term Evolution (LTE) wireless standard has been presented in [12] and a practical implementation which measures entropy of generated bit stream by recording CSI on a single LTE node is presented in [13]. Lack of practical 5G based PLKG testbench is discussed in [15]. This work demonstrates PLKG in a real Private 5G network by recording CSI on both nodes (gNB and UE) and evaluates its feasibility in a manufacturing environment. ## III Concept PLKG can be used in mobile ad-hoc networks, wireless sensor networks, and wireless local area networks, where the devices can use generated keys immediately. This section explains the usage of the generated keys via PLKG as long-term keys used to establish a secure link between an eVTOL and IT infrastructure in the manufacturing environment. This work aims to explore the applicability of PLKG in the presented use case. Parameters considered in this work within a Private 5G network to enable PLKG are explained in the second part of this section. ### _Use Case_ PLKG is used to generate a symmetric key pair on an eVTOL and IT infrastructure of the manufacturing environment. The manufacturing environment is a final assembly line where hardware and software components are put together to assemble the eVTOL, after which it is delivered to the customer. During this assembly process, PLKG is carried out to generate a key pair for the eVTOL and the IT infrastructure of the manufacturing environment. The environment is considered to be dynamic, with robots and personnel moving around the eVTOL during PLKG. The manufacturing environment has a Private 5G network operating on the 5G-NR n78 band in Time Division Duplex (TDD) mode. A benefit of using TDD mode is that it enables both gNB and UE to probe the same set of frequency components separated in time. The generated keys are used by the vehicle to establish a secure communication link with the manufacturing environment for operations such as software updates and data offloading during its regular maintenance, which is not necessarily in an environment controlled by the manufacturer. The advantages of using PLKG in this scenario are as follows: 1. _Information Theoretically Secure_: secret bits generated via PLKG are information-theoretically secure, i.e., they are secure against threats posed by quantum computers. 2. _Low Overhead Algorithm_: Sufficiently long bits can be generated via PLKG with low overhead. CSI estimates can be extracted from data and reference signals which are transmitted for normal communication between nodes, i.e., there is no need for an exclusive session for PLKG to take place. 3. _Automation_: The entire process of PLKG can be fully automated. Hence, no involvement of personnel is needed and the time during which a given eVTOL carries out PLKG can be hidden or randomized as well. Eve in the manufacturing environment is assumed to be situated at a location \(\gg 4cm\) (half a wavelength of carrier frequency in 5G n78 band is around 4cm) from both the eVTOL and wireless terminal of the IT infrastructure as shown in Fig. 2. To obtain similar CSI, Alice and Bob exchange pilot signals within the coherence period of the wireless channel. A passive Eve located \(>>4cm\) from Alice and Bob records a different CSI as depicted in Fig. 2. This results in an uncorrelated bit stream generated by Eve [3, 11]. 
### _PLKG in 5G_ To implement PLKG in a Private 5G network, a setup with an Amarisoft Classic gNB, a Raspberry Pi with SIM8200EA-M2 5G HAT as UE, and two USRP B210 software-defined radios (SDRs) as recording devices was built. The SDRs recorded raw IQ samples at the antenna port of both devices, from which relevant symbols were extracted and demodulated. The gNB and UE communicated via a 5G link on the n78 band in TDD mode. A bandwidth of 20MHz was used with 30kHz subcarrier spacing, resulting in 612 subcarriers for communication. Demodulation Reference Signals (DMRS) were configured to have two occurrences within a slot with mapping Type A and length 2. Typically, for channel estimation, DMRS symbols are used. In this study, DMRS and Quadrature Phase Shift Keying (QPSK) modulated data symbols were used to get the channel estimate. Multiple QPSK modulated data symbols within a single frame were averaged to get a less noisy estimate of the channel, as shown in Fig. 3. Time-sharing of the channel was configured such that the uplink and downlink took place in contiguous blocks of 2ms and 2.5ms, respectively, in every 5ms time period. The remaining 0.5ms was not used by uplink or downlink and occurred between an uplink and a downlink block. Measurements were carried out in an indoor lab as well as in an open space outdoors. Ideally, to implement PLKG on 5G-NR enabled devices (UE or gNB), the DMRS symbols after demodulation can be used as an input to the quantization algorithm to generate the bit stream. ## IV Results Using the Private 5G network setup built for PLKG, experiments were performed indoors and outdoors to record CSI estimates in environments that in part mimic a manufacturing environment. The recorded CSI was quantized, and a bit stream was generated. Using a hash function, the bit stream was compressed, and the corresponding estimate of bit-level security was calculated based on the Lempel-Ziv-Welch lossless compression algorithm. ### _Quantization Algorithm_ A quantization algorithm generates bits from the CSI estimates. A two-level (L=2) quantization can be used, where a threshold is defined and bits are derived based on the CSI being above or below the threshold. This approach results in blocks of 1's and 0's when the channel does not change rapidly over consecutive subcarriers, i.e., consecutive frequency components. To overcome this challenge, the quantization region was divided into multiple levels on the y-axis, denoted by L. In addition to multiple levels, the width of each interval was computed in two ways: one where all levels have the same width, and another where all levels are equiprobable, as first proposed for PLKG in [5]. The number of levels L was varied from 2 to 16. The number of bits sampled from each CSI estimate is denoted by S, which was varied from 2 to 16. Elimination criteria were also implemented to discard CSI estimates representing a static channel and estimates from situations where the received signals had a very low signal-to-noise ratio. ### _Results_ A larger variance was observed in a dynamic channel, i.e., in an environment where people were moving around the gNB and UE, as compared to a static channel where people and objects around the communicating nodes were stationary. Compared to a static channel, the variance in CSI estimates in a dynamic channel indoors and outdoors was 1.35 times and 3.0 times higher, respectively. In a manufacturing environment, it is assumed that moving people, objects, and robots create a dynamic channel. 
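A minimal sketch of the multi-level quantizer described in the quantization subsection above is given below, together with the bias check and a compression-based entropy proxy used later in this section. The CSI estimate is assumed to arrive as a one-dimensional array of real values (e.g., per-subcarrier magnitudes); how exactly the S bits are drawn from one estimate is not spelled out here, so even sampling across the estimate is used as an assumption, and zlib (DEFLATE) stands in for the Lempel-Ziv-Welch compressor used in the paper.

```python
import math
import zlib
import numpy as np

def quantize_csi(csi, L=4, S=3, equiprobable=False):
    """Turn one CSI estimate (1-D array of real values, e.g. per-subcarrier
    magnitudes) into S bits.

    Each of the L levels is mapped to a fixed bit pattern of ceil(log2(L))
    bits; S bits are then sampled evenly from the resulting sequence (the
    even sampling is an assumption made for illustration).
    """
    csi = np.asarray(csi, dtype=float)
    if equiprobable:                                    # equiprobable levels
        edges = np.quantile(csi, np.linspace(0, 1, L + 1))[1:-1]
    else:                                               # equal-width levels
        edges = np.linspace(csi.min(), csi.max(), L + 1)[1:-1]
    levels = np.digitize(csi, edges)                    # level index per sample
    width = max(1, math.ceil(math.log2(L)))
    stream = "".join(format(int(lv), f"0{width}b") for lv in levels)
    idx = np.linspace(0, len(stream) - 1, S).astype(int)
    return [int(stream[k]) for k in idx]

def bias(bits):
    """Fraction of ones in the bit stream; ideally close to 0.5."""
    return sum(bits) / len(bits)

def compression_ratio(bits):
    """Crude upper-bound entropy proxy: compressed size / raw size, applied
    to the full concatenated bit stream.  zlib (DEFLATE) is used here as a
    stand-in for the Lempel-Ziv-Welch compressor used in the paper."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)
```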
Based on this, PLKG was carried out in a dynamic channel. After channel probing, CSI estimates were quantized and tested for bias, i.e., to check if an equal number of 1's and 0's were generated in the entire bit stream. Ideally, the bias should be close to 0.5. Bias in this study was observed to be closer to 0.5 when L increased beyond 2. This bias test is only used as an indicator to find an anomaly in the generated bit stream before the next test is applied. To compute the upper bound of entropy for the generated bit stream, the Lempel-Ziv-Welch lossless compression algorithm was used. After compression, the size of the bit stream reduced to 0.2 to 0.1 times of the input bit stream, i.e., the bit stream generated after quantization. Table I summarizes the results of PLKG showing the number of bits generated before and after compression for equal width of quantization levels, L = 4, 7 and S = 3, 5, 7, 9. Fig. 2: An illustration of the use case where an eVTOL (Alice) and IT infrastructure (Bob) both present in the manufacturing environment communicate via 5G-NR interface to generate a long-term symmetric key pair using PLKG in the presence of a passive eavesdropper. For a symmetric key to be established between the nodes, the generated bit stream on both sides should be similar. Fig. 3 shows the CSI estimates at gNB and UE recorded within a duration of 10ms. For L = 4 and S = 3, similar bit streams were derived at gNB and UE. For the considered quantization parameters at a channel probing rate of 10ms for a duration of 5 seconds, a maximum of 142 bits of entropy can be derived as seen in Table I. ### _Discussion_ _Limitations due to reciprocity:_ The limit on L and S was due to the limitations with respect to reciprocity. For L = 7, the resulting bit stream at gNB and UE had mismatches. To reduce the number of mismatches to 0, L = 4 and S = 3 were chosen. Other than increasing the duration of PLKG, for achieving a higher key generation rate, reciprocity between measurements at gNB and UE must be improved by compensating for hardware defects. The poor reciprocal behavior can be in part attributed to the measurement setup which consisted of a SDR tapping out signal from the antenna port, and to the use of a demodulator at a very early stage of development to extract CSI estimates. Impairments specific to this such as the DC offset in CSI estimate at UE as seen in Fig. 3 affect reciprocity. To overcome this, the DC offset was suppressed by interpolating to the nearest neighbours as seen for CSI estimate at gNB in Fig. 3. In this study, the correlation coefficient between CSI measurements at gNB and UE varied from 0.1 to 0.85, the cause of the low correlation must be further investigated along with better methods to extract less noisy CSI estimates from the received signal. _Eve:_ Including an eavesdropper in future studies will help narrow down suitable quantization parameters. For example, in scenarios where CSI estimates of gNB and UE have low correlation coefficients, reducing L and S will result in a similar bit stream. The extent up to which L and S can be lowered in the presence of an eavesdropper must be studied. Very low values of L and S will result in a similar bit stream at Alice, Bob, and Eve. _PLKG in Private 5G:_ In a Private 5G network, as compared to a Public 5G network, physical layer parameters of 5G signals can be tuned for better PLKG performance. 
## V Conclusion This work is one of the first to demonstrate a practical implementation of PLKG in a real Private 5G network. The feasibility of deriving a symmetric bit stream with an entropy of 142 bits at the gNB and UE within a duration of 5 seconds was shown. The chosen indoor and outdoor environments were found to have sufficient entropy to generate a 256-bit long key within a few seconds. A distinction between static and dynamic channels was made, and the dynamic channel was found to be suitable for PLKG in both the outdoor and indoor environments. The results of this study motivate further development of a 5G-based PLKG testbench and investigation into PLKG for next-generation cellular networks due to its wide range of applications. ## Acknowledgment This work was partly funded by the Bavarian Ministry of Economic Affairs, Regional Development and Energy as part of the project 6G Future Lab Bavaria.
2307.10281
Semi-supervised Cycle-GAN for face photo-sketch translation in the wild
The performance of face photo-sketch translation has improved a lot thanks to deep neural networks. GAN based methods trained on paired images can produce high-quality results under laboratory settings. Such paired datasets are, however, often very small and lack diversity. Meanwhile, Cycle-GANs trained with unpaired photo-sketch datasets suffer from the \emph{steganography} phenomenon, which makes them not effective to face photos in the wild. In this paper, we introduce a semi-supervised approach with a noise-injection strategy, named Semi-Cycle-GAN (SCG), to tackle these problems. For the first problem, we propose a {\em pseudo sketch feature} representation for each input photo composed from a small reference set of photo-sketch pairs, and use the resulting {\em pseudo pairs} to supervise a photo-to-sketch generator $G_{p2s}$. The outputs of $G_{p2s}$ can in turn help to train a sketch-to-photo generator $G_{s2p}$ in a self-supervised manner. This allows us to train $G_{p2s}$ and $G_{s2p}$ using a small reference set of photo-sketch pairs together with a large face photo dataset (without ground-truth sketches). For the second problem, we show that the simple noise-injection strategy works well to alleviate the \emph{steganography} effect in SCG and helps to produce more reasonable sketch-to-photo results with less overfitting than fully supervised approaches. Experiments show that SCG achieves competitive performance on public benchmarks and superior results on photos in the wild.
Chaofeng Chen, Wei Liu, Xiao Tan, Kwan-Yee K. Wong
2023-07-18T10:58:29Z
http://arxiv.org/abs/2307.10281v1
# Semi-supervised Cycle-GAN for face photo-sketch translation in the wild ###### Abstract The performance of face photo-sketch translation has improved a lot thanks to deep neural networks. GAN based methods trained on paired images can produce high-quality results under laboratory settings. Such paired datasets are, however, often very small and lack diversity. Meanwhile, Cycle-GANs trained with unpaired photo-sketch datasets suffer from the _steganography_ phenomenon, which makes them not effective to face photos in the wild. In this paper, we introduce a semi-supervised approach with a noise-injection strategy, named Semi-Cycle-GAN (SCG), to tackle these problems. For the first problem, we propose a _pseudo sketch feature_ representation for each input photo composed from a small reference set of photo-sketch pairs, and use the resulting _pseudo pairs_ to supervise a photo-to-sketch generator \(G_{p2s}\). The outputs of \(G_{p2s}\) can in turn help to train a sketch-to-photo generator \(G_{s2p}\) in a self-supervised manner. This allows us to train \(G_{p2s}\) and \(G_{s2p}\) using a small reference set of photo-sketch pairs together with a large face photo dataset (without ground-truth sketches). For the second problem, we show that the simple noise-injection strategy works well to alleviate the _steganography_ effect in SCG and helps to produce more reasonable sketch-to-photo results with less overfitting than fully supervised approaches. Experiments show that SCG achieves competitive performance on public benchmarks and superior results on photos in the wild. ## 1 Introduction Face photo-sketch translation can be considered as a specific type of image translation between an input face photo and sketch. It has a wide range of applications. For example, police officers often have to identify criminals from sketch images, sketch images are also widely used in social media. There are lots of works on face photo-sketch translation. Traditional methods are based on patch matching. They usually divide an input photo into small patches and find corresponding sketch patches in a reference dataset composed of well-aligned photo-sketch pairs. In this way, they (Song et al., 2014; Zhou et al., 2012; Zhu et al., 2017; Wang and Tang, 2009) achieved pleasant results without explicitly modeling the mapping between photos and sketches, which is highly non-linear and difficult. However, sketches generated by these methods are often over-smoothed and lack subtle contents, such as ears in Fig. 1(a)(ii). Moreover, these methods are usually very slow due to the time-consuming patch matching and optimization process. Recent methods based on Convolutional Neural Networks (CNNs) try to directly learn the translation between photos and sketches. However, results produced by simple CNNs are usually blurry (see Fig. 1(a)(iii)), and Generative Adversary Networks (GAN) (Goodfellow et al., 2014) often generate unpleasant artifacts (see Fig. 1(a)(iv)). Finally, due to the lack of large training datasets, these learning-based approaches cannot generalize well to photos in the wild. Latest works (Yu et al., 2020; Wang et al., 2017; Fang et al., 2020) utilize Cycle-GAN (Zhu et al., 2017) to learn the translation between photos and sketches. Cycle-GAN is designed for unpaired translation between different domains. Styles are translated with a discriminator loss and content consistency is guaranteed with a cycle-consistency loss. 
However, the cycle-consistency loss used to constrain content is weak, and therefore these methods still require paired data to calculate an MSE (mean squared error) loss between the prediction and ground truth. In experiments, we observed that models directly using unpaired Cycle-GAN fail to preserve facial content (see Fig. 2). This is because Cycle-GAN learns to "hide" information of the input photos in the generated sketches as invisible high-frequency noise, also called _steganography_(Chu et al., 2017; Bashkirova et al., 2019). It makes it difficult to learn face photo-sketch translation with Cycle-GAN in an unpaired setting. Please refer to Sec. 3.1 for a detailed discussion. In this paper, we propose a semi-supervised learning framework based on Cycle-GAN, named Semi-Cycle-GAN (SCG), for face photo-sketch translation. To ensure content consistency, we introduce a novel _pseudo sketch feature_ (PSF) to supervise the training of the photo-to-sketch generator \(G_{p2s}\). Figure 1(b) shows the pipeline to construct PSF for an input photo without ground truth sketch. Suppose we have a small reference set of photo-sketch pairs and a large face photo dataset without ground-truth sketches. Similar to the exemplar-based approach, we first subdivide an input photo and its VGG-19 (Simonyan and Zisserman, 2014) feature maps into overlapping patches. We then match (in the feature space) these photo patches with the photo patches in the reference set and compose a PSF from the VGG-19 features of the corresponding sketch patches in the reference set. We next supervise the training of \(G_{p2s}\) using the MSE between the feature maps of the generated sketch and the PSF of the input photo. The motivation for PSF is that styles of sketches are consistent for facial components with similar shapes. To find corresponding sketch patches for an input photo, we only need to match the facial components with similar shapes in the reference set. Since the shapes of facial components are limited, a small reference set with a few hundreds of photo-sketch pairs is often sufficient for this purpose. However, the same approach cannot be used for training the sketch-to-photo generator \(G_{s2p}\) because sketch patches with the same shape may give rise to photo patches of many different styles. Instead, we follow Cycle-GAN and use sketches generated by \(G_{p2s}\) to train \(G_{s2p}\) in a self-supervised manner. Although the proposed PSF helps to constrain the contents of the output sketches from \(G_{p2s}\), we find _steganography_ still exists and is quite harmful to the training of \(G_{s2p}\) because it learns to cheat. To solve this problem, we employ a simple _noise-injection_ strategy to disrupt the invisible steganography and force \(G_{s2p}\) to learn better translation from sketches to photos. Although the inputs of \(G_{s2p}\) are noisy during training, we observed that \(G_{s2p}\) can handle clean sketches quite well during testing due to the intrinsic image prior of CNNs (Ulyanov et al., 2017). Experiments demonstrated that the _noise-injection_ strategy can largely benefit the training of \(G_{s2p}\). In summary, our main contributions are: * We propose a semi-supervised learning framework based on Cycle-GAN, named Semi-Cycle-GAN, for face photo-sketch translation. * The proposed _pseudo sketch feature_ (PSF) allows us to train \(G_{p2s}\) using a small reference set of photo-sketch pairs together with a large face photo dataset without ground-truth sketches. 
This enables our networks to generalize well to face photos in the wild. * We introduce a self-supervised approach to train the sketch-to-photo generator \(G_{s2p}\)_without using real sketches_ through cycle-consistency. In particular, we find that cycle-consistency loss suffers greatly from invisible steganography, and the simple _noise-injection_ strategy helps a lot to improve it. A preliminary version of this work appeared in Chen et al. (2018). We extend it in five aspects: (1) we combine our previously proposed semi-supervised learning framework with cycle-consistency to conduct both photo-to-sketch and sketch-to-photo translations; (2) we find that cycle-consistency loss suffers greatly from invis Figure 1: Example results comparison and the proposed pseudo sketch feature. ible steganography, and the simple _noise-injection_ strategy helps a lot to improve it; (3) we add a Gram matrix loss based on PSF which provides second-order style supervision; (4) we provide more comparisons with recently proposed methods such as PS2MAN (Wang et al., 2017), SCA-GAN (Yu et al., 2020), Knowledge Transfer (Zhu et al., 2019) (denoted as KT), GENRE (Li et al., 2021) and PANet (Nie et al., 2022); (5) we adopt recent perceptual oriented metrics (_i.e._, LPIPS (Zhang et al., 2018), DISTS (Ding et al., 2020), and FID (Heusel et al., 2017)) for performance evaluation. In particular, our extended framework shows better performance than Chen et al. (2018). ## 2 Related Works **Exemplar-Based Methods** Since photos and sketches are in two different modalities, it is not straightforward to learn a direct mapping between them. Tang and Wang (2003) introduced eigentransformation to perform exemplar matching between photos and sketches by assuming a linear transformation between them. Liu et al. (2005) noticed that the linear assumption holds better locally, and proposed the patch-based local linear embedding (LLE). Wang and Tang (2009) introduced a multi-scale markov random fields (MRF) model to resolve inconsistency between adjacent patches. Zhang et al. (2010) extended MRF with shape priors and SIFT features. Zhou et al. (2012) proposed the markov weight fields (MWF) model to synthesize new sketch patches that are not present in the training dataset. Gao et al. (2012) proposed to adaptively determine the number of candidate patches by sparse representation. Wang et al. (2013) proposed a transductive model which optimizes the MRF-based photo-to-sketch and sketch-to-photo models simultaneously. A few works such as Song et al. (2014)and Wang et al. (2017) tried to improve the efficiency of the sketch generation procedure. Recent methods Zhu et al. (2017) and Chen et al. (2018) used features from a pretrained CNN network as the patch feature to replace unrobust traditional features. **Learning-Based Methods** In recent years, CNN based methods have become the mainstream. Zhang et al. (2015) proposed to directly translate the input photo to sketch with a fully convolution network (FCN). Zhang et al. (2017) introduced a branched fully convolutional network (BFCN) which is composed of a content branch and a texture branch with different losses. Wang et al. (2017) improved the vanilla GAN with multi-scale structure for face photo-sketch translation. Wang et al. (2017) introduced multi-scale discriminators to CycleGAN. Zhang et al. (2018) proposed multi-domain adversarial learning in the latent feature space of faces and sketches. Fang et al. 
(2020) introduced VGG-based feature identity loss to better preserve identity information. Yu et al. (2020) extended Cycle-GAN (Zhu et al., 2017) with facial parsing map and proposed the SCA-GAN. Some recent popular works (Yi et al., 2019, 2020, 2021; Huang et al., 2021; Li et al., 2020) consider a different kind of portrait style with simple thick lines and achieve pleasant results. However, it is out of the scope of this paper and hence we do not compare with them in this work. ## 3 Semi-Cycle-GAN with noise-injection ### Steganography in Cycle-GAN In this section, we first give a brief review of the unpaired Cycle-GAN for face photo-sketch translation. We then show how Cycle-GAN cheats with invisible steganography. Given a photo set \(P\) and a sketch set \(S\), Cycle-GAN learns two generators: a photo-to-sketch generator \(G_{p2s}\) that maps photo \(p\in P\) to sketch \(s\in S\), and a symmetric sketch-to-photo generator \(G_{s2p}\) that maps sketch \(s\in S\) to photo \(p\in P\) (see Fig. 3(a)). Two discriminators \(D_{s}\) and \(D_{p}\) are used to minimize the style differences between the generated and real sketches (_i.e._, \(\hat{s}\) and \(s\)) and between generated and real photos (_i.e._, \(\hat{p}\) and \(p\)). Cycle-consistency losses are used to constrain content in Figure 2: Illustration of steganography when training Cycle-GAN with unpaired data. formation in photo-sketch translation and are given by: \[L_{cyc_{p}} =\mathbb{E}[||G_{s2p}(G_{p2s}(p))-p||], \tag{1}\] \[L_{cyc_{s}} =\mathbb{E}[||G_{p2s}(G_{s2p}(s))-s||].\] Note that Eq. (1) does not impose a direct constraint over \(G_{p2s}(p)\) and \(G_{s2p}(s)\), and this leads to a large solution space. Chu et al. (2017) pointed out that Cycle-GAN tends to hide invisible steganography in the outputs to satisfy the cycle-consistency constraint when two domains have different complexity. Specifically, in face photo-sketch translation, the photo domain \(P\) is much more complex than the sketch domain \(S\), which makes learning of \(G_{s2p}\) much more difficult than \(G_{p2s}\). As a consequence, when we train \(G_{s2p}\) and \(G_{p2s}\) in an unpaired manner with cycle-consistency, the networks tend to learn a trivial solution by cheating with steganography rather than learning the desired translation networks. Figure 4 provides a theoretical illustration of steganography effect and how noise-injection helps to solve this problem. Given that the high-dimensional photo domain \(P\) contains a more extensive range of information in comparison to the low-dimensional sketch domain \(S\), it poses a considerable challenge for the \(G_{s2p}\) network to reconstruct the missing information (_e.g._, hair color) from grayscale input sketches. The networks tend to learn to conceal the extra information in a low-amplitude signal (_i.e._, the red curve) to facilitate seamless reconstruction of the high-dimensional signal while retaining the appearance of the sketch signal. Since steganography needs to be low-amplitude signals, it is vulnerable to disruption through the application of random noise. In addition, \(G_{s2p}\) with random noise will act as a normal GAN to complement missing information in the low-dimensional sketch domain. Figure 2 shows some example results when training Cycle-GAN with unpaired dataset. We can observe from the left half of Fig. 
2 (_photo\(\rightarrow\)sketch\(\rightarrow\)photo_) that the lost letter in the generated sketch was recovered in the reconstructed photo, and extra glasses in the sketch were removed. A similar phenomenon also appears in the right half (_sketch\(\rightarrow\)photo\(\rightarrow\)sketch_). Closely related works including Chu et al. (2017) and Bashkirova et al. (2019) focus on how to avoid adversarial attack that is usually invisible in the images. We, on the other hand, are the first to study the visual effects brought by such steganography in face photo-sketch translation, which have been ignored by previous works based on Cycle-GAN (Yu et al., 2020; Wang et al., 2017). To solve this problem, we propose the Semi-Cycle-GAN framework for face photo-sketch translation. As shown in Fig. 3(b), our framework is composed of four networks, namely \(G_{p2s}\), \(G_{s2p}\), \(D_{s}\), and \(D_{p}\). Unlike Cycle-GAN, we do not use the bidirectional cycle-consistency loss as a content constraint. We use PSF loss (see Sec. 3.2 for details) to supervise the training of \(G_{p2s}\), and cycle-consistency loss with _noise-injection_ to supervise the training of \(G_{s2p}\). In this manner, we can train our Semi-Cycle-GAN using a small paired photo-sketch dataset together with a large face dataset. ### Pseudo Sketch Feature Given a test photo \(p\), our target is to construct a pseudo sketch feature \(\Phi^{\prime}(p)\) as the supervision using the reference set \(\mathcal{R}\{(p_{i}^{\mathcal{R}},s_{i}^{\mathcal{R}})\}_{i=1}^{N}\), where \(p_{i}^{\mathcal{R}}\) and \(p_{i}^{\mathcal{R}}\) are photos and sketches in \(\mathcal{R}\). We first use a pretrained VGG-19 network to extract a feature map for \(p\) at the \(l\)-th layer, denoted as \(\Phi^{l}(p)\). Similarly, we can get the feature maps for photos and sketches in the reference dataset, _i.e._, \(\{\Phi^{l}(p_{i}^{\mathcal{R}})\}_{i=1}^{N}\) and \(\{\Phi^{l}(s_{i}^{\mathcal{R}})\}_{i=1}^{N}\). The feature maps are then subdivided into \(k\times k\) patches for the following feature patch matching process. For simplicity, we denote a vectorized representation of a \(k\times k\) patch centered at a point \(j\) of Figure 4: Theoretical illustration of how noise-injection works. Figure 3: Framework of unpaired Cycle-GAN and our Semi-Cycle-GAN for face-sketch translation. \(\Phi^{l}(p)\) as \(\Psi_{j}\left(\Phi^{l}(p)\right)\), and the same definition applies to \(\Psi_{j}\left(\Phi^{l}(p_{i}^{\mathcal{R}})\right)\) and \(\Psi_{j}(\Phi^{l}\left(s_{i}^{\mathcal{R}}\right))\). For each patch \(\Psi_{j}\left(\Phi^{l}(p)\right)\), where \(j=1,2,\ldots,m^{l}\) and \(m^{l}=(H^{l}-k+1)\times(W^{l}-k+1)\) with \(H^{l}\) and \(W^{l}\) being the height and width of \(\Phi^{l}(p)\), we find its best match \(\Psi_{j^{\prime}}\left(\Phi^{l}(p_{i^{\prime}}^{\mathcal{R}})\right)\) in the reference set based on cosine distance, _i.e._, \[(i^{\prime},j^{\prime})=\operatorname*{arg\,max}_{\begin{subarray}{c}i^{ \prime}=1-N\\ j^{\prime}=1-m^{l}\end{subarray}}\frac{\Psi_{j}\left(\Phi^{l}(p)\right)\cdot \Psi_{j^{\prime}}\left(\Phi^{l}(p_{i^{\prime}}^{\mathcal{R}})\right)}{\left\| \Psi_{j}\left(\Phi^{l}(p)\right)\right\|_{2}\left\|\Psi_{j^{\prime}}\left( \Phi^{l}(p_{i^{\prime}}^{\mathcal{R}})\right)\right\|_{2}}. 
\tag{2}\] Since photos and their corresponding sketches in \(\mathcal{R}\) are well aligned, the indices of the best matching result \((i^{\prime},j^{\prime})\) can be used directly to find the corresponding sketch feature patch, _i.e._, \(\Psi_{j^{\prime}}\left(\Phi^{l}(s_{j^{\prime}}^{\mathcal{R}})\right)\) which serves as the pseudo sketch feature patch \(\Psi_{j}^{\prime}\left(\Phi^{l}(p)\right)\). Finally, we obtain the pseudo sketch feature representation (at layer \(l\)) for \(p\) as \(\left\{\Psi_{j^{\prime}}^{l}\left(\Phi^{l}(p)\right)\right\}_{j=1}^{m^{l}}\). We provide an intuitive visualization of PSF in supplementary material. ### Loss Functions We train generators (\(G_{p2s}\), \(G_{s2p}\)) and discriminators (\(D_{s}\), \(D_{p}\)) alternatively with the following loss functions \[L_{G}^{total}=\lambda_{p}L_{p}+\lambda_{sty}L_{sty}+\lambda_{cyc}L_{cyc}+ \lambda_{adv}(L_{G_{p2s}}+L_{G_{s2p}}), \tag{3}\] \[L_{D}^{total}=L_{D_{p2s}}+L_{D_{s2p}} \tag{4}\] where \(\lambda_{p}\), \(\lambda_{sty}\), \(\lambda_{cyc}\), and \(\lambda_{adv}\) are trade-off weights for each loss term respectively. We describe details of each term as below. **Pseudo Sketch Feature Loss** The pseudo sketch feature loss is formulated as \[L_{p}(p,\tilde{s})=\sum_{l=3}^{5}\sum_{j=1}^{m^{l}}\left\|\Psi_{j}\left(\Phi^ {l}(\tilde{s})\right)-\Psi_{j}^{\prime}\left(\Phi^{l}(p)\right)\right\|_{2}^{ 2}, \tag{5}\] where \(l=3,4,5\) are relu3_1, relu4_1, and relu5_1 in VGG-19, and \(\tilde{s}\) is the predicted sketch from \(G_{p2s}\). **Style Loss** Inspired by recent style transfer methods, we include Gram Matrix loss (Gatys et al., 2016) as a second-order feature loss to provide better style supervision. We first average pool features in each \(k\times k\) patch for both \(\Psi_{j}\left(\Phi^{l}(\tilde{s})\right)\) and \(\Psi_{j}^{\prime}\left(\Phi^{l}(p)\right)\), resulting in features \(\psi_{l}\) and \(\psi_{l}^{\prime}\) of size \(m^{l}\times c^{l}\), where \(c^{l}\) is the channel number in \(l\)-th layer. We then calculate the Gram Matrix loss as \[L_{sty}(p,\tilde{s})=\sum_{l=3}^{5}\frac{1}{(c^{l}m^{l})^{2}}\|\psi_{l}^{T} \psi_{l}-\psi_{l}^{\prime T}\psi_{l}^{\prime}\|_{2}^{2}, \tag{6}\] **Cycle-Consistency with Noise-injection** We use the cycle-consistency loss with _noise-injection_ as supervision, which is formulated as \[L_{cyc}(p)=\|G_{s2p}\left(G_{p2s}(p)+\sigma z_{1}\right)-p\|_{2}^{2}, \tag{7}\] where \(z_{1}\) is randomly sampled from a normal distribution with the same dimensions as \(G_{p2s}(p)\), and \(\sigma\) is a hyperparameter that controls the noise level. **GAN Loss** We use the hinge loss to make the training process more stable. The objective functions of hinge loss are given by \[L_{G}=-\mathbb{E}[D(G(x))], \tag{8}\] \[L_{D}=\mathbb{E}[\max(0,1-D(y))]+\mathbb{E}[\max(0,1+D(G(x)))], \tag{9}\] where \(x,y,D\) refer to \(p,s,D_{s}\) when \(G\) is \(G_{p2s}\), and \(s,p,D_{p}\) when \(G\) is \(G_{s2p}\). ## 4 Experiments ### Datasets and Metrics **Datasets** To compare with previous works, we evaluate our model on two public benchmark datasets, namely the CUFS dataset (combination of CUHK (Tang and Wang, 2003), AR (Martinez and benavente., 1998) and XM2VTS (Messer et al., 1999)), and the CUFSF dataset (Zhang et al., 2011b). For semi-supervised learning, we use extra face photos from VGG-Face dataset (Parkhi et al., 2015). We randomly select 1,244 photos from VGG-Face to test model performance on natural images. More details are provided in supplementary material. 
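Before giving the training details, the core ingredients of the objective above — the pseudo sketch feature matching of Eq. (2), a single-layer version of the pseudo sketch feature loss of Eq. (5), and the noise-injected cycle-consistency loss of Eq. (7) — are illustrated by the following minimal PyTorch-style sketch. It relies on simplifying assumptions (batch size one, a single VGG layer, generators passed in as callables, aligned reference pairs of equal spatial size, and an unspecified intensity scale for the noise level \(\sigma\)); it is a sketch rather than a full implementation.

```python
import torch
import torch.nn.functional as F

def extract_patches(feat, k=3):
    """feat: (1, C, H, W) feature map -> (M, C*k*k) vectorized k x k patches."""
    return F.unfold(feat, kernel_size=k).squeeze(0).t()

def pseudo_sketch_feature(photo_feat, ref_photo_feats, ref_sketch_feats, k=3):
    """Compose the pseudo sketch feature of one photo (cf. Eq. (2)):
    each photo feature patch is matched to its cosine-nearest reference photo
    patch, and the co-located reference *sketch* patch is copied.
    Assumes ref_photo_feats[i] and ref_sketch_feats[i] come from an aligned
    photo-sketch pair with identical spatial size."""
    q = F.normalize(extract_patches(photo_feat, k), dim=1)
    ref_p = torch.cat([extract_patches(f, k) for f in ref_photo_feats])
    ref_s = torch.cat([extract_patches(f, k) for f in ref_sketch_feats])
    sim = q @ F.normalize(ref_p, dim=1).t()        # cosine similarities
    idx = sim.argmax(dim=1)                        # best match per patch
    return ref_s[idx]

def psf_loss(fake_sketch_feat, psf, k=3):
    """Single-layer analogue of Eq. (5): squared distance (here averaged)
    between generated-sketch feature patches and the pseudo sketch feature."""
    return F.mse_loss(extract_patches(fake_sketch_feat, k), psf)

def noisy_cycle_loss(G_p2s, G_s2p, photo, sigma=20.0):
    """Eq. (7): Gaussian noise is injected into the generated sketch before
    it is passed to G_s2p, disrupting hidden steganography. The appropriate
    sigma depends on how image intensities are scaled."""
    fake_sketch = G_p2s(photo)
    noisy = fake_sketch + sigma * torch.randn_like(fake_sketch)
    return F.mse_loss(G_s2p(noisy), photo)
```

In the full objective these terms are combined with the Gram-matrix style loss of Eq. (6) and the hinge GAN losses of Eqs. (8) and (9), as in Eq. (3).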
**Training Details** We set all the trade-off weights \(\lambda_{p}\), \(\lambda_{sty}\), \(\lambda_{cyc}\), and \(\lambda_{adv}\) to 1 for simplicity. We use Adam (Kingma and Ba, 2014) with learning rates 0.001 for generators and 0.004 for discriminators, and set \(\beta_{1}=0.9,\beta_{2}=0.999\). The learning rates are linearly decayed to 0 after the first 10 epochs. The training batch size is 2, and models are trained on Nvidia 1080Ti GPUs. **Metrics** For test sets with ground truth, we use FSIM (Zhang et al., 2011), LPIPS (Zhang et al., 2018) and DISTS Ding et al. (2020) to measure the texture quality, and NLDA score to measure the identity similarity following Wang et al. (2017). For the evaluation of face-sketch translation in the wild, there are no ground truth sketches to calculate FSIM, LPIPS, and DISTS. We therefore exploit FID (Heusel et al., 2017) to measure the feature statistic distance between the generated sketch datasets and real sketch datasets. We explain details of these metrics in supplementary material. ### Comparison on Public Benchmarks We evaluate our model on both photo-to-sketch and sketch-to-photo translations on CUFS and CUFSF, which were captured under laboratory settings. We compare our results both qualitatively and quantitatively with four exemplar-based methods, namely MWF (Zhou et al., 2012), SSD (Song et al., 2014), RSLCR (Wang et al., 2017), and DGFL (Zhu et al., 2017), and five GAN-based methods, namely Pix2Pix-GAN (Isola et al., 2017), PS2MAN (Wang et al., 2017), MDAL (Zhang et al., 2018), KT (Zhu et al., 2019) and SCA-GAN (Yu et al., 2020). We obtain the results of MWF, SSD, RSLCR, and DGFL from Wang et al. (2017), the results of SCA-GAN and KT from the respective authors, and use the public codes of Cycle-GAN and PS2MAN to produce the results. We also compare the photo-to-sketch translation results with our previous work FSW (Chen et al., 2018). All the models are trained on the CUFS and CUFSF datasets with the same train/test partition. #### 4.2.1 Photo-to-Sketch Translation Figure 5 shows some photo-to-sketch results on CUFS and CUFSF. Exemplar-based methods (Fig. 5(b,c,d,e)) in general perform worse than learning-based methods (Fig. 5(f,g,h,i,j,k)). Their results are over-smoothed and do not show hair textures. They also fail to preserve contents well, such as hairpins in the first row and glasses in the last row. GAN-based methods can generate better textures, but they usually produce artifacts because of the unstable training. For example, Pix2Pix produces lots of artifacts in the hair and eyes (Fig. 5(f)), and PS2MAN generates lots of artifacts when the facial parts of inputs are not clear or with a strong reflection of light (see the last two rows of Fig. 5(g)). Although the results of SCA-GAN look great, it suffers from incorrect parsing map guidance, such as hairpins in the first row, hairlines in the second row of Fig. 5(h). Referring to Fig. 5(j,k), we have improved our previous results of FSW by introducing \(L_{\text{sty}}\) and the photo reconstruction branch. The quantitative results with different metrics in Tab. 1 support our observations. It can be observed that exemplar-based methods perform much worse in terms of all metrics including FSIM, LPIPS, DISTS and NLDA. KT shows the best FSIM score but poor perceptual scores compared with SCG. We can see from Fig. 5(i) that the textures, especially hair textures, generated by KT are much worse than SCG. 
SCA-GAN generates better textures but the generated images might be different from Figure 5: Examples of synthesized face sketches on the CUFS dataset and the CUFSF dataset. See more examples in supplementary material. the original images (_e.g_., missing components) due to incorrect parsing map, which also leads to poor LPIPS and DISTS scores. In contrast, our SCG presents the second best results in terms of FSIM and the best results in terms of LPIPS and DISTS. As for sketch recognition, SCG also demonstrates best NLDA score on CUFS and competitive results on CUFSF, which clearly demonstrate its superiority. ### Sketch-to-Photo Translation Figure 6 shows some example sketch-to-photo results. Same as photo-to-sketch translation, the results of Pix2Pix and PS2MAN contain many undesired artifacts. SCA-GAN produces results with the best visual quality, which is consistent with the quantitative results shown in Tab. 2. However, it still generates results with missing components under incorrect parsing map predictions, such as the missing eyes and glasses in the last row of Fig. 6(d). Without any GAN losses, KT suffers from unrealistic textures. For instance, results in Fig. 6(e) are grainy. Although SCG is trained in a self-supervised manner without seeing any real input sketches, it still shows competitive performance. Referring to Tab. 2, SCG shows the best or second results in 5 out of 8 columns. The biggest problem of SCG is that the synthesized colors are quite different from the ground truth. This is legitimate because the model is not suppose to recover exact color as ground truth unless overfitting. ### Photo-to-Sketch Translation in the Wild In this section, we will focus on photo-to-sketch translation in the wild. Since there are too many sketch styles in the wild, sketch-to-photo translation in the wild is beyond the scope of this paper, and we will leave it for future work. We compare SCG with other methods which provide codes, including SSD, RSLCR, Pix2Pix-GAN, PS2MAN, Cycle-GAN. Figure 7 shows some photos sampled from our VGG-Face test dataset and the sketches generated by different methods. It can be observed that these photos may show very different lightings and poses _etc_. Among the results of other methods, exemplar-based methods (see Fig. 
7(b,c)) fail to deal with pose changes \begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{FSIM \(\uparrow\)} & \multicolumn{2}{c|}{LPIPS \(\downarrow\)} & \multicolumn{2}{c|}{DISTS \(\downarrow\)} & \multicolumn{2}{c}{NLDA\(\uparrow\)} \\ \cline{2-9} & CUFS & CUFSF & CUFS & CUFS & CUFS & CUFS & CUFS & CUFS & CUFSF \\ \hline \hline MWF & 0.7144 & 0.7029 & 0.3671 & 0.4090 & 0.2533 & 0.2825 & 92.3 & 73.8 \\ SSD & 0.6997 & 0.6824 & 0.4033 & 0.4283 & 0.2536 & 0.2608 & 91.1 & 70.6 \\ RSLCR & 0.6905 & 0.6650 & 0.4042 & 0.4521 & 0.2556 & 0.2896 & 98.0 & 75.9 \\ DGRL & 0.7078 & 0.6957 & 0.3655 & 0.3972 & 0.2410 & 0.2480 & 98.2 & 78.8 \\ \hline Pix2Pix-GAN & 0.7153 & 0.7060 & 0.3600 & 0.3868 & 0.2151 & 0.2025 & 93.8 & 71.7 \\ PS2MAN & 0.7157 & 0.7219 & 0.3794 & 0.4155 & 0.2430 & 0.2471 & 97.6 & 77.0 \\ SCA-GAN & 0.7160 & 0.7268 & 0.3608 & 0.4169 & 0.2085 & 0.2168 & – & – \\ MDAL & 0.7275 & 0.7076 & 0.3319 & 0.3841 & 0.2037 & 0.2096 & 96.6 & 66.7 \\ KT & **0.7369** & **0.7311** & 0.3485 & 0.3743 & 0.2116 & 0.2039 & 98.0 & **80.4** \\ \hline \hline FSW & 0.7274 & 0.7103 & 0.2362 & 0.3787 & 0.2063 & 0.2111 & 98.0 & 78.04 \\ SCG (ours) & 0.7343 & 0.7261 & **0.3232** & **0.3489** & **0.1967** & **0.184** & **98.6** & 78.1 \\ \hline \end{tabular} \end{table} Table 1: Quantitative results for photo-to-sketch translation. SCA-GAN* needs a parsing map as guidance. \begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{FSIM \(\uparrow\)} & \multicolumn{2}{c|}{LPIPS \(\downarrow\)} & \multicolumn{2}{c|}{DISTS \(\downarrow\)} & \multicolumn{2}{c}{NLDA\(\uparrow\)} \\ \cline{2-9} & CUFS & CUFSF & CUFS & CUFS & CUFS & CUFS & CUFS & CUFSF \\ \hline \hline Pix2Pix-GAN & 0.7598 & 0.7877 & 0.3977 & 0.4025 & 0.2421 & 0.2481 & 87.1 & 51.4 \\ PS2MAN & 0.7645 & 0.7870 & 0.3686 & 0.4267 & 0.2254 & 0.2706 & 84.7 & 42.2 \\ SCA-GAN & 0.7653 & **0.8304** & 0.3251 & **0.3198** & 0.1794 & **0.1829** & – & – \\ KT & **0.7794** & 0.7932 & **0.3233** & 0.3758 & 0.1821 & 0.2379 & **93.8** & **65.9** \\ \hline \hline SCG (ours) & 0.7652 & 0.7777 & 0.3574 & 0.3522 & **0.1710** & 0.2092 & 90.0 & 49.7 \\ \hline \end{tabular} \end{table} Table 2: Quantitative results for sketch-to-photo translation. SCA-GAN* needs a parsing map as guidance. Figure 6: Examples of synthesized face photos on the CUFS dataset and the CUFSF dataset. Figure 7: Comparison for images in the wild. Benefiting from the additional training data, SCG can deal with various photos. and different hairstyles. Although GANs can generate some sketch-like textures, none of them can well preserve the contents. The face shapes are distorted and the key facial parts are lost. It can be seen from Fig. 7(g,h) that only FSW and SCG can handle photos in the wild well and generate pleasant results. Compared with FSW, SCG can generate more realistic shadows and textures. The same conclusion can also be drawn from the quantitative results shown in Tab. 3. We also conduct user study to better evaluate their subjective performance, as shown in Tab. 3 right part. We notice that our methods (FSW and SCG) are much preferred over previous methods. By introducing the \(G_{s2p}\) branch and cycle-consistency, SCG further improves the performance of our previous work FSW. Details of user study are in supplementary material. We have also included comparison on the latest in-the-wild benchmark WildSketch Nie et al. (2022), as shown in Tab. 4 and Fig. 8. 
Our findings indicate that when incorporating additional, diverse photos from the VGG face dataset, our method achieves SoTA performance. Figure 8 also supports our claim that the inclusion of extra training photos improves generalization abilities of our model. For instance, ours demonstrates greater robustness towards hair color variations in the first row and shows better result with the presence of a hat in the second row. These results underscore the effectiveness of our proposed semi-supervised approach. ### Ablation Study To study the effectiveness of different components of the proposed method, we gradually modify the baseline Semi-Cycle-GAN and compare their results. Table 5 shows the results of all model variations. We discuss the results below. \begin{table} \begin{tabular}{c|c c c c c} \hline Method & Cycle-GAN & GENRE & CA-GAN & PANet & Ours \\ \hline FSIM\(\uparrow\) & 0.6654 & 0.6902 & 0.6960 & 0.6950 & **0.7** \\ \hline \end{tabular} \end{table} Table 4: Quantitative comparison on WildSketch dataset. Figure 8: Example comparison with WildSketch dataset. \begin{table} \begin{tabular}{c|c|c c c c c} \hline Method & FID\(\downarrow\) & & & & & \\ \hline \hline SSD & 94.6 & & & & & \\ Fast-RSLCR & 144.0 & & & & & \\ Pix2Pix-GAN & 86.7 & & & & & \\ PS2MAN & 90.8 & & & & & \\ Cycle-GAN & 87.8 & & & & & \\ FSW & 81.3 & & & & & \\ SCG (ours) & **67.9** & & & & & \\ \hline \end{tabular} \end{table} Table 3: Quantitative results and user study for photo-to-sketch translation in the wild. Figure 10: Examples of improvement on \(G_{p2s}\) brought by style loss. \begin{table} \begin{tabular}{c|c c c c c c c} \hline Configuration & A & B & C & D & E & F & G \\ \hline \(\sigma^{\prime}\) & 0 & 10 & 20 & 30 & 20 & 20 & 20 \\ \(\lambda^{*}\) & 1 & 1 & 1 & 3 & 5 & 3 \\ \(\lambda^{*}\) & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\ \hline \(\text{IPIS}_{1}\)(\(\text{P2S}\)) & 0.3260 & 0.3273 & 0.3277 & 0.3287 & 0.3257 & 0.3273 & **0.3235** \\ \(\text{IPIS}_{1}\)(\(\text{S2P}\)) & 0.4273 & 0.3454 & 0.3435 & 0.3461 & 0.3435 & 0.3447 & **0.33.94** \\ \hline \end{tabular} \end{table} Table 5: Ablation study of Semi-Cycle-GAN. \(\sigma\): noise level, \(k\): feature patch size, \(L_{avg}\): use second-order style loss or not. **Noise injection.** We show an example result with and without _noise-injection_ in Fig. 9. It can be observed that Fig. 9(c) with \(\sigma=20\) is much better than Fig. 9(b) with \(\sigma=0\). This demonstrates that _noise-injection_ can greatly improve the performance of \(G_{s2p}\). This is because the proposed _noise-injection_ strategy breaks the steganography in the outputs of \(G_{p2s}\), and increases the generalization ability of \(G_{s2p}\). We explore models with different levels of _noise-injection_, and the results are shown in columns A, B, C, and D of Tab. 5. We can see that adding more noise is not helpful to the performance of \(G_{s2p}\) but degrades the performance of \(G_{p2s}\). This is likely because the backward gradients from \(G_{s2p}\) are corrupted when noise-injection level is too high. We empirically find \(\sigma=20\) strikes a good balance between the performance of \(G_{p2s}\) and \(G_{s2p}\). **Patch size.** We present the results with patch size 1, 3, and 5 in columns C, E, and F of Tab. 5 respectively. We can observe that \(k=3\) gives the best performance, while \(k=5\) is worse than \(k=3\). 
This may be caused by the fact that a large patch in the feature space represents a much larger patch in the pixel space and this leads to undesired extra contents in the pseudo sketch feature. We therefore set \(k=3\) in our experiments. **Second-order style loss.** Comparing the results in columns E and G of Tab. 5, we can notice that model with \(L_{sty}\) shows better performance for both \(G_{p2s}\) and \(G_{s2p}\). This is because \(L_{sty}\) provides better style supervision for \(G_{p2s}\), which can in turn benefit the training of \(G_{s2p}\). Figure 10 shows some examples of improvement on \(G_{p2s}\) brought by style loss. **Extra training photos** Introducing more training photos from VGG-Face dataset is the key to improve the generalization ability of our model. As demonstrated in Fig. 11, as we add more photos to the training set, the results improve significantly, see the eyes region. ## 5 Conclusion In this paper, we propose a semi-supervised CycleGAN, named Semi-Cycle-GAN (SCG), for face photo-sketch translation. Instead of supervising our network using ground-truth sketches, we construct a novel pseudo sketch feature representation for each input photo based on feature space patch matching with a small reference set of photo-sketch pairs. This allows us to train our model using a large face photo dataset (without ground-truth sketches) with the help of a small reference set of photo-sketch pairs. Since directly training \(G_{s2p}\) in a self-supervised manner as Cycle-GAN suffers from steganography, we exploit a _noise-injection_ strategy to improve the robustness. Experiments show that our method can produce sketches comparable to (if not better than) those produced by other state-of-the-art methods on four public benchmarks, and outperforms them on photo-to-sketch translation in the wild.
2303.02372
Refinements of degree conditions for the existence of a spanning tree without small degree stems
A spanning tree of a graph with no vertices of degree $2$ is called a {\it homeomorphically irreducible spanning tree} (or a {\it HIST}) of the graph. Albertson, Berman, Hutchinson and Thomassen~[J. Graph Theory {\bf 14} (1990), 247--258] gave a minimum degree condition for the existence of a HIST, and recently, Ito and Tsuchiya~[J. Graph Theory {\bf 99} (2022), 162--170] found a sharp degree-sum condition for the existence of a HIST. In this paper, we refine these results, and extend the first one to a spanning tree in which no vertex other than the endvertices has small degree.
Michitaka Furuya, Akira Saito, Shoichi Tsuchiya
2023-03-04T10:17:15Z
http://arxiv.org/abs/2303.02372v3
# Refinements of degree conditions for the existence ###### Abstract A spanning tree of a graph without no vertices of degree 2 is called a _homeomorphically irreducible spanning tree_ (or a _HIST_) of the graph. Albertson, Berman, Hutchinson and Thomassen [J. Graph Theory **14** (1990), 247-258] gave a minimum degree condition for the existence of a HIST, and recently, Ito and Tsuchiya [J. Graph Theory **99** (2022), 162-170] found a sharp degree-sum condition for the existence of a HIST. In this paper, we refine these results in a sense, and extend the first one to a spanning tree without stems of small degree. _Key words and phrases._ homeomorphically irreducible spanning tree (HIST), \([2,k]\)-ST, minimum degree, degree-sum. _AMS 2010 Mathematics Subject Classification._ 05C05, 05C07. ## 1 Introduction Let \(G\) be a graph. We let \(V(G)\) and \(E(G)\) denote the _vertex set_ and the _edge set_ of \(G\), respectively. For \(u\in V(G)\), let \(N_{G}(u)\) and \(d_{G}(u)\) denote the _neighborhood_ and the _degree_ of \(u\), respectively; thus \(N_{G}(u)=\{v\in V(G):uv\in E(G)\}\) and \(d_{G}(u)=|N_{G}(u)|\). For an integer \(i\geq 0\), let \(V_{i}(G)=\{u\in V(G):d_{G}(u)=i\}\) and \(V_{\geq i}(G)=\{u\in V(G):d_{G}(u)\geq i\}\). We let \(\delta(G)\) denote the _minimum degree_ of \(G\). We let \[\sigma_{2}(G)=\min\{d_{G}(u)+d_{G}(v):u,v\in V(G),\ u\neq v,\ uv\notin E(G)\}\] if \(G\) is not complete; we let \(\sigma_{2}(G)=\infty\) if \(G\) is complete. Let \(G\) be a graph. A spanning tree of \(G\) without vertices of degree \(2\) is called a _homeomorphically irreducible spanning tree_ (or a _HIST_) of \(G\); i.e., a spanning tree \(T\) of \(G\) is a HIST if and only if \(V_{2}(T)=\emptyset\). The structure of a HIST is sometimes used as an essential tool to construct graph classes; for example, in an explicit class of edge-minimal \(3\)-connected plane graphs given by Halin [8], HISTs play a key role. Motivated from such importance, the existence of a HIST (or a large subtree having no vertex of degree \(2\)) has been widely studied (for example, see [1, 2, 3, 9, 12]). During the course of this study, the concept of HISTs was naturally extended: A spanning tree \(T\) of \(G\) is called a \([2,k]\)_-ST_ of \(G\) if \(\bigcup_{2\leq i\leq k}V_{i}(T)=\emptyset\) (see [5]). In this paper, * we refine some known minimum degree/degree-sum conditions assuring us the existence of a HIST, and * we extend one of the results obtained in (i) to the existence of a \([2,k]\)-ST. We first focus on the following theorem, which is one of the fundamental results on degree conditions for HISTs. **Theorem A** (Albertson, Berman, Hutchinson and Thomassen [1]): Let \(G\) be a connected graph of order \(n\), and suppose that \(\delta(G)\geq 4\sqrt{2n}\). Then \(G\) has a HIST. In [1], they remarked that by following the same proof strategy, they can show that for every integer \(k\geq 3\), there exists a constant \(c_{k}\) such that every connected graph of order \(n\) and minimum degree at least \(c_{k}\sqrt{n}\) has a \([2,k]\)-ST. However, they did not investigate the behavior of \(c_{k}\) as a function of \(k\). In this paper, by refining their arguments and adding a new observation, we prove that \(c_{k}=O(k^{\frac{3}{2}})\). **Theorem 1**: Let \(k\geq 2\) be an integer. Let \(G\) be a connected graph of order \(n\), and suppose that \(\delta(G)\geq\sqrt{k(k-1)(k+2\sqrt{2k}+2)n}\). Then \(G\) has a \([2,k]\)-ST. If \(k=2\), then \(\sqrt{k(k-1)(k+2\sqrt{2k}+2)}=4\). 
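Indeed, for \(k=2\) we have \(2\sqrt{2k}=2\sqrt{4}=4\), so that \[\sqrt{k(k-1)(k+2\sqrt{2k}+2)}=\sqrt{2\cdot 1\cdot(2+4+2)}=\sqrt{16}=4.\]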
Thus Theorem 1 slightly improves Theorem A. In Section 2, we also show \(c_{k}=\Omega(k^{\frac{1}{2}})\). Next we focus on the degree-sum conditions for the existence of a HIST. Recently, Ito and Tsuchiya [10] proved the following. **Theorem B (Ito and Tsuchiya [10])**: Let \(G\) be a connected graph of order \(n\geq 8\). If \(\sigma_{2}(G)\geq n-1\), then \(G\) has a HIST. They also showed that the bound on \(\sigma_{2}\) is best possible, i.e., for each integer \(n\geq 8\), there exists a graph \(G\) of order \(n\) with \(\sigma_{2}(G)=n-2\) having no HIST. In this paper, we characterize such graphs. Let \(D_{n}\) be the graph obtained from a complete graph \(K\) of order \(n-2\) and a path \(P\) of order \(3\) by identifying a vertex of \(K\) and an endvertex of \(P\). Then, as mentioned in [10], \(|V(D_{n})|=n\), \(\sigma_{2}(D_{n})=n-2\) and \(D_{n}\) has no HIST. Our second result is the following, which is proved in Section 3. **Theorem 2**: Let \(G\) be a connected graph of order \(n\geq 10\), and suppose that \(\sigma_{2}(G)\geq n-2\). Then \(G\) has a HIST if and only if \(G\) is not isomorphic to \(D_{n}\). The proof of Theorem 2 depends on a known result (Theorem C in Subsection 1.1), and so it is not easy to extend Theorem 2 to a \([2,k]\)-ST version in a similar way. Actually, two of the authors [7] extended Theorem 2 to a \([2,k]\)-ST version in a different way if a target graph has sufficiently large order \(n\geq 295\). Note that it does not cover Theorem 2 when a target graph has a small order. From our experience, the analysis of the existence of a HIST in small graphs is frequently valuable. For example, in [6], a list of connected \(P_{5}\)-free graphs of order at most \(7\) having no HIST plays an important role. Thus we decide to give the proof of Theorem 2 independently of a result in [7]. ### Further notations and a preliminary In this subsection, we prepare some notations and introduce a known result, which are used in our proof. For terms and symbols not defined in this paper, we refer the reader to [4]. Let \(G\) be a graph. For two disjoint subsets \(U_{1}\) and \(U_{2}\) of \(V(G)\), let \(E_{G}(U_{1},U_{2})=\{u_{1}u_{2}\in E(G):u_{1}\in U_{1},\ u_{2}\in U_{2}\}\). For \(u,v\in V(G)\), the _distance_ between \(u\) and \(v\), denoted by \(\mathrm{dist}_{G}(u,v)\), is the minimum length of a path of \(G\) connecting \(u\) and \(v\). The value \(\mathrm{diam}(G)=\max\{\mathrm{dist}_{G}(u,v):u,v\in V(G)\}\) is called the _diameter_ of \(G\). For \(F\subseteq E(G)\), let \(V(F)=\{u,v:uv\in F\}\). For a subgraph \(H\) of \(G\) and a subset \(F\) of \(E(G)\), let \(H+F\) be the subgraph of \(G\) with \(V(H+F)=V(H)\cup V(F)\) and \(E(H+F)=E(H)\cup F\). Let \(\omega(G)\) be the number of components of \(G\). Let \(m\geq 1\) be an integer. Let \(p_{1},p_{2},\ldots,p_{m}\) be integers with \(p_{i}\geq 1\). For each integer \(i\) with \(1\leq i\leq m\), let \(A^{i}\) be a copy of \(K_{2,p_{i}}\), and let \(x_{i,1}\) and \(x_{i,2}\) be two vertices of \(A^{i}\) such that \(\{x_{i,1},x_{i,2}\}\) is one of the partite sets of \(A^{i}\). Let \(\hat{A}_{m}(p_{1},p_{2},\ldots,p_{m})\) be the graph obtained from \(A_{1},A_{2},\ldots,A_{m}\) by identifying \(x_{1,1},x_{2,1},\ldots,x_{m,1}\) and adding the edge set \(\{x_{i,2}x_{j,2}:1\leq i<j\leq m\}\). For an integer \(p\geq 1\), let \(B_{p}\) be the graph obtained from \(A_{1},A_{2},\ldots,A_{m}\) by identifying \(x_{1,1},x_{2,1},\ldots,x_{m,1}\) and adding the edge set \(\{x_{i,2}x_{j,2}:1\leq i<j\leq m\}\). 
obtained from \(\hat{A}_{2}(2,p)\) by adding the edge \(yy^{\prime}\) where \(\{y,y^{\prime}\}=V(A^{1})\setminus\{x_{1,1},x_{1,2}\}\). Let \(\mathcal{A}=\{\hat{A}_{m}(p_{1},p_{2},\ldots,p_{m}):m\geq 1,\ p_{i}\geq 1,\ 1\leq i\leq m\}\) and \(\mathcal{B}=\{B_{p}:p\geq 1\}\) (see Figure 1). Recently, Shan and Tsuchiya [11] proved the following theorem. **Theorem C** (Shan and Tsuchiya [11]): Let \(G\) be a graph of order \(n\geq 10\) with \(\operatorname{diam}(G)=2\). Then \(G\) has a HIST if and only if \(G\) is not isomorphic to any graph in \(\mathcal{A}\cup\mathcal{B}\). ## 2 Proof of Theorem 1 Proof of Theorem 1.: Throughout this proof, we implicitly use the fact that \(k\geq 2\). Let \(c_{k}=\sqrt{k(k-1)(k+2\sqrt{2k}+2)}\). Hence \[c_{k} =\sqrt{k(k-1)(k+2\sqrt{2k}+2)}\] \[\geq\sqrt{k(k-1)(k+4+2)}\] \[=\sqrt{k^{3}+5k^{2}-6k}.\] Since \(n>\delta(G)\geq c_{k}\sqrt{n}\), we have \(\sqrt{n}(\sqrt{n}-c_{k})>0\). Consequently, we obtain \[\sqrt{n}>c_{k}\geq\sqrt{k^{3}+5k^{2}-6k}. \tag{2.1}\] Let \(p=\frac{(c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k}}{2c_{k}}\). **Claim 2.1**: We have \(c_{k}\sqrt{n}>p>k+2\). _Proof._ By (2.1), \[c_{k}\sqrt{n}-p =c_{k}\sqrt{n}-\frac{(c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k}}{2c_{k}}\] \[=\frac{(c_{k}^{2}+k(k-1)(k-2))\sqrt{n}-kc_{k}}{2c_{k}}\] \[>\frac{(k^{3}+5k^{2}-6k+k(k-1)(k-2))c_{k}-kc_{k}}{2c_{k}}\] \[=\frac{(2k^{3}+2k^{2}-4k)-k}{2}\] \[>0\] and \[p-(k+2) =\frac{(c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k}}{2c_{k}}-(k+2)\] \[>\frac{(k^{3}+5k^{2}-6k-k(k-1)(k-2))c_{k}+kc_{k}-2c_{k}(k+2)}{2c_ {k}}\] \[=\frac{(8k^{2}-8k)+k-2(k+2)}{2}\] \[>0.\] Therefore, \(c_{k}\sqrt{n}>p>k+2\). \(\qed\) Let \(u_{0}\in V(G)\), and let \(F_{0}\) be the graph with \(V(F_{0})=\{u_{0}\}\cup N_{G}(u_{0})\) and \(E(F_{0})=\{u_{0}v:v\in N_{G}(u_{0})\}\). Then by Claim 2.1, \(|V(F_{0})|=d_{G}(u_{0})+1>\delta(G)\geq c_{k}\sqrt{n}>p\) and \(d_{F_{0}}(u_{0})=d_{G}(u_{0})\geq c_{k}\sqrt{n}>k+2\). In particular, there exists a subforest \(F\) of \(G\) such that \(\bigcup_{2\leq i\leq k+1}V_{i}(F)=\emptyset\) and every component of \(F\) has at least \(p\) vertices. Choose \(F\) so that \(|V(F)|\) is as large as possible. Note that \[\omega(F)\leq\frac{n}{p}. \tag{2.2}\] Let \(A=V(G)\setminus V(F)\). Fix a vertex \(a\in A\). It follows from Claim 2.1 that if \(|N_{G}(a)\cap A|\geq p\), then the graph \(F^{\prime}\) with \(V(F^{\prime})=V(F)\cup\{a\}\cup(N_{G}(a)\cap A)\) and \(E(F^{\prime})=E(F)\cup\{ay:y\in N_{G}(a)\cap A\}\) is a subforest of \(G\) such that \(\bigcup_{2\leq i\leq k+1}V_{i}(F^{\prime})=\emptyset\) and every component of \(F^{\prime}\) has at least \(p\) vertices, which contradicts the maximality of \(F\). Thus \(|N_{G}(a)\cap A|<p\). Again by the maximality of \(F\), we also obtain \(N_{G}(a)\cap V_{\geq k+2}(F)=\emptyset\). Consequently, we have \[|N_{G}(a)\cap V_{1}(F)|=|N_{G}(a)\cap V(F)|=d_{G}(a)-|N_{G}(a)\cap A|>c_{k} \sqrt{n}-p. \tag{2.3}\] Furthermore, for a vertex \(u\in V_{1}(F)\), it follows from the maximality of \(F\) that \(|N_{G}(u)\cap A|\leq k\). This together with (2.3) implies that \[(c_{k}\sqrt{n}-p)|A| <\sum_{a\in A}|N_{G}(a)\cap V_{1}(F)|\] \[=|E_{G}(A,V_{1}(F))|\] \[=\sum_{u\in V_{1}(F)}|N_{G}(u)\cap A|\] \[\leq k|V_{1}(F)|\] \[<k(n-|A|),\] and hence \[|A|<\frac{kn}{c_{k}\sqrt{n}-p+k}. \tag{2.4}\] For each vertex \(a\in A\), take \(u_{a}\in N_{G}(a)\cap V_{1}(F)\) (here, the condition that \(N_{G}(a)\cap V_{1}(F)\neq\emptyset\) is assured by (2.3) and Claim 2.1). Let \(F^{*}\) be the spanning subgraph of \(G\) with \(E(F^{*})=E(F)\cup\{au_{a}:a\in A\}\). 
Then \(F^{*}\) is a forest with \(\omega(F^{*})=\omega(F)\). Furthermore, since \(\bigcup_{2\leq i\leq k+1}V_{i}(F^{*})\subseteq\{u_{a}:a\in A\}\), we have \(|\bigcup_{2\leq i\leq k+1}V_{i}(F^{*})|\leq|A|\). Since \(G\) is connected and \(F^{*}\) is a spanning subforest of \(G\), there exists a spanning tree \(T_{0}\) of \(G\) with \(E(F^{*})\subseteq E(T_{0})\). Since \(|E(T_{0})\setminus E(F^{*})|\) is less than \(\omega(F^{*})\) (\(=\omega(F)\)), \(|(\bigcup_{2\leq i\leq k+1}V_{i}(T_{0}))\setminus(\bigcup_{2\leq i\leq k+1}V_{ i}(F^{*}))|\leq 2|E(T_{0})\setminus E(F^{*})|<2\omega(F)\). In particular, it follows from (2.2) and (2.4) that \[\left|\bigcup_{2\leq i\leq k+1}V_{i}(T_{0})\right|<|A|+2\omega(F)<\frac{kn}{c_ {k}\sqrt{n}-p+k}+\frac{2n}{p}=\frac{kpn+2(c_{k}\sqrt{n}-p+k)n}{(c_{k}\sqrt{n}- p+k)p}.\] For a spanning tree \(S\) of \(G\), let \[\mu(S)=\sum_{2\leq i\leq k+1}(k+2-i)|V_{i}(S)|.\] A spanning tree \(S\) is _admissible_ if \(\mu(S)<\frac{k(kpn+2(c_{k}\sqrt{n}-p+k)n)}{(c_{k}\sqrt{n}-p+k)p}\). Since \(\mu(T_{0})\leq k\sum_{2\leq i\leq k+1}|V_{i}(T_{0})|<\frac{k(kpn+2(c_{k}\sqrt{ n}-p+k)n)}{(c_{k}\sqrt{n}-p+k)p}\), \(T_{0}\) is admissible. Now we choose an admissible spanning tree \(T\) of \(G\) so that **(X1)**: \((|V_{2}(T)|,|V_{3}(T)|,\ldots,|V_{k}(T)|)\) is lexicographically as small as possible. We show that \(T\) is a \([2,k]\)-ST of \(G\). By way of contradiction, suppose that \(\bigcup_{2\leq i\leq k}V_{i}(T)\neq\emptyset\). Let \(s\) be the minimum integer such that \(2\leq s\leq k\) and \(V_{s}(T)\neq\emptyset\), and let \(u\in V_{s}(T)\). **Claim 2.2**: We have \(d_{G}(u)>(k-1)\mu(T)\). _Proof._ Since \(d_{G}(u)\geq c_{k}\sqrt{n}\) and \(T\) is an admissible spanning tree of \(G\), it suffices to show that \[c_{k}\sqrt{n}\geq\frac{(k-1)k(kpn+2(c_{k}\sqrt{n}-p+k)n)}{(c_{k} \sqrt{n}-p+k)p}. \tag{2.5}\] Since \(x=c_{k}^{2}\) (\(=k(k-1)(k+2\sqrt{2k}+2)\)) is a solution to an equation \(x^{2}-2k(k-1)(k+2)x+k^{2}(k-1)^{2}(k-2)^{2}=0\), we have \[c_{k}^{4}-2k(k-1)(k+2)c_{k}^{2}+k^{2}(k-1)^{2}(k-2)^{2}=0.\] Furthermore, it follows from (2.1) that \[c_{k}^{2}-k^{3}-k^{2}+2k\geq(k^{3}+5k^{2}-6k)-k^{3}-k^{2}+2k>0.\] Hence \[((c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k})^{2}-8c_{k}k(k-1)(c_{k} \sqrt{n}+k)\sqrt{n}\] \[=(c_{k}^{4}-2k(k-1)(k+2)c_{k}^{2}+k^{2}(k-1)^{2}(k-2)^{2})n+2kc_{ k}(c_{k}^{2}-k^{3}-k^{2}+2k)\sqrt{n}+k^{2}c_{k}^{2}\] \[>0,\] and so \[\frac{((c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k})^{2}}{4c_{k}}-2k(k-1 )(c_{k}\sqrt{n}+k)\sqrt{n}>0. \tag{2.6}\] Since \(p=\frac{(c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k}}{2c_{k}}\), \[-c_{k}\sqrt{n}p^{2}+((c_{k}^{2}-k(k-1)(k-2))n+kc_{k}\sqrt{n})p\] \[=-c_{k}\sqrt{n}\left(p-\frac{(c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{ k}}{2c_{k}}\right)^{2}+\frac{((c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k})^{2} \sqrt{n}}{4c_{k}}\] \[=\frac{((c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k})^{2}\sqrt{n}}{4c_ {k}}.\] This together with (2.6) leads to \[c_{k}\sqrt{n}(c_{k}\sqrt{n} -p+k)p-(k-1)k(kpn+2(c_{k}\sqrt{n}-p+k)n)\] \[=-c_{k}\sqrt{n}p^{2}+((c_{k}^{2}-k(k-1)(k-2))n+kc_{k}\sqrt{n})p- 2k(k-1)(c_{k}\sqrt{n}+k)n\] \[=\left(\frac{((c_{k}^{2}-k(k-1)(k-2))\sqrt{n}+kc_{k})^{2}}{4c_{k} }-2k(k-1)(c_{k}\sqrt{n}+k)\sqrt{n}\right)\sqrt{n}\] \[>0.\] Consequently, (2.5) holds. \(\quad\Box\) Fix a vertex \(v\in N_{G}(u)\setminus N_{T}(u)\). Let \(Q_{v}\) be the unique path in \(T\) joining \(u\) and \(v\), and write \(N_{Q_{v}}(v)=\{w_{v}\}\). Let \(t_{v}\) be the integer with \(w_{v}\in V_{t_{v}}(T)\). Since \(uv\notin E(T)\), \(w_{v}\neq u\). In particular, \(|N_{Q_{v}}(w_{v})|=2\). 
By the definition of \(s\), we have \[2\leq s\leq t_{v}. \tag{2.7}\] Let \(Z_{v}=\{v\}\cup(N_{T}(w_{v})\setminus N_{Q_{v}}(w_{v}))\) (see Figure 2). Then \(|Z_{v}|=t_{v}-1\). Remark that for vertices \(v,v^{\prime}\in N_{G}(u)\setminus N_{T}(u)\), \(v^{\prime}\in Z_{v}\) if and only if \(w_{v^{\prime}}=w_{v}\). (2.8) **Claim 2.3**: _Let \(v\in N_{G}(u)\setminus N_{T}(u)\). Then_ * \(t_{v}\in\{s,s+1\}\)_, and_ * \(Z_{v}\not\subseteq N_{G}(u)\)_._ _Proof._ We first prove (i). Suppose that \(t_{v}\notin\{s,s+1\}\). By (2.7), \(t_{v}\geq s+2\). Let \(T_{1}=(T-vw_{v})+uv\). Then \(T_{1}\) is a spanning tree of \(G\), and \(d_{T_{1}}(u)=d_{T}(u)+1=s+1\), \(d_{T_{1}}(w_{v})=d_{T}(w_{v})-1=t_{v}-1\)\((\geq s+1)\) and \(d_{T_{1}}(y)=d_{T}(y)\) for all vertices \(y\in V(G)\setminus\{u,w_{v}\}\). Now we calculate the value \(\mu(T)-\mu(T_{1})\). Recall that \(s\leq k\). If \(t_{v}\leq k+1\), then \[\mu(T)-\mu(T_{1})=(k+2-s)+(k+2-t_{v})-(k+2-(s+1))-(k+2-(t_{v}-1))=0;\] if \(t_{v}=k+2\), then \[\mu(T)-\mu(T_{1})=(k+2-s)-(k+2-(s+1))-(k+2-(k+1))=0;\] if \(t_{u}\geq k+3\), then \[\mu(T)-\mu(T_{1})=(k+2-s)-(k+2-(s+1))=1.\] In any case, we have \(\mu(T)\geq\mu(T_{1})\), and hence \(T_{1}\) is an admissible spanning tree of \(G\). However, by the definition of \(s\), \(|V_{i}(T_{1})|=|V_{i}(T)|=0\) for all integers \(i\) with \(2\leq i\leq s-1\) and \(|V_{s}(T_{1})|=|V_{s}(T)\setminus\{u\}|=|V_{s}(T)|-1\), which contradicts (X1). Thus (i) holds. Next we prove (ii). Suppose that \(Z_{v}\subseteq N_{G}(u)\). Let \(T_{2}=(T-\{w_{v}z:z\in Z_{v}\})+\{uz:z\in Z_{v}\}\). Then \(T_{2}\) is a spanning tree of \(G\), and \(s+t_{v}-1\) (\(\geq s+1\)), \(d_{T_{2}}(w_{v})=1\) and \(d_{T_{2}}(y)=d_{T}(y)\) for all vertices \(y\in V(G)\setminus\{u,w_{v}\}\). Now we calculate the value \(\mu(T)-\mu(T_{2})\). By (i), \(t_{v}\leq s+1\leq k+1\). If \(s+t_{v}-1\leq k+1\), then \[\mu(T)-\mu(T_{2})=(k+2-s)+(k+2-t_{v})-(k+2-(s+t_{v}-1))=k+1;\] if \(s+t_{v}-1\geq k+2\), then \[\mu(T)-\mu(T_{2})=(k+2-s)+(k+2-t_{v})>0.\] In either case, we have \(\mu(T)>\mu(T_{2})\), and hence \(T_{2}\) is an admissible spanning tree of \(G\). However, by the definition of \(s\), \(|V_{i}(T_{2})|=|V_{i}(T)|=0\) for all integers \(i\) with \(2\leq i\leq s-1\) and \(|V_{s}(T_{2})|=|V_{s}(T)\setminus\{u\}|=|V_{s}(T)|-1\), which contradicts (X1). Thus (ii) holds. \(\quad\Box\) By Claim 2.3(i), \(N_{G}(u)\setminus N_{T}(u)=\{v\in N_{G}(u)\setminus N_{T}(u):w_{v}\in V_{s}( T)\cup V_{s+1}(T)\}\). By (2.8) and Claim 2.3(ii), \[\text{for a vertex }w\in V_{s}(T),\,|\{v\in N_{G}(u)\setminus N_{T}(u):w_{v}= w\}|\leq s-2\] and \[\text{for a vertex }w\in V_{s+1}(T),\,|\{v\in N_{G}(u)\setminus N_{T}(u):w_{v} =w\}|\leq s-1.\] Again by (2.8), it follows that \[|N_{G}(u)\setminus N_{T}(u)| =\sum_{w\in V_{s}(T)\cup V_{s+1}(T)}|\{v\in N_{G}(u)\setminus N_ {T}(u):w_{v}=w\}|\] \[\leq(s-2)|V_{s}(T)|+(s-1)|V_{s+1}(T)|. \tag{2.9}\] Since \[(k-1)(k+2-s)-(2s-2)=k^{2}+k-s(k+1)\geq k^{2}+k-k(k+1)=0\] and \[(k-1)(k+2-(s+1))-(s-1)=k^{2}-sk\geq k^{2}-k^{2}=0,\] we have \[(k-1)(k+2-s)\geq 2s-2\ \text{ and }\ (k-1)(k+2-(s+1))\geq s-1. \tag{2.10}\] Since \(u\in V_{s}(T)\), \(|V_{s}(T)|\geq 1\). This together with Claim 2.2, (2.9) and (2.10) implies that \[d_{G}(u) >(k-1)\mu(T)\] \[\geq(k-1)((k+2-s)|V_{s}(T)|+(k+2-(s+1))|V_{s+1}(T)|)\] \[\geq(2s-2)|V_{s}(T)|+(s-1)|V_{s+1}(T)|\] \[=s|V_{s}(T)|+(s-2)|V_{s}(T)|+(s-1)|V_{s+1}(T)|\] \[\geq s+|N_{G}(v)\setminus N_{T}(u)|,\] and hence \(d_{G}(u)-s>|N_{G}(u)\setminus N_{T}(u)|\), which contradicts the fact that \(u\in V_{s}(T)\). 
This completes the proof of Theorem 1. \(\quad\Box\) We expect that the coefficient of \(\sqrt{n}\) in Theorem 1 can be further improved. However, in our proof, the coefficient of \(\sqrt{n}\) is best possible: Let \(c\) be a positive number that is (slightly) smaller than \(\sqrt{k(k-1)(k+2\sqrt{2k}+2)}\), and suppose that a connected graph \(G\) of order \(n\) satisfies \(\delta(G)=c\sqrt{n}\). Then we can verify that \(c^{4}-2k(k-1)(k+2)c^{2}+k^{2}(k-1)^{2}(k-2)^{2}<0\), and hence (2.6) does not hold if \(n\) is sufficiently large. We conclude this section by showing that the coefficient of \(\sqrt{n}\) in Theorem 1 cannot be improved with \(\sqrt{k-1}-\varepsilon\) for any \(\varepsilon>0\). It suffices to construct infinitely many connected graphs \(G\) of order \(n\) such that \(\delta(G)\geq\sqrt{(k-1)n+\frac{(2k-1)^{2}}{4}}-k+\frac{1}{2}\) and \(G\) has no \([2,k]\)-ST. Let \(k\) and \(d\) be positive integers such that \(k\geq 2\) and \(l:=\frac{d}{k-1}\) is an integer. Let \(H_{0},H_{1},\ldots,H_{l}\) be vertex-disjoint graphs such that \(H_{0}\) is a complete bipartite graph having a bipartition \((U_{1},U_{2})\) with \(|U_{1}|=|U_{2}|=d\) and for an integer \(i\) with \(1\leq i\leq l\), \(H_{i}\) is a complete graph of order \(d+1\). Take \(l\) vertices \(u_{1},u_{2},\ldots,u_{l}\in U_{1}\), and for each integer \(i\) with \(1\leq i\leq l\), take a vertex \(v_{i}\in V(H_{i})\). Let \(G_{k,d}=(\bigcup_{0\leq i\leq l}H_{i})+\{u_{i}v_{i}:1\leq i\leq l\}\) (see Figure 3). Then \(G_{k,d}\) is a connected graph of order \(n:=2d+\frac{d(d+1)}{k-1}\) and \(\delta(G_{k,d})=d=\frac{\sqrt{4(k-1)n+(2k-1)^{2}}-2k+1}{2}\). This together with the following proposition leads to a desired conclusion. **Proposition 3**: There exists no \([2,k]\)-ST of \(G_{k,d}\). _Proof._ Suppose that \(G_{k,d}\) has a \([2,k]\)-ST \(T\). Let \(G_{0}\) be the subgraph of \(T\) induced by \(\{u_{i}:1\leq i\leq l\}\cup U_{2}\). Then \(|V(G_{0})|=l+d=\frac{kd}{k-1}\). For each integer \(i\) with \(1\leq i\leq l\), since \(u_{i}\) is a cut-vertex of \(G_{k,d}\), \(d_{T}(u_{i})\geq k+1\). Hence \(|E(G_{0})|=\sum_{1\leq i\leq l}|N_{T}(u_{i})\cap U_{2}|\geq kl=\frac{kd}{k-1}= |V(G_{0})|\). This implies that \(G_{0}\) contains a cycle, Figure 3: Graph \(G_{k,d}\) which contradicts the assumption that \(G_{0}\) is a subgraph of a tree \(T\). \(\Box\) ## 3 Proof of Theorem 2 Throughout this proof, we implicitly use the fact that \(n\geq 10\). As we mentioned in Section 1, \(D_{n}\) has no HIST, and so the "only if" part holds. Thus we suppose that that \(G\) is not isomorphic to \(D_{n}\) and prove that \(G\) has a HIST. Set \(d=\mbox{diam}(G)\). If \(d=1\), then \(G\) is a complete graph, and hence \(G\) has a HIST. Since we can easily verify that every graph \(A\in\mathcal{A}\cup\mathcal{B}\) of order \(n\) has two non-adjacent vertices of degree \(2\), we have \(\sigma_{2}(A)=4\)\((<n-2)\). In particular, \(G\) is isomorphic to no graph in \(\mathcal{A}\cup\mathcal{B}\). Hence by Theorem C, if \(d=2\), then \(G\) has a HIST. Thus we may assume that \(d\geq 3\). Let \(P\) be a diametral path of \(G\), i.e., \(P\) is a path of length \(d\) joining two vertices with the distance \(d\) in \(G\), and write \(P=u_{0}u_{1}\cdots u_{d}\). Choose \(P\) so that **(P1)**: \(d_{G}(u_{0})\) is as small as possible, **(P2)**: subject to (P1), \(\max\{3-d_{G}(u_{1}),0\}\) is as small as possible, and **(P3)**: subject to (P2), \(|N_{G}(u_{0})\cap(N_{G}(u_{1})\cup N_{G}(u_{2}))|\) is as large as possible. 
Since \(d\geq 3\), \(N_{G}(u_{0})\) and \(N_{G}(u_{d})\) are disjoint, and hence \[n-2\leq\sigma_{2}(G)\leq d_{G}(u_{0})+d_{G}(u_{d})=|N_{G}(u_{0})\cup N_{G}(u_{ d})|\leq|V(G)\setminus\{u_{0},u_{d}\}|=n-2.\] This forces \(N_{G}(u_{0})\cup N_{G}(u_{d})=V(G)\setminus\{u_{0},u_{d}\}\) and \(\sigma_{2}(G)=d_{G}(u_{0})+d_{G}(u_{d})=n-2\). Since \(u_{2}\notin N_{G}(u_{0})\), we have \(u_{2}\in N_{G}(u_{d})\), and so \(d=3\). In the remaining of the proof, we implicitly use the facts that \(N_{G}(u_{0})\cup N_{G}(u_{3})=V(G)\setminus\{u_{0},u_{3}\}\) and \(\sigma_{2}(G)=d_{G}(u_{0})+d_{G}(u_{3})=n-2\). **Claim 3.1**: _Let \(i\in\{0,3\}\), and let \(v\in N_{G}(u_{i})\). Then_ * \(d_{G}(v)\geq n-2-d_{G}(u_{3-i})=d_{G}(u_{i})\)_, and_ * _if_ \(N_{G}(v)\cap N_{G}(u_{3-i})=\emptyset\)_, then_ \(N_{G}(v)=(N_{G}(u_{i})\setminus\{v\})\cup\{u_{i}\}\)_, and in particular,_ \(N_{P}(u_{i})\neq\{v\}\) _and_ \(v\) _is adjacent to the unique vertex in_ \(N_{P}(u_{i})\)_._ _Proof._ Since \(vu_{3-i}\notin E(G)\), \(d_{G}(v)\geq\sigma_{2}(G)-d_{G}(u_{3-i})=n-2-d_{G}(u_{3-i})=d_{G}(u_{i})\), which proves (i). We suppose that \(N_{G}(v)\cap N_{G}(u_{3-i})=\emptyset\), and prove (ii). Since \(N_{G}(v)\subseteq(N_{G}(u_{i})\setminus\{v\})\cup\{u_{i}\}\), we have \(d_{G}(v)\leq|(N_{G}(u_{i})\setminus\{v\})\cup\{u_{i}\}|=d_{G}(u_{i})\). This together with (i) forces \(N_{G}(v)=(N_{G}(u_{i})\setminus\{v\})\cup\{u_{i}\}\). In particular, either \(N_{P}(u_{i})=\{v\}\) or \(v\) is adjacent to the unique vertex in \(N_{P}(u_{i})\). If \(N_{P}(u_{i})=\{v\}\), then \(v\) is a vertex of \(P\) and so \(N_{G}(v)\cap N_{G}(u_{3-i})\neq\emptyset\), which contradicts the assumption in this paragraph. Consequently, we obtain the desired conclusion. \(\quad\Box\) Now we focus on the following conditions for a subtree \(T\) of \(G\): **(T1)**: \(\{u_{0},u_{3}\}\cup N_{G}(u_{0})\subseteq V(T)\), **(T2)**: \(V(T)\setminus\{u_{3}\}\subseteq V_{1}(T)\cup V_{\geq 3}(T)\), and **(T3)**: \(n-|V(T)|+d_{T}(u_{3})-3\geq 0\). **Claim 3.2**: If there exists a subtree \(T\) of \(G\) satisfying (T1)-(T3), then \(G\) has a HIST. _Proof._ Suppose that there exists a subtree \(T\) of \(G\) satisfying (T1)-(T3). Since every vertex in \(V(G)\setminus V(T)\) is adjacent to \(u_{3}\) by (T1), \(T^{\prime}:=T+\{u_{3}v:v\in V(G)\setminus V(T)\}\) is a spanning tree of \(G\). Furthermore, it follows from (T3) that \(|V(G)\setminus V(T)|=n-|V(T)|\geq 3-d_{T}(u_{3})\), and so \[d_{T^{\prime}}(u_{3})=d_{T}(u_{3})+|V(G)\setminus V(T)|\geq d_{T}(u_{3})+(3-d _{T}(u_{3}))=3.\] This together with (T2) implies that \(T^{\prime}\) is a HIST of \(G\). \(\quad\Box\) We divide the proof into two cases. **Case 1:**\(d_{G}(u_{0})\leq 3\). **Claim 3.3**: We have \(d_{G}(u_{1})\geq 3\). _Proof._ Suppose that \(d_{G}(u_{1})\leq 2\). Then \(N_{G}(u_{1})=\{u_{0},u_{2}\}\). For the moment, we further suppose that \(d_{G}(u_{0})=1\). For a vertex \(v\in N_{G}(u_{3})\setminus\{u_{2}\}\), since \(N_{G}(v)\cap N_{G}(u_{0})=\emptyset\), it follows from Claim 3.1(ii) with \(i=3\) that \(N_{G}(v)=(N_{G}(u_{3})\setminus\{v\})\cup\{u_{3}\}\). Since \(v\) is arbitrary, \(N_{G}(u_{3})\cup\{u_{3}\}\) is a clique of \(G\). This implies that \(G\) is isomorphic to \(D_{n}\), which is a contradiction. Thus \(d_{G}(u_{0})\geq 2\). Let \(v^{\prime}\in N_{G}(u_{0})\setminus\{u_{1}\}\). Since \(d_{G}(u_{1})=2\), \(u_{1}v^{\prime}\notin E(G)\). Hence by Claim 3.1(ii) with \((i,v)=(0,v^{\prime})\), we have \(N_{G}(v^{\prime})\cap N_{G}(u_{3})\neq\emptyset\). 
In particular, \(P^{\prime}=u_{0}v^{\prime}wu_{3}\) is a diametral path of \(G\) where \(w\in N_{G}(v^{\prime})\cap N_{G}(u_{3})\). Since \(u_{1}v^{\prime}\notin E(G)\), \(d_{G}(v^{\prime})+2=d_{G}(v^{\prime})+d_{G}(u_{1})\geq\sigma_{2}(G)\geq n-2>4\), and hence \(d_{G}(v^{\prime})\geq 3\). Since \(P^{\prime}\) is a diametral path of \(G\), this contradicts to (P2). \(\quad\Box\) By Claim 3.3, there exists a vertex \(a\in N_{G}(u_{1})\setminus\{u_{0},u_{2}\}\). Choose \(a\) so that \(a\in N_{G}(u_{0})\) if possible. **Claim 3.4**: If \(N_{G}(u_{0})\subseteq N_{G}(u_{1})\cup N_{G}(u_{2})\), then \(G\) has a HIST. _Proof._ Suppose that \(N_{G}(u_{0})\subseteq N_{G}(u_{1})\cup N_{G}(u_{2})\). For each \(v\in N_{G}(u_{0})\setminus\{u_{1}\}\), take \(w_{v}\in N_{G}(v)\cap\{u_{1},u_{2}\}\). By Claim 3.1(i) with \((i,v)=(3,u_{2})\), \(d_{G}(u_{0})\geq 10-2-3=5\), and hence there exists a vertex \(a^{\prime}\in N_{G}(u_{2})\setminus\{u_{1},u_{3},a\}\). Let \(T=P+(\{u_{1}a,u_{2}a^{\prime}\}\cup\{vw_{v}:v\in N_{G}(u_{0})\setminus\{u_{1},a, a^{\prime}\}\})\). Then \(T\) is a tree and satisfies (T1) and (T2). Furthermore, \(n-|V(T)|+d_{T}(u_{3})-3\geq n-8+1-3\geq 0\), and so \(T\) satisfies (T3). Hence by Claim 3.2, \(G\) has a HIST. \(\quad\Box\) By Claim 3.4, we may assume that \(N_{G}(u_{0})\not\subseteq N_{G}(u_{1})\cup N_{G}(u_{2})\). Since \(u_{1}\in N_{G}(u_{2})\), this implies that \(d_{G}(u_{0})\geq 2\) and there exists a vertex \(z_{1}\in N_{G}(u_{0})\setminus(N_{G}(u_{1})\cup N_{G}(u_{2}))\). In particular, \[|N_{G}(u_{0})\cap(N_{G}(u_{1})\cup N_{G}(u_{2}))|\leq d_{G}(u_{0})-1. \tag{3.1}\] Since \(u_{1}z_{1}\notin E(G)\), it follows from Claim 3.1(ii) with \((i,v)=(0,z_{1})\) that \(N_{G}(z_{1})\cap N_{G}(u_{3})\neq\emptyset\). Let \(z_{2}\in N_{G}(z_{1})\cap N_{G}(u_{3})\). Since \(u_{2}z_{1}\notin E(G)\), \(z_{2}\neq u_{2}\). **Claim 3.5**: If \(N_{G}(u_{0})\setminus\{u_{1},z_{1}\}\subseteq N_{G}(u_{1})\), then \(u_{1}z_{2}\notin E(G)\), and in particular, \(z_{2}\neq a\). _Proof._ Suppose that \(N_{G}(u_{0})\setminus\{u_{1},z_{1}\}\subseteq N_{G}(u_{1})\) and \(u_{1}z_{2}\in E(G)\). Then \(u_{0}u_{1}z_{2}u_{3}\) is a diametral path of \(G\). However, it follows from (3.1) that \(|N_{G}(u_{0})\cap(N_{G}(u_{1})\cup N_{G}(z_{2}))|=|N_{G}(u_{0})|=d_{G}(u_{0})> |N_{G}(u_{0})\cap(N_{G}(u_{1})\cup N_{G}(u_{2}))|\), which contradicts (P3). \(\quad\Box\) By Claim 3.1(i) with \((i,v)=(3,u_{2})\), \(d_{G}(u_{2})\geq n-2-d_{G}(u_{0})\geq 5\). Since \(u_{2}z_{1}\notin E(G)\), this implies that there exist vertices \(b,b^{\prime}\in N_{G}(u_{2})\) such that \(b\notin\{u_{1},z_{1},a,z_{2},u_{3}\}\) and \(b^{\prime}\notin N_{G}(u_{0})\cup\{a,u_{3}\}\) where \(b\) might be equal to \(b^{\prime}\). Suppose that \(d_{G}(u_{0})=2\). Then by Claim 3.1(i) with \((i,v)=(3,z_{2})\), \(d_{G}(z_{2})\geq n-2-d_{G}(u_{0})\geq 6\), and hence there exists a vertex \(c\in N_{G}(z_{2})\setminus\{z_{1},a,u_{2},b,u_{3}\}\). By Claim 3.5, \(z_{2}\neq a\) and \(c\neq u_{1}\). Hence by the definition of \(b\) and \(c\), the vertices \(u_{0},u_{1},u_{2},u_{3},z_{1},z_{2},a,b,c\) are pairwise distinct. Let \(T_{1}=P+\{u_{1}a,u_{2}b,u_{3}z_{2},z_{2}z_{1},z_{2}c\}\) (see the left graph in Figure 4). Then \(T_{1}\) is a tree and satisfies (T1) and (T2). Furthermore, \(n-|V(T_{1})|+d_{T_{1}}(u_{3})-3\geq n-9+2-3\geq 0\), and so \(T_{1}\) satisfies (T3). Hence by Claim 3.2, \(G\) has a HIST. Thus we may assume that \(d_{G}(u_{0})=3\). Suppose that \(a\notin N_{G}(u_{0})\). 
Then by the choice of \(a\), \(N_{G}(u_{0})\cap N_{G}(u_{1})=\emptyset\). Recall that \(a\neq b^{\prime}\) and \(b^{\prime}\notin N_{G}(u_{0})\). Let \(T_{2}=P+(\{u_{1}a,u_{2}b^{\prime}\}\cup\{u_{0}v:v\in N_{G}(u_{0})\setminus\{u_ {1}\}\})\) (see the middle graph in Figure 4). Then \(T_{2}\) is a tree and satisfies (T1) and (T2). Furthermore, \(n-|V(T_{2})|+d_{T_{2}}(u_{3})-3\geq n-8+1-3\geq 0\), and so \(T_{2}\) satisfies (T3). Hence by Claim 3.2, \(G\) has a HIST. Thus we may assume that \(a\in N_{G}(u_{0})\), i.e., \(N_{G}(u_{0})=\{u_{1},z_{1},a\}\). If \(z_{2}a\in E(G)\), then \(u_{0}az_{2}u_{3}\) is a diametral path of \(G\), \(\max\{3-d_{G}(a),0\}=0\) and \(|N_{G}(u_{0})\cap(N_{G}(a)\cup N_{G}(z_{2}))|=3>|N_{G}(u_{0})\cap(N_{G}(u_{1}) \cup N_{G}(u_{2}))|\) by (3.1), which contradicts (P3). Thus \[z_{2}a\notin E(G). \tag{3.2}\] By Claim 3.1(i) with \((i,v)=(3,z_{2})\), \(d_{G}(z_{2})\geq n-2-d_{G}(u_{0})\geq 5\), and hence there exists a vertex \(c^{\prime}\in N_{G}(z_{2})\setminus\{z_{1},u_{2},b,u_{3}\}\). By Claim 3.5 and (3.2), \(c^{\prime}\notin\{u_{1},a\}\). Hence by the definition of \(b\) and \(c^{\prime}\), the vertices \(u_{0},u_{1},u_{2},u_{3},z_{1},z_{2},a,b,c^{\prime}\) are pairwise distinct. Let \(T_{3}=P+\{u_{1}a,u_{2}b,u_{3}z_{2},z_{2}z_{1},z_{2}c^{\prime}\}\) (see the right graph in Figure 4). Then \(T_{3}\) is a tree and satisfies (T1) and (T2). Furthermore, \(n-|V(T_{3})|+d_{T_{3}}(u_{3})-3\geq n-9+2-3\geq 0\), and so \(T_{3}\) satisfies (T3) Hence by Claim 3.2, \(G\) has a HIST. **Case 2:**\(d_{G}(u_{0})\geq 4\). By (P1), \(d_{G}(u_{3})\geq d_{G}(u_{0})\geq 4\). This together with Claim 3.1(i) implies that \[d_{G}(v)\geq d_{G}(u_{i})\geq 4\text{ for }v\in V(G)\setminus\{u_{0},u_{3}\}, \tag{3.3}\] where \(i\) is the integer with \(i\in\{0,3\}\) and \(v\in N_{G}(u_{i})\). In particular, we can take vertices \(a_{j}\in N_{G}(u_{j})\setminus V(P)\) (\(j\in\{1,2\}\)) with \(a_{1}\neq a_{2}\) so that **(A1)**: \(\min\{|N_{G}(u_{i})\setminus\{a_{1},a_{2}\}|:i\in\{0,3\}\}\) is as large as possible, and **(A2)**: subject to (A1), \(|\{u_{1}a_{2},u_{2}a_{1}\}\cap E(G)|\) is as small as possible. If \(\min\{|N_{G}(u_{i})\setminus\{a_{1},a_{2}\}|:i\in\{0,3\}\}\geq 3\), then \(P+(\{u_{1}a_{1},u_{2}a_{2}\}\cup\{u_{i}v:v\in N_{G}(u_{i})\setminus\{a_{1},a_ {2}\},\ i\in\{0,3\}\})\) is a HIST of \(G\), as desired. Thus we may assume that there exists an integer \(i_{0}\in\{0,3\}\) such that \(|N_{G}(u_{i_{0}})\setminus\{a_{1},a_{2}\}|\leq 2\). Since \(d_{G}(u_{3})\geq d_{G}(u_{0})\geq 4\), it follows from (A1) that \(d_{G}(u_{i_{0}})=4\), \(\{a_{1},a_{2}\}\subseteq N_{G}(u_{i_{0}})\setminus V(P)\) and \[(N_{G}(u_{j})\setminus V(P))\cap N_{G}(u_{3-i_{0}})=\emptyset\text{ for each }j\in\{1,2\}. \tag{3.4}\] In particular, \(|N_{G}(u_{i_{0}})\cap(N_{G}(u_{1})\cup N_{G}(u_{2}))|\geq|N_{P}(u_{i_{0}})\cup \{a_{1},a_{2}\}|=3>1=|N_{P}(u_{3-i_{0}})|=|N_{G}(u_{3-i_{0}})\cap(N_{G}(u_{1}) \cup N_{G}(u_{2}))|\). Since \(d_{G}(u_{3})\geq d_{G}(u_{0})\), this together with (3.3) and the choice of \(P\) (i.e., (P1)-(P3)) leads to \(i_{0}=0\). If \(N_{G}(u_{0})\setminus\{u_{1},a_{2}\}\subseteq N_{G}(u_{1})\), then \(T_{1}^{\prime}:=P+(\{u_{2}a_{2}\}\cup\{u_{1}v:v\in N_{G}(u_{0})\setminus\{u_ {1},a_{2}\}\})\) is a tree satisfying (T1)-(T3) because \(n-|V(T_{1}^{\prime})|+d_{T_{1}^{\prime}}(u_{3})-3=n-7+1-3>0\) (see the left graph in Figure 5). Hence by Claim 3.2, \(G\) has a HIST. Thus we may assume that \(N_{G}(u_{0})\setminus\{u_{1},a_{2}\}\not\subseteq N_{G}(u_{1})\). 
Since \(d_{G}(u_{0})=4\) by (3.3), it follows from (3.4) that \(|N_{G}(u_{0})\setminus(N_{G}(u_{1})\cup\{u_{1}\})|=1\), and so \(N_{G}(u_{0})\cap N_{G}(u_{1})=\{a_{1},a_{2}\}\). Write \(N_{G}(u_{0})\setminus(N_{G}(u_{1})\cup\{u_{1}\})=\{z^{\prime}_{1}\}\). If \(z^{\prime}_{1}u_{2}\in E(G)\), then \(|N_{G}(u_{0})\setminus\{a_{1},z^{\prime}_{1}\}|=2=|N_{G}(u_{0})\setminus\{a_{1 },a_{2}\}|\) and \(|\{u_{1}z^{\prime}_{1},u_{2}a_{1}\}\cap E(G)|=|\{u_{2}a_{1}\}\cap E(G)|<|\{u_{ 1}a_{2},u_{2}a_{1}\}\cap E(G)|\), which contradicts (A1) and (A2). Thus \(z^{\prime}_{1}u_{2}\notin E(G)\). This together with Claim 3.1(ii) with \((i,v)=(0,z^{\prime}_{1})\) implies that \(z^{\prime}_{1}\) is adjacent to a vertex \(z^{\prime}_{2}\in N_{G}(u_{3})\setminus\{u_{2}\}\). Since \(d_{G}(z^{\prime}_{2})\geq 4\) by (3.3), there exists a vertex \(w\in N_{G}(z^{\prime}_{2})\setminus\{z^{\prime}_{1},u_{3}\}\). By (3.4), \(w\notin\{u_{1},u_{2}\}\). Suppose that \(w\in N_{G}(u_{0})\). Then \(w\in\{a_{1},a_{2}\}\). Let \(T^{\prime}_{2}\) be a subgraph of \(G\) with \(V(T^{\prime}_{2})=V(G)\setminus(N_{G}(u_{3})\setminus\{u_{2},z^{\prime}_{2}\})\) and \(E(T^{\prime}_{2})=\{u_{1}u_{2},u_{1}a_{1},u_{1}a_{2},wu_{0},wz^{\prime}_{2},z^ {\prime}_{2}z^{\prime}_{1},z^{\prime}_{2}u_{3}\}\) (see the middle graph in Figure 5). Then \(T^{\prime}_{2}\) is a tree and satisfies (T1) and (T2). Furthermore, \(n-|V(T^{\prime}_{2})|+d_{T^{\prime}_{2}}(u_{3})-3=n-8+1-3\geq 0\), and so \(T^{\prime}_{2}\) satisfies (T3). Hence by Claim 3.2, \(G\) has a HIST. Thus we may assume that \(w\in N_{G}(u_{3})\). Recall that \(w\neq u_{2}\). Let \(T^{\prime}_{3}=P+\{u_{1}a_{1},u_{2}a_{2},u_{3}z^{\prime}_{2},z^{\prime}_{2}z^ {\prime}_{1},z^{\prime}_{2}w\}\) (see the right graph in Figure 5). Then \(T^{\prime}_{3}\) is a tree and satisfies (T1) and (T2). Furthermore, \(n-|V(T^{\prime}_{3})|+d_{T^{\prime}_{3}}(u_{3})-3=n-9+2-3\geq 0\), and so \(T^{\prime}_{2}\) satisfies (T3). Hence by Claim 3.2, \(G\) has a HIST. This completes the proof of Theorem 2. ## Acknowledgment This work was supported by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University, This work is also supported by JSPS KAKENHI Grant numbers 18K13449 (to M.F.), 20K11684 (to A.S.) and 19K14584 (to S.T).
2304.09280
Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state neurons fire independently from one another, so that the probability that a neuron experience synchronous synaptic inputs is exceedingly low. While the models of asynchronous neurons lead to observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime only yields realistic subthreshold variability (voltage variance $\simeq 4-9\mathrm{mV^2}$) when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations.
Logan A. Becker, Baowang Li, Nicholas J. Priebe, Eyal Seidemann, Thibaud Taillefumier
2023-04-18T20:36:30Z
http://arxiv.org/abs/2304.09280v3
Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs ###### Abstract The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state neurons fire independently from one another, so that the probability that a neuron experience synchronous synaptic inputs is exceedingly low. While the models of asynchronous neurons lead to observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime only yields realistic subthreshold variability (voltage variance \(\simeq 4-9\)mV\({}^{2}\)) when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state. + Footnote †: Corresponding author; [email protected] ## I Introduction A common and striking feature of cortical activity is the high degree of neuronal spiking variability [1]. This high variability is notably present in sensory cortex and motor cortex, as well as in regions with intermediate representations [2; 3; 4; 5]. The prevalence of this variability has led to it being a major constraint for modeling cortical networks as achieving high variability in biophysically relevant spiking networks poses a number of challenges. Cortical neurons are thought to receive a large number of synaptic contacts (\(\simeq 10^{4}\)) [6], which are commonly thought to operate asynchronously [7; 8; 9]. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. Although the impact of such asynchronous inputs varies across synapses, the law of large numbers implies that variability should average out when integrated at the soma. In principle, this would lead to clock-like spiking responses, contrary to experimental observations [10]. A number of mechanisms have been proposed to explain how high spiking variability emerges in cortical networks [11]. 
The prevailing approach posits that excitatory and inhibitory inputs converge on cortical neurons in a balanced manner. In balanced models, the overall excitatory and inhibitory drives cancel each other so that transient imbalances in the drive can bring the neuron's membrane voltage across the spike-initiation threshold. Such balanced models result in spiking statistics that match those found in the neocortex [12; 13; 14; 15; 16]. While the high spiking variability is an important constraint for generating cortical network modeling, there are other biophysical signatures that may be employed. We now have access to the subthreshold membrane voltage fluctuations that underlie spikes in awake, behaving animals (see Fig. 1). Membrane voltage recordings reveal two main deviations from the balanced hypothesis: first, in contrast to balanced models, membrane voltage does not hover near spiking threshold and is modulated by the synaptic drive; second, it exhibits non-Gaussian fluctuation statistics with positive skewness [17; 18; 19]. In this work, we further argue that membrane voltage recordings reveal much larger voltage fluctuations than predicted by balanced cortical models [20; 21]. How could such large subthreshold variations in membrane voltage emerge? One way that fluctuations could emerge, even for large numbers of input, is if there is synchrony in the driving inputs [22]. In practice, input synchrony is revealed by the presence of positive spik ing correlations, which quantify the propensity of distinct synaptic inputs to co-activate. Measurements of spiking correlations between pairs of neurons vary across reports, but have generally been shown to be weak [7; 8; 9]. That said, even weak correlations can have a large impact when the population of correlated inputs is large [23; 24]. Further, the existence of input synchrony, supported by weak but persistent spiking correlations, is consistent with at least two other experimental observations. First, intracellular recordings from pairs of neurons in both anesthetized and awake animals reveal a high degree of membrane voltage correlations [25; 26; 27]. Second, excitatory and inhibitory conductance inputs are highly correlated with each other within the same neuron [27; 28]. These observations suggest that input synchrony could explain the observed level of subthreshold variability. While our focus is on achieving realistic subthreshold variability, other challenges to asynchronous networks have been described. In particular, real neural networks exhibit distinct regimes of activity depending on the strength of their afferent drives [29]. In that respect, Zerlaut _et al._ showed that asynchronous networks can exhibit a spectrum of realistic regimes of activity if they have moderate recurrent connections and are driven by strong thalamic projections (see also [16]). Furthermore, it has been a challenge to identify the scaling rule that should apply to synaptic strengths for asynchrony to hold stably in idealized networks [30]. Recently, Sanzeni _et al._ proposed that a realistic asynchronous regime is achieved for a particular large-coupling rule, whereby synaptic strengths scale in keeping with the logarithmic size of the network. Both studies consider balanced networks with conductance-based neuronal models but neither include a role for synchrony, as it would challenge the asynchronous state hypothesis. 
The asynchronous state hypothesis is theoretically attractive because it represents a naturally stable regime of activity in infinite-size, balanced networks of current-based neuronal models [13; 14; 15; 16]. Such neuronal models, however, neglect the voltage dependence of conductances and it remains unclear whether the asynchronous regime is asymptotically stable for infinite-size, conductance-based network models. Here, independent of the constraint of network stability, we ask whether biophysically relevant neuronal models can achieve the observed subthreshold variability under realistic levels of input synchrony. To answer this question, we derive exact analytical expressions for the stationary voltage variance of a single conductance-based neuron in response to synchronous shot-noise drives [31; 32]. We develop this analysis for a variant of classically considered neuronal models. We call this variant the all-or-none-conductance-based (AONCB) model for which synaptic activation occurs as an all-or-none process rather than as an exponentially relaxing process. To perform an exact treatment of these models, we develop original probabilistic techniques inspired from Marcus' work about shot-noise driven dynamics [33; 34]. To model shot-noise drives with synchrony, we develop a statistical framework based on the property of input exchangeability, which assumes that no synaptic inputs play a particular role. In this framework, we show that input drives with varying degree of synchrony can be rigorously modeled via jump processes, while synchrony can be quantitatively related to measures of pairwise spiking correlations. Our main results are biophysically interpretable formulas for the voltage mean and variance in the limit of instantaneous synapses. Crucially, these formulas explicitly depend on the input numbers, weights, and synchrony, and hold without any forms of diffusion approximation. This is in contrast with analytical treatments which elaborate on the diffusion and effective-time-constant approximations [29; 30; 35; 36]. We leverage these exact, explicit formulas to determine under which synchrony conditions a neuron can achieve the experimentally observed subthreshold variability. For biophysically relevant synaptic numbers and weights, we find that achieving realistic variability is possible in response to a restricted number of large asynchronous connections, compatible with the dominance of thalamo-cortical projections in the input layers of the visual cortex. However, we find that achieving realistic variability in response to a large number of moderate cortical inputs, as in superficial cortical visual layers, necessitates nonzero input synchrony in amounts that are consistent with the weak levels of measured spiking correlations observed _in vivo_. In practice, persistent synchrony may spontaneously emerge in large but finite neural networks, as nonzero correlations are the hallmark of finite-dimensional interacting dynamics. The network structural features responsible for the magnitude of such correlations remains unclear, and we do not address this question here (see [37; 38] for review). The persistence of synchrony is also problematic for theoretical approaches that consider Figure 1: **Large trial-by-trial membrane voltage fluctuations.** Membrane voltage responses are shown using whole cell recordings in awake behaving primates for both fixation alone trials (left) and visual stimulation trials (right). 
A drifting grating was presented for 1 second beginning at the arrow. Below the membrane voltage traces are records of horizontal and vertical eye movements, illustrating that the animal was fixating during the stimulus. Red and green traces indicate different trials under the same conditions. Adapted from [18]. networks in the infinite-size limits. Indeed, our analysis supports that in the absence of synchrony and for all scaling of the synaptic weights, subthreshold variability must vanish in the limit of arbitrary large numbers of synapses. This suggests that independent of any balanced condition, the mean-field dynamics that emerge in infinite-size networks of conductance-based neurons will not exhibit Poisson-like spiking variability, at least in the absence of additional constraints on the network structure or on the biophysical properties of the neurons. In current-based neuronal models, however, variability is not dampened by a conductance-dependent effective time constant. These findings therefore challenge the theoretical basis for the asynchronous state in conductance-based neuronal networks. Our exact analysis, as well as its biophysical interpretations, is only possible at the cost of several caveats: First, we neglect the impact of the spike-generating mechanism (and of the post-spiking reset) in shaping the subthreshold variability. Second, we quantify synchrony under the assumption of input exchangeability, that is, for synapses having a typical strength as opposed to being heterogeneous. Third, we consider input drives that implement an instantaneous form of synchrony with temporally precise synaptic coactivations. Fourth, we do not consider slow temporal fluctations in the mean synaptic drive. Fifth, and perhaps most concerning, we do not account for the stable emergence of a synchronous regime in network models. We argue in the discussion that all the above caveats but the last one can be addressed without impacting our findings. Addressing the last caveat remains an open problem. ## II Stochastic modeling In this section, we specify the modeling framework of our analysis. In Section II.1, we define the conductance-based neuronal model that is subjected to synchronous inputs. In Section II.2, we model synchronous input drives as compound Poisson processes for exchangeable sets of excitatory inputs. In Section II.3, we extend our input model to include separately exchangeable sets of excitatory and inhibitory inputs. In Section II.4, we recapitulate our modeling approach within Marcus theory about shot-noise driven systems. ### All-or-none-conductance-based neurons We consider the subthreshold dynamics of an original neuronal model, which we called the all-or-none-conductance-based (AONCB) model. In this model, as for virtually all conductance-based models, the membrane voltage \(V\) obeys the first-order stochastic differential equation \[C\dot{V}=G(V_{\mathrm{L}}-V)+g_{\mathrm{e}}(V_{\mathrm{e}}-V)+g_{\mathrm{i}}( V_{\mathrm{i}}-V)+I\,, \tag{1}\] where randomness arises from the stochastically activating excitatory and inhibitory conductances, respectively denoted by \(g_{\mathrm{e}}\) and \(g_{\mathrm{i}}\) (see Fig. 2a). We further consider that both conductances result from the action of \(K_{\mathrm{e}}\) excitatory and \(K_{\mathrm{i}}\) inhibitory synapses: \(g_{\mathrm{e}}(t)=\sum_{k=1}^{K_{\mathrm{e}}}g_{\mathrm{e},k}(t)\) and \(g_{\mathrm{i}}(t)=\sum_{k=1}^{K_{\mathrm{i}}}g_{\mathrm{i},k}(t)\). 
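To make the model concrete, the following is a minimal forward-Euler sketch of Eq. (1). It only illustrates how the membrane voltage responds to prescribed conductance traces; the parameter values, the constant conductances, and the function name are illustrative placeholders and are not taken from the text.

```python
import numpy as np

def integrate_voltage(g_e, g_i, dt, C=0.2, G=0.01, V_L=0.0, V_E=60.0, V_I=-10.0, I=0.0, V0=0.0):
    """Forward-Euler sketch of Eq. (1): C dV/dt = G(V_L - V) + g_e(V_E - V) + g_i(V_I - V) + I.
    g_e, g_i are arrays of (possibly fluctuating) conductances sampled every dt.
    All numerical values are placeholders chosen only for illustration."""
    V = np.full(len(g_e), V0, dtype=float)
    for n in range(len(g_e) - 1):
        dVdt = (G * (V_L - V[n]) + g_e[n] * (V_E - V[n]) + g_i[n] * (V_I - V[n]) + I) / C
        V[n + 1] = V[n] + dt * dVdt
    return V

# Usage: constant conductances over 100 ms at dt = 0.1 ms.
V = integrate_voltage(g_e=np.full(1000, 0.002), g_i=np.full(1000, 0.001), dt=0.1)
```

In the full model the conductance arrays are themselves stochastic, as specified next.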
In the absence of synaptic inputs, i.e., when \(g_{\mathrm{e}}=g_{\mathrm{i}}=0\), and of external current \(I\), the voltage exponentially relaxes toward its leak reversal potential \(V_{\mathrm{L}}\) with passive time constant \(\tau=C/G\), where \(C\) denotes the cell's membrane capacitance and \(G\) denotes the cellular passive conductance [39]. In the presence of synaptic inputs, the transient synaptic currents \(I_{\mathrm{e}}=g_{\mathrm{e}}(V_{\mathrm{e}}-V)\) and \(I_{\mathrm{i}}=g_{\mathrm{i}}(V_{\mathrm{i}}-V)\) cause the membrane voltage to fluctuate. Conductance-based models account for the voltage-dependence of synaptic currents via the driving forces \(V_{\mathrm{e}}-V\) and \(V_{\mathrm{i}}-V\), where \(V_{\mathrm{e}}\) and \(V_{\mathrm{i}}\) denote the excitatory and inhibitory reversal potentials, respectively. Without loss of generality, we assume in the following that \(V_{\mathrm{L}}=0\) and that \(V_{\mathrm{i}}<V_{\mathrm{L}}=0<V_{\mathrm{e}}\). We model the spiking activity of the \(K_{\mathrm{e}}+K_{\mathrm{i}}\) upstream neurons as shot noise [31; 32], which can be generically modeled as a \((K_{\mathrm{e}}+K_{\mathrm{i}})\)-dimensional stochastic point process [40; 41]. Let us denote by \(\{N_{\mathrm{e},k}(t)\}_{1\leq k\leq K_{\mathrm{e}}}\) its excitatory component and by \(\{N_{\mathrm{i},k}(t)\}_{1\leq k\leq K_{\mathrm{i}}}\) its inhibitory component, where \(t\) denotes time and \(k\) is the neuron index. For each neuron \(k\), the process \(N_{\mathrm{e/i},k}(t)\) is specified as the counting process registering the spiking occurrences of neuron \(k\) up to time \(t\). In other words, \(N_{\mathrm{e/i},k}(t)=\sum_{n}\mathbbm{1}_{\{T_{\mathrm{e/i},k,n}\leq t\}}\), where \(\{T_{\mathrm{e/i},k,n}\}_{n\in\mathbb{Z}}\) denotes the full sequence of spiking times of neuron \(k\) and where \(\mathbbm{1}_{A}\) denotes the indicator function of set \(A\) (\(\mathbbm{1}_{A}(x)=1\) if \(x\) is in \(A\) and \(\mathbbm{1}_{A}(x)=0\) if \(x\) is not in \(A\)). Note that by convention, we label spikes so that \(T_{\mathrm{e/i},k,0}\leq 0<T_{\mathrm{e/i},k,1}\) for all neurons \(k\).

Figure 2: **All-or-none-conductance-based models.** (a) Electrical diagram of a conductance-based model for which the neuronal voltage \(V\) evolves in response to fluctuations of excitatory and inhibitory conductances \(g_{\mathrm{e}}\) and \(g_{\mathrm{i}}\). (b) In all-or-none models, inputs delivered as Poisson processes transiently activate the excitatory and inhibitory conductances \(g_{\mathrm{e}}\) and \(g_{\mathrm{i}}\) during a finite, nonzero synaptic activation time \(\tau_{s}>0\). Simulation parameters: \(K_{\mathrm{e}}=K_{\mathrm{i}}=50\), \(r_{\mathrm{e}}=r_{\mathrm{i}}=10\)Hz, \(\tau=15\)ms, and \(\tau_{s}=2\)ms.

Given a point-process model for the upstream spiking activity, classical conductance-based models consider that a single input to a synapse causes an instantaneous increase of its conductance, followed by an exponential decay with typical time scale \(\tau_{s}>0\). Here we depart from this assumption and consider that the synaptic conductances \(g_{\mathrm{e/i},k}\) operate all-or-none with a common activation time still referred to as \(\tau_{s}\). Specifically, we assume that the dynamics of the conductance \(g_{\mathrm{e/i},k}\) follows \[\dot{g}_{\mathrm{e/i},k}(t)=w_{\mathrm{e/i},k}\sum_{n}\left(\delta(t-T_{\mathrm{e/i},k,n})-\delta(t-T_{\mathrm{e/i},k,n}-\tau_{s})\right)\,, \tag{2}\] where \(w_{\mathrm{e/i},k}\geq 0\) is the dimensionless synaptic weight.
The above equation prescribes that the \(n\)-th spike delivery to synapse \(k\) at time \(T_{\mathrm{e/i},k,n}\) is followed by an instantaneous increase of that synapse's conductance by an amount \(w_{\mathrm{e/i},k}\) for a period \(\tau_{s}\). Thus, the synaptic response prescribed by Eq. (2) is all-or-none as opposed to being graded as in classical conductance-based models. However, just as in classical models, Eq. (2) allows synapses to multi-activate via linear superposition, thereby neglecting nonlinear synaptic saturation (see Fig. 2b). To be complete, AONCB neurons must in principle include a spike-generating mechanism. In that regard, it is customary to consider an integrate-and-fire mechanism [42; 43]: a neuron emits a spike whenever its voltage \(V\) exceeds a threshold value \(V_{\mathrm{T}}\), and reset instantaneously to some value \(V_{R}\) afterwards. Such a mechanism impacts the neuronal subthreshold voltage dynamics via post-spiking reset, which implements a nonlinear form of feedback. However, in this work we focus on the variability that is generated by fluctuating, possibly synchronous, synaptic inputs. For this reason, we neglect the influence of the spiking reset in our analysis and actually, we ignore the spike-generating mechanism altogether. ### Synchronous input model via exchangeability Our goal here is to rigorously model synchronous input via compound Poisson processes [40; 41], which in turn will serve as the drive to AONCB neurons. To do so in a principled way, we first consider a discrete model of excitatory synaptic inputs under the assumption of input exchangeability [44; 45]. Specifically, we suppose that the neuron under consideration receives inputs from \(K_{\mathrm{e}}\) neurons, chosen from an arbitrary large--actually soon to be considered infinite--pool of \(N\gg K_{\mathrm{e}}\) neurons. Adopting a discrete-time representation with elementary bin size \(\Delta t\), we denote by \(\{x_{1,i},\ldots,x_{K_{\mathrm{e}},i}\}\) in \(\{0,1\}^{K_{\mathrm{e}}}\) the input state within the \(i\)-th bin. Our main simplifying assumption consists in modeling all \(N\) inputs as exchangeable random variables \(\{X_{1,i},\ldots,X_{N,i}\}\) that are distributed identically over \(\{0,1\}^{N}\) and independently across time. Owing to the latter independence property, we drop the dependence on time index \(i\) in the following. By exchangeable, we mean that no combination of inputs plays a distinct role so that at all time, Figure 3: **Parametrizing correlations via exchangeability.** The activity of \(K_{\mathrm{e}}=100\) exchangeable synaptic inputs collected over \(N\) consecutive time bins can be represented as \(\{0,1\}\)-valued array \(\{X_{k,i}\}_{1\leq k\leq K_{\mathrm{e}},1\leq i\leq N}\), where \(X_{k,i}=1\) if input \(k\) activates in time bin \(i\). Under assumptions of exchangeability, the input spiking correlation is entirely captured by the count statistics of how many inputs coactivate within a given time bin. In the limit \(K_{\mathrm{e}}\rightarrow\infty\), the distribution of the fraction of coactivating inputs coincides with the directing de Finetti measure, which we consider as a parametric choice in our approach. In the absence of correlation, synapses tend to activate in isolation: \(\rho_{\mathrm{e}}=0\) in (a). In the presence of correlation, synapses tend to coactivate yielding disproportionately large synaptic activation event: \(\rho_{\mathrm{e}}=0.1\) in (b). 
Considering the associated cumulative counts specifies discrete-time jump processes that can be generalized to the continuous-time limit, i.e., for time bins of vanishing duration \(\Delta t\to 0^{+}\). the distribution of \(\{X_{1},\ldots,X_{N}\}\) is independent of the inputs labelling. In other words, for all permutations \(\sigma\) of \(\{1,\ldots,N\}\), \(\{X_{\sigma(1)},\ldots,X_{\sigma(N)}\}\) and \(\{X_{1},\ldots,X_{N},\}\) have identical distribution [44; 45]. By contrast with independent random spiking variables, exchangeable ones can exhibit positive correlations, that is \[\rho_{\rm e}=\frac{\mathbb{C}\left[X_{k},X_{l}\right]}{\sqrt{\mathbb{V}\left[X _{k}\right]\mathbb{V}\left[X_{l}\right]}}>0\,,\] where \(\rho_{\rm e}\) denotes the constant pairwise correlation for all \(k\neq l\) and where \(\mathbb{C}\left[X_{k},X_{l}\right]\) and \(\mathbb{V}\left[X_{k}\right]\) denote the covariance and the variance of the binary variables \(X_{k}\), respectively. Interestingly, a more explicit form of \(\rho_{\rm e}\) can be obtained in the limit of an infinite size pool \(N\to\infty\). This follows from de Finetti theorem [46], which states that the probability of observing a given input configuration for \(K_{\rm e}\) neurons is given by \[\mathbb{P}\left[X_{1},\ldots,X_{K_{\rm e}}\right]=\int\prod_{k=1} ^{K_{\rm e}}\theta_{\rm e}^{X_{k}}(1-\theta_{\rm e})^{1-X_{k}}\,{\rm d}F_{\rm e }(\theta_{\rm e})\,, \tag{3}\] where \(F_{\rm e}\) is the directing de Finetti probability measure supported on \([0,1]\). In the equation above, the number \(\theta_{\rm e}\) simply represents the (fluctuating) probability that a neuron spikes in a given time bin. The core message of de Finetti theorem is that the correlated spiking activity of neurons from an infinite exchangeable pool is obtained as a mixture of conditionally independent binomial laws. This mixture is specified by the directing measure \(F_{\rm e}\), which thus fully parametrizes our synchronous input model. Independent spiking corresponds to choosing \(F_{\rm e}\) as a point-mass measure concentrated on some probability \(p_{\rm e}=r_{\rm e}\Delta t\), \(0\leq p_{\rm e}\leq 1\), where \(r_{\rm e}\) denotes the individual spiking rate of an excitatory neuron: \(dF_{\rm e}(\theta)=\delta(\theta-p_{\rm e})d\theta\) (see Fig 3a). By contrast, a dispersed directing measure \(F_{\rm e}\) corresponds to the existence of correlations among the inputs (see Fig 3b). Accordingly, we show in Appendix A that the spiking pairwise correlation takes the explicit form \[\rho_{\rm e}=\frac{\mathbb{V}\left[\theta_{\rm e}\right]}{\mathbb{E}\left[ \theta_{\rm e}\right](1-\mathbb{E}\left[\theta_{\rm e}\right])}\,,\] where \(\mathbb{E}\left[\theta_{\rm e}\right]\) and \(\mathbb{V}\left[\theta_{\rm e}\right]\) denote the expectation and the variance of \(\theta_{\rm e}\sim F_{\rm e}\), respectively. The above formula reveals that nonzero correlations corresponds to nonzero variance, as is always the case for dispersed distribution. Observe that an infinity of measures \(F_{\rm e}\) can achieve the same spiking correlation. This observation is the reason why one can argue that the correlation \(\rho_{\rm e}\) is not a genuine modeling parameter. Considering \(\rho_{\rm e}\) as a genuine parameter requires additional assumptions about the form of \(F_{\rm e}\). 
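As a numerical illustration of this point, the following sketch draws exchangeable binary inputs according to the mixture representation in Eq. (3) for a toy two-point directing measure (an arbitrary choice made purely for illustration) and compares the empirical pairwise correlation with the ratio \(\mathbb{V}\left[\theta_{\rm e}\right]/(\mathbb{E}\left[\theta_{\rm e}\right](1-\mathbb{E}\left[\theta_{\rm e}\right]))\).

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_bins = 100, 100_000

# Toy directing measure F_e: theta_e takes one of two values with equal probability.
thetas, probs = np.array([0.02, 0.10]), np.array([0.5, 0.5])

theta = rng.choice(thetas, size=n_bins, p=probs)      # one theta_e per time bin
X = rng.random((n_bins, K)) < theta[:, None]          # conditionally independent Bernoulli inputs

# Empirical pairwise correlation, averaged over distinct input pairs.
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / n_bins
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
rho_emp = corr[~np.eye(K, dtype=bool)].mean()

# Prediction from the dispersion of the directing measure.
m = (probs * thetas).sum()
v = (probs * thetas**2).sum() - m**2
print(rho_emp, v / (m * (1 - m)))
```

Up to sampling error, the two printed numbers agree, reflecting that the dispersion of the directing measure alone sets the pairwise spiking correlation.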
In our exchangeable setting, a reasonable parametric choice for \(F_{\rm e}\) is given by beta distributions \(\text{Beta}(\alpha_{\rm e},\beta_{\rm e})\), where \(\alpha_{\rm e}\) and \(\beta_{\rm e}\) denote shape parameters [47]. Practically, this choice is motivated by the ability of beta distributions to efficiently fit correlated spiking data generated by existing algorithms [48]. Formally, this choice is motivated by the fact that beta distributions are conjugate priors for the binomial likelihood functions, so that the resulting probabilistic models can be studied analytically [49; 50; 51]. In particular, for \(F_{\rm e}\sim\text{Beta}(\alpha_{\rm e},\beta_{\rm e})\), the probability that \(k_{\rm e}\) synapses among the \(K_{\rm e}\) inputs are jointly active within the same time bin follows the beta-binomial distribution \[P_{{\rm e},k}=\binom{K_{\rm e}}{k}\frac{B(\alpha_{\rm e}+k,\beta_{\rm e}+K_{\rm e}-k)}{B(\alpha_{\rm e},\beta_{\rm e})}\,. \tag{4}\] Accordingly, the mean number of active excitatory inputs is \(\mathbb{E}\left[k_{\rm e}\right]=K_{\rm e}\alpha_{\rm e}/(\alpha_{\rm e}+\beta_{\rm e})=K_{\rm e}r_{\rm e}\Delta t\). Utilizing Eq. (4), we also find that \(\rho_{\rm e}=1/(1+\alpha_{\rm e}+\beta_{\rm e})\) (see Fig. 4a). Given a typical synaptic weight \(w_{\rm e}\), the overall synaptic drive to a neuron is determined by the number of active inputs at each discrete time step, while synchrony is encoded via the probability \(P_{{\rm e},k}\). As AONCB dynamics unfolds in continuous time, we need to consider this discrete drive in the continuous-time limit as well, i.e., for vanishing time bins \(\Delta t\to 0^{+}\). When \(\Delta t\to 0^{+}\), we show in Appendix B that the overall synaptic drive specifies a compound Poisson process \(Y_{\rm e}\) with variable jump size \(W_{\rm e}\), i.e., \[Y_{\rm e}(t)=\sum_{n=0}^{N(t)}W_{{\rm e},n}\,, \tag{5}\] where \(W_{{\rm e},n}\) are i.i.d. samples of \(W_{\rm e}\) (see Fig. 4c). In the following, we denote the distribution of \(W_{\rm e}\) as \(p_{\rm e}\). Moreover, given a fixed typical synaptic weight \(w_{\rm e}\), we show that the jumps are given as \(W_{\rm e}=kw_{\rm e}\), with \(k\) distributed on \(\{1,\ldots,K_{\rm e}\}\) according to \[p_{{\rm e},k}=\lim_{\Delta t\to 0^{+}}\frac{P_{{\rm e},k}}{1-P_{{\rm e},0}}=\binom{K_{\rm e}}{k}\frac{B(k,\beta_{\rm e}+K_{\rm e}-k)}{\psi(\beta_{\rm e}+K_{\rm e})-\psi(\beta_{\rm e})}\,, \tag{6}\] where \(\psi\) denotes the digamma function. Accordingly, the mean jump size is \[\mathbb{E}_{\rm e}\left[k\right]=\sum_{k=1}^{K_{\rm e}}kp_{{\rm e},k}=\frac{K_{\rm e}}{\beta_{\rm e}\left(\psi(\beta_{\rm e}+K_{\rm e})-\psi(\beta_{\rm e})\right)}\,. \tag{7}\] Note that the rate \(b_{\rm e}\) of synaptic activation events is generally smaller than the sum of the individual input rates \(K_{\rm e}r_{\rm e}\) owing to coincidental synaptic activations.
Indeed, as the limiting process \(\Delta t\to 0^{+}\) conserves the population spiking rate, the rate \(b_{\rm e}\) of synaptic activation events satisfies \(K_{\rm e}r_{\rm e}=b_{\rm e}\mathbb{E}_{\rm e}\left[k\right]\) so that we have \[b_{\rm e}=\frac{K_{\rm e}r_{\rm e}}{\mathbb{E}_{\rm e}\left[k\right]}=r_{\rm e }\beta_{\rm e}\left(\psi(\beta_{\rm e}+K_{\rm e})-\psi(\beta_{\rm e})\right)\,. \tag{8}\] Let us stress for clarity, that if \(k_{\rm e}\) synapses activate synchronously, this only count as one synaptic event, which can come in variable size \(k\). Consistently, we have in general \(r_{\rm e}\leq b_{\rm e}\leq K_{\rm e}r_{\rm e}\). Further inspection confirms that when \(\beta_{\rm e}\to 0\), we have perfect synchrony with \(\rho_{\rm e}=1\) and \(b_{\rm e}\to r_{\rm e}\), whereas the independent spiking regime with \(\rho_{\rm e}=0\) is attained for \(\beta_{\rm e}\to\infty\), for which we have \(b_{\rm e}\to K_{\rm e}r_{\rm e}\). The above parametrization in term of beta distributions offers a principled way to model spiking correlations via the compound Poisson process \(Y_{\rm e}\) with discrete jump distribution given by \(p_{\rm e,k}\). There are other possible parametrizations and our result will hold for arbitrary jump distribution \(p_{\rm e}\). When considering an arbitrary \(p_{\rm e}\), the main caveat is understanding how such a distribution may correspond to a given input number \(K_{\rm e}\) and spiking correlation \(\rho_{\rm e}\). For this reason, we will always consider that \(k_{\rm e}=W_{\rm e}/w_{\rm e}\) follows the distribution \(p_{\rm e,k}\) given in Eq. (6) when discussing the roles of \(w_{\rm e}\), \(K_{\rm e}\), and \(\rho_{\rm e}\) in shaping the voltage response of a neuron. ### Correlation between excitation and inhibition via partial exchangeability One can generalize the modeling of synchronous inputs via compound Poisson processes to include inhibition in addition to excitation. Such a generalization leverages representations akin to Eq. (3) but for the joint probability distributions of \(K_{\rm e}\) exchangeable excitatory inputs and \(K_{\rm i}\) exchangeable inhibitory inputs. Accordingly, let us assume that the inputs specify a \((K_{\rm e}+K_{\rm i})\)-dimensional random variable \(\{X_{1},\ldots,X_{K_{\rm e}},Y_{1},\ldots,Y_{K_{\rm i}}\}\) on \(\{0,1\}^{K_{\rm e}+K_{\rm i}}\). Let us further assume that the excitatory inputs \(X_{i}\), \(1\leq i\leq K_{\rm e}\) and the inhibitory inputs \(X_{j}\), \(1\leq j\leq K_{\rm i}\) are separately exchangeable. Here, we can only assume partial exchangeability as excitatory and inhibitory inputs are distinguishable [52]. 
As a result, the directing measure must be chosen as a bivariate distribution \(F_{\rm ei}(\theta_{\rm e},\theta_{\rm i})\) over the unit square \([0,1]\times[0,1]\), so that we have \[\mathbb{P}\left[X_{1},\ldots,X_{K_{\rm e}},Y_{1},\ldots,Y_{K_{\rm i }}\right]=\] \[\int\prod_{k=1}^{K_{\rm e}}\theta_{\rm e}^{X_{\rm k}}(1-\theta_{ \rm e})^{1-X_{\rm k}}\prod_{l=1}^{K_{\rm i}}\theta_{\rm i}^{X_{\rm i}}(1- \theta_{\rm i})^{1-X_{\rm i}}\,{\rm d}F_{\rm ei}(\theta_{\rm e},\theta_{\rm i })\,.\] In this setting, we show in Appendix A that the spiking correlation between excitatory and inhibitory inputs is given by \[\rho_{\rm ei}=\frac{\mathbb{C}\left[\theta_{\rm e},\theta_{\rm i}\right]}{ \sqrt{\mathbb{E}\left[\theta_{\rm e}\right]\mathbb{E}\left[\theta_{\rm i} \right](1-\mathbb{E}\left[\theta_{\rm e}\right])(1-\mathbb{E}\left[\theta_{ \rm i}\right])}}\geq 0\,, \tag{9}\] Figure 4: **Limit compound Poisson process.** (a) Modeling synaptic inputs for a bin size \(\Delta t=1\)ms specifies an input count process and a cumulative count process (left) as in Fig. 3. Correlations are parametrized via the distribution \(P_{\rm e,k}\) of the input count \(k_{\rm e}=\sum_{k=1}^{K_{\rm e}}X_{k}\) (top right). Alternatively, the discrete-time cumulative count process encodes correlations via its jump distribution (bottom right): \(P_{\rm e,k}/(1-P_{\rm e,0})\). (b) Taking a smaller bin size \(\Delta t=0.1\)ms yields similarly looking raster plots and cumulative counts, but an increasing proportion of bins become empty, with zero count. Accordingly, the input-count distribution increasingly concentrates on zero. In the presence of correlation, however, the jump distribution remains dispersed. (c) In the limit \(\Delta t\to 0\), the input-count distribution is concentrated on zero. By contrast, the distribution of jump sizes converges toward a well-defined distribution: \(p_{\rm e,k}=\lim_{\Delta t\to 0^{+}}P_{\rm e,k}/(1-P_{\rm e,0})\). This distribution characterizes the jumps of a limit compound Poisson process. where \(\mathbb{C}\left[\theta_{\rm e},\theta_{\rm i}\right]\) denotes the covariance of \(\theta_{\rm e}\) and \(\theta_{\rm i}\). Thus, independence between excitation and inhibition for which \(\rho_{\rm ei}=0\) corresponds to directing measure \(F_{\rm ei}\) with product form, i.e., \(F_{\rm ei}(\theta_{\rm e},\theta_{\rm i})=F_{\rm e}(\theta_{\rm e})F_{\rm i}( \theta_{\rm i})\), where \(F_{\rm e}\) and \(F_{\rm i}\) denote the marginal distributions. Alternative forms of the directed measure \(F_{\rm ei}\) lead to nonzero cross correlation \(\rho_{\rm ei}\), which necessarily satisfies \(0<|\rho_{\rm ei}|\leq\sqrt{\rho_{\rm e}\rho_{\rm i}}\). Eq. (9) shows that, in principle, \(F_{\rm ei}\) can be chosen as to achieve negative correlations between excitation and inhibition in the discrete setting. However, when shifting to a continuous-time representation, our exchangeability-based modeling approach can only capture nonnegative correlations \(\rho_{\rm ei}\geq 0\). This is because in the limit of vanishing bin size \(\Delta t\to 0^{+}\), nonzero correlations between excitation and inhibition amounts to having simultaneously activating excitatory and inhibitory synapses. To see this, let us consider a particular case for which the marginals of \(F_{\rm ei}\) are given by the same beta distribution: \(F_{\rm e}=F_{\rm i}=F\sim\text{Beta}(\alpha,\beta)\). 
Let us further consider two particular coupling for \(\theta_{\rm e}\) and \(\theta_{\rm i}\): \((i)\) the case of maximum positive correlation for \(\theta_{\rm e}=\theta_{\rm i}\) and \((ii)\) the case of zero correlation for which \(\theta_{\rm e}\) and \(\theta_{\rm i}\) are independent. Note that albeit symmetric, cases \((i)\) and \((ii)\) are not fully exchangeable due to excitation and inhibition being associated to distinct reversal potentials \(V_{\rm i}<0<V_{\rm e}\). For the maximally correlated case \((i)\), the probability that \(k\), \(1\leq k\leq K_{\rm e}\), excitatory synapses and \(l\), \(1\leq l\leq K_{\rm i}\), inhibitory synapses are jointly active within the same time bin follows the modified beta-binomial distribution \[P_{\rm ei,kl}=\binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\frac{B(\alpha+k+l, \beta+2K-k-l)}{B(\alpha,\beta)}\,,\] whereas for the independent case \((ii)\), this probability is \(P_{\rm ei,kl}=P_{\rm e,k}P_{\rm i,l}\) where \(P_{\rm e,k}\) and \(P_{\rm i,l}\) refers to the same beta-binomial distribution defined by (4) for the parameter \(\alpha\), \(\beta\), \(K_{\rm e}\), and \(K_{\rm i}\) (see Fig. 5a for \(\rho_{\rm e}=\rho_{\rm i}=\rho_{\rm ei}\)). As we still have \(r_{\rm e/i}\Delta t=K_{\rm e/i}\alpha/(\alpha+\beta)\), the derivation of the continuous-time limit proceeds similarly as for the case of excitation alone, by considering vanishing time bins \(\Delta t\to 0^{+}\), which amounts to \(\alpha\to 0^{+}\) (see Appendix B). This implies that the excitatory- and inhibitory-specific correlations are both equal to \(\rho_{\rm e}=\rho_{\rm i}=1/(1+\beta)\) in this limit. However, owing to considering both excitation and inhibition, the continuous-time limit \(\Delta t\to 0^{+}\) actually defines two coupled Poisson processes \((N_{\rm e},N_{\rm i})\) with associated rates of synaptic events \(b_{\rm e}\) and \(b_{\rm i}\) satisfying Eq. (8) for the parameters \(\alpha\), \(\beta\), \(K_{\rm e}\) and \(K_{\rm i}\), respectively. The key observation is to realize that the coupling between these Poisson processes is mediated by simultaneous excitatory and inhibitory activations. As a result, the continuous-time limit depicting the excitatory and inhibitory drives is specified via a compound Poisson process \(Y\) with bivariate jumps \((W_{\rm e},W_{\rm i})\): \[Y(t)=\left(\sum_{n}^{N(t)}W_{\rm e,n},\sum_{n}^{N(t)}W_{\rm i,n}\right)\,, \tag{10}\] where the overall driving Poisson process \(N\) registers the number of synaptic activations without double counts. Note that this implies that \(\max(N_{\rm e}(t),N_{\rm i}(t))\leq N(t)\leq N_{\rm e}(t)+N_{\rm i}(t)\) with \(\max(b_{\rm e},b_{\rm i})\leq b\leq b_{\rm e}+b_{\rm i}\). For the maximally correlated case \((i)\), we show in Appendix C that the jumps are given as \((W_{\rm e},W_{\rm i})=(kw_{\rm e},lw_{\rm i})\), with \((k,l)\) distributed on \(\{0,\ldots,K\}\times\{0,\ldots,K\}\setminus\{0,0\}\) (see Fig. 5b and c) according to \[p_{\rm ei,kl} =\lim_{\alpha\to 0^{+}}\frac{P_{\rm ei,kl}}{1-P_{\rm ei,00}} \tag{11}\] \[=\binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\frac{B(k+l,\beta+K_{\rm e }+K_{\rm i}-k-l)}{\psi(\beta+K_{\rm e}+K_{\rm i})-\psi(\beta)}\,.\] Incidentally, the driving Poisson process \(N\) has a rate \(b\) determined by adapting Eq. (8) \[b=r\beta\left(\psi(\beta+K_{\rm e}+K_{\rm i})-\psi(\beta)\right)\,,\] for which one can check that \(r\leq b\leq(K_{\rm e}+K_{\rm i})r\). 
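For instance, the two endpoints of this range can be checked directly from the digamma asymptotics \(\psi(\beta)=-1/\beta-\gamma+O(\beta)\) as \(\beta\to 0^{+}\) and \(\psi(\beta+K)-\psi(\beta)=K/\beta+O(1/\beta^{2})\) as \(\beta\to\infty\), writing \(K=K_{\rm e}+K_{\rm i}\) for short: \[\lim_{\beta\to 0^{+}}r\beta\left(\psi(\beta+K)-\psi(\beta)\right)=\lim_{\beta\to 0^{+}}r\beta\left(\psi(K)+\gamma+\tfrac{1}{\beta}+O(\beta)\right)=r\,,\] \[\lim_{\beta\to\infty}r\beta\left(\psi(\beta+K)-\psi(\beta)\right)=\lim_{\beta\to\infty}r\beta\left(\tfrac{K}{\beta}+O\!\left(\tfrac{1}{\beta^{2}}\right)\right)=Kr\,,\] consistent with perfect input synchrony producing a single shared stream of synaptic events at rate \(r\), and with vanishing synchrony producing \(K_{\rm e}+K_{\rm i}\) effectively independent streams.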
By contrast, for the independent case \((ii)\), in the limit \(\alpha\to 0^{+}\), jumps are either excitatory alone or inhibitory alone. In Appendix C, we actually show that \[p_{\rm ei,kl}=\lim_{\alpha\to 0^{+}}\frac{P_{k}P_{l}}{1-P_{\rm e,0}P_{\rm i,0}}= \frac{1}{2}\left(p_{\rm e,k}\mathbbm{1}_{\{l=0\}}+p_{\rm i,l}\mathbbm{1}_{\{k=0 \}}\right)\,,\] where \(p_{\rm e,k}\) and \(p_{\rm i,l}\) are specified by (6) for the parameters \(\alpha\), \(\beta\), \(K_{\rm e}\), and \(K_{\rm i}\). Incidentally, the driving process is such that \(N=N_{\rm e}+N_{\rm i}\) with rate \(b=b_{\rm e}+b_{\rm i}\). The two considered cases \((i)\) and \((ii)\) above are only examples of compound Poisson processes modeling jointly excitation and inhibition within AONCB models. In general, such models will be determined by the choice of an overall rate of synaptic events \(b\) and a bivariate jump distribution \(p_{\rm ei}\) for the excitatory jumps \(W_{\rm e}\) and the inhibitory jumps \(W_{\rm i}\). Correlation between excitation and inhibition corresponds to those choices of \(p_{\rm ei}\) for which \(W_{\rm e}W_{\rm i}>0\) with nonzero probability, which indicates the presence of synchronous excitatory and inhibitory inputs. Incidentally, synchrony restricts nonzero correlations to be positive. Then, when \(\rho_{\rm ei}>0\), the overall rate of synaptic events \(b\) must be such that \(b<b_{\rm e}+b_{\rm i}\) owing to synchronization of excitatory and inhibitory inputs. In the following, we refer to expectations with respect to the joint jump distribution \(p_{\rm ei}\) as \(\mathbb{E}_{\rm ei}\left[\cdot\right]\). This is by contrast with \(\mathbb{E}_{\rm e}\left[\cdot\right]\) which denotes expectation with respect to the distribution of the excitatory jumps alone \(p_{\rm e}\). Ideally, this distribution \(p_{\rm ei}\) should be such that its conditional marginals \(p_{\rm e}\) and \(p_{\rm i}\), with support on \(\{W_{\rm e}>0\}\) and \(\{W_{\rm i}>0\}\), respectively, are distributed according to the previously introduced and biophysically interpretable distributions given by Eq. (6) (see Appendix B). Unfortunately, there does not seem to be a simple low-dimensional parametrization for distributions \(p_{\rm ei}\) with such conditional marginals and a varying degree of correlations, except in particular cases such as \((i)\) and \((ii)\). To address this point, one can resort to a variety of methods including copulas [53; 54]. However, these are beyond the scope of the present work. For these reasons, we will perform all our calculations for arbitrary jump joint distribution \(p_{\rm ei}\) on the positive orthant \((0,\infty)\times(0,\infty)\). We Figure 5: **Limit compound Poisson process with excitation and inhibition.** The continuous-time limit procedure depicted in Fig. 4 generalizes to the case of joint excitatory and inhibitory inputs, which breaks the assumption of exchangeability. (a) Under assumption of partial exchangeability, synaptic inputs can only be distinguished by the fact that they are either excitatory or inhibitory, which is marked by being colored in red or blue in the discrete representation of correlated synaptic inputs with bin size \(\Delta t=1\)ms. Accordingly, considering excitation and inhibition separately specifies two associated input-count processes and two cumulative counting processes. 
For nonzero spiking correlation \(\rho=0.03\), these processes are themselves correlated as captured by the joint distribution of excitatory and inhibitory input counts \(P_{\mathrm{el},kl}\) (center) and by the joint distribution of excitatory and inhibitory jumps \(P_{\mathrm{ei},kl}/(1-P_{00})\) (right). (b) The input count distribution \(P_{\mathrm{ei},kl}\) is a finite-size approximation of the bivariate directing de Finetti measure \(F_{\mathrm{el}}\), which we consider as a parameter as usual. For a smaller bin size \(\Delta t=0.1\)ms, this distribution concentrates in \((0,0)\), as an increasing proportion of time bins does not register any synaptic events, be they excitatory or inhibitory. In the presence of correlation however, the conditioned jump distribution remains correlated but also dispersed. (c) In the limit \(\Delta t\to 0\), the input-count distribution is concentrated in \((0,0)\), consistent with the fact that the average number of synaptic activations remains constant while the number of bins diverges. By contrast, the distribution of synaptic event size conditioned to distinct from \((0,0)\) converges toward a well-defined distribution: \(p_{\mathrm{ei},k}=\lim_{\Delta t\to 0^{+}}P_{\mathrm{ei},k}/(1-P_{ \mathrm{ei},00})\). This distribution characterizes the jumps of a bivariate compound Poisson process, obtained as the limit of the cumulative count process when considering \(\Delta t\to 0^{+}\). will only restrict ourselves to particular parametric forms for \(p_{\rm ei}\) when discussing the role of \(\rho_{\rm ei}\), whose specification via (9) requires modeling assumptions about \(F_{\rm ei}\). In that respect, we show in Appendix D that the coefficient \(\rho_{\rm ei}\) can always be deduced from the knowledge of a discrete distribution \(p_{\rm ei,kl}\) on \(\{0,\ldots,K_{\rm e}\}\times\{0,\ldots,K_{\rm i}\}\setminus\{0,0\}\) via \[\rho_{\rm ei}=\frac{\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right]}{ \sqrt{K_{\rm e}\mathbb{E}_{\rm ei}\left[k_{\rm e}\right]K_{\rm i}\mathbb{E}_{ \rm ei}\left[k_{\rm i}\right]}}\,,\] where the expectations are with respect to \(p_{\rm ei,kl}\). ### Ito, Stratonovich, and Marcus integrals We are now in a position to formulate the mathematical problem at stake within the framework developed by Marcus to study shot-noise driven systems [33; 34]. Our goal is quantifying the subthreshold variability of an AONCB neuron subjected to synchronous inputs. Mathematically, this amounts to computing the first two moments of the stationary process solving the following stochastic dynamics \[\dot{V}=-V/\tau+h_{\rm e}(V_{\rm e}-V)+h_{\rm i}(V_{\rm i}-V)+I/C\,, \tag{12}\] where \(V_{\rm i}<0<V_{\rm e}\) are constants and where the reduced conductances \(h_{\rm e}=g_{\rm e}/C\) and \(h_{\rm i}=g_{\rm i}/C\) follows stochastic processes defined in terms of a compound Poisson process \(Y\) with bivariate jumps. Formally, the compound Poisson process \(Y\) is specified by \(b\), the rate of its governing Poisson process \(N\), and by the joint distribution of its jumps \(p_{\rm ei}\). Each point of the Poisson process \(N\) represents a synaptic activation time \(T_{k}\), where \(k\) is in \(\mathbb{Z}\) with the convention that \(T_{0}\leq 0\leq T_{1}\). At all these times, the synaptic input sizes are drawn as i.i.d. random variables \((W_{\rm o,k},W_{\rm i,k})\) in \(\mathbb{R}^{+}\times\mathbb{R}^{+}\) with probability distribution \(p_{\rm ei}\). 
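Before turning to the voltage dynamics, here is a minimal sketch of how such a drive can be simulated: activation times are drawn from a Poisson process of rate \(b\) and each activation time carries an i.i.d. bivariate jump \((W_{\rm e},W_{\rm i})\). The jump sampler below is a toy stand-in rather than the distribution \(p_{\rm ei}\) derived above, chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_drive(T, b, sample_jump):
    """Synaptic activation times of a Poisson process of rate b on [0, T],
    with i.i.d. bivariate jumps (W_e, W_i) attached to each activation time."""
    n = rng.poisson(b * T)
    times = np.sort(rng.uniform(0.0, T, size=n))
    jumps = np.array([sample_jump() for _ in range(n)]).reshape(n, 2)
    return times, jumps

# Toy joint jump sampler standing in for p_ei: excitation and inhibition coactivate
# at every event, with random integer multiples of base weights w_e and w_i.
w_e, w_i = 0.01, 0.02
toy_jump = lambda: (w_e * rng.integers(1, 5), w_i * rng.integers(1, 5))

times, jumps = sample_drive(T=1.0, b=1000.0, sample_jump=toy_jump)
Y = np.cumsum(jumps, axis=0)  # sampled path of the bivariate process Y in Eq. (10)
```

Because excitatory and inhibitory jumps are both nonzero at every event, this toy sampler corresponds to a maximally synchronous choice; an asynchronous choice would instead return jumps with either \(W_{\rm e}=0\) or \(W_{\rm i}=0\).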
In order to understand how our approach fits within the Marcus framework, it is important to remark that the driving process \(Y\) is distinct from the conductance process \(h=(h_{\rm e},h_{\rm i})\). The latter process is formally defined for AONCB neurons as \[h(t)=\frac{Y(t)-Y(t-\epsilon\tau)}{\epsilon\tau}=\frac{1}{\epsilon\tau}\sum_{n=N(t-\epsilon\tau)+1}^{N(t)}\left(W_{\mathrm{e},n},W_{\mathrm{i},n}\right)\,,\] so that each synaptic event contributes a conductance pulse of duration \(\epsilon\tau\) and size \((W_{\mathrm{e},n},W_{\mathrm{i},n})/(\epsilon\tau)\). In the limit of instantaneous synapses \(\epsilon\to 0^{+}\), the conductances become Dirac-delta-like and the voltage responds to each synaptic activation time \(T_{n}\) with a jump obtained by integrating the conductance dynamics over the activation window, namely \[V(T_{n}^{+})=V(T_{n}^{-})\,e^{-(W_{\mathrm{e},n}+W_{\mathrm{i},n})}+\frac{W_{\mathrm{e},n}V_{\mathrm{e}}+W_{\mathrm{i},n}V_{\mathrm{i}}}{W_{\mathrm{e},n}+W_{\mathrm{i},n}}\left(1-e^{-(W_{\mathrm{e},n}+W_{\mathrm{i},n})}\right)\,.\] Observe that the above Marcus rule directly implies that no jump can cause the voltage to exit \((V_{\mathrm{i}},V_{\mathrm{e}})\), the allowed range of variation for \(V\). Moreover, note that this rule specifies an exact event-driven simulation scheme given knowledge of the synaptic activation times and sizes \(\{T_{n},W_{\mathrm{e},n},W_{\mathrm{i},n}\}_{n\in\mathbb{Z}}\) [58]. We adopt the above Marcus-type numerical scheme in all the simulations that involve instantaneous synapses (see Fig. 6b).

## III Moment calculations

In this section, we derive our two main exact results for AONCB neurons driven by synchronous synaptic inputs. Specifically, we derive the stationary mean voltage Eq. (19) in III.1 and the stationary voltage variance Eq. (30) in III.2. These results are obtained by a probabilistic treatment exploiting the properties of compound Poisson processes, which yields interpretable formulas in the limit of instantaneous synapses \(\epsilon=\tau_{s}/\tau\to 0^{+}\). Readers who have no interest in the method of derivation for these results may skip the content of this section, aside from Eq. (19) and Eq. (30).

### Stationary voltage mean

For a positive synaptic activation time and \(t>0\), the classical method of the variation of the constant applies to solve Eq. (1). This yields an expression for \(V_{\epsilon}(t)\) in terms of regular Riemann-Stieltjes integrals where the conductance traces \(h_{\mathrm{e}}(t)\) and \(h_{\mathrm{i}}(t)\) are treated as a form of deterministic quenched disorder. Specifically, given an initial condition \(V_{\epsilon}(0)\), we have \[V_{\epsilon}(t)=V_{\epsilon}(0)e^{-\int_{0}^{t}\left(\frac{1}{\tau}+h_{\mathrm{e}}(u)+h_{\mathrm{i}}(u)\right)\mathrm{d}u}+\int_{0}^{t}\left(V_{\mathrm{e}}h_{\mathrm{e}}(u)+V_{\mathrm{i}}h_{\mathrm{i}}(u)+I/C\right)e^{-\int_{u}^{t}\left(\frac{1}{\tau}+h_{\mathrm{e}}(v)+h_{\mathrm{i}}(v)\right)\mathrm{d}v}\,\mathrm{d}u\,,\] where \(V_{\epsilon}(t)\) depends on \(\epsilon\) via the all-or-none-conductance processes \(h_{\mathrm{e}}\) and \(h_{\mathrm{i}}\). As usual, the stationary dynamics of the voltage \(V_{\epsilon}\) is recovered by considering the limit of arbitrarily large times \(t\to\infty\), for which one can neglect the influence of the initial condition \(V_{\epsilon}(0)\).
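As a practical aside (a sketch rather than the authors' code), the event-driven scheme mentioned above can be implemented directly from the jump rule: between activation times the voltage relaxes exponentially toward \(I/G\), and at each activation it jumps toward the conductance-weighted reversal potential. The function name and parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_marcus(times, W_e, W_i, tau=0.015, V_e=60.0, V_i=-10.0,
                    I_over_G=0.0, V0=0.0):
    """Event-driven integration of the AONCB voltage in the limit of
    instantaneous synapses, assuming the Marcus-type jump rule above."""
    V, t_prev, trace = V0, 0.0, []
    for t, we, wi in zip(times, W_e, W_i):
        # passive relaxation toward I/G between synaptic events
        V = I_over_G + (V - I_over_G) * np.exp(-(t - t_prev) / tau)
        w = we + wi
        if w > 0.0:
            # jump toward the conductance-weighted reversal potential;
            # the update never leaves the interval (V_i, V_e)
            V_target = (we * V_e + wi * V_i) / w
            V = V * np.exp(-w) + V_target * (1.0 - np.exp(-w))
        trace.append((t, V))
        t_prev = t
    return np.array(trace)

# usage with three synthetic activation times and jump sizes
trace = simulate_marcus(np.array([0.010, 0.025, 0.050]),
                        np.array([0.010, 0.000, 0.010]),
                        np.array([0.000, 0.040, 0.040]))
```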
Figure 6: **Limit of instantaneous synapses within the Marcus framework.** (a) Simulation of the conductance processes \(g_{\mathrm{e}}\) and \(g_{\mathrm{i}}\) as all-or-none conductance processes with nonzero synaptic time constant \(\tau_{s}=2\)ms. We consider Poisson-process drive without cross-population correlation \(\rho_{\mathrm{ei}}=0\), but with nonzero correlations within the excitatory and inhibitory synaptic inputs: \(\rho_{\mathrm{e}}=0.03\) and \(\rho_{\mathrm{i}}=0.06\). For \(\tau_{s}>0\), the membrane voltage \(V\) is simulated via a standard Euler discretization scheme. The corresponding empirical conductance and voltage distributions are shown on the right. The latter voltage distribution asymptotically determines the stationary moments of \(V\). (b) In the limit of instantaneous synapses \(\epsilon=\tau_{s}/\tau\to 0^{+}\), the conductance processes \(g_{\mathrm{e}}\) and \(g_{\mathrm{i}}\) converge toward the increments of compound Poisson processes, which are determined as a collection of Dirac delta functions with i.i.d. weights. Simulating the limit process \(V\) obtained when \(\epsilon=\tau_{s}/\tau\to 0^{+}\) requires adopting the framework of Marcus integrals, which generalize Stratonovich integrals to the case of point-process drives, when possible. Importantly, for the same sequence of activation times, the voltage trace and the empirical voltage distribution are only marginally altered in the limit \(\epsilon\to 0^{+}\), at least compared with \(\epsilon=0.2\) in (a).

Introducing the cumulative input processes \(H=(H_{\mathrm{e}},H_{\mathrm{i}})\) defined by \[\left(H_{\mathrm{e}}(t),H_{\mathrm{i}}(t)\right)=\left(\int_{0}^{t}h_{\mathrm{e}}(u)\,\mathrm{d}u,\int_{0}^{t}h_{\mathrm{i}}(u)\,\mathrm{d}u\right)\,,\] and satisfying \(\mathrm{d}H_{\mathrm{e}}(t)=h_{\mathrm{e}}(t)\,\mathrm{d}t\) and \(\mathrm{d}H_{\mathrm{i}}(t)=h_{\mathrm{i}}(t)\,\mathrm{d}t\), we have \[V_{\epsilon}=\int_{-\infty}^{0}e^{\frac{t}{\tau}+H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)}\left(\mathrm{d}\left[V_{\mathrm{e}}H_{\mathrm{e}}(t)+V_{\mathrm{i}}H_{\mathrm{i}}(t)\right]+\frac{I}{G}\frac{\mathrm{d}t}{\tau}\right)\,. \tag{14}\] In turn, expanding the integrand above yields the following expression for the stationary expectation of the voltage \[\mathbb{E}\left[V_{\epsilon}\right]=V_{\mathrm{e}}\int_{-\infty}^{0}e^{\frac{t}{\tau}}\mathbb{E}\left[e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)}\,\mathrm{d}H_{\mathrm{e}}(t)\right]+V_{\mathrm{i}}\int_{-\infty}^{0}e^{\frac{t}{\tau}}\mathbb{E}\left[e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)}\,\mathrm{d}H_{\mathrm{i}}(t)\right]+\frac{I}{G}\int_{-\infty}^{0}e^{\frac{t}{\tau}}\mathbb{E}\left[e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)}\right]\,\frac{\mathrm{d}t}{\tau}\,. \tag{15}\] Our primary task is to evaluate the various stationary expectations appearing in the above formula. Such a goal can be achieved analytically for AONCB models. As the involved calculations tend to be cumbersome, we only give a detailed account in the Appendix. Here we account for the key steps of the calculation, which ultimately produces an interpretable compact formula for \(\mathbb{E}\left[V\right]\) in the limit of instantaneous synapses, i.e., when \(\epsilon\to 0\). In order to establish this compact formula, it is worth introducing the stationary bivariate function \[Q_{\epsilon}(t,s)=\mathbb{E}\left[e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(s)}\right]\,, \tag{16}\] which naturally depends on \(\epsilon\) via \(H_{\mathrm{e}}(t)\) and \(H_{\mathrm{i}}(s)\).
The function \(Q_{\epsilon}\) is of great interest because all the stationary expectations at stake in Eq. (15) can be derived from it. Before justifying this point, an important observation is that the expectation defining \(Q_{\epsilon}(t,s)\) only bears on the cumulative input processes \(H_{\mathrm{e}}\) and \(H_{\mathrm{i}}\), which specify bounded, piecewise continuous functions with probability one, independent of \(\epsilon\). As a result of this regular behavior, the expectation commutes with the limit of instantaneous synapses, allowing one to write \[Q(t,s)=\lim_{\epsilon\to 0^{+}}Q_{\epsilon}(t,s)=\mathbb{E}\left[e^{\lim_{\epsilon\to 0^{+}}H_{\mathrm{e}}(t)+H_{\mathrm{i}}(s)}\right]=\mathbb{E}\left[e^{-Y_{\mathrm{e}}(t)-Y_{\mathrm{i}}(s)}\right]\,,\] where we exploit the fact that the cumulative input processes \(H_{\mathrm{e}}\) and \(H_{\mathrm{i}}\) converge toward the coupled compound Poisson processes \(Y_{\mathrm{e}}\) and \(Y_{\mathrm{i}}\) when \(\epsilon\to 0^{+}\). The above remark allows one to compute the term due to current injection \(I\) in Eq. (15), where the expectation can be identified with \(Q_{\epsilon}(t,t)\). Indeed, utilizing the standard form of the moment-generating function for compound Poisson processes [40], we find that \[Q(t,t)=e^{a_{\mathrm{ei},1}t/\tau}\,,\] where we introduce the first-order aggregate efficacy \[a_{\mathrm{ei},1}=b\tau\left(1-\mathbb{E}_{\mathrm{ei}}\left[e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right]\right)\,.\] Remember that in the above definition, \(\mathbb{E}_{\mathrm{ei}}\left[\cdot\right]\) denotes the expectation with respect to the joint probability of the conductance jumps, i.e., \(p_{\mathrm{ei}}\). It remains to evaluate the expectations associated with the excitation and inhibition reversal potentials in Eq. (15). These terms differ from the current-associated term in that they involve expectations of stochastic integrals with respect to the cumulative input processes \(H_{\mathrm{e}/\mathrm{i}}\). This is by contrast with evaluating Eq. (16), which only involves expectations of functions that depend on \(H_{\mathrm{e}/\mathrm{i}}\). In principle, one could still hope to adopt a similar route as for the current-associated term, exploiting the compound Poisson process \(Y\) obtained in the limit of instantaneous synapses. However, such an approach would require that the operations of taking the limit of instantaneous synapses and evaluating the stationary expectation still commute. This is a major caveat as such a commuting relation generally fails for point-process-based stochastic integrals. Therefore, one has to analytically evaluate the expectations at stake for positive synaptic activation time \(\epsilon>0\), without resorting to the simplifying limit of instantaneous synapses. This analytical requirement is the primary motivation to consider AONCB models. The first step in the calculation is to realize that for \(\epsilon>0\), the conductance traces \(h_{\mathrm{e}}(t)=\mathrm{d}H_{\mathrm{e}}(t)/\mathrm{d}t\) and \(h_{\mathrm{i}}(t)=\mathrm{d}H_{\mathrm{i}}(t)/\mathrm{d}t\) are bounded, piecewise continuous functions with probability one.
Under these conditions, it then holds that \[\lim_{s\to t}\partial_{t}Q_{\epsilon}(t,s)=\mathbb{E}\left[\frac{\mathrm{d}H_{\mathrm{e}}(t)}{\mathrm{d}t}\,e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)}\right]\,,\qquad\lim_{s\to t}\partial_{s}Q_{\epsilon}(t,s)=\mathbb{E}\left[\frac{\mathrm{d}H_{\mathrm{i}}(t)}{\mathrm{d}t}\,e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)}\right]\,,\] so that the sought-after expectations can be deduced from the closed-form knowledge of \(Q_{\epsilon}(t,s)\) for positive \(\epsilon>0\). The analytical expression of \(Q_{\epsilon}(t,s)\) can be obtained via careful manipulation of the processes \(H_{\mathrm{e}}\) and \(H_{\mathrm{i}}\) featured in the exponent of Eq. (16) (see Appendix F). In a nutshell, these manipulations hinge on splitting the integrals defining \(H_{\mathrm{e}}(t)\) and \(H_{\mathrm{i}}(s)\) into independent contributions arising from spiking events occurring in the five nonoverlapping, contiguous intervals bounded by the times \(0\geq-\epsilon\tau\geq t\geq s\geq t-\epsilon\tau\geq s-\epsilon\tau\). There is no loss of generality in assuming the latter ordering, and from the corresponding analytical expression we can compute \[\lim_{\epsilon\to 0^{+}}\lim_{s\to t}\partial_{t}Q_{\epsilon}(t,s)=\frac{a_{\mathrm{e},1}}{\tau}e^{a_{\mathrm{ei},1}t/\tau}\,,\qquad\lim_{\epsilon\to 0^{+}}\lim_{s\to t}\partial_{s}Q_{\epsilon}(t,s)=\frac{a_{\mathrm{i},1}}{\tau}e^{a_{\mathrm{ei},1}t/\tau}\,,\] where we define the effective first-order synaptic efficacies \[a_{\mathrm{e},1}=b\tau\mathbb{E}_{\mathrm{ei}}\left[\frac{W_{\mathrm{e}}}{W_{\mathrm{e}}+W_{\mathrm{i}}}\left(1-e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right)\right]\,, \tag{17}\] \[a_{\mathrm{i},1}=b\tau\mathbb{E}_{\mathrm{ei}}\left[\frac{W_{\mathrm{i}}}{W_{\mathrm{e}}+W_{\mathrm{i}}}\left(1-e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right)\right]\,. \tag{18}\] Observe that by definition, \(a_{\mathrm{e},1}\) and \(a_{\mathrm{i},1}\) satisfy \(a_{\mathrm{e},1}+a_{\mathrm{i},1}=a_{\mathrm{ei},1}\). Altogether, upon evaluation of the integrals featured in Eq. (15), these results allow one to produce a compact expression for the stationary voltage mean in the limit of instantaneous synapses: \[\mathbb{E}\left[V\right]=\lim_{\epsilon\to 0^{+}}\mathbb{E}\left[V_{\epsilon}\right]=\frac{a_{\mathrm{e},1}V_{\mathrm{e}}+a_{\mathrm{i},1}V_{\mathrm{i}}+I/G}{1+a_{\mathrm{e},1}+a_{\mathrm{i},1}}\,. \tag{19}\] The above formula is the same as the one obtained for fixed asynchronous conductances set to values \(a_{\mathrm{e},1}\) and \(a_{\mathrm{i},1}\). Thus, the impact of synchrony entirely lies in the definition of the first-order synaptic efficacies via Eq. (17) and Eq. (18). Technically, the exponential form of the efficacies follows from the shot-noise nature of the synaptic conductances. At the same time, the expectation form of the efficacies follows from the stochastic nature of the conductance jumps \((W_{\mathrm{e}},W_{\mathrm{i}})\), which captures input synchrony.

### Stationary voltage variance

The calculation of the stationary voltage variance is more challenging than that of the stationary voltage mean. However, in the limit of instantaneous synapses, this calculation produces a compact, interpretable formula as well. Adopting a similar approach as for the stationary mean calculation, we start by expressing \(V_{\epsilon}^{2}\) in the stationary limit in terms of stochastic integrals involving the cumulative input processes \(H_{\mathrm{e}}\) and \(H_{\mathrm{i}}\). Specifically, using Eq.
(14), we have \[V_{\epsilon}^{2} =\left(\int_{-\infty}^{0}e^{\frac{t}{\tau}+H_{\mathrm{e}}(t)+H_{ \mathrm{i}}(t)}\left(\mathrm{d}(V_{\mathrm{e}}H_{\mathrm{e}}(t)+V_{\mathrm{i} }H_{\mathrm{i}}(t))+\frac{I}{G}\frac{\mathrm{d}t}{\tau}\right)\right)^{2}\,,\] \[=\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+\tau}{\tau}+H_{\mathrm{e}} (t)+H_{\mathrm{i}}(t)+H_{\mathrm{e}}(s)+H_{\mathrm{i}}(s)}\left(\mathrm{d}(V _{\mathrm{e}}H_{\mathrm{e}}(t)+V_{\mathrm{i}}H_{\mathrm{i}}(t))+\frac{I}{G} \frac{\mathrm{d}t}{\tau}\right)\left(\mathrm{d}(V_{\mathrm{e}}H_{\mathrm{e}}( s)+V_{\mathrm{i}}H_{\mathrm{i}}(s))+\frac{I}{G}\frac{ds}{\tau}\right)\,. \tag{20}\] Our main goal is to compute the stationary expectation of the above quantity. As for the stationary voltage mean, our strategy is \((i)\) to derive the exact stationary expectation of the integrands for finite synaptic activation time, \((ii)\) to evaluate these integrands in the simplifying limit of instantaneous synapses, and \((iii)\) to rearrange the terms obtained after integration into an interpretable final form. Enacting the above strategy is a rather tedious task, and as for the calculation of the mean voltage, we only present the key steps of the calculation in the following. The integrand terms at stake are obtained by expanding Eq. (20), which yields the following quadratic expression for the stationary second moment of the voltage \[\mathbb{E}\left[V_{\epsilon}^{2}\right] =A_{\mathrm{e},\epsilon}V_{\mathrm{e}}^{2}+B_{\mathrm{ei}, \epsilon}V_{\mathrm{e}}V_{\mathrm{i}}+A_{\mathrm{i},\epsilon}V_{\mathrm{i}}^ {2} \tag{21}\] \[\quad+\left(V_{\mathrm{e}}B_{\mathrm{e}I,\epsilon}+V_{\mathrm{i} }B_{\mathrm{i}I,\epsilon}\right)(I/G)+A_{I,\epsilon}(I/G)^{2}\,,\] whose various coefficients needs to be evaluated. These coefficients are conveniently specified in terms of the following symmetric random function \[\mathcal{E}_{\mathrm{ei}}(t,s)=e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(t)+H_{ \mathrm{e}}(s)+H_{\mathrm{i}}(s)}\,.\] which features prominently in Eq. (20). Moreover, drawing on the calculation of the stationary mean voltage, we anticipate that the quadrivariate version of \(\mathcal{E}_{\mathrm{ei}}(t,s)\) will play a central role in the calculation via its stationary expectation. Owing to this central role, we denote this expectation as \[R_{\epsilon}(t,u,s,v)=\mathbb{E}\left[e^{H_{\mathrm{e}}(t)+H_{\mathrm{i}}(u)+H _{\mathrm{e}}(s)+H_{\mathrm{i}}(v)}\right]\,.\] where we make the \(\epsilon\)-dependence explicit. As a mere expectation with respect to the cumulative input processes \((H_{\mathrm{e}},H_{\mathrm{i}})\), the expectation can be evaluated in closed form for AONCB models. This again requires careful manipulations of the processes \(H_{\mathrm{e}}\) and \(H_{\mathrm{i}}\), which need to split into independent contributions arising from spiking events occurring in nonoverlapping intervals. By contrast with the bivariate case, the quadrivariate case requires to consider nine contiguous intervals. There is no loss of generality to consider these interval bounds to be determined by the two following time orderings: \(O\)**-order:**\(0\geq-\epsilon\tau\geq t\geq u\geq t-\epsilon\tau\geq u-\epsilon\tau\geq s \geq v\geq s-\epsilon\tau\geq v-\epsilon\tau\), \(D\)**-order:**\(0\geq-\epsilon\tau\geq t\geq u\geq s\geq v\geq t-\epsilon\tau\geq u-\epsilon\tau\geq s -\epsilon\tau\geq v-\epsilon\tau\). where \(O\) stands for off-diagonal ordering and \(D\) for diagonal ordering. 
The reason to only consider the \(O/D\)-orders is that all the relevant calculations will be made in the limit \((u,v)\rightarrow(t,s)\). By symmetry of \(R_{\epsilon}(t,u,s,v)\), it is then enough to restrict our consideration to the limit \((u,v)\rightarrow(t^{-},s^{-})\), which leaves the choice of \(t,s\leq 0\) to be determined. By symmetry, one can always choose \(t>s\), so that the only remaining alternative is to decide whether \((t,s)\) belongs to the diagonal region \(\mathcal{D}_{\epsilon}=\{t,s\leq 0\,|\,\epsilon\tau\geq|t-s|\}\) or the off-diagonal region \(\mathcal{O}_{\epsilon}=\{t,s\leq 0\,|\,\epsilon\tau<|t-s|\}\). For the sake of completeness, we give the two expressions of \(R_{\epsilon}(t,u,s,v)\) on the regions \(\mathcal{O}_{\epsilon}\) and \(\mathcal{D}_{\epsilon}\) in Appendix G. Owing to their tediousness, we do not give the detailed calculations leading to these expressions, which are lengthy but straightforward elaborations on those used in Appendix F. Here we stress that for \(\epsilon>0\), these expressions reveal that \(R_{\epsilon}(t,u,s,v)\) is defined as a twice-differentiable quadrivariate function. With these remarks in mind, the coefficients featured in Eq. (21) can be categorized in three classes:

1. There is a single current-dependent inhomogeneous coefficient \[A_{I,\epsilon}=\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\right]\,\frac{\mathrm{d}t\,\mathrm{d}s}{\tau^{2}}\,,\] where we recognize that \(\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\right]=R_{\epsilon}(t,t,s,s)\stackrel{\mathrm{def}}{=}R_{\epsilon}(t,s)\). As \(R_{\epsilon}(t,s)\) is merely a stationary expectation with respect to the cumulative input processes \((H_{\mathrm{e}},H_{\mathrm{i}})\), it can be directly evaluated in the limit of instantaneous synapses. In other words, step \((ii)\) can be performed before step \((i)\), similarly as for the stationary voltage mean calculation. However, having a general analytical expression for \(R_{\epsilon}(t,u,s,v)\) on \(\mathcal{O}_{\epsilon}\) (see Appendix G), we can directly evaluate for all \(t\neq s\) that \[R(t,s)=\lim_{\epsilon\to 0^{+}}R_{\epsilon}(t,s)=e^{(2a_{\mathrm{ei},2}\max(t,s)-a_{\mathrm{ei},1}|t-s|)/\tau}\,, \tag{22}\] where we define the second-order aggregate efficacy \[a_{\mathrm{ei},2}=\frac{b\tau}{2}\left(1-\mathbb{E}_{\mathrm{ei}}\left[e^{-2(W_{\mathrm{e}}+W_{\mathrm{i}})}\right]\right)\,.\] It is clear that the continuous function \(R(t,s)\) is smooth everywhere except on the diagonal where it admits a slope discontinuity. As we shall see, this slope discontinuity is the reason why one needs to consider the \(\mathcal{D}_{\epsilon}\) region carefully, even when only concerned with the limit \(\epsilon\to 0^{+}\). That being said, the diagonal behavior plays no role here and straightforward integration of \(R(t,s)\) on the negative orthant gives \[A_{I}=\lim_{\epsilon\to 0^{+}}A_{I,\epsilon}=\frac{1}{(1+a_{\mathrm{ei},1})(1+a_{\mathrm{ei},2})}\,.\]
2. There are two current-dependent linear coefficients \[B_{\mathrm{e}I,\epsilon}=2\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{e}}(t)\right]\frac{\mathrm{d}s}{\tau}\,,\qquad B_{\mathrm{i}I,\epsilon}=2\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{i}}(t)\right]\frac{\mathrm{d}s}{\tau}\,,\] where the coefficient \(2\) above comes from the fact that \(B_{\mathrm{e}I,\epsilon}\) and \(B_{\mathrm{i}I,\epsilon}\) actually result from the contributions of two symmetric terms in the expansion of Eq. (20). Both \(B_{\mathrm{e}I,\epsilon}\) and \(B_{\mathrm{i}I,\epsilon}\) involve expectations of stochastic integrals akin to those evaluated for the stationary mean calculation. Therefore, these terms can be treated similarly by implementing steps \((i)\) and \((ii)\) sequentially. The trick is to realize that for positive \(\epsilon\) and \(t\neq s\leq 0\), it holds that \[\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\frac{\mathrm{d}H_{\mathrm{e}}(t)}{\mathrm{d}t}\right]=\lim_{u\to t}\partial_{t}R_{\epsilon}(t,u,s,s)\,,\qquad\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\frac{\mathrm{d}H_{\mathrm{i}}(t)}{\mathrm{d}t}\right]=\lim_{u\to t}\partial_{u}R_{\epsilon}(t,u,s,s)\,.\] Thus, for any \((t,s)\) in the off-diagonal region \(\mathcal{O}_{\epsilon}\), the analytical knowledge of \(R_{\epsilon}(t,u,s,v)\) (see Appendix G) allows one to evaluate \[\lim_{u\to t^{-}}\tau\frac{\partial_{t}R_{\epsilon}(t,u,s,s)}{R_{\epsilon}(t,s)}=\left\{\begin{array}{rl}a_{\mathrm{e},1}&\text{if}\;\;t>s\,,\\ a_{\mathrm{e},2}-a_{\mathrm{e},1}&\text{if}\;\;t<s\,,\end{array}\right. \tag{23}\] \[\lim_{u\to t^{-}}\tau\frac{\partial_{u}R_{\epsilon}(t,u,s,s)}{R_{\epsilon}(t,s)}=\left\{\begin{array}{rl}a_{\mathrm{i},1}&\text{if}\;\;t>s\,,\\ a_{\mathrm{i},2}-a_{\mathrm{i},1}&\text{if}\;\;t<s\,,\end{array}\right. \tag{24}\] where the second-order synaptic efficacies are defined as \[a_{\mathrm{e},2}=\frac{b\tau}{2}\mathbb{E}_{\mathrm{ei}}\left[\frac{W_{\mathrm{e}}}{W_{\mathrm{e}}+W_{\mathrm{i}}}\left(1-e^{-2(W_{\mathrm{e}}+W_{\mathrm{i}})}\right)\right]\,, \tag{25}\] \[a_{\mathrm{i},2}=\frac{b\tau}{2}\mathbb{E}_{\mathrm{ei}}\left[\frac{W_{\mathrm{i}}}{W_{\mathrm{e}}+W_{\mathrm{i}}}\left(1-e^{-2(W_{\mathrm{e}}+W_{\mathrm{i}})}\right)\right]\,. \tag{26}\] Observe that these efficacies satisfy the familiar relation \(a_{\mathrm{e},2}+a_{\mathrm{i},2}=a_{\mathrm{ei},2}\). Taking the limits of Eq. (23) and Eq. (24) when \(\epsilon\to 0^{+}\) specifies two bivariate functions that are continuous everywhere, except on the diagonal \(t=s\), where these functions present a jump discontinuity. This behavior is still regular enough to discard any potential contributions from diagonal terms, so that we can restrict ourselves to the region \(\mathcal{O}_{\epsilon}\). Then, taking the limit \(\epsilon\to 0^{+}\) after integration over \(\mathcal{O}_{\epsilon}\), we find that \[B_{\mathrm{e}I}=\lim_{\epsilon\to 0^{+}}B_{\mathrm{e}I,\epsilon}=\frac{a_{\mathrm{e},2}}{(1+a_{\mathrm{ei},1})(1+a_{\mathrm{ei},2})}\,,\qquad B_{\mathrm{i}I}=\lim_{\epsilon\to 0^{+}}B_{\mathrm{i}I,\epsilon}=\frac{a_{\mathrm{i},2}}{(1+a_{\mathrm{ei},1})(1+a_{\mathrm{ei},2})}\,.\]
3. There are four quadratic coefficients associated with the reversal potentials \(V_{\mathrm{e}}\) and \(V_{\mathrm{i}}\), including two diagonal terms \[A_{\mathrm{e},\epsilon}=\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{e}}(t)\,\mathrm{d}H_{\mathrm{e}}(s)\right]\,,\qquad A_{\mathrm{i},\epsilon}=\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{i}}(t)\,\mathrm{d}H_{\mathrm{i}}(s)\right]\,,\] and two symmetric cross terms contributing \[B_{\mathrm{ei},\epsilon}=2\iint_{\mathbb{R}_{-}^{2}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{e}}(t)\,\mathrm{d}H_{\mathrm{i}}(s)\right]\,.\] Notice that it is enough to compute only one diagonal term as the other term can be deduced by symmetry. Following the same method as for the linear terms, we start by remarking that for all \((t,s)\) in the off-diagonal region \(\mathcal{O}_{\epsilon}\), it holds that \[\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\frac{\mathrm{d}H_{\mathrm{e}}(t)}{\mathrm{d}t}\frac{\mathrm{d}H_{\mathrm{e}}(s)}{\mathrm{d}s}\right]=\lim_{(u,v)\to(t,s)}\partial_{t}\partial_{s}R_{\epsilon}(t,u,s,v)\,,\qquad\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\frac{\mathrm{d}H_{\mathrm{e}}(t)}{\mathrm{d}t}\frac{\mathrm{d}H_{\mathrm{i}}(s)}{\mathrm{d}s}\right]=\lim_{(u,v)\to(t,s)}\partial_{t}\partial_{v}R_{\epsilon}(t,u,s,v)\,.\] As before, the analytical knowledge of \(R_{\epsilon}(t,u,s,v)\) on the \(\mathcal{O}_{\epsilon}\) region (see Appendix G) allows one to evaluate \[\lim_{(u,v)\to(t,s)^{-}}\tau^{2}\frac{\partial_{t}\partial_{s}R_{\epsilon}(t,u,s,v)}{R_{\epsilon}(t,s)}=a_{\mathrm{e},1}(2a_{\mathrm{e},2}-a_{\mathrm{e},1})\,,\] \[\lim_{(u,v)\to(t,s)^{-}}\tau^{2}\frac{\partial_{t}\partial_{v}R_{\epsilon}(t,u,s,v)}{R_{\epsilon}(t,s)}=\frac{1}{2}\left(a_{\mathrm{e},1}(2a_{\mathrm{i},2}-a_{\mathrm{i},1})+a_{\mathrm{i},1}(2a_{\mathrm{e},2}-a_{\mathrm{e},1})\right)\,.\] The above closed-form expressions allow one to compute \(A^{\prime}_{\mathrm{e},\epsilon}\) and \(B^{\prime}_{\mathrm{ei},\epsilon}\), the parts of the coefficients \(A_{\mathrm{e},\epsilon}\) and \(B_{\mathrm{ei},\epsilon}\) resulting from integration over the off-diagonal region \(\mathcal{O}_{\epsilon}\), which admit well-defined limit values \(A^{\prime}_{\mathrm{e}}=\lim_{\epsilon\to 0^{+}}A^{\prime}_{\mathrm{e},\epsilon}\) and \(B^{\prime}_{\mathrm{ei}}=\lim_{\epsilon\to 0^{+}}B^{\prime}_{\mathrm{ei},\epsilon}\) with \[A^{\prime}_{\mathrm{e}}=\lim_{\epsilon\to 0^{+}}\iint_{\mathcal{O}_{\epsilon}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{e}}(t)\,\mathrm{d}H_{\mathrm{e}}(s)\right]=\frac{a_{\mathrm{e},1}(2a_{\mathrm{e},2}-a_{\mathrm{e},1})}{(1+a_{\mathrm{ei},1})(1+a_{\mathrm{ei},2})}\,,\] \[B^{\prime}_{\mathrm{ei}}=2\lim_{\epsilon\to 0^{+}}\iint_{\mathcal{O}_{\epsilon}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\mathrm{ei}}(t,s)\,\mathrm{d}H_{\mathrm{e}}(t)\,\mathrm{d}H_{\mathrm{i}}(s)\right]=\frac{a_{\mathrm{e},1}(2a_{\mathrm{i},2}-a_{\mathrm{i},1})+a_{\mathrm{i},1}(2a_{\mathrm{e},2}-a_{\mathrm{e},1})}{(1+a_{\mathrm{ei},1})(1+a_{\mathrm{ei},2})}\,.\] However, for quadratic terms, one also needs to include the contributions arising from the diagonal region \(\mathcal{D}_{\epsilon}\), as suggested by the first-order jump discontinuity of \(R(t,s)=\lim_{\epsilon\to 0^{+}}R_{\epsilon}(t,s)\) on the diagonal \(t=s\). To confirm this point, one can show from the analytical expression of \(R_{\epsilon}(t,u,s,v)\) on \(\mathcal{D}_{\epsilon}\) (see Appendix G), that all relevant second-order derivative terms scale as \(1/\epsilon\) over \(\mathcal{D}_{\epsilon}\).
This scaling leads to the nonzero contributions \(A^{\prime\prime}_{\rm e,\epsilon}\) and \(B^{\prime\prime}_{\rm ei,\epsilon}\) resulting from the integration of these second-order derivative terms over the diagonal region \(\mathcal{D}_{\epsilon}\), even in the limit \(\epsilon\to 0^{+}\). Actually, we find that these contributions also admit well-defined limit values \(A^{\prime\prime}_{\rm e}=\lim_{\epsilon\to 0^{+}}A^{\prime\prime}_{\rm e,\epsilon}\) and \(B^{\prime\prime}_{\rm ei}=\lim_{\epsilon\to 0^{+}}B^{\prime\prime}_{\rm ei,\epsilon}\) with (see Appendix H) \[A^{\prime\prime}_{\rm e}=\lim_{\epsilon\to 0^{+}}\iint_{\mathcal{D}_{\epsilon}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\rm ei}(t,s)\,{\rm d}H_{\rm e}(t)\,{\rm d}H_{\rm e}(s)\right]=\frac{a_{\rm e,12}-c_{\rm ei}}{1+a_{\rm ei,2}}\,,\] \[B^{\prime\prime}_{\rm ei}=2\lim_{\epsilon\to 0^{+}}\iint_{\mathcal{D}_{\epsilon}}e^{\frac{t+s}{\tau}}\mathbb{E}\left[\mathcal{E}_{\rm ei}(t,s)\,{\rm d}H_{\rm e}(t)\,{\rm d}H_{\rm i}(s)\right]=\frac{2c_{\rm ei}}{1+a_{\rm ei,2}}\,.\] Remembering that the expression of \(A^{\prime\prime}_{\rm i}\) can be deduced from that of \(A^{\prime\prime}_{\rm e}\) by symmetry, the above expression defines \(A^{\prime\prime}_{\rm e}\), and thus \(A^{\prime\prime}_{\rm i}\), in terms of the useful auxiliary second-order efficacies \(a_{\rm e,12}=a_{\rm e,1}-a_{\rm e,2}\) and \(a_{\rm i,12}=a_{\rm i,1}-a_{\rm i,2}\). These efficacies will feature prominently in the final variance expression and it is worth mentioning their explicit definitions as \[a_{\rm e,12}=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}}{W_{\rm e}+W_{\rm i}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\,, \tag{27}\] \[a_{\rm i,12}=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm i}}{W_{\rm e}+W_{\rm i}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\,. \tag{28}\] The other quantity of interest is the coefficient \(c_{\rm ei}\), which appears in both expressions for \(A^{\prime\prime}_{\rm e}\) and \(B^{\prime\prime}_{\rm ei}\) above. This nonnegative coefficient, defined as \[c_{\rm ei}=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}W_{\rm i}}{(W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\,, \tag{29}\] entirely captures the (nonnegative) correlation between excitatory and inhibitory inputs and shall be seen as an efficacy as well.

Figure 7: **Comparison of simulation and theory.** (a) Examples of voltage traces obtained via Monte-Carlo simulations of an AONCB neuron for various types of synchrony-based input correlations: uncorrelated \(\rho_{\rm e}=\rho_{\rm i}=\rho_{\rm ei}=0\) (uncorr, yellow), within correlation \(\rho_{\rm e},\rho_{\rm i}>0\) and \(\rho_{\rm ei}=0\) (within corr, cyan), within and across correlation \(\rho_{\rm e},\rho_{\rm i},\rho_{\rm ei}>0\) (across corr, magenta). (b) Comparison of the analytically derived expressions (19) and (30) with numerical estimates obtained via Monte-Carlo simulations for the synchrony conditions considered in (a).

Keeping these definitions in mind, the full quadratic coefficients are finally obtained as \(A_{\rm e}=A^{\prime}_{\rm e}+A^{\prime\prime}_{\rm e}\), \(A_{\rm i}=A^{\prime}_{\rm i}+A^{\prime\prime}_{\rm i}\), and \(B_{\rm ei}=B^{\prime}_{\rm ei}+B^{\prime\prime}_{\rm ei}\). From there, injecting the analytical expressions of the various coefficients in the quadratic form Eq. (21) leads to an explicit formula for the stationary voltage variance in the limit of instantaneous synapses. Then, one is only left
with step \((iii)\), which aims at exhibiting a compact, interpretable form for this formula. We show in Appendix I that lengthy but straightforward algebraic manipulations lead to the following simplified form \[\mathbb{V}\left[V\right]=\frac{1}{1+a_{\mathrm{ei},2}}\times \tag{30}\] \[\left(a_{\mathrm{e},12}(V_{\mathrm{e}}\!-\!\mathbb{E}\left[V \right])^{2}+a_{\mathrm{i},12}(V_{\mathrm{i}}\!-\!\mathbb{E}\left[V\right])^{2} -c_{\mathrm{ei}}(V_{\mathrm{e}}\!-\!V_{\mathrm{i}})^{2}\right)\,.\] Note that for AONCB models, establishing the above exact expression does not require any approximation other than taking the limit of instantaneous synapses. In particular, we neither resort to any diffusion approximations [29, 30] nor invoke the effective-time-constant approximation [59, 60]. We give in Appendix J an alternative factorized form \(\mathbb{V}\left[V\right]\) to justify the nonnegativity of expression Eq. (30). In Fig. 7, we illustrate the excellent agreement of the analytically derived expressions (19) and (30) with numerical estimates obtained via Monte-Carlo simulations of the AONCB dynamics for various input synchrony conditions. Discussing and interpreting quantitatively (30) within a biophysically relevant context will be the main focus of the remaining of this work. ## IV Comparison with experimental data In this section, we leverage the biophysically interpretable formulas (19) and (30) to determine under which synchrony conditions a neuron can achieve the experimentally observed subthreshold variability. In IV.1, we show that for biophysically relevant parameters, asynchronous drives only yields realistic subthreshold variability for a restricted number of large synaptic inputs. In IV.2, we show that realistic subthreshold variability can also be achieved with moderate synaptic inputs by including input synchrony in amounts compatible with measured pairwise spiking correlation. In IV.3, we demonstrate that the asynchronous state hypothesis is incompatible with the persistence of variability in mean-field dynamics, independent of any scaling assumptions about the synaptic weights. ### Independent inputs yield exceedingly small neural variability Cortical activity typically exhibits a high degree of trial-to-trial variability in response to identical stimuli [61, 62], with individual neuronal spiking exhibiting Poisson-process characteristics [63, 3]. Such variability is striking because neurons are thought to typically receive a large number (\(\simeq 10^{4}\)) of synaptic contacts [6]. As a result, in the absence of correlations, neuronal variability should average out, leading to quasi-deterministic neuronal voltage dynamics [64]. To explain how variability seemingly defeats averaging in large neural networks, it has been proposed that neurons operate in a special regime, whereby inhibitory and excitatory drive nearly cancel one another [12, 13, 14, 15, 16]. In such balanced networks, the voltage fluctuations become the main determinant of the dynamics, yielding a Poisson-like spiking activity [12, 13, 14, 15, 16]. Here, we exploit the analytical framework of AONCB neurons to argue that this fluctuation-dominated picture predicts voltage fluctuations that are order of magnitudes smaller than experimental observations [17, 18, 19, 1]. Such observations indicate that the variability of the neuronal membrane voltage exhibits typical variance values of \(\simeq 4-9\mathrm{mV}^{2}\). [14, 16]. 
Balanced models, as virtually all mean-field models, assume that neuronal inputs have zero correlation structure, for which synapses are driven by independent Poisson processes. In particular, excitation and inhibition act independently. Within the framework of AONCB neurons, this latter assumption corresponds to choosing a joint jump distribution of the form \[p_{\mathrm{ei}}(W_{\mathrm{e}},W_{\mathrm{i}})=\frac{b_{\mathrm{e}}}{b}p_{ \mathrm{e}}(W_{\mathrm{e}})\delta(W_{\mathrm{i}})+\frac{b_{\mathrm{i}}}{b}p_{ \mathrm{i}}(W_{\mathrm{i}})\delta(W_{\mathrm{e}})\,.\] where \(\delta(\cdot)\) denotes the Dirac delta function so that \(W_{\mathrm{e}}W_{\mathrm{i}}=0\) with probability one. In other words, there Figure 8: **Voltage mean and variance in the absence of input correlations.** Column (a) depicts the stationary subthreshold response of an AONCB neurons driven by \(K_{\mathrm{e}}=100\) and \(K_{\mathrm{i}}=25\) synapses with typical dimensionless weights \(w_{\mathrm{e}}=0.01\) and \(w_{\mathrm{i}}=0.04\). Column (b) depicts the stationary subthreshold response of an AONCB neurons driven by \(K_{\mathrm{e}}=10^{3}\) and \(K_{\mathrm{i}}=250\) synapses with moderate weights \(w_{\mathrm{e}}=0.001\) and \(w_{\mathrm{i}}=0.004\). For synaptic weights \(w_{\mathrm{e}},w_{\mathrm{i}}\ll 1\), the mean voltage response is identical as \(K_{\mathrm{e}}w_{\mathrm{e}}=K_{\mathrm{i}}w_{\mathrm{i}}=1\) for (a) and (b). By contrast, for \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=\rho_{\mathrm{ei}}=0\), the voltage variance is at least an order-of-magnitude smaller than that experimentally observed (\(4-9\mathrm{mV}^{2}\)) for typical weights as shown in (a). Reaching the lower range of realistic neural variability requires driving the cell via large synaptic weights as shown in (b). is no synchrony between excitatory and inhibitory inputs. Moreover, \(b_{\rm e}\) and \(b_{\rm i}\) are independently specified via Eq. (8) and the overall rate of synaptic events is purely additive: \(b=b_{\rm e}+b_{\rm i}\). Consequently, the crosscorrelation efficacy \(c_{\rm ei}\) in Eq. (30) vanishes and the dimensionless efficacies simplify to \[a_{\rm e,1}=b_{\rm e}\tau\mathbb{E}_{\rm e}\left[1-e^{-W_{\rm e}}\right]\quad \text{and}\quad a_{\rm i,1}=b_{\rm i}\tau\mathbb{E}_{\rm i}\left[1-e^{-W_{\rm i }}\right]\,,\] where the expectations are with respect to the excitatory and inhibitory jump distributions \(p_{\rm e}\) and \(p_{\rm i}\). Further assuming that individual excitatory and inhibitory synapses act independently leads to considering that \(p_{\rm e}\) and \(p_{\rm i}\) depict the size of individual synaptic inputs, as opposed to aggregate events. This corresponds to taking \(\beta_{\rm e}\to\infty\) and \(\beta_{\rm i}\to\infty\) in our parametric model based on beta distributions. Then, as intuition suggests, the overall rates of excitation and inhibition activation are recovered as \(b_{\rm e}=K_{\rm e}r_{\rm e}\) and \(b_{\rm i}=K_{\rm i}r_{\rm i}\), where \(r_{\rm e}\) and \(r_{\rm i}\) are the individual spiking rates. In order to investigate our findings numerically, we consider that excitatory and inhibitory synaptic weights have typical values denoted by \(w_{\rm e}\) and \(w_{\rm i}\), respectively, so that \(p_{\rm e}(W_{\rm e})=\delta(W_{\rm e}-w_{\rm e})\) and \(p_{\rm i}(W_{\rm i})=\delta(W_{\rm i}-w_{\rm i})\). Such typical weights can be estimated via biophysical considerations within the framework of AONCB neurons. 
In order to develop these considerations, we assume the values \(V_{\rm i}=-10\text{mV}<V_{\rm L}=0<V_{\rm e}=60\text{mV}\) for reversal potentials and \(\tau=15\text{ms}\) for the passive membrane time constant. Given these values, we set the upper range of excitatory synaptic weights so that when delivered to a neuron close to its resting state, unitary excitatory inputs cause peak membrane fluctuations of \(\simeq 0.5\text{mV}\) at the soma, attained after a peak time of \(\simeq 5\text{ms}\). Such fluctuations correspond to typically large _in-vivo_ synaptic activations of thalamo-cortical projections in rats [65]. Although activations of similar amplitude have been reported for cortico-cortical connections [66, 67], recent large-scale _in vivo_ studies has revealed that cortico-cortical excitatory connections are typically much weaker [68, 69]. At the same time, these studies have shown that inhibitory synaptic conductances are about fourfold larger than excitatory ones, but with similar timescales. Fitting these values within the framework of AONCB neurons for \(\epsilon=\tau_{s}/\tau\simeq 1/4\) reveals that the largest possible synaptic inputs correspond to dimensionless weights such that \(w_{\rm e}\simeq 0.01\) and \(w_{\rm i}\simeq 0.04\). Furthermore, we will assume that the more numerous but comparatively moderate cortico-cortical recurrent connections are an order of magnitude weaker than typical thalamo-cortical projections, i.e., \(w_{\rm e}\simeq 0.001\) and \(w_{\rm i}\simeq 0.004\). Such a range is in keeping with estimates used in [30]. Thus, independent individual synaptic weights are small in the sense that \(w_{\rm e},w_{\rm i}\ll 1\), which warrants neglecting exponential corrections for the evaluation of the synaptic efficacies, at least in the absence of synchrony-based correlations. Accordingly, we have \[a_{\rm e,1}\simeq K_{\rm e}r_{\rm e}\tau w_{\rm e}\quad\text{and}\quad a_{\rm e,12}\simeq K_{\rm e}r_{\rm e}\tau w_{\rm e}^{2}/2\,,\] as well as symmetric expressions for inhibitory efficacies. Plugging these values in Eq. (30) yields the classical mean-field estimate for the stationary variance \[\mathbb{V}\left[V\right]\simeq\frac{K_{\rm e}r_{\rm e}w_{\rm e}^{2}(V_{\rm e }-\mathbb{E}\left[V\right])^{2}+K_{\rm i}r_{\rm i}w_{\rm i}^{2}(V_{\rm i}- \mathbb{E}\left[V\right])^{2}}{2(1/\tau+K_{\rm e}r_{\rm e}w_{\rm e}+K_{\rm i}r _{\rm i}w_{\rm i})}\,,\] which is exactly the same expression as that derived via the diffusion and effective-time-constant approximations in [35, 36]. However, observe that the only approximation we made in obtaining the above expression is to neglect exponential corrections due to the relative weakness of biophysically relevant synaptic weights, which we hereafter refer to as the small-weight approximation. In Fig. 8, we represent the stationary mean \(\mathbb{E}\left[V\right]\) and variance \(\mathbb{V}\left[V\right]\) as a function of the neuronal spiking input rates \(r_{e}\) and \(r_{\rm i}\), but for distinct values of synaptic weights \(w_{\rm e}\) and \(w_{\rm i}\). In Fig. 8a, we consider synaptic weights as large as biophysically admissible based on recent _in-vivo_ studies [68, 69], i.e., \(w_{\rm e}=0.01\) and \(w_{\rm i}=0.04\). By contrast, in Fig. 8b, we consider moderate synaptic weights \(w_{\rm e}=0.001\) and \(w_{\rm i}=0.004\), which yield somatic postsynaptic deflections of typical amplitudes. 
In both cases, we consider input numbers \(K_{\rm e}\) and \(K_{\rm i}\) such that the mean voltage \(\mathbb{E}\left[V\right]\) covers the same biophysical range of values as \(r_{\rm e}\) and \(r_{\rm i}\) vary between 0Hz and 50Hz.

Figure 9: **Dependence on the number of inputs and the synaptic weights in the absence of correlations.** Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by a varying number of excitatory synapses \(K_{\rm e}\) with varying weight \(w_{\rm e}\) at rate \(r_{\rm e}=20\text{Hz}\), with background inhibitory drive given by \(K_{\rm i}=250\) with moderate weights \(w_{\rm i}=0.004\) and \(r_{\rm i}=20\text{Hz}\). Column (b) depicts the same as in column (a) but for a background inhibitory drive given by \(K_{\rm i}=25\) with large weights \(w_{\rm i}=0.04\) and \(r_{\rm i}=20\text{Hz}\). For both conditions, achieving realistic levels of variance, i.e., \(\mathbb{V}\left[V\right]\simeq 4-9\text{mV}^{2}\), while ensuring a biophysically relevant mean range of variation, i.e., \(\Delta\mathbb{E}\left[V\right]\simeq 10\text{-}20\text{mV}\), is only possible for large synapses: \(w_{\rm e}\geq 0.015\) for moderate inhibitory synapses in Fig. 9a and \(w_{\rm e}\geq 0.01\) for large synapses.

Given a zero resting potential, we set this biophysical range to be bounded by \(\Delta\mathbb{E}\left[V\right]\leq 20\)mV as typically observed experimentally in electrophysiological recordings. These conditions correspond to constant aggregate weights set to \(K_{\rm e}w_{\rm e}=K_{\rm i}w_{\rm i}=1\) so that \[K_{\rm e}r_{\rm e}w_{\rm e}=K_{\rm i}r_{\rm i}w_{\rm i}\leq 50\mathrm{Hz}\simeq 1/\tau\,.\] This implies that the AONCB neurons under consideration do not reach the high-conductance regime for which the passive conductance can be neglected, i.e., \(K_{\rm e}r_{\rm e}w_{\rm e}+K_{\rm i}r_{\rm i}w_{\rm i}\gg 1/\tau\) [70]. Away from the high-conductance regime, the variance magnitude is controlled by the denominator of Eq. (31). Accordingly, the variance in both cases is primarily dependent on the excitatory rate \(r_{\rm e}\) since for \(K_{\rm e}w_{\rm e}=K_{\rm i}w_{\rm i}=1\), the effective excitatory driving force \(F_{\rm e}=K_{\rm e}w_{\rm e}^{2}(V_{\rm e}-\mathbb{E}\left[V\right])^{2}\) dominates the effective inhibitory driving force \(F_{\rm i}=K_{\rm i}w_{\rm i}^{2}(V_{\rm i}-\mathbb{E}\left[V\right])^{2}\). This is because the neuronal voltage typically sits close to the inhibitory reversal potential but far from the excitatory reversal potential \(V_{\rm e}-\mathbb{E}\left[V\right]>\mathbb{E}\left[V\right]-V_{\rm i}\). For instance, when close to rest \(\mathbb{E}\left[V\right]\simeq 0\), the ratio of the effective driving forces is \((K_{\rm e}w_{\rm e}^{2}V_{\rm e}^{2})/(K_{\rm i}w_{\rm i}^{2}V_{\rm i}^{2})\simeq 9\) fold in favor of excitation. Importantly, the magnitude of the variance is distinct for moderate synapses and for large synapses. This is because for constant aggregate weights \(K_{\rm e}w_{\rm e}=K_{\rm i}w_{\rm i}=1\), the ratio of effective driving forces for large and moderate synapses scales in keeping with the ratio of the weights, and so does the ratio of variances away from the high-conductance regime. Thus we have \[F_{\rm e}|_{w_{\rm e}=10^{-2}}/F_{\rm e}|_{w_{\rm e}=10^{-3}}=F_{\rm i}|_{w_{\rm i}=4\times 10^{-2}}/F_{\rm i}|_{w_{\rm i}=4\times 10^{-3}}=10\,,\] and the variance decreases by one order of magnitude from large weights in Fig. 8a to moderate weights in Fig. 8b.
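For readers who want to reproduce these orders of magnitude, the short sketch below evaluates the small-weight, zero-correlation estimates of the stationary mean (Eq. (19) with \(I=0\)) and variance for the two weight regimes of Fig. 8; the common 25 Hz rate is an assumed mid-range value, not a parameter taken from the figures.

```python
def asynchronous_estimate(K_e, w_e, K_i, w_i, r_e, r_i,
                          tau=0.015, V_e=60.0, V_i=-10.0):
    """Small-weight, zero-correlation estimates of E[V] and V[V]."""
    a_e1 = K_e * r_e * tau * w_e
    a_i1 = K_i * r_i * tau * w_i
    EV = (a_e1 * V_e + a_i1 * V_i) / (1.0 + a_e1 + a_i1)   # Eq. (19), I = 0
    num = (K_e * r_e * w_e**2 * (V_e - EV)**2
           + K_i * r_i * w_i**2 * (V_i - EV)**2)
    den = 2.0 * (1.0 / tau + K_e * r_e * w_e + K_i * r_i * w_i)
    return EV, num / den

# large weights (Fig. 8a) versus moderate weights (Fig. 8b), assumed 25 Hz rates
print(asynchronous_estimate(100, 0.01, 25, 0.04, 25.0, 25.0))     # variance ~ 4 mV^2
print(asynchronous_estimate(1000, 0.001, 250, 0.004, 25.0, 25.0)) # variance ~ 0.4 mV^2
```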
The above numerical analysis reveals that achieving realistic levels of subthreshold variability for a biophysical mean range of variation requires AONCB neurons to be exclusively driven by large synaptic weights. This is confirmed by considering the voltage mean \(\mathbb{E}\left[V\right]\) and variance \(\mathbb{V}\left[V\right]\) in Fig. 9 as a function of the number of inputs \(K_{\rm e}\) and of the synaptic weights \(w_{\rm e}\) for a given level of inhibition. We choose this level of inhibition to be set by \(K_{\rm i}=250\) moderate synapses \(w_{\rm i}=0.004\) with \(r_{\rm i}=20\)Hz in Fig. 9a and by \(K_{\rm i}=25\) large synapses \(w_{\rm i}=0.04\) with \(r_{\rm i}=20\)Hz in Fig. 9b. As expected, assuming that \(r_{\rm e}=20\)Hz in the absence of input correlations, the voltage mean \(\mathbb{E}\left[V\right]\) only depends on the product \(K_{\rm e}w_{\rm e}\), which yields similar mean range of variations for \(K_{\rm e}\) varying up to 2000 in Fig. 9a and up to 200 in Fig. 9b. Thus, it is possible to achieve the same range of variations as with moderate synaptic with a fewer number of larger synaptic weights. By contrast, the voltage variance \(\mathbb{V}\left[V\right]\) only achieves realistic levels for large synaptic weights in both conditions, with \(w_{\rm e}\geq 0.015\) for moderate inhibitory background synapses in Fig. 9a and \(w_{\rm e}\geq 0.01\) for large inhibitory background synapses in Fig. 9b. ### Including input correlations yields realistic subthreshold variability With zero correlation structure, achieving the experimentally observed variability necessitates an excitatory drive mediated via synaptic weights \(w_{\rm e}\simeq 0.01\), which corresponds to the upper bounds of the biophysically admissible range and is in agreement with numerical results presented in [30]. Given such synaptic weights, every single excitatory synaptic activation would cause a post-synaptic potential with a peak amplitude larger or equal to 0.5mV. Albeit possible, this is unrealistic given the wide distribution of amplitudes observed experimentally, whereby the vast majority of synaptic events are small to moderate, at least for cortico-cortical connections [68; 69]. In principle, one can remedy this issue by allowing for synchronous activation of, say, \(k_{\rm e}=10\) synapses with moderate weight \(w_{\rm e}=0.001\), as it amounts to the activation of a single synapse with large weight \(k_{\rm e}w_{\rm e}=0.01\). A weaker assumption that yields a similar increase in neural variability is to only ask for synapses to tend to synchronize probabilistically, which amounts to require \(k_{\rm e}\) to be a random variable with some distribution mass on \(\{k_{\rm e}>1\}\). This exactly amounts to model the input drive via a jump process as presented in Section II, with a jump distribution \(p_{\rm e}\) that probabilistically captures this degree of input synchrony. In turn, this distribution \(p_{\rm e}\) corresponds to a precise input correlation \(\rho_{\rm e}\) via Eq. (7). With these preliminary remarks in mind, we explore the role of input correlations in shaping the voltage variability of AONCB neurons with instantaneous synapses. Experimental estimates of the spiking correlations are typically thought as weak with coefficients ranging from 0.01 to 0.04 [7; 8; 9]. However, it is important to note that such weak values do not warrant the neglect of correlations owing to the typically high number of synaptic connections. 
Actually, if \(K_{\rm e}\) denotes the number of excitatory inputs, all assumed to play exchangeable roles, an empirical criterion to decide whether a correlation coefficient \(\rho_{\rm e}\) is weak is that \(\rho_{\rm e}<1/K_{\rm e}\) [23; 24]. Assuming the lower estimate of \(\rho_{\rm e}\simeq 0.01\), this criterion is only met for fewer than \(\simeq 100\) inputs, which is well below the typical number of excitatory synapses for cortical neurons. We confirm the impact of nonzero correlation in Fig. 10 where we consider the cases of moderate weights \(w_{\rm e}=0.001\) and \(w_{\rm i}=0.004\) and large weights \(w_{\rm e}=0.01\) and \(w_{\rm i}=0.04\) as in Fig. 8 but for \(\rho_{\rm e}=\rho_{\rm i}=0.03\). Specifically, we assume in both cases that the AONCB neuron is subjected to two independent beta-binomial-derived compound Poisson process drives with rates \(b_{\rm e}\) and \(b_{\rm i}\), respectively. We compute both rates of synaptic events \(b_{\rm e}\) and \(b_{\rm i}\) via (8) by setting \(\beta_{\rm e}=\beta_{\rm i}=1/\rho_{\rm e}-1=1/\rho_{\rm i}-1\) and for the corresponding numbers of inputs \(K_{\rm e}\) and \(K_{\rm i}\) and spiking rates \(r_{\rm e}\) and \(r_{\rm i}\). This ensures that the mean numbers of synaptic activations \(b_{\rm e}\mathbb{E}_{\rm e}\left[k_{\rm e}\right]=K_{\rm e}r_{\rm e}\) and \(b_{\rm i}\mathbb{E}_{\rm i}\left[k_{\rm i}\right]=K_{\rm i}r_{\rm i}\) remain constant when compared with Fig. 8. As a result, the mean response of the AONCB neuron is essentially left unchanged by the presence of correlations, with virtually identical biophysical range of variations \(\Delta\mathbb{E}_{\mathrm{ei}}\left[V\right]\simeq 10\)-\(20\mathrm{mV}\). This is because for correlation \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}\simeq 0.03\), the aggregate weights still satisfy \(k_{\mathrm{e}}w_{\mathrm{e}},k_{\mathrm{i}}w_{\mathrm{i}}<1\) with probability close to one given that \(K_{\mathrm{e}}w_{\mathrm{e}}=K_{\mathrm{i}}w_{\mathrm{i}}=1\). Then, in the absence of crosscorrelation, i.e., \(\rho_{\mathrm{ei}}=0\), we still have \[a_{\mathrm{e},1}=b_{\mathrm{e}}\tau\mathbb{E}_{\mathrm{e}}\left[1-e^{-k_{\mathrm{e}}w_{\mathrm{e}}}\right]\simeq b_{\mathrm{e}}\tau w_{\mathrm{e}}\mathbb{E}_{\mathrm{e}}\left[k_{\mathrm{e}}\right]=K_{\mathrm{e}}r_{\mathrm{e}}\tau w_{\mathrm{e}}\,,\] as well as \(a_{\mathrm{i},1}\simeq K_{\mathrm{i}}r_{\mathrm{i}}\tau w_{\mathrm{i}}\) by symmetry. However, for both moderate and large synaptic weights, the voltage variance \(\mathbb{V}\left[V\right]\) now exhibits similar or slightly larger magnitudes than observed experimentally. This is because the second-order efficacies involved in the numerator of Eq. (30) with \(c_{\mathrm{ei}}=0\) satisfy \[a_{\mathrm{e},12}=\frac{b_{\mathrm{e}}\tau}{2}\mathbb{E}_{\mathrm{e}}\left[\left(1-e^{-k_{\mathrm{e}}w_{\mathrm{e}}}\right)^{2}\right]\simeq\frac{b_{\mathrm{e}}\tau w_{\mathrm{e}}^{2}}{2}\mathbb{E}_{\mathrm{e}}\left[k_{\mathrm{e}}^{2}\right]\,,\] and a symmetric relation for \(a_{\mathrm{i},12}\). In turn, using the parametric form Eq. (6) for \(p_{\mathrm{e},k}\), one can show that \[a_{\mathrm{e},12}\simeq\left(1+\rho_{\mathrm{e}}(K_{\mathrm{e}}-1)\right)\frac{K_{\mathrm{e}}r_{\mathrm{e}}\tau w_{\mathrm{e}}^{2}}{2}\,,\] where we recognize \(K_{\mathrm{e}}r_{\mathrm{e}}\tau w_{\mathrm{e}}^{2}/2=a_{\mathrm{e},12}|_{\rho_{\mathrm{e}}=0}\) as the second-order efficacy in the absence of correlations from Fig. 8. A similar statement holds for the inhibition-related second-order efficacy \(a_{\mathrm{i},12}\).
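The size of this synchrony-driven amplification is easy to gauge numerically; the sketch below, with an illustrative helper name, evaluates the factor \(1+\rho_{\rm e}(K_{\rm e}-1)\) multiplying \(a_{\rm e,12}\) for the parameters considered here.

```python
def synchrony_gain(rho, K):
    """Amplification of the second-order efficacy a_{e,12} (and hence of the
    excitatory variance contribution) by within-population correlation rho."""
    return 1.0 + rho * (K - 1)

# rho_e = 0.03 as in Fig. 10
print(synchrony_gain(0.03, 100))    # ~ 4-fold for K_e = 100 large synapses
print(synchrony_gain(0.03, 1000))   # ~ 31-fold for K_e = 1000 moderate synapses
```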
This shows that correlations increase neural variability whenever \(\rho_{\mathrm{e}}>1/K_{\mathrm{e}}\) or \(\rho_{\mathrm{i}}>1/K_{\mathrm{i}}\), which exactly coincides with the empirical criterion given previously to assess the relative weakness of correlations. Recapitulating all these statements shows that including correlation separately in the excitatory and inhibitory inputs yields an increase in the neural variability. Specifically, when excitation and inhibition act independently, i.e., \(\rho_{\mathrm{ei}}=0\), we find in Appendix K that \[\mathbb{V}\left[V\right]|_{\rho_{\mathrm{ei}}=0}-\mathbb{V}\left[V\right]|_{\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=0}\simeq\frac{\rho_{\mathrm{e}}(K_{\mathrm{e}}-1)K_{\mathrm{e}}r_{\mathrm{e}}w_{\mathrm{e}}^{2}(V_{\mathrm{e}}-\mathbb{E}\left[V\right])^{2}+\rho_{\mathrm{i}}(K_{\mathrm{i}}-1)K_{\mathrm{i}}r_{\mathrm{i}}w_{\mathrm{i}}^{2}(V_{\mathrm{i}}-\mathbb{E}\left[V\right])^{2}}{2(1/\tau+K_{\mathrm{e}}r_{\mathrm{e}}w_{\mathrm{e}}+K_{\mathrm{i}}r_{\mathrm{i}}w_{\mathrm{i}})}\,, \tag{31}\] which follows from the fact that the small-weight approximation for \(\mathbb{E}\left[V\right]\) is independent of correlations and from neglecting the exponential corrections due to the nonzero size of the synaptic weights. In particular, the above formula remains valid as long as the correlations \(\rho_{\mathrm{e}}\) and \(\rho_{\mathrm{i}}\) are weak enough so that the aggregate weights satisfy \(k_{\mathrm{e}}w_{\mathrm{e}},k_{\mathrm{i}}w_{\mathrm{i}}<1\) with probability close to one. To inspect the relevance of exponential corrections, we estimate in Appendix L the error incurred by neglecting exponential corrections. Focusing on the case of excitatory inputs, we find that for correlation coefficients \(\rho_{\mathrm{e}}\leq 0.05\), neglecting exponential corrections incurs less than a 3% error if the number of inputs is smaller than \(K_{\mathrm{e}}\leq 1000\) for moderate synaptic weight \(w_{\mathrm{e}}=0.001\) or than \(K_{\mathrm{e}}\leq 100\) for large synaptic weight \(w_{\mathrm{e}}=0.01\). The voltage variance shown in Fig. 10 for \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=0.03\) and \(\rho_{\mathrm{ei}}=0\) exceeds the typical levels measured _in vivo_, i.e., \(4-9\mathrm{mV}^{2}\), for large synaptic weights. The inclusion of correlations between excitation and inhibition, i.e., \(\rho_{\mathrm{ei}}>0\), can reduce the voltage variance to more realistic levels. We confirm this point in Fig. 11 where we consider the cases of moderate weights \(w_{\mathrm{e}}=0.001\) and \(w_{\mathrm{i}}=0.004\) and large weights \(w_{\mathrm{e}}=0.01\) and \(w_{\mathrm{i}}=0.04\) as in Fig. 10 but for \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=\rho_{\mathrm{ei}}=0.03\). Positive crosscorrelation between excitation and inhibition only marginally impacts the mean voltage response.

Figure 10: **Voltage mean and variance in the presence of excitatory and inhibitory input correlations but without correlation across excitation and inhibition: \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}>\rho_{\mathrm{ei}}=0\).** Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by \(K_{\mathrm{e}}=100\) and \(K_{\mathrm{i}}=25\) synapses with typical dimensionless weights \(w_{\mathrm{e}}=0.01\) and \(w_{\mathrm{i}}=0.04\).
Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by \(K_{\mathrm{e}}=10^{3}\) and \(K_{\mathrm{i}}=250\) synapses with atypically large dimensionless weights \(w_{\mathrm{e}}=0.001\) and \(w_{\mathrm{i}}=0.004\). For synaptic weights \(w_{\mathrm{e}},w_{\mathrm{i}}\ll 1\), the mean voltage response is identical as \(K_{\mathrm{e}}w_{\mathrm{e}}=K_{\mathrm{i}}w_{\mathrm{i}}=1\) for (a) and (b). By contrast with the case of no correlation in Fig. 8, for \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=0.03\) and \(\rho_{\mathrm{ei}}=0\), the voltage variance achieves levels similar to those experimentally observed (\(4-9\mathrm{mV}^{2}\)) for typical weights as shown in (a), but slightly too large levels for large synaptic weights as shown in (b).

This is due to the fact that exponential corrections become slightly more relevant as the presence of crosscorrelation leads to larger aggregate weights: \(W_{\mathrm{e}}+W_{\mathrm{i}}\) with \(W_{\mathrm{e}}\) and \(W_{\mathrm{i}}\) possibly being jointly positive. By contrast with this marginal impact on the mean response, the voltage variance is significantly reduced when excitation and inhibition are correlated. This is in keeping with the intuition that the net effect of such crosscorrelation is to cancel excitatory and inhibitory synaptic inputs with one another, before they can cause voltage fluctuations. The amount by which the voltage variance is reduced can be quantified in the small-weight approximation. In this approximation, we show in Appendix K that the efficacy \(c_{\mathrm{ei}}\) capturing the impact of crosscorrelations simplifies to \[c_{\mathrm{ei}}\simeq\frac{b\tau}{2}\mathbb{E}_{\mathrm{ei}}\left[W_{\mathrm{e}}W_{\mathrm{i}}\right]=(\rho_{\mathrm{ei}}\sqrt{r_{\mathrm{e}}r_{\mathrm{i}}}\tau/2)(K_{\mathrm{e}}w_{\mathrm{e}})(K_{\mathrm{i}}w_{\mathrm{i}})\,.\] Using the above simplified expression and invoking the fact that the small-weight approximation for \(\mathbb{E}\left[V\right]\) is independent of correlations shows a decrease in the voltage variance by the amount \[\mathbb{V}\left[V\right]-\mathbb{V}\left[V\right]|_{\rho_{\mathrm{ei}}=0}\simeq-\frac{\rho_{\mathrm{ei}}\sqrt{r_{\mathrm{e}}r_{\mathrm{i}}}(K_{\mathrm{e}}w_{\mathrm{e}})(K_{\mathrm{i}}w_{\mathrm{i}})(V_{\mathrm{e}}-\mathbb{E}\left[V\right])(\mathbb{E}\left[V\right]-V_{\mathrm{i}})}{1/\tau+K_{\mathrm{e}}r_{\mathrm{e}}w_{\mathrm{e}}+K_{\mathrm{i}}r_{\mathrm{i}}w_{\mathrm{i}}}\leq 0\,. \tag{32}\] Despite the above reduction in variance, we show in Appendix K that positive correlations always cause an overall increase of neural variability.

### Variability-preserving scaling limits

Numerical analysis reveals that the correlations must significantly impact the voltage variability whenever the numbers of inputs are such that \(K_{\mathrm{e}}>1/\rho_{\mathrm{e}}\) or \(K_{\mathrm{i}}>1/\rho_{\mathrm{i}}\). Spiking correlations are typically measured _in vivo_ to be larger than \(0.01\). Therefore, synchrony must shape the response of neurons that are driven by more than \(100\) active inputs, which is presumably allowed by the typically high number of synaptic contacts (\(\simeq 10^{4}\)) in cortex [6]. In practice, we find that synchrony can explain the relatively high level of neural variability observed in the subthreshold neuronal responses. Beyond these practical findings, we predict that input synchrony also has significant theoretical implications with respect to modeling spiking networks.
Analytically tractable models for cortical activity are generally obtained by considering spiking networks in the infinite-size limit. Such infinite-size networks are tractable because the neurons they comprise only interact via population averages, erasing any role for nonzero correlation structure. Distinct mean-field models assume that synaptic weights vanish according to distinct scalings with respect to the number of synapses, i.e., \(w_{\mathrm{e}/\mathrm{i}}\to 0\) as \(K_{\mathrm{e}/\mathrm{i}}\rightarrow\infty\). In particular, classical mean-field limits consider the scaling \(w_{\mathrm{e}/\mathrm{i}}\sim 1/K_{\mathrm{e}/\mathrm{i}}\), balanced mean-field limits consider the scaling \(w_{\mathrm{e}/\mathrm{i}}\sim 1/\sqrt{K_{\mathrm{e}/\mathrm{i}}}\), with \(K_{\mathrm{e}}w_{\mathrm{e}}-K_{\mathrm{i}}w_{\mathrm{i}}=O(1)\), and strong coupling limits consider the scaling \(w_{\mathrm{e}/\mathrm{i}}\sim 1/\ln K_{\mathrm{e}/\mathrm{i}}\), with \(K_{\mathrm{e}}w_{\mathrm{e}}-K_{\mathrm{i}}w_{\mathrm{i}}=O(1)\) as well. Importantly, all these mean-field limits assume no correlation, and in particular, no synchrony. Our analysis of AONCB neurons shows that the neglect of synchrony-based correlations is incompatible with the maintenance of neural variability in the infinite-size limit. Indeed, Eq. (31) shows that for any scaling with \(1/w_{\mathrm{e}}=o(K_{\mathrm{e}})\) and \(1/w_{\mathrm{i}}=o(K_{\mathrm{i}})\), as for all the mean-field limits mentioned above, we have \[\mathbb{V}\left[V\right]=O(w_{\mathrm{e}})+O(w_{\mathrm{i}})\xrightarrow[]{K_{\mathrm{e}},K_{\mathrm{i}}\rightarrow\infty}0\,.\] Thus, in the absence of correlation and independent of the synaptic weight scaling, the subthreshold voltage variance of AONCB neurons must vanish in the limit of arbitrarily large numbers of synapses. We expect such decay of the voltage variability to be characteristic of conductance-based models in the absence of input correlation.

Figure 11: **Voltage mean and variance in the presence of excitatory and inhibitory input correlations and with correlation across excitation and inhibition: \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=\rho_{\mathrm{ei}}>0\).** Column (a) depicts the stationary subthreshold response of an AONCB neuron driven by \(K_{\mathrm{e}}=10^{3}\) and \(K_{\mathrm{i}}=250\) synapses with typical dimensionless weights \(w_{\mathrm{e}}=0.001\) and \(w_{\mathrm{i}}=0.004\). Column (b) depicts the stationary subthreshold response of an AONCB neuron driven by \(K_{\mathrm{e}}=100\) and \(K_{\mathrm{i}}=25\) synapses with atypically large dimensionless weights \(w_{\mathrm{e}}=0.01\) and \(w_{\mathrm{i}}=0.04\). For synaptic weights \(w_{\mathrm{e}},w_{\mathrm{i}}\ll 1\), the mean voltage response is identical as \(K_{\mathrm{e}}w_{\mathrm{e}}=K_{\mathrm{i}}w_{\mathrm{i}}=1\) for (a) and (b). Compared with the case of no crosscorrelation in Fig. 10, for \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=\rho_{\mathrm{ei}}=0.03\), the voltage variance is reduced to a biophysical range similar to that observed experimentally (\(4-9\mathrm{mV}^{2}\)) for typical weights as shown in (a), as well as for atypically large synaptic weights as shown in (b).

Indeed, dimensional analysis suggests that the voltage variance for both current-based and conductance-based models is generically obtained via normalization by the reciprocal of the membrane time constant.
However, by contrast with current-based models, the reciprocal of the membrane time constant for conductance-based models, i.e., \(1/\tau+K_{\mathrm{e}}w_{\mathrm{e}}r_{\mathrm{e}}+K_{\mathrm{i}}w_{\mathrm{i}}r_{\mathrm{i}}\), involves contributions from synaptic conductances. Thus, to ensure nonzero asymptotic variability, the denominator scaling \(O(K_{\mathrm{e}}w_{\mathrm{e}})+O(K_{\mathrm{i}}w_{\mathrm{i}})\) must be balanced by the natural scaling of the Poissonian input drives, i.e., \(O(K_{\mathrm{e}}w_{\mathrm{e}}^{2})+O(K_{\mathrm{i}}w_{\mathrm{i}}^{2})\). In the absence of input correlations, this is only possible for fixed-size weights, which is incompatible with any scaling assumption. Assuming fixed-size weights and taking the limit \(K_{\mathrm{e}/\mathrm{i}}\to\infty\) with fixed rate-input ratio \(\gamma_{\mathrm{ei}}=(K_{\mathrm{e}}r_{\mathrm{e}})/(K_{\mathrm{i}}r_{\mathrm{i}})\) yields \[\frac{\mathbb{V}\left[V\right]}{(V_{\mathrm{e}}-\mathbb{E}\left[V\right])(\mathbb{E}\left[V\right]-V_{\mathrm{i}})}=\frac{1+\gamma_{\mathrm{ei}}}{\left(\frac{1+e^{-w_{\mathrm{i}}}}{1-e^{-w_{\mathrm{e}}}}\right)+\gamma_{\mathrm{ei}}\left(\frac{1+e^{-w_{\mathrm{e}}}}{1-e^{-w_{\mathrm{i}}}}\right)}\leq 1\,, \tag{33}\] which follows from neglecting the passive leak in the high-conductance regime [70]. As we necessarily have \(V_{\mathrm{i}}<\mathbb{E}\left[V\right]<V_{\mathrm{e}}\) for \(0<\gamma_{\mathrm{ei}}<\infty\), this shows that variability is preserved in the infinite-size limit for fixed synaptic weights in the absence of correlations. This observation contrasts with current-based models, for which variability diverges, and it holds independently of any balance condition. Moreover, observe that equality in (33) is achieved for \(w_{\mathrm{e}/\mathrm{i}}\to\infty\). This indicates that even in the limit of arbitrarily large weights, variability is maintained, but the voltage distribution becomes bimodal with support on \(\{V_{\mathrm{e}},V_{\mathrm{i}}\}\). For small weights \(4w_{\mathrm{e}}=w_{\mathrm{i}}\ll 1\), the voltage distribution remains unimodal around its mean value \(\mathbb{E}\left[V\right]\). Actually, one can check that maximum variance is attained for \(\gamma_{\mathrm{ei}}\simeq 6.6\), which corresponds to a depolarization of \(\mathbb{E}\left[V\right]\simeq 27.5\)mV above resting potential. This value yields the upper bound estimate \(\mathbb{V}\left[V\right]\leq 1200w_{\mathrm{e}}\mathrm{mV}^{2}\), which amounts to \(\simeq 1.2\)mV\({}^{2}\) for moderate synaptic weight (\(w_{\mathrm{e}}=0.001\)) and \(\simeq 12\)mV\({}^{2}\) for large synaptic weights (\(w_{\mathrm{e}}=0.01\)). Thus, as expected, the neglect of the passive conductance compared to synaptic conductances incurs a moderate but significant increase of neural variability. The above discussion shows that naive infinite-size limits with fixed-size synaptic weights preserve neural variability in conductance-based models, at least for AONCB neurons. However, these naive limits are problematic in that they restrict modeled neurons to operate in the high-conductance regime, whereby the passive conductance properties of the cell play no role. Such a regime is biophysically unrealistic as it implies that the cell would respond to perturbations infinitely fast. We propose to address this issue by considering a new type of variability-preserving limit models, obtained for the classical scaling but in the presence of synchrony-based correlations.
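As a quick numerical check of the bound in Eq. (33), the following Python sketch sweeps the rate-input ratio \(\gamma_{\mathrm{ei}}\) for the weight relation \(w_{\mathrm{i}}=4w_{\mathrm{e}}\) used above; the \(\gamma_{\mathrm{ei}}\) grid and the particular weight values are arbitrary choices for illustration.

```python
import numpy as np

def variance_ratio(gamma_ei, w_e, w_i):
    # Dimensionless ratio V[V] / ((V_e - E[V]) (E[V] - V_i)) of Eq. (33),
    # valid in the fixed-weight, infinite-size, high-conductance limit.
    a = (1 + np.exp(-w_i)) / (1 - np.exp(-w_e))
    b = (1 + np.exp(-w_e)) / (1 - np.exp(-w_i))
    return (1 + gamma_ei) / (a + gamma_ei * b)

gammas = np.logspace(-2, 2, 201)          # arbitrary grid of rate-input ratios
for w_e in (0.001, 0.01, 10.0):           # moderate, large, and near-saturating weights
    ratio = variance_ratio(gammas, w_e, 4 * w_e)
    print(f"w_e = {w_e:g}: max ratio over gamma = {ratio.max():.4f} (upper bound is 1)")
```

For small weights the ratio scales with \(w_{\mathrm{e}}\), whereas for very large weights it approaches one, in line with the bimodal limit discussed above.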
For simplicity, let us consider our correlated input model with excitation alone in the limit of an arbitrarily large number of inputs \(K_{\mathrm{e}}\to\infty\). When \(\rho_{\mathrm{e}}>0\), the small-weight approximation Eq. (31) suggests that adopting the scaling \(w_{\mathrm{e}}\sim\Omega_{\mathrm{e}}/K_{\mathrm{e}}\), where \(\Omega_{\mathrm{e}}\) denotes the aggregate synaptic weight, yields a nonzero contribution when \(K_{\mathrm{e}}\to\infty\) as the numerator scales as \(O(K_{\mathrm{e}}^{2}w_{\mathrm{e}}^{2})\). It turns out that this choice can be shown to be valid without resorting to any approximations. Indeed, under the classical scaling assumption, we show in Appendix M that the discrete jump distribution \(p_{\mathrm{e},k}\) weakly converges to the continuous density \(\mathrm{d}\nu_{\mathrm{e}}/\mathrm{d}w\) in the sense that \[b_{\mathrm{e}}\sum_{k=1}^{K_{\mathrm{e}}}p_{\mathrm{e},k}\delta\left(\frac{w}{\Omega_{\mathrm{e}}}-\frac{k}{K_{\mathrm{e}}}\right)\,\mathrm{d}w\xrightarrow{K_{\mathrm{e}}\to\infty}\nu_{\mathrm{e}}(\mathrm{d}w)=\frac{r_{\mathrm{e}}\beta_{\mathrm{e}}}{w}\left(1-\frac{w}{\Omega_{\mathrm{e}}}\right)^{\beta_{\mathrm{e}}-1}\,\mathrm{d}w\,. \tag{34}\] The above density has infinite mass over \([0,\Omega_{\mathrm{e}}]\) owing to its diverging behavior at zero and is referred to as a degenerate beta distribution. In spite of its degenerate nature, it is known that densities of the above form define well-posed processes, the so-called beta processes, which have been studied extensively in the field of nonparametric Bayesian inference [50; 51]. Originally introduced by Hjort for survival analysis [49], beta processes are examples of positive completely random measures \(Z\) on \(\mathbb{R}\). Completely random measures \(Z\) on \(\mathbb{R}\) are set-indexed processes such that the masses \(Z(S_{1}),\ldots,Z(S_{k})\) assigned to disjoint subsets \(S_{1},\ldots,S_{k}\) in \(\mathbb{R}\) specify independent random variables, whose laws are uniquely characterized by a positive measure \(\nu\) on \(\mathbb{R}\times\mathbb{R}^{+}\), called the Levy measure [71; 72]. Beta processes \(Z\sim\mathrm{BetaP}(\beta_{\mathrm{e}})\) correspond to Levy measures on \(\mathbb{R}\times[0,\Omega_{\mathrm{e}}]\) that are precisely of the form \(\nu(\mathrm{d}t,\mathrm{d}w)=\nu_{\mathrm{e}}(\mathrm{d}w)\,\mathrm{d}t\), where \(\nu_{\mathrm{e}}\) is given by Eq. (34). Owing to the degeneracy of the Levy measure \(\nu_{\mathrm{e}}(\mathrm{d}w)\,\mathrm{d}t\), beta processes \(Z\) can only be represented over a time interval \([0,T]\) as countably infinite sums of Dirac delta masses \[Z=\sum_{k}w_{\mathrm{e},k}\delta_{t_{\mathrm{e},k}}\,,\] where the pairs \((t_{\mathrm{e},k},w_{\mathrm{e},k})\) are defined as points from a bivariate Poisson process over \(\mathbb{R}\times[0,\Omega_{\mathrm{e}}]\) with intensity given by the Levy measure \(\nu_{\mathrm{e}}(\mathrm{d}w)\,\mathrm{d}t\) [50; 51]. Within our modeling framework, \(\{t_{\mathrm{e},k}\}\) represents the infinite set of synaptic activation times obtained in the limit of an arbitrarily large number of inputs \(K_{\mathrm{e}}\to\infty\), whereas \(\{w_{\mathrm{e},k}\}\) represents the associated jump size amplitudes, whose fluctuating size captures correlations via \(\rho_{\mathrm{e}}=1/(1+\beta_{\mathrm{e}})\). Importantly, notice that although there is an infinite number of jumps, the overall mass \(Z([0,T])\), i.e., the cumulative jump size, remains finite with probability one.
Actually, one can check that \[\mathbb{E}\left[Z([0,T])\right]=\int_{[0,T]\times[0,\Omega_{\mathrm{e}}]}w\,\nu_{\mathrm{e}}(\mathrm{d}w)\,\mathrm{d}t=r_{\mathrm{e}}T\Omega_{\mathrm{e}}\,,\] thereby showing that \(\Omega_{\mathrm{e}}\) can be interpreted as an effective mean jump size. The point of the above discussion is to justify that taking the infinite-size limit \(K_{\rm e}\to\infty\) with classical scaling \(w_{\rm e}\sim 1/K_{\rm e}\) specifies well-defined input drives as jump processes, at least when considering excitation alone. By contrast with the compound Poisson processes obtained for finite input numbers \(K_{\rm e}<\infty\), these processes admit a countably infinite, dense set of activation times \(\{t_{\mathrm{e},k}\}\), as intuition suggests for \(K_{\rm e}\to\infty\). Rather than being defined by a probability distribution as for compound Poisson processes, the statistics of the positive jumps \(\{w_{\mathrm{e},k}\}\) occurring at \(\{t_{\mathrm{e},k}\}\) are specified by a Levy measure \(\nu_{\rm e}\). This Levy measure typically exhibits a nonintegrable degeneracy at zero but is such that all its moments are finite, allowing one to specify the corresponding spiking correlation via \[\rho_{\rm e}=\frac{\int_{0}^{\Omega_{\rm e}}w^{2}\nu_{\rm e}({\rm d}w)}{\Omega_{\rm e}\int_{0}^{\Omega_{\rm e}}w\nu_{\rm e}({\rm d}w)}\,, \tag{35}\] which directly generalizes Eq. (7) to processes with a countable infinity of positive jumps. This shows that the Levy measure \(\nu_{\rm e}\) fully parametrizes our correlated excitation input model in the infinite-size limit with classical synaptic scaling. Then, the key observation is that these generalized input models can serve as the drive of AONCB neurons, just as compound Poisson processes do. Moreover, as processes parametrized via Levy measures can be obtained as limits of compound Poisson processes, all our analytical results will remain valid for this more generic class of processes. Concretely, for excitation alone, our results generalize by replacing all expectations of the form \(b_{\rm e}\mathbb{E}_{\rm e}\left[\cdot\right]\) by integrals with respect to the Levy measure \(\nu_{\rm e}\). One can easily check that these expectations, which feature prominently in the definition of the various synaptic efficacies, all remain finite under the moment conditions stated above. Thus, the voltage mean and variance of AONCB neurons remain finite with \[\mathbb{E}\left[V\right]=\frac{V_{\rm e}\int_{0}^{\Omega_{\rm e}}(1-e^{-w})\nu_{\rm e}({\rm d}w)}{1/\tau+\int_{0}^{\Omega_{\rm e}}(1-e^{-w})\nu_{\rm e}({\rm d}w)}\,,\] \[\mathbb{V}\left[V\right]=\frac{(V_{\rm e}-\mathbb{E}\left[V\right])^{2}\int_{0}^{\Omega_{\rm e}}(1-e^{-w})^{2}\nu_{\rm e}({\rm d}w)}{2/\tau+\int_{0}^{\Omega_{\rm e}}(1-e^{-2w})\nu_{\rm e}({\rm d}w)}\,.\] Observe that as \((1-e^{-w})^{2}\leq w^{2}\) for all \(w\geq 0\), the definition of the spiking correlation in Eq. (35) implies that we have \(\mathbb{V}\left[V\right]=O(\rho_{\rm e})\), so that neural variability consistently vanishes in the absence of correlations.

## V Discussion

### Synchrony modeling

We have presented a parametric representation of the neuronal drives resulting from a finite number of asynchronous or (weakly) synchronous synaptic inputs. Several parametric statistical models have been proposed for generating correlated spiking activities in a discrete setting [73; 74; 75; 48].
Such models have been used to analyze the activity of neural populations via Bayesian inference methods [76; 77; 78], as well as maximum entropy methods [79; 80]. Our approach is not to simulate or analyze complex neural dependencies but rather to derive from first principles the synchronous input models that could drive conductance-based neuronal models. This approach primarily relies on extending the definition of discrete-time correlated spiking models akin to [48] to the continuous-time setting. To do so, the main tenet of our approach is to realize that input synchrony and spiking correlation represent equivalent measures under the assumption of input exchangeability. Input exchangeability posits that the driving inputs form a subset of an arbitrarily large pool of exchangeable random variables [44; 45]. In particular, this implies that the main determinant of the neuronal drive is the number of active inputs, as opposed to the magnitude of these synaptic inputs. Then, de Finetti theorem [46] states that the probability of observing a given input configuration can be represented in the discrete setting under an integral form (see Eq. (3)) involving a directing probability measure \(F\). Intuitively, \(F\) represents the probability distribution of the fraction of coactivating inputs at any discrete time. Our approach identifies the directing measure \(F\) as a free parameter that captures input synchrony. The more dispersed the distribution \(F\), the more synchronous the inputs, as previously noted in [81; 82]. Our work elaborates on this observation to develop computationally tractable statistical models for synchronous spiking in the continuous-time limit, i.e., for vanishing discrete time step \(\Delta t\to 0^{+}\). We derive our results using a discrete-time directing measure chosen as a beta distribution \(F\sim B(\alpha,\beta)\), where the parameters \(\alpha\) and \(\beta\) can be related to the individual spiking rate \(r\) and the spiking correlation \(\rho\) via \(r\Delta t=\alpha/(\alpha+\beta)\) and \(\rho=1/(1+\alpha+\beta)\). For this specific choice of distribution, we are able to construct statistical models of the correlated spiking activity as generalized beta-binomial processes [49], which play an important role in statistical Bayesian inference [50; 51]. This construction allows us to fully parametrize the synchronous activity of a finite number of inputs via the jump distribution of a compound Poisson process, which depends explicitly on the spiking correlation. Being continuously indexed in time, stationary compound Poisson processes can naturally serve as the drive to biophysically relevant neuronal models. The idea to utilize compound Poisson processes to model input synchrony was originally proposed in [83; 84], but without constructing these processes as limits of discrete spiking models and without providing explicit functional forms for their jump distributions. We expect our framework to apply to any exchangeable spiking model for which the directing probability measure \(F\) is such that \(\mathbb{E}\left[\theta\right]\sim\Delta t\) and \(\mathbb{V}\left[\theta\right]\sim\Delta t\) in the vanishing timescale limit \(\Delta t\to 0^{+}\). Moreover, our framework generalizes to multidimensional compound Poisson processes when applied to partially exchangeable neural populations [52], which is necessary to account for the distinction between excitatory and inhibitory neuronal populations.
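As an illustration of this parametrization, the following Python sketch samples the discrete-time exchangeable model with a beta directing measure and checks empirically that the spiking rate and pairwise correlation match \(r\Delta t=\alpha/(\alpha+\beta)\) and \(\rho=1/(1+\alpha+\beta)\); the bin size, rate, correlation, and population size are illustrative values, not parameters prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for the discrete-time exchangeable model.
dt = 1e-3      # bin size (s)
r = 25.0       # individual spiking rate (Hz)
rho = 0.03     # target pairwise spiking correlation
K, T = 10, 200_000

# Beta directing measure F ~ Beta(alpha, beta) with r*dt = alpha/(alpha+beta)
# and rho = 1/(1 + alpha + beta).
ab = 1.0 / rho - 1.0
alpha, beta = r * dt * ab, (1.0 - r * dt) * ab

theta = rng.beta(alpha, beta, size=T)                    # fraction of coactive inputs per bin
X = (rng.random((T, K)) < theta[:, None]).astype(float)  # conditionally independent inputs

C = np.corrcoef(X.T)                                     # empirical pairwise correlations
rho_hat = C[np.triu_indices(K, k=1)].mean()
print(f"rate: target {r:.1f} Hz, empirical {X.mean() / dt:.1f} Hz")
print(f"correlation: target {rho:.3f}, empirical {rho_hat:.3f}")
```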
Generic dependencies between distinct populations can be achieved via classical statistical techniques such as copulas [53; 54], supporting the flexibility of our exchangeability-based modeling approach. That being said, it is worth mentioning that such approaches also suffer from a range of limitations that we will discuss later.

### Moment analysis

We also present an analytical characterization of the subthreshold variability of a tractable conductance-based neuronal model, the AONCB neurons, when driven by synchronous synaptic inputs. The analytical characterization of a neuron's voltage fluctuations has been the focus of intense research [85; 86; 35; 87; 36]. These attempts have considered neuronal models that already incorporate some diffusion scaling hypotheses [88; 89], formally obtained by assuming an infinite number of synaptic inputs. The primary benefit of these diffusion approximations is that one can treat the corresponding Fokker-Planck equations to quantify neuronal variability in conductance-based integrate-and-fire models, while also including the effect of post-spiking reset [29; 30]. In practice, subthreshold variability is often estimated in the effective-time-constant approximation, while neglecting the multiplicative noise contributions arising from voltage-dependent membrane fluctuations [85; 86; 35], although an exact treatment is also possible without this simplifying assumption [30]. By contrast, the analysis of conductance-based models has resisted exact treatments when driven by shot noise, as for compound Poisson input processes, rather than by Gaussian white noise, as in the diffusion approximation [60; 59]. The exact treatment of shot-noise-driven neuronal dynamics is primarily hindered by the limitations of the Ito/Stratonovich integrals [55; 56] to capture the effects of point-process-based noise sources, even without including a reset mechanism. These limitations were originally identified by Marcus, who proposed to approach the problem via a new type of stochastic equation [33; 34]. The key to the Marcus equation is to define shot noise as limits of regularized, well-behaved approximations of that shot noise, for which classical calculus applies [57]. In practice, these approximations are canonically obtained as the solutions of shot-noise-driven Langevin equations with relaxation time scale \(\tau_{s}\), and shot noise is formally recovered in the limit \(\tau_{s}\to 0^{+}\). Our assertion here is that all-or-none conductances implement such a form of shot-noise regularization for which a natural limiting process can be defined when synapses operate instantaneously, i.e., \(\tau_{s}\to 0^{+}\). The main difference with the canonical Marcus approach is that our regularization is all-or-none, substituting each Dirac delta impulse with a finite step-like impulse of duration \(\tau_{s}\) and magnitude \(1/\tau_{s}\), thereby introducing a synaptic timescale but without any relaxation mechanism. The above assertion is the basis for introducing AONCB neurons, which is supported by our ability to obtain exact formulas for the first two moments of their stationary voltage dynamics (see Eq. (19) and Eq. (30)). For \(\tau_{s}>0\), these moments can be expressed in terms of synaptic efficacies that take exact but rather intricate integral forms. Fortunately, these efficacies drastically simplify in the instantaneous synapse limit \(\tau_{s}\to 0^{+}\), for which the canonical shot-noise drive is recovered.
These resulting formulas mirror those obtained in the diffusion and effective-time-constant approximations [35; 36], except that the featured dimensionless coefficients are specified as the first-order efficacies \(a_{\mathrm{e/i,1}}\) for the mean (see Eq. (17) and Eq. (18)), and as the second-order efficacies \(a_{\mathrm{e/i,2}}\), \(a_{\mathrm{e/i,12}}\), and \(c_{\mathrm{ei}}\) for the variance (see Eq. (25), Eq. (26), Eq. (27), Eq. (28), and Eq. (29)). These synaptic efficacies differ from the coefficients obtained in the diffusion and effective-time-constant approximations in three ways: First, independent of input synchrony, these efficacies all have exponential forms and saturate in the limit of large synaptic weights \(w_{\mathrm{e}},w_{\mathrm{i}}\to\infty\), with \(a_{\mathrm{e/i,1}}\leq b\tau\) and \(a_{\mathrm{e/i,2}},a_{\mathrm{e/i,12}},c_{\mathrm{ei}}\leq b\tau/2\). Such saturation is a general characteristic of shot-noise-driven, continuously-relaxing systems [90; 91; 92]. Second, these efficacies are defined as expectations with respect to the jump distribution \(p_{\mathrm{ei}}\) of the driving compound Poisson process (see Eq. (11) and Appendix B). A nonzero dispersion of \(p_{\mathrm{ei}}\), indicating that synaptic activation is truly modeled via random variables \(W_{\mathrm{e}}\) and \(W_{\mathrm{i}}\), is the hallmark of input synchrony [83; 84]. Third, these efficacies involve the overall rate of synaptic activation \(b\) (see (12)), which also depends on input synchrony. Such dependence can be naturally understood within the framework of Palm calculus [93], a form of calculus specially developed for stationary point processes (see Appendix C). Finally, note that our approach is distinct from those adopted in recent computational and theoretical works [94; 95; 96; 97] as our focus is on the derivation of exact formulas with explicit dependence on input numbers, sizes, and correlations. Importantly, the moment expressions obtained in the diffusion and effective-time-constant approximations can be recovered within our framework by making the two independent assumptions that (\(i\)) synaptic weights are small, i.e., \(w_{\mathrm{e}},w_{\mathrm{i}}\ll 1\), and that (\(ii\)) input synchrony can be neglected, i.e., \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=\rho_{\mathrm{ei}}=0\). Moreover, observe that our exact results are obtained for shot-noise drives without any approximation, including for nonzero synaptic time constant \(\tau_{s}>0\), and only take an interpretable form in the instantaneous synapse limit \(\tau_{s}\to 0^{+}\). Our moment formulas, derived for compound Poisson processes, directly generalize to the larger mathematical class of Levy processes with positive jumps [40; 41], which may be useful to define new scaling limits for neuronal activity.

### Biophysical relevance

Our analysis allows us to investigate quantitatively how subthreshold variability depends on the numbers and strengths of the synaptic contacts. This approach requires that we infer synaptic weights from the typical peak time and peak amplitude of the somatic membrane fluctuations caused by post-synaptic potentials [65; 68; 69]. Within our modeling framework, these weights are dimensionless quantities that we estimate by fitting the AONCB neuronal response to a single all-or-none synaptic activation at rest. For biophysically relevant parameters, this yields typically small synaptic weights in the sense that \(w_{\rm e},w_{\rm i}\ll 1\).
These small values warrant adopting the small-weight approximation, for which expressions Eq. (19) and Eq. (30) simplify. In the small-weight approximation, the mean voltage becomes independent of input synchrony, whereas the simplified voltage variance Eq. (31) only depends on input synchrony via the spiking correlation coefficients \(\rho_{\rm e}\), \(\rho_{\rm i}\), and \(\rho_{\rm ei}\), as opposed to depending on a full jump distribution. Spike-count correlations have been experimentally shown to be weak in cortical circuits [7; 8; 9], and for this reason, virtually all theoretical approaches assume no spiking correlation structure [16; 98; 99; 100; 101] and argue for asynchronous activity [102]. A putative role for correlations in neural computations remains a matter of debate [103; 104; 105]. When distributed over large networks, weak correlations can still give rise to precise synchrony, once information is pooled from a large enough number of synaptic inputs [23; 24]. In this view, and assuming that distinct inputs play comparable roles, correlations measure the propensity of distinct synaptic inputs impinging on a neuron to co-activate, which represents a clear form of synchrony. Our analysis shows that considering input synchrony in amounts consistent with the weak level of observed spiking correlation is enough to account for the surprisingly large magnitude of subthreshold neuronal variability [1; 17; 18; 19]. In contrast, the asynchronous regime yields unrealistically low variability, an observation that challenges the basis for the asynchronous state hypothesis. Recent theoretical works [29; 30] have also noted that the asynchronous state hypothesis seems at odds with certain features of cortical activity, such as the emergence of spontaneous activity or the maintenance of significant average polarization during evoked activity. Zerlaut _et al._ have analyzed under which conditions conductance-based networks can achieve a spectrum of asynchronous states with realistic neural features. In their work, a key variable to achieve this spectrum is a strong afferent drive that modulates a balanced network with moderate recurrent connections. Moderate recurrent conductances are inferred from allowing for up to 2mV somatic deflections at rest, whereas the afferent drive is provided via even stronger synaptic conductances that can activate synchronously. These inferred conductances appear large in light of recent _in-vivo_ measurements [65; 68; 69], and the corresponding synaptic weights all satisfy \(w_{\rm e},w_{\rm i}\geq 0.01\) within our framework. Correspondingly, the typical connectivity numbers considered are small, with \(K_{\rm e}=200\), \(K_{\rm i}=50\) for recurrent connections and \(K_{\rm e}=10\) for the co-activating afferent projections. Thus, results from [29] appear consistent with our observation that realistic subthreshold variability can only be achieved asynchronously for a restricted number of large synaptic weights. Our findings, however, predict that these results follow from connectivity sparseness and will not hold in denser networks, for which the pairwise spiking correlation will exceed the empirical criteria for asynchrony, e.g., \(\rho_{\rm e}>1/K_{\rm e}\) (\(\rho_{\rm e}<0.005\leq 1/K_{\rm e}\) in [29]). Sanzeni _et al._ have pointed out that implementing the effective-time-constant approximation in conductance-based models suppresses subthreshold variability, especially in the high-conductance state [70].
As mentioned here, this suppression causes the voltage variability to decay as \(O(w_{\rm e})+O(w_{\rm i})\) in any scaling limit with vanishing synaptic weights. Sanzeni _et al._ observe that such decay is too fast to yield realistic variability for the balanced scaling, which assumes \(w_{\rm e}\sim 1/\sqrt{K_{\rm e}}\) and \(w_{\rm i}\sim 1/\sqrt{K_{\rm i}}\). To remedy this point, these authors propose to adopt a slower scaling of the weights, i.e., \(w_{\rm e}\sim 1/\ln K_{\rm e}\) and \(w_{\rm i}\sim 1/\ln K_{\rm i}\), which can be derived from the principle of rate conservation in neural networks. Such a scaling is sufficiently slow for variability to persist in networks with large connectivity number (\(\simeq 10^{5}\)). However, as for any scaling with vanishing weights, our exact analysis shows that such a scaling must eventually lead to decaying variability, thereby challenging the basis for the asynchronous state hypothesis.

Figure 12: **Diffusion approximations in the presence of synchrony.** (a) Comparison of an asynchronously driven integrate-and-fire AONCB neuron (blue trace) with its diffusion approximation obtained via the effective-time-constant approximation (red trace). (b) Comparison of a synchronously driven integrate-and-fire AONCB neuron (blue trace) with its diffusion approximation obtained by our exact analysis (red trace). Parameters: \(K_{e}=1000\), \(K_{i}=350\), \(\tau=15\) ms, \(w_{e}=0.001\), \(w_{i}=0.004\), \(r_{e}=r_{i}=25\) Hz, \(\rho_{e}=\rho_{i}=0.03\), \(\rho_{ei}=0\), \(V_{\rm T}=15\) mV, and \(V_{\rm R}=12\) mV.

Both of these studies focus on the network dynamics of conductance-based networks under the diffusion and effective-time-constant approximations. The effective-time-constant approximation follows the prior assumption that the diffusion approximation is valid [35, 85, 86, 87, 83, 84]. In turn, diffusive behaviors only rigorously emerge under some scaling limit with vanishing weights [88, 89]. By focusing on the single-cell level rather than the network level, we are able to demonstrate that the effective-time-constant approximation holds exactly for shot-noise-driven, conductance-based neurons, without any diffusive approximations. Consequently, suppression of variability must occur independent of any scaling choice, except in the presence of input synchrony. Although this observation poses a serious theoretical challenge to the asynchronous state hypothesis, observe that it does not invalidate the practical usefulness of the diffusion approximation. For instance, we show in Fig. 12 that the mean spiking response of a shot-noise-driven AONCB neuron with an integrate-and-fire mechanism can be satisfactorily captured via the diffusion approximation. In addition, our analysis allows one to extend the diffusion approximation to include input synchrony. One can address the above theoretical challenge by recognizing that, albeit large, neural networks are finite and never operate in the idealized regime obtained in a scaling limit. Adopting biophysically relevant parameters shows that even in finite networks, stationary asynchronous regimes produce unrealistically low subthreshold variability. In that respect, we find that achieving a realistic range of mean voltage variation for moderate synaptic weights requires \(\simeq 10^{3}\) driving synapses.
This is lower than the upper range for synaptic contact numbers (\(\simeq 10^{4}\)), but consistent with the idea that only a subset of synaptic contacts share the same tuning specificities during evoked activity [106]. More generally, our analysis suggests that large but finite networks are such that they operate with weak but significant spiking correlations. Such spiking correlations amount to a form of input synchrony, which in turn can explain the observed level of subthreshold variability. That said, by focusing on the single-cell level, our analysis makes no predictions about the origin of such correlations and about how correlation may differ in the spontaneous or evoked regime [17, 18, 107].

### Limitations of the approach

A first limitation of our analysis is that we neglect the spike-generating mechanism as a source of neural variability. Most diffusion-based approaches model spike generation via the integrate-and-fire mechanism, whereby the membrane voltage resets to a fixed value upon reaching a spike-initiation threshold [30, 35, 85, 86, 87]. Accounting for such a mechanism can impact our findings in two ways: \((i)\) By confining voltage below the spiking threshold, the spiking mechanism may suppress the mean response enough for the neuron to operate well in the high-conductance regime for large input drives. Such a scenario will still produce exceedingly low variability due to variability quenching in the high-conductance regime, consistent with [1]. \((ii)\) The additional variability due to post-spiking resets may dominate the synaptic variability, so that a large overall subthreshold variability can be achieved in spite of low synaptic variability. This possibility also seems unlikely as dominant yet stereotypical resets would imply a quasi-deterministic neural response [64]. Addressing the above limitations quantitatively requires extending our exact analysis to include the integrate-and-fire mechanism using techniques from queueing theory [93]. This is beyond the scope of this work. We note, however, that implementing a post-spiking reset to a fixed voltage level yields simulated trajectories that markedly differ from physiological ones (see Fig. 1), for which the post-spiking voltage varies across conditions [17, 18, 19]. The limitations due to the spike-generating mechanism can be circumvented experimentally by studying the spontaneous and evoked subthreshold responses in neurons artificially silenced through the injection of hyperpolarizing currents. Our analysis can then be used to infer the correlation regime of the synaptic inputs by fitting our voltage moment formulas.

A second limitation of our analysis is our assumption of exchangeability, which is the lens through which we operate a link between spiking correlations and input drives. Taken literally, the exchangeability assumption states that synapses all have a typical strength and that conductance variability primarily stems from the variable numbers of co-activating synapses. This is certainly an oversimplification as synapses exhibit heterogeneity [108], which likely plays a role in shaping neural variability [95]. Distinguishing between heterogeneity and correlation contributions, however, is a fundamentally ambiguous task [109]. For instance, considering \(K_{\mathrm{e}}\) synchronous inputs with weight \(w_{\mathrm{e}}\) at rate \(b_{\mathrm{e}}\) and with jump probability \(p_{\mathrm{e},k}\) (see Eq. (4) and Eq. (8)) is indistinguishable from considering \(K_{\mathrm{e}}\) independent inputs with heterogeneous weights \(\{w_{\mathrm{e}},2w_{\mathrm{e}},\ldots,K_{\mathrm{e}}w_{\mathrm{e}}\}\) and rates \(K_{\mathrm{e}}r_{\mathrm{e}}p_{\mathrm{e},k}\). Within our modeling approach, accounting for synaptic heterogeneity, with dispersed distribution for synaptic weights \(q_{\mathrm{e}}(w)\), can be done by taking the jump distribution \(p_{\mathrm{e}}\) as \[p_{\mathrm{e}}(w)=\sum_{k=1}^{K_{\mathrm{e}}}q_{\mathrm{e}}^{(*k)}(w)p_{\mathrm{e},k}\,,\] where \(q_{\mathrm{e}}^{(*k)}\) refers to the \(k\)-fold convolution of \(q_{\mathrm{e}}(w)\). This leads to an overdispersion of the jump distribution \(p_{\mathrm{e}}\), and thus increased subthreshold neural variability. Therefore, while we have assumed exchangeability, our approach can accommodate weight heterogeneity. The interpretation of our results in terms of synchrony rather than heterogeneity follows from the knowledge that cortical activity displays weak but nonzero spiking correlations [25, 26, 27, 28] and from recent experimental evidence that cortical response selectivity derives from strength in numbers of synapses, rather than differences in synaptic weights [106].

Figure 13: **Impact of jittering synchronous inputs.** Voltage mean and variance of AONCB neurons in response to synchronous inputs that have been jittered, for parameters \(K_{\mathrm{e}}=1000\), \(w_{\mathrm{e}}=0.001\), and \(\rho_{\mathrm{e}}=0.03\). Specifically, each input timing has been independently shifted in time by a centered Gaussian random variable with standard deviation \(\sigma\). (a) While the mean response is largely independent of jittering, the variance steadily decreases with jittering, which erases synchrony-based correlations over the timescale \(\sigma\). Accordingly, for large timescales, \(\sigma\geq\tau\), we recover variance values obtained for asynchronous drive with \(\rho_{\mathrm{e}}=\rho_{\mathrm{i}}=\rho_{\mathrm{ei}}=0\). (b) Variability estimates are reliable for jittering at timescales \(\sigma\leq 2\mathrm{ms}\).

A third limitation of our analysis is to consider a perfect form of synchrony, with exactly simultaneous synaptic activations. Although seemingly unrealistic, we argue that perfect input synchrony can still yield biologically relevant estimates of the voltage variability. This is because voltage fluctuations result from the integration of inputs over a time scale set by the passive membrane time constant \(\tau\sim 20\)ms. As a result, synaptic activation times that differ by significantly less than \(\tau\) can be considered as synchronous inputs. To illustrate this point, we show in Fig. 13 the dependence of the voltage variance on the degree of synchrony by gradually jittering initially synchronous synaptic inputs. Assuming \(K_{\mathrm{e}}=1000\) excitatory inputs alone with spiking correlation \(\rho_{\mathrm{e}}=0.03\), one can check that the neural variability is left unchanged by jittering synaptic activations over time scales \(\sigma\leq 2\)ms. One can also check that jittering over timescales larger than the membrane time constant yields neural variability similar to that obtained in the absence of correlation in the inputs. This supports that our findings are robust to including temporal variability on timescales \(\sigma\leq 2\)ms, which is consistent with typical heterogeneities in axonal or dendritic conduction delays. A functional role for precise timing in cortical activity remains a matter of debate [110].
Here, we point out that weakened forms of synchrony will yield lower variability, so that our challenge to the asynchronous state will remain. One remaining limitation of our synchrony modeling is that our analysis can only account for instantaneous correlations between excitation and inhibition, while in reality such correlations are expected to peak at a nonzero time lag. A fourth limitation of our analysis is that it is restricted to a form of synchrony that ignores temporal heterogeneity. This is a limitation because a leading hypothesis for the emergence of variability is that neurons generate spikes as if through a doubly stochastic process, i.e., as a Poisson process with temporally fluctuating rate [111]. To better understand this limitation, let us interpret our exchangeability-based modeling approach within the framework of doubly stochastic processes [40, 41]. This can be done most conveniently by reasoning on the discrete correlated spiking model specified by Eq. (3). Specifically, given fixed bin size \(\Delta t>0\), one can interpret the collection of _i.i.d._ variables \(\theta\sim F\) as an instantaneously fluctuating rate. In this interpretation, nonzero correlations can be seen as emerging from a doubly stochastic process for which the rate fluctuates as uncorrelated noise, i.e., with zero correlation time. This zero correlation time is potentially a serious limitation as it has been argued that shared variability is best modeled by a low-dimensional latent process evolving with slow, or even smooth, dynamics [75]. Addressing this limitation will require developing limit spiking models with nonzero correlation time using probabilistic techniques that are beyond the scope of this work [45]. Obtaining exact results for such inputs will represent another open challenge, as the resulting driving processes may not be well-approximated by compound Poisson processes. We expect rate temporal heterogeneities to only play a significant role in the spontaneous regime of activity, so that our analysis should remain valid in the evoked regime [18]. A final limitation of our analysis is that it does not explain the consistent emergence of synchrony in network dynamics. It remains conceptually unclear how synchrony can emerge and persist in neural networks that are fundamentally plagued by noise and exhibit large degrees of temporal and cellular heterogeneity. It may well be that carefully taking into account the finite size of networks will be enough to produce the desired level of synchrony-based correlation, which is rather weak after all. Still, one would have to check whether achieving a given degree of synchrony requires the tuning of certain network features, such as the degree of shared input, the propensity of certain recurrent motifs [107], or the relative width of recurrent connections with respect to feedforward projections [112]. From a theoretical standpoint, the asynchronous state hypothesis answers the consistency problem by assuming no spiking correlations, and thus no synchrony. One can justify this assumption in idealized mathematical models by demonstrating the so-called "propagation-of-chaos" property [113], which rigorously holds for certain scaling limits with vanishing weights and under the assumption of exchangeability [99; 100; 101].
In this light, the main theoretical challenge posed by our analysis is extending the latter exchangeability-based property to include nonzero correlations [114], and hopefully characterizing irregular synchronous states in some scaling limits.

###### Acknowledgements.

This work was supported by the CRCNS program of the National Science Foundation under award number DMS-2113213 and by the Vision Research program of the National Institutes of Health under award number R01EY024071. We would like to thank Francois Baccelli, David Hansel, and Nicholas Brunel for insightful discussions.

## Appendix A Discrete-time spiking correlation

In this appendix, we first consider the discrete-time version of our model for possibly correlated excitatory synaptic inputs. In this model, we consider that observing \(K_{\mathrm{e}}\) synaptic inputs during \(N\) time steps specifies a \(\left\{0,1\right\}\)-valued matrix \(\left\{X_{k,i}\right\}_{1\leq k\leq K_{\mathrm{e}},1\leq i\leq N}\), where \(1\) indicates that an input is received and \(0\) indicates an absence of input. For simplicity, we further assume that the inputs are independent across time \[\mathbb{P}\left[\left\{X_{k,i}\right\}_{1\leq k\leq K_{\mathrm{e}},1\leq i\leq N}\right]=\prod_{i=1}^{N}\mathbb{P}\left[\left\{X_{k,i}\right\}_{1\leq k\leq K_{\mathrm{e}}}\right]\,,\] so that we can drop the time index and consider the population vector \(\left\{X_{k}\right\}_{1\leq k\leq K_{\mathrm{e}}}\). Consequently, given the individual spiking rate \(r_{\mathrm{e}}\), we have \(\mathbb{E}\left[X_{k}\right]=\mathbb{P}\left[X_{k}=1\right]=r_{\mathrm{e}}\Delta t\), where \(\Delta t\) is the duration of the time step where a spike may or may not occur. Under the assumption that \(\left\{X_{k}\right\}_{1\leq k\leq K_{\mathrm{e}}}\) belongs to an infinitely exchangeable set of random variables, de Finetti theorem states that there exists a probability measure \(F_{\mathrm{e}}\) on \(\left[0,1\right]\) such that \[\mathbb{P}\left[\left\{X_{k}\right\}_{1\leq k\leq K_{\mathrm{e}}}\right]=\int\prod_{k=1}^{K_{\mathrm{e}}}\theta_{\mathrm{e}}^{X_{k}}(1-\theta_{\mathrm{e}})^{1-X_{k}}\,\mathrm{d}F_{\mathrm{e}}(\theta_{\mathrm{e}})\,.\] Assuming the directing measure \(F_{\mathrm{e}}\) known, we can compute the spiking correlation attached to our model.
To see this, first observe that specifying the above probabilistic model for \(K_{\mathrm{e}}=1\), we have \[\mathbb{E}\left[X_{k}\right]=\mathbb{E}\left[\mathbb{E}\left[X_{k}\,|\,\theta_{\mathrm{e}}\right]\right]=\mathbb{E}\left[\theta_{\mathrm{e}}\right]=\int\theta_{\mathrm{e}}\,\mathrm{d}F_{\mathrm{e}}(\theta_{\mathrm{e}})\,.\] Then, using the law of total covariance and specifying the above probabilistic model for \(K=2\), we have \[\mathbb{C}\left[X_{k},X_{l}\right]=\mathbb{E}\left[\mathbb{C}\left[X_{k},X_{l}\,|\,\theta_{\mathrm{e}}\right]\right]+\mathbb{C}\left[\mathbb{E}\left[X_{k}\,|\,\theta_{\mathrm{e}}\right],\mathbb{E}\left[X_{l}\,|\,\theta_{\mathrm{e}}\right]\right]\,,\] \[=\mathbb{1}_{\{k=l\}}\mathbb{E}\left[\mathbb{V}\left[X_{k}\,|\,\theta_{\mathrm{e}}\right]\right]+\mathbb{C}\left[\theta_{\mathrm{e}},\theta_{\mathrm{e}}\right]\,,\] \[=\mathbb{1}_{\{k=l\}}\mathbb{E}\left[\theta_{\mathrm{e}}(1-\theta_{\mathrm{e}})\right]+\mathbb{V}\left[\theta_{\mathrm{e}}\right]\,,\] \[=\mathbb{1}_{\{k=l\}}\mathbb{E}\left[\theta_{\mathrm{e}}\right](1-\mathbb{E}\left[\theta_{\mathrm{e}}\right])+\mathbb{1}_{\{k\neq l\}}\mathbb{V}\left[\theta_{\mathrm{e}}\right]\,.\] This directly yields that the spiking correlation reads \[\rho_{\mathrm{e}}=\frac{\mathbb{C}\left[X_{k},X_{l}\right]}{\mathbb{V}\left[X_{k}\right]}=\frac{\mathbb{V}\left[\theta_{\mathrm{e}}\right]}{\mathbb{E}\left[\theta_{\mathrm{e}}\right](1-\mathbb{E}\left[\theta_{\mathrm{e}}\right])}\,. \tag{101}\] The exact same calculations can be performed for the partially exchangeable case of mixed excitation and inhibition. The assumption of partial exchangeability requires that, when considered separately, the \(\left\{0,1\right\}\)-valued vectors \(\left\{X_{1},\ldots,X_{K_{\mathrm{e}}}\right\}\) and \(\left\{Y_{1},\ldots,Y_{K_{\mathrm{i}}}\right\}\) each belong to an infinitely exchangeable sequence of random variables. Then, de Finetti's theorem states that the probability to find the full vector of inputs \(\left\{X_{1},\ldots,X_{K_{\mathrm{e}}},Y_{1},\ldots,Y_{K_{\mathrm{i}}}\right\}\) in any particular configuration is given by \[\mathbb{P}\left[X_{1},\ldots,X_{K_{\mathrm{e}}},Y_{1},\ldots,Y_{K_{\mathrm{i}}}\right]=\int\prod_{k=1}^{K_{\mathrm{e}}}\theta_{\mathrm{e}}^{X_{k}}(1-\theta_{\mathrm{e}})^{1-X_{k}}\prod_{l=1}^{K_{\mathrm{i}}}\theta_{\mathrm{i}}^{Y_{l}}(1-\theta_{\mathrm{i}})^{1-Y_{l}}\,\mathrm{d}F_{\mathrm{ei}}(\theta_{\mathrm{e}},\theta_{\mathrm{i}})\,, \tag{102}\] where the directing measure \(F_{\rm ei}\) fully parametrizes our probabilistic model. Performing similar calculations as for the case of excitation alone within this partially exchangeable setting yields \[\rho_{\rm ei}=\frac{\mathbb{C}\left[X_{k},Y_{l}\right]}{\sqrt{\mathbb{V}\left[X_{k}\right]\mathbb{V}\left[Y_{l}\right]}}=\frac{\mathbb{C}\left[\theta_{\rm e},\theta_{\rm i}\right]}{\sqrt{\mathbb{E}\left[\theta_{\rm e}\right](1-\mathbb{E}\left[\theta_{\rm e}\right])\mathbb{E}\left[\theta_{\rm i}\right](1-\mathbb{E}\left[\theta_{\rm i}\right])}}\,. \tag{104}\]

## Appendix B Compound Poisson processes as continuous-time limits

Let us consider the discrete-time model specified by Eq. (102), which is obtained under the assumption of partial infinite exchangeability. Under this assumption, the probability law of the inputs is entirely determined by the distribution of \((k_{\rm e},k_{\rm i})\), where \(k_{\rm e}\) denotes the number of active excitatory inputs and \(k_{\rm i}\) denotes the number of active inhibitory inputs.
This distribution can be computed as \[P_{{\rm ei},kl}=\mathbb{P}\left[k_{\rm e}=k,k_{\rm i}=l\right]=\binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\int\theta_{\rm e}^{k}(1-\theta_{\rm e})^{K_{\rm e}-k}\theta_{\rm i}^{l}(1-\theta_{\rm i})^{K_{\rm i}-l}\,\mathrm{d}F_{\rm ei}(\theta_{\rm e},\theta_{\rm i})\,.\] It is convenient to choose the directing measure as beta distributions since these are conjugate to the binomial distributions. Such a choice yields a class of probabilistic models referred to as beta-binomial models, which have been studied extensively [50; 51]. In this appendix, we always assume that the marginals \(F_{\rm e}\) and \(F_{\rm i}\) have the form \(F_{\rm e}\sim\text{Beta}(\alpha_{\rm e},\beta_{\rm e})\) and \(F_{\rm i}\sim\text{Beta}(\alpha_{\rm i},\beta_{\rm i})\). Then, direct integration shows that the marginal distributions for the number of excitatory inputs and inhibitory inputs are \[P_{{\rm e},k}=\sum_{l=0}^{K_{\rm i}}P_{{\rm ei},kl}=\binom{K_{\rm e}}{k}\frac{B(\alpha_{\rm e}+k,\beta_{\rm e}+K_{\rm e}-k)}{B(\alpha_{\rm e},\beta_{\rm e})}\quad\text{and}\quad P_{{\rm i},l}=\sum_{k=0}^{K_{\rm e}}P_{{\rm ei},kl}=\binom{K_{\rm i}}{l}\frac{B(\alpha_{\rm i}+l,\beta_{\rm i}+K_{\rm i}-l)}{B(\alpha_{\rm i},\beta_{\rm i})}\,.\] Moreover, given individual spiking rates \(r_{\rm e}\) and \(r_{\rm i}\) within a time step \(\Delta t\), we have \[r_{\rm e}\Delta t=\mathbb{E}\left[X_{k}\right]=\mathbb{P}\left[X_{k}=1\right]=\mathbb{E}\left[\theta_{\rm e}\right]=\frac{\alpha_{\rm e}}{\alpha_{\rm e}+\beta_{\rm e}}\quad\text{and}\quad r_{\rm i}\Delta t=\mathbb{E}\left[Y_{l}\right]=\mathbb{P}\left[Y_{l}=1\right]=\mathbb{E}\left[\theta_{\rm i}\right]=\frac{\alpha_{\rm i}}{\alpha_{\rm i}+\beta_{\rm i}}\,.\] The continuous-time limit is obtained by taking \(\Delta t\to 0^{+}\), which implies that the parameters \(\alpha_{\rm e}\) and \(\alpha_{\rm i}\) jointly vanish. When \(\alpha_{\rm e},\alpha_{\rm i}\to 0^{+}\), the beta distributions \(F_{\rm e}\) and \(F_{\rm i}\) become deficient and we have \(P_{{\rm e},0},P_{{\rm i},0}\to 1\). In other words, time bins of size \(\Delta t\) almost surely have no active inputs in the limit \(\Delta t\to 0^{+}\). Actually, one can show that \[1-P_{{\rm e},0}\sim\left(\psi(K_{\rm e}+\beta_{\rm e})-\psi(\beta_{\rm e})\right)\alpha_{\rm e}\quad\text{and}\quad 1-P_{{\rm i},0}\sim\left(\psi(K_{\rm i}+\beta_{\rm i})-\psi(\beta_{\rm i})\right)\alpha_{\rm i}\,,\] where \(\psi\) denotes the digamma function. This indicates that in the limit \(\Delta t\to 0^{+}\), the times at which some excitatory inputs or some inhibitory inputs are active define a point process. Moreover, owing to the assumption of independence across time, this point process will actually be a Poisson point process. Specifically, consider a time \(T>0\) and set \(\Delta t=T/N\) for some large integer \(N\).
Define the sequence of times \[T_{{\rm e},n}=\frac{T}{N}\cdot\inf\left\{i>NT_{{\rm e},n-1}/T\,|\,k_{{\rm e},i}\geq 1\right\}\quad\text{with}\quad T_{{\rm e},1}=\frac{T}{N}\cdot\inf\left\{i\geq 0\,|\,k_{{\rm e},i}\geq 1\right\}\,,\] \[T_{{\rm i},n}=\frac{T}{N}\cdot\inf\left\{i>NT_{{\rm i},n-1}/T\,|\,k_{{\rm i},i}\geq 1\right\}\quad\text{with}\quad T_{{\rm i},1}=\frac{T}{N}\cdot\inf\left\{i\geq 0\,|\,k_{{\rm i},i}\geq 1\right\}\,.\] Considered separately, the sequences of times \(\{T_{{\rm e},n}\}_{n\geq 1}\) and \(\{T_{{\rm i},n}\}_{n\geq 1}\) constitute binomial approximations of Poisson processes, which we denote by \(N_{\rm e}\) and \(N_{\rm i}\), respectively. It is a classical result that these limit Poisson processes are recovered exactly when \(N\to\infty\) and that their rates are respectively given by \[b_{\rm e}=\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm e},0}}{\Delta t}=\left(\psi(K_{\rm e}+\beta_{\rm e})-\psi(\beta_{\rm e})\right)\left(\lim_{\Delta t\to 0^{+}}\frac{\alpha_{\rm e}}{\Delta t}\right)=\left(\psi(K_{\rm e}+\beta_{\rm e})-\psi(\beta_{\rm e})\right)\beta_{\rm e}r_{\rm e}\,,\] \[b_{\rm i}=\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm i},0}}{\Delta t}=\left(\psi(K_{\rm i}+\beta_{\rm i})-\psi(\beta_{\rm i})\right)\left(\lim_{\Delta t\to 0^{+}}\frac{\alpha_{\rm i}}{\Delta t}\right)=\left(\psi(K_{\rm i}+\beta_{\rm i})-\psi(\beta_{\rm i})\right)\beta_{\rm i}r_{\rm i}\,.\] For every integer \(K>1\), the function \(\beta\mapsto\beta\left(\psi(K+\beta)-\psi(\beta)\right)\) is an increasing analytic function on the domain \(\mathbb{R}^{\,+}\) with range \([1,K]\). Thus, we always have \(r_{\rm e}\leq b_{\rm e}\leq K_{\rm e}r_{\rm e}\) and \(r_{\rm i}\leq b_{\rm i}\leq K_{\rm i}r_{\rm i}\), and the extreme cases are achieved for perfect or zero correlations. Perfect correlations are achieved when \(\rho_{\rm e}=1\) or \(\rho_{\rm i}=1\), which corresponds to \(\beta_{\rm e}\to 0\) or \(\beta_{\rm i}\to 0\). This implies that \(b_{\rm e}=r_{\rm e}\) and \(b_{\rm i}=r_{\rm i}\), consistent with all synapses activating simultaneously. Zero correlations are achieved when \(\rho_{\rm e}=0\) or \(\rho_{\rm i}=0\), which corresponds to \(\beta_{\rm e}\to\infty\) or \(\beta_{\rm i}\to\infty\). This implies that \(b_{\rm e}=K_{\rm e}r_{\rm e}\) and \(b_{\rm i}=K_{\rm i}r_{\rm i}\), consistent with all synapses activating asynchronously, so that no inputs simultaneously activate. Observe that in all generality, the rates \(b_{\rm e}\) and \(b_{\rm i}\) are such that the mean number of spikes over the duration \(T\) is conserved in the limit \(\Delta t\to 0^{+}\). For instance, one can check that \[K_{\rm e}r_{\rm e}T=\mathbb{E}\left[\sum_{T_{\rm e,n}\leq T}k_{{\rm e},NT_{\rm e,n}/T}\right]=\mathbb{E}\left[\sum_{n=1}^{N_{\rm e}(T)}k_{{\rm e},n}\right]=\mathbb{E}\left[N_{\rm e}(T)\right]\mathbb{E}\left[k_{\rm e}\right]=b_{\rm e}T\mathbb{E}\left[k_{\rm e}\right]\,.\] When excitation and inhibition are considered separately, the limit process \(\Delta t\to 0^{+}\) specifies two compound Poisson processes \[t\mapsto\sum_{n=1}^{N_{\rm e}(t)}k_{{\rm e},n}\quad\text{and}\quad t\mapsto\sum_{n=1}^{N_{\rm i}(t)}k_{{\rm i},n}\,,\] where \(N_{\rm e}\) and \(N_{\rm i}\) are Poisson processes with rates \(b_{\rm e}\) and \(b_{\rm i}\) and where \(\{k_{{\rm e},n}\}_{n\geq 1}\) are i.i.d. according to \(p_{{\rm e},k}\) and \(\{k_{{\rm i},n}\}_{n\geq 1}\) are i.i.d. according to \(p_{{\rm i},k}\).
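A minimal numerical sketch of these limit objects is given below: it computes the event rate \(b_{\mathrm{e}}\) from the digamma expression above and the limit jump distribution \(p_{\mathrm{e},k}\), whose explicit beta-binomial form is written out in Appendix C for excitation alone, and then verifies the normalization of \(p_{\mathrm{e},k}\), the bound \(r_{\mathrm{e}}\leq b_{\mathrm{e}}\leq K_{\mathrm{e}}r_{\mathrm{e}}\), and the spike-count conservation \(b_{\mathrm{e}}\mathbb{E}[k_{\mathrm{e}}]=K_{\mathrm{e}}r_{\mathrm{e}}\); the chosen \(K_{\mathrm{e}}\), \(r_{\mathrm{e}}\), and \(\rho_{\mathrm{e}}\) are illustrative values.

```python
import numpy as np
from scipy.special import betaln, comb, digamma

# Illustrative parameters: K_e excitatory inputs at rate r_e with correlation rho_e.
K_e, r_e, rho_e = 100, 25.0, 0.03
beta_e = 1.0 / rho_e - 1.0            # rho_e = 1/(1 + beta_e) in the limit dt -> 0+

# Event rate of the limit Poisson process N_e.
D_e = digamma(K_e + beta_e) - digamma(beta_e)
b_e = D_e * beta_e * r_e

# Limit jump distribution p_{e,k} (beta-binomial form written out in Appendix C).
k = np.arange(1, K_e + 1)
p_e = np.exp(np.log(comb(K_e, k)) + betaln(k, beta_e + K_e - k)) / D_e

print(f"normalization of p_e,k  : {p_e.sum():.6f}")
print(f"r_e <= b_e <= K_e r_e   : {r_e <= b_e <= K_e * r_e} (b_e = {b_e:.1f} Hz)")
print(f"conservation b_e E[k_e] : {b_e * (p_e * k).sum():.1f} vs K_e r_e = {K_e * r_e:.1f}")
```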
Nonzero correlations between excitation and inhibition emerge when the Poisson processes \(N_{\rm e}\) and \(N_{\rm i}\) are not independent. This corresponds to the processes \(N_{\rm e}\) and \(N_{\rm i}\) sharing times, so that excitation and inhibition occur simultaneously at these times. To understand this point intuitively, let us consider the limit Poisson process \(N\) obtained by considering synaptic events without distinguishing excitation and inhibition. For perfect correlation, i.e., \(\rho_{\rm ei}=1\), all synapses activate synchronously and we have \(N=N_{\rm e}=N_{\rm i}\): all times are shared. By contrast, for zero correlation, i.e., \(\rho_{\rm ei}=0\), no synapses activate simultaneously and we have \(N=N_{\rm e}+N_{\rm i}\): no times are shared. For intermediate regimes of correlations, a nonzero fraction of times will be shared, resulting in a driving Poisson process \(N\) with overall rate \(b\) satisfying \(\min(b_{\rm e},b_{\rm i})\leq b<b_{\rm e}+b_{\rm i}\). We investigate the above intuitive statements quantitatively in Appendix C by inspecting two key examples. Let us conclude this appendix by recapitulating the general form of the limit compound process \(Y\) obtained in the continuous-time limit \(\Delta t\to 0^{+}\) when jointly considering excitation and inhibition. This compound Poisson process can be represented as \[t\mapsto Y(t)=\left(\sum_{n}^{N(t)}W_{{\rm e},n},\sum_{n}^{N(t)}W_{{\rm i},n}\right)\,,\] where \(N\) is the Poisson process registering all synaptic events without distinguishing excitation and inhibition and where the pairs \((W_{{\rm e},n},W_{{\rm i},n})\) are i.i.d. random jumps in \(\mathbb{R}\times\mathbb{R}\setminus\{(0,0)\}\). Formally, such a process is specified by the rate of \(N\), denoted by \(b\), and the bivariate distribution of the jumps \((W_{{\rm e},n},W_{{\rm i},n})\), denoted by \(p_{{\rm ei},kl}\). These are defined as \[b=\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm ei},00}}{\Delta t}\quad\text{and}\quad p_{{\rm ei},kl}=\lim_{\Delta t\to 0^{+}}\frac{P_{{\rm ei},kl}}{1-P_{{\rm ei},00}}\quad\text{for}\quad(k,l)\neq(0,0)\,, \tag{101}\] where \(P_{{\rm ei},00}\) is the probability to register no synaptic activation during a time step \(\Delta t\). According to these definitions, \(b\) is the infinitesimal likelihood that an input is active within a time bin, whereas \(p_{{\rm ei},kl}\) is the probability that \(k\) excitatory inputs and \(l\) inhibitory inputs are active given that at least one input is active. One can similarly define the excitatory and inhibitory rates of events \(b_{\rm e}\) and \(b_{\rm i}\), as well as the excitatory jump distribution \(p_{{\rm e},k}\) and the inhibitory jump distribution \(p_{{\rm i},l}\). Specifically, we have \[b_{\rm e}=\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm e},0}}{\Delta t}\quad\text{and}\quad p_{{\rm e},k}=\lim_{\Delta t\to 0^{+}}\frac{P_{{\rm e},k}}{1-P_{{\rm e},0}}\quad\text{for}\quad k\neq 0\,, \tag{102}\] \[b_{\rm i}=\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm i},0}}{\Delta t}\quad\text{and}\quad p_{{\rm i},l}=\lim_{\Delta t\to 0^{+}}\frac{P_{{\rm i},l}}{1-P_{{\rm i},0}}\quad\text{for}\quad l\neq 0\,,\] with \(P_{{\rm e},k}=\sum_{l=0}^{K_{\rm i}}P_{{\rm ei},kl}\) and \(P_{{\rm i},l}=\sum_{k=0}^{K_{\rm e}}P_{{\rm ei},kl}\). Observe that thus defined, the jump distributions \(p_{{\rm e},k}\) and \(p_{{\rm i},l}\) are specified as conditional marginal distributions of the joint jump distribution \(p_{{\rm ei},kl}\) on the events \(\{k_{\rm e}>0\}\) and \(\{k_{\rm i}>0\}\), respectively.
These are such that \(p_{{\rm e},k}=(b/b_{\rm e})\sum_{l=0}^{K_{\rm i}}p_{{\rm ei},kl}\) and \(p_{{\rm i},l}=(b/b_{\rm i})\sum_{k=0}^{K_{\rm e}}p_{{\rm ei},kl}\). To see why, observe for instance that \[p_{{\rm e},k}=\lim_{\Delta t\to 0^{+}}\frac{P_{{\rm e},k}}{1-P_{{\rm e},0}}=\lim_{\Delta t\to 0^{+}}\sum_{l=0}^{K_{\rm i}}\frac{P_{{\rm ei},kl}}{1-P_{{\rm ei},00}}\,\frac{1-P_{{\rm ei},00}}{1-P_{{\rm e},0}}=\left(\sum_{l=0}^{K_{\rm i}}p_{{\rm ei},kl}\right)\left(\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm ei},00}}{1-P_{{\rm e},0}}\right)=\frac{b}{b_{\rm e}}\sum_{l=0}^{K_{\rm i}}p_{{\rm ei},kl}\,, \tag{103}\] where we have used the definitions of the rates \(b\) and \(b_{\rm e}\) given in Eq. (101) and Eq. (102) to establish that \[\lim_{\Delta t\to 0^{+}}\frac{1-P_{{\rm ei},00}}{1-P_{{\rm e},0}}=\frac{\lim_{\Delta t\to 0^{+}}(1-P_{{\rm ei},00})/\Delta t}{\lim_{\Delta t\to 0^{+}}(1-P_{{\rm e},0})/\Delta t}=\frac{b}{b_{\rm e}}\,.\]

## Appendix C Two examples of limit compound Poisson processes

The probability \(P_{\rm ei,00}\) that plays a central role in Appendix B can be easily computed for zero correlation, i.e., \(\rho_{\rm ei}=0\), by considering a directing measure under product form \(F_{\rm ei}(\theta_{\rm e},\theta_{\rm i})=F_{\rm e}(\theta_{\rm e})F_{\rm i}(\theta_{\rm i})\). Then integration with respect to the separable variables \(\theta_{\rm e}\) and \(\theta_{\rm i}\) yields \[P_{\rm ei,kl}=P_{\rm e,k}P_{\rm i,l}=\binom{K_{\rm e}}{k}\frac{B(\alpha_{\rm e}+k,\beta_{\rm e}+K_{\rm e}-k)}{B(\alpha_{\rm e},\beta_{\rm e})}\binom{K_{\rm i}}{l}\frac{B(\alpha_{\rm i}+l,\beta_{\rm i}+K_{\rm i}-l)}{B(\alpha_{\rm i},\beta_{\rm i})}\,.\] In turn, the limit compound Poisson process can be obtained in the limit \(\Delta t\to 0^{+}\) by observing that \[1-P_{\rm e,0}=b_{\rm e}\Delta t+o(\Delta t)\,,\quad 1-P_{\rm i,0}=b_{\rm i}\Delta t+o(\Delta t)\,,\quad\text{and}\quad 1-P_{\rm e,0}P_{\rm i,0}=(b_{\rm e}+b_{\rm i})\Delta t+o(\Delta t)\,,\] which implies that the overall rate is determined as \(b=\lim_{\Delta t\to 0^{+}}(1-P_{\rm e,0}P_{\rm i,0})/\Delta t=b_{\rm e}+b_{\rm i}\), as expected. To characterize the limit compound Poisson process, it remains to exhibit \(p_{\rm ei,kl}\), the distribution of the jumps \(k_{\rm e}\) and \(k_{\rm i}\). Suppose that \(k\geq 1\); then we have \[p_{\rm ei,kl} =\lim_{\Delta t\to 0^{+}}\frac{P_{\rm e,k}P_{\rm i,l}}{1-P_{\rm e,0}P_{\rm i,0}}\,,\] \[=\lim_{\Delta t\to 0^{+}}\left[\left(\frac{1-P_{\rm e,0}}{1-P_{\rm e,0}P_{\rm i,0}}\right)P_{\rm i,l}\left(\frac{P_{\rm e,k}}{1-P_{\rm e,0}}\right)\right]\,,\] \[=\left(\lim_{\Delta t\to 0^{+}}\frac{1-P_{\rm e,0}}{1-P_{\rm e,0}P_{\rm i,0}}\right)\left(\lim_{\Delta t\to 0^{+}}P_{\rm i,l}\right)\left(\lim_{\Delta t\to 0^{+}}\frac{P_{\rm e,k}}{1-P_{\rm e,0}}\right)\,.\] Then one can use the limit behaviors \[\lim_{\Delta t\to 0^{+}}\frac{1-P_{\rm e,0}}{1-P_{\rm e,0}P_{\rm i,0}}=\frac{b_{\rm e}}{b_{\rm e}+b_{\rm i}}\quad\text{and}\quad\lim_{\Delta t\to 0^{+}}P_{\rm i,l}=\mathbb{1}_{\{l=0\}}\,,\] so that for \(k\geq 1\), we have \[p_{\rm ei,kl}=\frac{b_{\rm e}}{b_{\rm e}+b_{\rm i}}\mathbb{1}_{\{l=0\}}p_{\rm e,k}\quad\text{with}\quad p_{\rm e,k}=\lim_{\Delta t\to 0^{+}}\frac{P_{\rm e,k}}{1-P_{\rm e,0}}=\binom{K_{\rm e}}{k}\frac{B(k,\beta_{\rm e}+K_{\rm e}-k)}{\psi(K_{\rm e}+\beta_{\rm e})-\psi(\beta_{\rm e})}\,.\] A similar calculation shows that for all \(l\geq 1\), we have \(p_{\rm ei,kl}=b_{\rm i}/(b_{\rm e}+b_{\rm i})\mathbb{1}_{\{k=0\}}p_{\rm i,l}\).
Thus \(p_{\rm ei,kl}=0\) whenever \(k,l\geq 1\), so that the support of \(p_{\rm ei,kl}\) is \(\{1,\ldots,K_{\rm e}\}\times\{0\}\cup\{0\}\times\{1,\ldots,K_{\rm i}\}\). This is consistent with the intuition that excitation and inhibition happen at distinct times in the absence of correlations. Let us now consider the case of maximum correlation for \(F_{\rm e}=F_{\rm i}=F\), where \(F\) is a beta distribution with parameters \(\alpha\) and \(\beta\). Moreover, let us assume the deterministic coupling \(\theta_{\rm e}=\theta_{\rm i}\) such that \(F_{\rm ei}(\theta_{\rm e},\theta_{\rm i})=F(\theta_{\rm e})\delta(\theta_{\rm i }-\theta_{\rm e})\). Then, the joint distribution of the jumps \((k_{\rm e},k_{\rm i})\) can be evaluated via direct integration as \[P_{\rm ei,kl} =\binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\int\theta_{\rm e}^{k} (1-\theta_{\rm e})^{K_{\rm e}-k}\theta_{\rm e}^{l}(1-\theta_{\rm i})^{K_{\rm i }-l}\,dF(\theta_{\rm e})\delta(\theta_{\rm i}-\theta_{\rm e})\,,\] \[=\binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\int\theta^{k+l}(1- \theta)^{K_{\rm e}+K_{\rm i}-k-l}\,dF(\theta)\,,\] \[=\binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\frac{B(\alpha+k+l, \beta+K_{\rm e}+K_{\rm i}-k-l)}{B(\alpha,\beta)}\,.\] As excitation and inhibition are captured separately by the same marginal functions \(F_{\rm e}=F_{\rm i}=F\), we necessarily have \(\alpha/(\alpha+\beta)=\mathbb{E}\left[X_{k}\right]=\mathbb{E}\left[Y_{l}\right] =r_{\rm e}\Delta t=r_{\rm i}\Delta t\) and we refer to the common spiking rate as \(r\). Then the overall rate of synaptic activation is obtained as \[b=\lim_{\Delta t\to 0^{+}}\frac{1-P_{\rm ei,00}}{\Delta t}=\lim_{\alpha\to 0^{+}} \frac{1-P_{\rm ei,00}}{\alpha}\lim_{\Delta t\to 0^{+}}\frac{\alpha}{\Delta t}=\left( \psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(\beta)\right)\beta r\,, \tag{101}\] and one can check that \(b\) differs from the excitatory- and inhibitory-specific rates \(b_{\rm e}\) and \(b_{\rm i}\), which satisfy \[b_{\rm e}=\lim_{\Delta t\to 0^{+}}\frac{1-P_{\rm e,0}}{\Delta t}=\left(\psi(K_{\rm e }+\beta)-\psi(\beta)\right)\beta r\quad\text{and}\quad b_{\rm i}=\lim_{\Delta t \to 0^{+}}\frac{1-P_{\rm i,0}}{\Delta t}=\left(\psi(K_{\rm i}+\beta)-\psi( \beta)\right)\beta r\,. \tag{102}\] To characterize the limit compound Poisson process, it remains to exhibit \(p_{\rm ei,kl}\), the joint distribution of the jumps \((k_{\rm e},k_{\rm i})\). A similar calculation as for the case of excitation alone yields \[p_{\rm ei,kl}=\lim_{\Delta t\to 0^{+}}\frac{P_{\rm ei,kl}}{1-P_{\rm ei,00}}= \binom{K_{\rm e}}{k}\binom{K_{\rm i}}{l}\frac{B(k+l,\beta+K_{\rm e}+K_{\rm i}-k-l )}{\psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(\beta)}\,.\] Remember that within our model, spiking correlations do not depends on the number of neurons and that by construction we have \(\rho_{\rm ei}\leq\sqrt{\rho_{\rm e}\rho_{\rm i}}\). Thus, for the symmetric case under consideration, maximum correlation corresponds to \(\rho_{\rm ei}=\rho_{\rm e}=\rho_{\rm i}=1/(1+\beta)\). In particular perfect correlation between excitation and inhibition can only be attained for \(\beta\to 0\). When \(\beta>0\), i.e., for partial correlations, the Poisson processes \(N_{\rm e}\) and \(N_{\rm i}\) only share a fraction of their times, yielding an aggregate Poisson process \(N\) such that \(\min(b_{\rm e},b_{\rm i})<b<b_{\rm e}+b_{\rm i}\). 
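
This ordering of the rates is easy to check numerically from the digamma expressions above. The following sketch (illustrative parameter values; the helper function is ours, not from the text) evaluates \(b\), \(b_{\rm e}\), and \(b_{\rm i}\) for the symmetric maximum-correlation example and verifies that \(\min(b_{\rm e},b_{\rm i})<b<b_{\rm e}+b_{\rm i}\), with all three rates approaching the common value \(r\) as \(\beta\to 0\).

```python
import numpy as np
from scipy.special import digamma

def rate(K, beta, r):
    # b = (psi(K + beta) - psi(beta)) * beta * r
    return (digamma(K + beta) - digamma(beta)) * beta * r

Ke, Ki, r = 100, 25, 10.0          # illustrative input numbers and common spiking rate (Hz)
for beta in [0.01, 1.0, 100.0]:
    b_e = rate(Ke, beta, r)        # excitatory event rate
    b_i = rate(Ki, beta, r)        # inhibitory event rate
    b   = rate(Ke + Ki, beta, r)   # overall event rate for the fully coupled case
    assert min(b_e, b_i) < b < b_e + b_i
    print(f"beta={beta:>6}:  b_e={b_e:8.1f}  b_i={b_i:8.1f}  b={b:8.1f}  b_e+b_i={b_e + b_i:8.1f}")
```
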
The relations between \(b\), \(b_{\rm e}\), and \(b_{\rm i}\) can be directly recovered from the knowledge of \(p_{\rm ei,kl}\) by observing that \[\mathbb{P}\left[k_{\rm e}=0,k_{\rm i}>0\right] = \sum_{l=1}^{K_{\rm i}}p_{\rm ei,0l}=\frac{\psi(K_{\rm e}+K_{\rm i }+\beta)-\psi(K_{\rm e}+\beta)}{\psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(\beta)}\] \[\mathbb{P}\left[k_{\rm i}=0,k_{\rm e}>0\right] = \sum_{k=1}^{K_{\rm e}}p_{\rm ei,k0}=\frac{\psi(K_{\rm e}+K_{\rm i }+\beta)-\psi(K_{\rm i}+\beta)}{\psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(\beta)}\,.\] \[\mathbb{P}\left[k_{\rm i}>0,k_{\rm e}>0\right] = \sum_{k=1}^{K_{\rm e}}\sum_{l=1}^{K_{\rm i}}p_{\rm ei,kl}=1-\frac{ 2\psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(K_{\rm e}+\beta)-\psi(K_{\rm i}+\beta)}{ \psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(\beta)}\,.\] This implies that the fraction of times with nonzero excitation is given by \[\mathbb{P}\left[k_{\rm e}>0\right]=\mathbb{P}\left[k_{\rm e}>0,k_{\rm i}=0 \right]+\mathbb{P}_{0}\left[k_{\rm e}>0,k_{\rm i}>0\right]=\frac{\psi(K_{\rm e }+\beta)-\psi(\beta)}{\psi(K_{\rm e}+K_{\rm i}+\beta)-\psi(\beta)}\,,\] so that we consistently recover the value of \(b_{\rm e}\) already obtained in Eq. (8) and Eq. (100) via \[b_{\rm e}T=\mathbb{E}\left[N_{\rm e}(T)\right]=\mathbb{E}\left[\mathbb{1}_{ \{k_{\rm e}>0\}}N(T)\right]=bT\mathbb{E}_{\rm ei}\left[\mathbb{1}_{\{k_{\rm e }>0\}}\right]=bT\mathbb{P}_{0}\left[k_{\rm e}>0\right]\,.\] ## Appendix D Continuous-time spiking correlation Eq. (101) and Eq. (102) carry over to the continuous time limit \(\Delta t\to 0^{+}\) by observing that for limit compound Poisson processes to emerge, one must have that \(\mathbb{E}\left[X_{k}\right]=\mathbb{E}\left[\theta_{\rm e}\right]=O(\Delta t)\) and \(\mathbb{E}\left[Y_{l}\right]=\mathbb{E}\left[\theta_{\rm i}\right]=O(\Delta t)\). This directly implies that when \(\Delta t\to 0^{+}\), we have \[\rho_{\rm e}=\frac{\mathbb{C}\left[X_{k},X_{l}\right]}{\mathbb{V}\left[X_{k} \right]}\sim\frac{\mathbb{E}\left[X_{k}X_{l}\right]}{\mathbb{E}\left[X_{k}^{2} \right]}=\frac{\mathbb{E}\left[X_{k}X_{l}\right]}{\mathbb{E}\left[X_{k}\right] }\quad\text{and}\quad\rho_{\rm ei}=\frac{\mathbb{C}\left[X_{k},Y_{l}\right]}{ \sqrt{\mathbb{V}\left[X_{k}\right]}\,\mathbb{V}\left[Y_{l}\right]}\sim\frac{ \mathbb{E}\left[X_{k}Y_{l}\right]}{\sqrt{\mathbb{E}\left[X_{k}^{2}\right]}\, \mathbb{E}\left[Y_{l}^{2}\right]}=\frac{\mathbb{E}\left[X_{k}Y_{l}\right]}{ \sqrt{\mathbb{E}\left[X_{k}\right]}\,\mathbb{E}\left[Y_{l}\right]}\,. \tag{102}\] All the stationary expectations appearing above can be computed via the jump distribution of the limit point process emerging in the limit \(\Delta t\to 0^{+}\)[93]. Because this limit process is a compound Poisson process with discrete bivariate jumps, the resulting jump distribution \(p_{\rm ei,kl}\) is specified over \(\{1,\ldots,K_{\rm e}\}\times\{1,\ldots,K_{\rm i}\}\setminus\{0,0\}\). Denoting by \(b\) the overall rate of synaptic events, one has \(\lim_{\Delta t\to 0^{+}}\mathbb{E}\left[X_{k}Y_{l}\right]/\Delta t=b\mathbb{E}_{\rm ei }\left[X_{k}Y_{l}\right]\). 
Then by partial exchangeability of the \(\{0,1\}\)-valued population vectors \(\{X_{k}\}_{1\leq k\leq K_{\rm e}}\) and \(\{Y_{l}\}_{1\leq l\leq K_{\rm i}}\), we have \[\mathbb{E}_{\rm ei}\left[X_{k}Y_{l}\right]=\mathbb{E}_{\rm ei}\left[\mathbb{E}\left[X_{k}Y_{l}\,\middle|\,(k_{\rm e},k_{\rm i})\right]\right]=\mathbb{E}_{\rm ei}\left[\frac{k_{\rm e}}{K_{\rm e}}\frac{k_{\rm i}}{K_{\rm i}}\right]=\sum_{k=0}^{K_{\rm e}}\sum_{l=0}^{K_{\rm i}}\frac{k}{K_{\rm e}}\frac{l}{K_{\rm i}}p_{\rm ei,kl}=\frac{\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right]}{K_{\rm e}K_{\rm i}}\,, \tag{103}\] where the bivariate jump \((k_{\rm e},k_{\rm i})\) is distributed as \(p_{\rm ei,kl}\). To further proceed, it is important to note the relation between the expectation \(\mathbb{E}_{\rm ei}\left[\cdot\right]\), which is tied to the overall input process with rate \(b\), and the expectation \(\mathbb{E}_{\rm e}\left[\cdot\right]\), which is tied to the excitatory input process with rate \(b_{\rm e}\). This relation is best captured by remarking that the \(p_{{\rm e},k}\) are not defined as the marginals of \(p_{{\rm ei},kl}\), but only as conditional marginals on \(\{k_{\rm e}>0\}\). In other words, we have \(p_{{\rm e},k}=(b/b_{\rm e})\sum_{l=0}^{K_{\rm i}}p_{{\rm ei},kl}\), which implies that \(b\mathbb{E}_{\rm ei}\left[X_{k}X_{l}\right]=b_{\rm e}\mathbb{E}_{\rm e}\left[X_{k}X_{l}\right]\) and \(\lim_{\Delta t\to 0^{+}}\mathbb{E}\left[X_{k}\right]/\Delta t=b\mathbb{E}_{\rm ei}\left[X_{k}\right]=b_{\rm e}\mathbb{E}_{\rm e}\left[X_{k}\right]\) with \[\mathbb{E}_{\rm e}\left[X_{k}X_{l}\right]=\mathbb{E}_{\rm e}\left[\mathbb{E}\left[X_{k}X_{l}\,\middle|\,k_{\rm e}\right]\right]=\mathbb{E}_{\rm e}\left[\frac{k_{\rm e}(k_{\rm e}-1)}{K_{\rm e}(K_{\rm e}-1)}\right]=\sum_{k=0}^{K_{\rm e}}\frac{k(k-1)}{K_{\rm e}(K_{\rm e}-1)}p_{{\rm e},k}=\frac{\mathbb{E}_{\rm e}\left[k_{\rm e}(k_{\rm e}-1)\right]}{K_{\rm e}(K_{\rm e}-1)}\,, \tag{104}\] \[\mathbb{E}_{\rm e}\left[X_{k}\right]=\mathbb{E}_{\rm e}\left[\mathbb{E}\left[X_{k}\,\middle|\,k_{\rm e}\right]\right]=\mathbb{E}_{\rm e}\left[\frac{k_{\rm e}}{K_{\rm e}}\right]=\sum_{k=0}^{K_{\rm e}}\frac{k}{K_{\rm e}}p_{{\rm e},k}=\frac{\mathbb{E}_{\rm e}\left[k_{\rm e}\right]}{K_{\rm e}}\,, \tag{105}\] with similar expressions for the inhibition-related quantities. Injecting (103), (104), and (105) in Eq. (102) yields \[\rho_{\rm e}=\frac{\mathbb{E}_{\rm e}\left[k_{\rm e}(k_{\rm e}-1)\right]}{\mathbb{E}_{\rm e}\left[k_{\rm e}\right]\left(K_{\rm e}-1\right)}\quad\text{and}\quad\rho_{\rm ei}=\frac{b\,\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right]}{\sqrt{K_{\rm e}b_{\rm e}\mathbb{E}_{\rm e}\left[k_{\rm e}\right]\,K_{\rm i}b_{\rm i}\mathbb{E}_{\rm i}\left[k_{\rm i}\right]}}=\frac{\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right]}{\sqrt{K_{\rm e}\mathbb{E}_{\rm ei}\left[k_{\rm e}\right]\,K_{\rm i}\mathbb{E}_{\rm ei}\left[k_{\rm i}\right]}}\,.\]

## Appendix E Marcus jump rule

The goal of this appendix is to justify the Marcus-type update rule given in Eq. (13). To do so, let us first remark that given a finite time interval \([0,T]\), the number of synaptic activation times \(\{T_{n}\}_{n\in\mathbb{Z}}\) falling in this interval is almost surely finite. In particular, we have \(\Delta=\inf_{0\leq T_{n}\neq T_{m}\leq T}|T_{n}-T_{m}|>0\) almost surely. Consequently, taking \(\epsilon<\Delta/\tau_{s}\) ensures that synaptic activation events do not overlap in time, so that it is enough to consider a single synaptic activation triggered, without loss of generality, at \(T_{0}=0\).
Let us denote the voltage just before the impulse onset as \(V(T_{0}^{-})=V_{0}\), which will serve as initial condition for the ensuing voltage dynamics. As the synaptic conductances remains equals to \(W_{e}/(\epsilon\tau)\) and \(W_{i}/(\epsilon\tau)\) for a duration \([0,\epsilon\tau]\), the voltage \(V_{\epsilon}\) satisfies \[\tau\dot{V_{\epsilon}}=-V_{\epsilon}+(W_{\rm e}/\epsilon)(V_{\rm e}-V_{ \epsilon})+(W_{\rm i}/\epsilon)(V_{\rm i}-V_{\epsilon})\,,\quad 0\leq t\leq \epsilon\tau\,,\] where we assume \(I=0\) for simplicity. The unique solution satisfying \(V(0^{-})=V_{0}\) is \[V_{\epsilon}(t)=V_{0}e^{-t/\tau(1+W_{\rm e}/\epsilon+W_{\rm i}/ \epsilon)}+\frac{W_{\rm e}V_{\rm e}+W_{\rm i}V_{\rm i}}{\epsilon+W_{\rm e}+W_ {\rm i}}\left(1-e^{-t/\tau(1+W_{\rm e}/\epsilon+W_{\rm i}/\epsilon)}\right)\,, \quad 0\leq t\leq\epsilon\tau\,.\] The Marcus-type rule follows from evaluating the jump update as the limit \[\lim_{\epsilon\to 0^{+}}V_{\epsilon}(\epsilon\tau)-V_{0} =\lim_{\epsilon\to 0^{+}}\left\{V_{0}\left(e^{-(\epsilon+W_{ \rm e}+W_{\rm i})}-1\right)+\frac{W_{\rm e}V_{\rm e}+W_{\rm i}V_{\rm i}}{ \epsilon+W_{\rm e}+W_{\rm i}}\left(1-e^{-(\epsilon+W_{\rm e}+W_{\rm i})} \right)\right\}\,,\] \[=\left(\frac{W_{\rm e}V_{\rm e}+W_{\rm i}V_{\rm i}}{W_{\rm e}+W_{ \rm i}}-V_{0}\right)\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)\,,\] which has the same form as the rule announced in Eq. (13). Otherwise, at fixed \(\epsilon>0\), the fraction of time for which the voltage \(V_{\epsilon}\) is exponentially relaxing toward the leak reversal potential \(V_{\rm L}=0\) is larger than \(1-N\epsilon/T\), where \(N\) denotes the almost surely finite number of synaptic activations, which does not depends on \(\epsilon\). Thus, the voltage \(V=\lim_{\epsilon\to 0^{+}}V_{\epsilon}\) exponentially relaxes toward \(V_{\rm L}=0\), except when it has jump discontinuities in \(\{T_{n}\}_{n\in\mathbb{Z}}\). ## Appendix F Evaluation of \(Q_{\epsilon}(t,s)\) for \(\epsilon>0\) The goal here is to justify the closed-form expression of \(Q_{\epsilon}(t,s)=\mathbb{E}\left[e^{H_{\epsilon}(t)+H_{\rm i}(s)}\right]\) via standard manipulation of exponential functionals of Poisson processes. By definition, assuming with no loss of generality the order \(0\geq t\geq s\), we have \[H_{\rm e}(t)+H_{\rm i}(s) =-\frac{1}{\tau}\left(\int_{t}^{0}h_{\rm e}(u)\,\mathrm{d}u+\int_ {s}^{0}h_{\rm i}(u)\,\mathrm{d}u\right)\,,\] \[=-\frac{1}{\epsilon\tau}\left(\int_{t}^{0}\mathrm{d}u\sum_{N(u- \epsilon\tau)+1}^{N(u)}W_{\rm e,k}+\int_{s}^{0}\mathrm{d}u\sum_{N(u-\epsilon \tau)+1}^{N(u)}W_{\rm i,k}\right)\,,\] \[=-\frac{1}{\epsilon\tau}\left(\int_{t}^{0}\mathrm{d}u\sum_{N(u- \epsilon\tau)+1}^{N(u)}(W_{\rm e,k}+W_{\rm i,k})+\int_{s}^{t}\mathrm{d}u\sum_{ N(u-\epsilon\tau)+1}^{N(u)}W_{\rm i,k}\right)\,. \tag{101}\] We will evaluate \(Q_{\epsilon}(t,s)=\mathbb{E}\left[e^{H_{\epsilon}(t)+H_{\rm i}(s)}\right]\) as a product of independent integral contributions. Isolating these independent contributions from Eq. (101) requires to establish two preliminary results about the quantity \[I(t,s)=\int_{s}^{t}\sum_{k=N(u-\Delta)+1}^{N(u)}X_{k}\,\mathrm{d}u\,, \tag{102}\] where \(N\) denotes a Poisson process, \(X_{k}\) denotes i.i.d. nonnegative random variables, and where \(\Delta\) is positive activation time. 
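
As an aside, the Marcus-type update rule derived in Appendix E can be checked numerically: integrating the voltage equation across a single conductance pulse of height \(W/(\epsilon\tau)\) and duration \(\epsilon\tau\), and letting \(\epsilon\) shrink, recovers the closed-form jump. A minimal sketch with plain forward-Euler integration and illustrative parameter values (none taken from the paper):

```python
import numpy as np

tau, Ve, Vi = 20e-3, 1.0, -0.25   # leak time constant (s) and reversal potentials (illustrative)
We, Wi, V0  = 0.4, 0.3, 0.1       # dimensionless conductance jumps and pre-event voltage

def jump_via_ode(eps, n_steps=200_000):
    """Forward-Euler integration of tau*dV/dt = -V + (We/eps)(Ve-V) + (Wi/eps)(Vi-V) over [0, eps*tau]."""
    dt, V = eps * tau / n_steps, V0
    for _ in range(n_steps):
        V += dt * (-V + (We / eps) * (Ve - V) + (Wi / eps) * (Vi - V)) / tau
    return V - V0

# Closed-form Marcus jump obtained in the limit eps -> 0+
marcus = ((We * Ve + Wi * Vi) / (We + Wi) - V0) * (1.0 - np.exp(-(We + Wi)))

for eps in (1.0, 0.1, 0.01):
    print(f"eps={eps:<5}  finite-eps jump = {jump_via_ode(eps):+.6f}   Marcus limit = {marcus:+.6f}")
```
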
Assume \(t-s\geq\Delta\), then given some real \(w<u-\Delta\), we have \[I(t,s) =\int_{s}^{t}\mathrm{d}u\sum_{k=N(v)+1}^{N(u)}X_{k}-\int_{s}^{t} \mathrm{d}u\sum_{k=N(v)+1}^{N(u-\Delta)}X_{k}\,,\] \[=\int_{s}^{t}\mathrm{d}u\sum_{k=N(v)+1}^{N(u)}X_{k}-\int_{s- \Delta}^{t-\Delta}\mathrm{d}u\sum_{k=N(v)+1}^{N(u)}X_{k}\,,\] \[=\int_{t-\Delta}^{t}\mathrm{d}u\sum_{k=N(v)+1}^{N(u)}X_{k}-\int_{ s-\Delta}^{s}\mathrm{d}u\sum_{k=N(v)+1}^{N(u)}X_{k}\,,\] \[=\left(\int_{t-\Delta}^{t}\mathrm{d}u\sum_{k=N(v)+1}^{N(t-\Delta) }X_{k}+\int_{t-\Delta}^{t}\mathrm{d}u\sum_{k=N(t-\Delta)+1}^{N(u)}X_{k}\right) -\left(\int_{s-\Delta}^{s}\mathrm{d}u\sum_{k=N(v)+1}^{N(s)}X_{k}-\int_{s-\Delta }^{s}\mathrm{d}u\sum_{k=N(u)+1}^{N(s)}X_{k}\right)\,,\] \[=\int_{t-\Delta}^{t}\mathrm{d}u\sum_{k=N(t-\Delta)+1}^{N(u)}X_{k }+\Delta\sum_{k=N(s)+1}^{N(t-\Delta)}X_{k}+\int_{s-\Delta}^{s}\mathrm{d}u\sum_ {k=N(u)+1}^{N(s)}X_{k}\,. \tag{100}\] One can check that the three terms in Eq. (100) above are independent for involving independent numbers of i.i.d. draws over the intervals \((t-\Delta,t]\), \((s,t-\Delta]\), and \((s-\Delta,s]\), respectively. Similar manipulations for the order for \(t-s\leq\Delta\) yields \[I(t,s)=\int_{s}^{t}\mathrm{d}u\sum_{k=N(s)+1}^{N(u)}X_{k}+(t-s) \sum_{k=N(t-\Delta)+1}^{N(s)}X_{k}+\int_{s-\Delta}^{t-\Delta}\mathrm{d}u\sum_ {k=N(u)+1}^{N(t-\Delta)}X_{k}\,, \tag{101}\] where that three independent contributions corresponds to independent numbers of i.i.d. draws over the intervals \((s,t]\), \((t-\Delta,s]\), and \((s-\Delta,t-\Delta]\), respectively. As evaluating \(Q_{\epsilon}\) only involves taking the limit \(s\to t^{-}\) at fixed \(\epsilon>0\), it is enough to consider the order \(0\geq-\epsilon\tau\geq t\geq s\geq t-\epsilon\tau\) With that in mind, we can apply Eq. (100) Eq. (101) with \(\Delta=\epsilon\tau\) and \(X_{k}=W_{\mathrm{e},k}+W_{\mathrm{i},k}\) or \(X_{k}=W_{\mathrm{i},k}\), to decompose the two terms of Eq. (100) in six contributions \[I(t,s) =\int_{-\epsilon\tau}^{0}\mathrm{d}u\sum_{k=N(t-\Delta)+1}^{N(u)} (W_{\mathrm{e},k}+W_{\mathrm{i},k})+\epsilon\tau\sum_{k=N(t)+1}^{N(-\epsilon \tau)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})+\int_{t-\epsilon\tau}^{t}\mathrm{d}u \sum_{k=N(u)+1}^{N(t)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})\] \[\quad+\int_{s}^{t}\mathrm{d}u\sum_{k=N(s)+1}^{N(u)}W_{\mathrm{i},k}+(t-s)\sum_{k=N(t-\epsilon\tau)+1}^{N(s)}W_{\mathrm{i},k}+\int_{s-\epsilon \tau}^{t-\epsilon\tau}\mathrm{d}u\sum_{k=N(u)+1}^{N(t-\epsilon\tau)}W_{ \mathrm{i},k}\,. \tag{102}\] It turns out that the contribution of the third term overlaps with that of the fourth and fifth terms. 
Further splitting of that third term produces the following expression \[I(t,s) =\underbrace{\int_{-\epsilon\tau}^{0}\mathrm{d}u\sum_{k=N(t- \Delta)+1}^{N(u)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})}_{I_{1}}+\underbrace{ \epsilon\tau\sum_{k=N(t)+1}^{N(-\epsilon\tau)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})}_{I_{2}(t)}\] \[\quad+\underbrace{\int_{s}^{t}\mathrm{d}u\left(\sum_{k=N(u)+1}^ {N(t)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})+\sum_{k=N(s)+1}^{N(u)}W_{\mathrm{i},k}\right)+(s-t+\epsilon\tau)\sum_{k=N(s)+1}^{N(t)}(W_{\mathrm{e},k}+W_{ \mathrm{i},k})}_{I_{3}(t,s)}\] \[\quad+\underbrace{\left(\int_{t-\epsilon\tau}^{s}\mathrm{d}u\sum _{k=N(u)+1}^{N(s)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})+(t-s)\sum_{k=N(t- \epsilon\tau)+1}^{N(s)}W_{\mathrm{i},k}\right)}_{I_{4}(s,t)}+\underbrace{ \int_{s-\epsilon\tau}^{t-\epsilon\tau}\mathrm{d}u\sum_{k=N(u)+1}^{N(t-\epsilon \tau)}W_{\mathrm{i},k}}_{s-\epsilon\tau}\,, \tag{103}\] where all five terms correspond to independent numbers of i.i.d. draws over the intervals \((-\epsilon\tau,0]\), \((t,-\epsilon\tau]\), \((s,t]\), \((t-\epsilon\tau,s]\), and \((s-\epsilon\tau,t-\epsilon\tau]\). Then, we have \[Q_{\epsilon}(t,s)=\mathbb{E}\left[e^{H_{\epsilon}(t)+H_{\mathrm{i}}(s)} \right]=\mathbb{E}\left[e^{-I_{1}/(\epsilon\tau)}\right]\mathbb{E}\left[e^{-I _{2}(t)/(\epsilon\tau)}\right]\mathbb{E}\left[e^{-I_{3}(t,s)/(\epsilon\tau)} \right]\mathbb{E}\left[e^{-I_{4}(s,t)/(\epsilon\tau)}\right]\mathbb{E}\left[e^ {-I_{5}(t,s)/(\epsilon\tau)}\right]\,,\] where all expectation terms can be computed via standard manipulation of the moment-generating function of Poisson processes [40]. The trick is to remember that for all \(t\geq s\), given that a Poisson process admits \(K=N(t)-N(s)\) points in \((s,t]\), all these \(K\) points are uniformly i.i.d. over \((s,t]\). This trick allows one to simply represent all integral terms in terms of uniform random variables, whose expectations are easily computable. To see this, let us consider \(A_{3}(t,s)\) for instance. We have \[I_{3}(t,s) =(t-s)\sum_{k=N(s)+1}^{N(t)}\left[(1-U_{k})(W_{\mathrm{e},k}+W_{ \mathrm{i},k})+U_{k}W_{\mathrm{i},k}\right]+(s-t+\epsilon\tau)\sum_{k=N(s)+1}^ {N(t)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})\,,\] \[=(t-s)\sum_{k=N(s)+1}^{N(t)}U_{k}W_{\mathrm{e},k}+\epsilon\tau \sum_{k=N(s)+1}^{N(t)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})\,,\] where \(\{U_{k}\}_{N(s)+1\leq k\leq N(t)}\) are uniformly i.i.d. on \([0,1]\). From the knowledge of the moment-generating function of Poisson random variables [40], one can evaluate \[\mathbb{E}\left[e^{-I_{3}(t,s)/(\epsilon\tau)}\right] =\mathbb{E}\left[e^{-\frac{t-s}{\epsilon\tau}\sum_{k=N(s)+1}^{N(t )}U_{k}W_{\mathrm{e},k}-\sum_{k=N(s)+1}^{N(t)}(W_{\mathrm{e},k}+W_{\mathrm{i},k})}\right]\,,\] \[=\mathbb{E}\left[\mathbb{E}\left[e^{-\frac{t-s}{\epsilon\tau}UW_ {\mathrm{e}}-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right]N(t)-N(s)\right]\,,\] \[=\exp\left(b(t-s)\left(\mathbb{E}\left[e^{-\frac{t-s}{\epsilon \tau}UW_{\mathrm{e}}-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right]-1\right)\right), \tag{100}\] where \((W_{\mathrm{e}},W_{\mathrm{i}})\) denotes exemplary conductance jumps and \(U\) denotes an independent uniform random variable. 
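
The last equality above is the classical moment-generating-function identity for a sum over a Poisson number of i.i.d. marks, \(\mathbb{E}\left[\prod_{k=1}^{N}f(Z_{k})\right]=\exp\left(\lambda\left(\mathbb{E}\left[f(Z)\right]-1\right)\right)\) for \(N\sim\mathrm{Poisson}(\lambda)\). A quick Monte-Carlo confirmation, with an arbitrary illustrative mark law and test function (not the ones appearing in the calculation above):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n_trials = 3.0, 100_000          # Poisson mean and number of Monte-Carlo trials

def f(z):                             # arbitrary bounded test function of a mark
    return np.exp(-0.7 * z)

vals = np.empty(n_trials)
for i in range(n_trials):
    marks = rng.exponential(0.5, size=rng.poisson(lam))   # Poisson number of i.i.d. marks
    vals[i] = np.prod(f(marks))                           # empty product defaults to 1

marks_ref = rng.exponential(0.5, size=1_000_000)
print("Monte Carlo         :", vals.mean())
print("exp(lam*(E[f] - 1)) :", np.exp(lam * (f(marks_ref).mean() - 1.0)))
```
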
Furthermore we have \[\mathbb{E}\left[e^{-\frac{t-s}{\epsilon\tau}UW_{\mathrm{e}}-(W_{ \mathrm{e}}+W_{\mathrm{i}})}\right] =\mathbb{E}\left[\mathbb{E}\left[e^{-\frac{t-s}{\epsilon\tau}UW_ {\mathrm{e}}-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right]\,\Big{|}\,W_{\mathrm{e}},W_{\mathrm{i}}\right]\,,\] \[=\mathbb{E}_{\mathrm{ei}}\left[e^{-(W_{\mathrm{e}}+W_{\mathrm{i} })}\mathbb{E}\left[e^{-\frac{t-s}{\epsilon\tau}UW_{\mathrm{e}}}\right]\right]\,,\] \[=\mathbb{E}_{\mathrm{ei}}\left[e^{-(W_{\mathrm{e}}+W_{\mathrm{i} })}\frac{\left(1-e^{-\frac{t-s}{\epsilon\tau}W_{\mathrm{e}}}\right)}{\frac{t- s}{\epsilon\tau}W_{\mathrm{e}}}\right]\,,\] so that we finally obtain \[\ln\mathbb{E}\left[e^{-I_{3}(t,s)/(\epsilon\tau)}\right]=\epsilon b\tau\left( \mathbb{E}_{\mathrm{ei}}\left[e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}\frac{ \left(1-e^{-\frac{t-s}{\epsilon\tau}W_{\mathrm{e}}}\right)}{W_{\mathrm{e}}} \right]-\frac{t-s}{\epsilon\tau}\right)\,. \tag{101}\] Similar calculations show that we have \[\ln\mathbb{E}\left[e^{-I_{1}/(\epsilon\tau)}\right]=\epsilon b\tau\left( \mathbb{E}_{\mathrm{ei}}\left[\frac{1-e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}}{W_ {\mathrm{e}}+W_{\mathrm{i}}}\right]-1\right)\,, \tag{102}\] \[\ln\mathbb{E}\left[e^{-I_{2}(t)/(\epsilon\tau)}\right]=b(\epsilon\tau+t) \left(1-\mathbb{E}_{\mathrm{ei}}\left[e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})} \right]\right)\,,\] \[\ln\mathbb{E}\left[e^{-I_{4}(s,t)/(\epsilon\tau)}\right]=\epsilon b\tau\left( \mathbb{E}_{\mathrm{ei}}\left[e^{-\frac{t-s}{\epsilon\tau}W_{\mathrm{i}}} \frac{\left(1-e^{-\left(1+\frac{t-s}{\epsilon\tau}\right)(W_{\mathrm{e}}+W_{ \mathrm{i}})}\right)}{W_{\mathrm{e}}+W_{\mathrm{i}}}\right]-\left(1+\frac{s- t}{\epsilon\tau}\right)\right)\,,\] \[\ln\mathbb{E}\left[e^{-I_{6}(t,s)/(\epsilon\tau)}\right]=\epsilon b\tau\left( \mathbb{E}_{\mathrm{ei}}\left[\frac{1-e^{-\frac{t-s}{\epsilon\tau}W_{\mathrm{i} }}}{W_{\mathrm{i}}}\right]-\frac{t-s}{\epsilon\tau}\right)\,. \tag{103}\] Appendix G Expression of \(R_{\epsilon}(t,u,s,v)\) on \(\mathcal{O}_{\epsilon}\) and \(\mathcal{D}_{\epsilon}\) Using the similar calculations as in Appendix (F), we can evaluate the quadrivariate expectation \(R_{\epsilon}(t,u,s,v)\) on the region \(\mathcal{O}_{\epsilon}\), for which the \(O\)-order holds: \(0\geq-\epsilon\tau\geq t\geq u\geq t-\epsilon\tau\geq u-\epsilon\tau\geq s \geq v\geq s-\epsilon\tau\geq v-\epsilon\tau\). This requires to isolate consider 9 independent contributions, corresponding to the 9 contiguous intervals specified by the \(O\)-order. 
We find \[\ln R_{\epsilon}(t,u,s,v)=A_{1}+A_{2}(t)+A_{3}(t,u)+A_{4}(u,t)+A_{5}(t,u)+A_{6 }(u,s)+A_{7}(s,v)+A_{8}(v,s)+A_{9}(s,v)\,,\] where the nonnegative terms making up the above sum are defined as \[A_{1}=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[\frac{1-e^{-2(W_{\rm e}+W_ {\rm i})}}{2(W_{\rm e}+W_{\rm i})}\right]-1\right)\,,\] \[A_{2}(t)=b(\epsilon\tau+t)\left(1-\mathbb{E}_{\rm ei}\left[e^{-2(W_{\rm e}+W_ {\rm i})}\right]\right)\,,\] \[A_{3}(t,u)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-2(W_{\rm e}+W_{ \rm i})}\frac{\left(1-e^{-\frac{t-u}{\epsilon\tau}W_{\rm e}}\right)}{W_{\rm e }}\right]-\frac{t-u}{\epsilon\tau}\right)\,,\] \[A_{4}(u,t)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-W_{\rm e}-\left( 1+\frac{t-u}{\epsilon\tau}\right)W_{\rm i}}\frac{\left(1-e^{-\left(1+\frac{u- t}{\epsilon\tau}\right)(W_{\rm e}+W_{\rm i})}\right)}{W_{\rm e}+W_{\rm i}} \right]-\left(1+\frac{u-t}{\epsilon\tau}\right)\right)\,,\] \[A_{5}(t,u)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-(W_{\rm e}+W_{\rm i })}\frac{\left(1-e^{-\frac{t-u}{\epsilon\tau}W_{\rm i}}\right)}{W_{\rm e}} \right]-\frac{t-u}{\epsilon\tau}\right)\,,\] \[A_{6}(u,s)=b(s+\epsilon\tau-u)\left(1-\mathbb{E}_{\rm ei}\left[e^{-(W_{\rm e} +W_{\rm i})}\right]\right)\] \[A_{7}(s,v)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-(W_{\rm e}+W_{\rm i })}\frac{\left(1-e^{-\frac{s-v}{\epsilon\tau}W_{\rm e}}\right)}{W_{\rm e}} \right]-\frac{s-v}{\epsilon\tau}\right)\,,\] \[A_{8}(v,s)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\frac{s-v}{ \epsilon\tau}W_{\rm i}}\frac{\left(1-e^{-\left(1-\frac{s-v}{\epsilon\tau} \right)(W_{\rm e}+W_{\rm i})}\right)}{W_{\rm e}+W_{\rm i}}\right]-\left(1- \frac{s-v}{\epsilon\tau}\right)\right)\,,\] \[A_{9}(s,v)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[\frac{\left(1-e^{- \frac{s-v}{\epsilon\tau}W_{\rm i}}\right)}{W_{\rm i}}\right]-\frac{s-v}{ \epsilon\tau}\right)\,.\] One can check that \(A_{3}(t,t)=A_{5}(t,t)=0\) and \(A_{7}(s,s)=A_{9}(s,s)=0\) and that \(A_{1}\), \(A_{4}(u,t)\), and \(A_{8}(v,s)\) are all uniformly \(O(\epsilon)\) on the region \(\mathcal{O}_{\epsilon}\). This implies that for all \((t,s)\) in \(\mathcal{O}_{\epsilon}\), we have \[R(t,s)=\lim_{\epsilon\to 0^{+}}R_{\epsilon}(t,t,s,s)=\lim_{\epsilon\to 0^{+}}e^{A_{2}(t)+A_{ 6}(t,s)}=e^{2bta_{i,2}-b|t-s|a_{i,1}}\,.\] Using the similar calculations as in Appendix (F), we can evaluate the quadrivariate expectation \(R_{\epsilon}(t,u,s,v)\) on the region \(\mathcal{D}_{\epsilon}\), for which the \(D\)-order holds: \(0\geq-\epsilon\tau\geq t\geq u\geq s\geq v\geq t-\epsilon\tau\geq u-\epsilon \tau\geq s-\epsilon\tau\geq v-\epsilon\tau\). This requires to isolate consider 9 independent contributions, corresponding to the 9 contiguous intervals specified by the \(O\)-order. 
We find \[\ln R_{\epsilon}(t,u,s,v)= \tag{121}\] \[B_{1}+B_{2}(t)+B_{3}(t,u)+B_{4}(t,u,s)+B_{5}(t,u,s,v)+B_{6}(t,u,s,v )+B_{7}(t,u,s,v)+B_{8}(u,s,v)+B_{9}(s,v)\] where the nonnegative terms making up the above sum are defined as \[B_{1}=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[\frac{1-e^{-2(W_{\rm e}+W_{\rm i })}}{2(W_{\rm e}+W_{\rm i})}\right]-1\right)\,,\] \[B_{2}(t)=b(\epsilon\tau+t)\left(1-\mathbb{E}_{\rm ei}\left[e^{-2(W_{\rm e}+W_{ \rm i})}\right]\right)\,,\] \[B_{3}(t,u)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-2(W_{\rm e}+W_{ \rm i})}\frac{\left(1-e^{-\frac{t-u}{\epsilon\tau}W_{\rm e}}\right)}{W_{\rm e}} \right]-\frac{t-u}{\epsilon\tau}\right)\,,\] \[B_{4}(t,u,s)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\left(2-\frac{t- u}{\epsilon\tau}\right)W_{\rm e}-\left(2-\frac{u-v}{\epsilon\tau}\right)W_{ \rm i}}\frac{\left(1-e^{-\frac{u-v}{\epsilon\tau}\left(2W_{\rm e}+W_{\rm i} \right)}\right)}{W_{\rm e}+W_{\rm i}}\right]-\frac{u-s}{\epsilon\tau}\right)\,,\] \[B_{5}(t,u,s,v)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\left(2-\frac {t-v}{\epsilon\tau}\right)W_{\rm e}-\left(2-\frac{u-v}{\epsilon\tau}\right)W_ {\rm i}}\frac{\left(1-e^{-\frac{t-v}{\epsilon\tau}\left(2W_{\rm e}+W_{\rm i} \right)}\right)}{2W_{\rm e}+W_{\rm i}}\right]-\frac{s-v}{\epsilon\tau}\right)\,,\] \[B_{6}(t,u,s,v)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\left(\frac{t -v}{\epsilon\tau}\right)W_{\rm e}-\left(\frac{u-v}{\epsilon\tau}\right)W_{\rm i }}\frac{\left(1-e^{-\frac{t-u}{\epsilon\tau}\left(W_{\rm e}+2W_{\rm i} \right)}\right)}{W_{\rm e}+2W_{\rm i}}\right]-\left(1-\frac{t-v}{\epsilon\tau} \right)\right)\,,\] \[B_{7}(t,u,s,v)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\left(\frac{u -v}{\epsilon\tau}\right)W_{\rm e}-\left(\frac{u-v}{\epsilon\tau}\right)W_{\rm i }}\frac{\left(1-e^{-\frac{t-u}{\epsilon\tau}\left(W_{\rm e}+2W_{\rm i} \right)}\right)}{W_{\rm e}+2W_{\rm i}}\right]-\frac{t-u}{\epsilon\tau}\right)\,,\] \[B_{8}(u,s,v)=\epsilon b\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\left(\frac{t-v }{\epsilon\tau}\right)W_{\rm i}}\frac{\left(1-e^{-\frac{u-v}{\epsilon\tau}W_{ \rm i}}\right)}{W_{\rm e}+W_{\rm i}}\right]-\frac{u-s}{\epsilon\tau}\right)\,,\] \[B_{9}(s,v)=b\epsilon\tau\left(\mathbb{E}_{\rm ei}\left[e^{-\left(\frac{t-v}{ \epsilon\tau}\right)W_{\rm i}}\frac{\left(1-e^{-\frac{t-u}{\epsilon\tau}W_{\rm i }}\right)}{W_{\rm i}}\right]-\frac{s-v}{\epsilon\tau}\right)\,.\] Observe that \(B_{1}=A_{1}\) and \(B_{2}(t)=A_{2}(t)\) and that \(B_{3}(t,t)=B_{7}(t,t,s,v)=0\) and \(B_{5}(t,u,s,s)=B_{9}(s,s)=0\). Moreover, one can see that \(R(t,s)\) is continuous over the whole negative orthant by checking that: \[\lim_{s\to(t-\epsilon\tau)^{-}}B_{4}(t,t,s)=\lim_{s\to(t-\epsilon \tau)^{+}}A_{4}(t,s)\,,\] \[\lim_{s\to(t-\epsilon\tau)^{-}}B_{6}(t,t,s,s)=\lim_{s\to(t- \epsilon\tau)^{+}}A_{6}(t,s)\,,\] \[\lim_{s\to(t-\epsilon\tau)^{-}}B_{8}(t,s,s)=\lim_{s\to(t-\epsilon \tau)^{+}}A_{8}(t,s)\,.\] Actually, by computing the appropriate limit values of the relevant first- and second-order derivatives of \(R_{\epsilon}(t,u,s,v)\), one can check that for \(\epsilon>0\), all the integrands involved in specifying the coefficients of the quadratic form Eq. (21) define continuous functions. ## Appendix H Integrals of the quadratic terms on \(\mathcal{D}_{\epsilon}\) Here, we only treat the quadratic term \(A_{\rm e}\) as the other quadratic terms \(A_{\rm i}\) and \(B_{\rm ei}\) involve a similar treatment. 
The goal here is to compute \(A_{\rm e}^{\prime\prime}\), which is defined as the contribution to \(A_{\rm e}\) resulting from integrating \(\lim_{(u,v)\to(t,s)}\partial_{t}\partial_{s}R_{\epsilon}(t,u,s,v)\) over the diagonal region \(\mathcal{D}_{\epsilon}=\{t,s\leq 0\,|\,\tau\epsilon\geq|t-s|\}\), in the limit \(\epsilon\to 0^{+}\). To this end we first remark that \[\frac{\partial_{t}\partial_{s}R_{\epsilon}(t,u,s,v)}{R_{\epsilon}(t,u,s,v)}= \partial_{t}\partial_{s}\ln R_{\epsilon}(t,u,s,v)+\big{(}\partial_{t}\ln R_{ \epsilon}(t,u,s,v)\big{)}\big{(}\partial_{s}\ln R_{\epsilon}(t,u,s,v)\big{)} \,.\] Injecting the analytical expression Eq. (101) into the above relation and evaluating \(I_{\epsilon}(t,s)=\lim_{(u,v\to(t,s)-)}\partial_{t}\partial_{s}R_{\epsilon}(t, u,s,v)\) reveals that \(I_{\epsilon}(t,s)\) scales as \(1/\epsilon\), so that one expects that \[A_{\rm e}^{\prime\prime}=\lim_{\epsilon\to 0^{+}}\iint_{\mathcal{D}_{ \epsilon}}e^{\frac{i+s}{\tau}}I_{\epsilon}(t,s)\,{\rm d}t{\rm d}s>0\,.\] To compute the exact value of \(A_{\rm e}^{\prime\prime}\), we perform the change of variable \(x=(t-s)/(\epsilon\tau)\Leftrightarrow s=t-\epsilon\tau x\) to write \[\iint_{\mathcal{D}_{\epsilon}}e^{\frac{t+s}{\tau}}I_{\epsilon}(t,s)\,{\rm d}t {\rm d}s=2\int_{-\infty}^{0}\left(\int_{0}^{1}\epsilon\tau e^{-\epsilon x}I_{ \epsilon}(t,t+\epsilon\tau x)\,{\rm d}x\right)e^{\frac{2t}{\tau}}{\rm d}t\,,\] where the function \(\epsilon e^{-\frac{t\tau}{\tau}}I_{\epsilon}(t,t+\epsilon x)\) remains of order one on \(\mathcal{D}_{\epsilon}\) in the limit of instantaneous synapses. Actually, one can compute that \[\lim_{\epsilon\to 0^{+}}\epsilon e^{-\epsilon x}I_{\epsilon}(t,t+\epsilon\tau x )=\frac{2\tau}{2\tau}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}^{2}}{W_{\rm e} +W_{\rm i}}e^{-\frac{\epsilon}{\tau}(W_{\rm e}+W_{\rm i})}\left(1-e^{-2(1-x) (W_{\rm e}+W_{\rm i})}\right)\right]e^{2bta_{\rm ei,2}}\,.\] Then, for dealing with positive, continuous, uniformly bounded functions, one can safely exchange the integral and limit operations to get \[A_{\rm e}^{\prime\prime} =2\int_{-\infty}^{0}\left(\int_{0}^{1}\lim_{\epsilon\to 0^{+}} \epsilon\tau e^{-\epsilon x}I_{\epsilon}(t,t+\epsilon\tau x)\,{\rm d}x\right) e^{\frac{2t}{\tau}}{\rm d}t\,,\] \[=\left(\int_{-\infty}^{0}e^{2\frac{1}{\tau}(1+a_{\rm ei,2})}\,{ \rm d}t\right)\left(\int_{0}^{1}b\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}^{2} }{W_{\rm e}+W_{\rm i}}e^{-\frac{\epsilon}{\tau}(W_{\rm e}+W_{\rm i})}\left(1 -e^{-2\left(1-\frac{\epsilon}{\tau}\right)(W_{\rm e}+W_{\rm i})}\right)\right] \,{\rm d}x\right)\,,\] \[=\frac{b\tau}{2(1+a_{\rm ei,2})}\,\mathbb{E}_{\rm ei}\left[\frac {W_{\rm e}^{2}}{(W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W_{\rm i})^ {2}}\right)\right]\,.\] A similar calculation for the quadratic cross term \(B_{\rm ei}^{\prime\prime}\) yields \[B_{\rm ei}^{\prime\prime}=\frac{2c_{\rm ei}}{1+a_{\rm ei,2}}\quad\text{with} \quad c_{\rm ei}=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}W_{ \rm i}}{(W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W_{\rm i})^{2}} \right)\right]\,.\] In order to express \(A_{\rm e}^{\prime\prime}\) in term of \(c_{\rm ei}\), we need to introduce the quantity \(a_{\rm e,12}=a_{\rm e,1}-a_{\rm e,2}\) which satisfies \[a_{\rm e,12} =b\tau\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}}{(W_{\rm e}+W_{ \rm i})}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)\right]-\frac{1}{2}\mathbb{E} _{\rm ei}\left[\frac{W_{\rm e}}{(W_{\rm e}+W_{\rm i})}\left(1-e^{-(W_{\rm e}+W_ {\rm i})}\right)^{2}\right]\,,\] \[=b\tau\mathbb{E}_{\rm 
ei}\left[\frac{W_{\rm e}}{(W_{\rm e}+W_{ \rm i})}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)\left(1-\frac{1}{2}\left(1-e^{ -(W_{\rm e}+W_{\rm i})}\right)\right)\right]\,,\] \[=b\tau\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}}{(W_{\rm e}+W_{ \rm i})}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)\left(\frac{\left(1+e^{-(W_{ \rm e}+W_{\rm i})}\right)}{2}\right)\right]\,,\] \[=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}}{(W_{ \rm e}+W_{\rm i})}\left(1+e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\,,\] With the above observation, we remark that \[(1+a_{\rm ei,2})A_{\rm e}^{\prime\prime}-a_{\rm e,12} =\frac{b\tau}{2}\left(\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}^{2} }{(W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2} \right]-\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}}{(W_{\rm e}+W_{\rm i})}\left(1 -e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\right)\,,\] \[=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}^{2}-W_{ \rm e}(W_{\rm e}+W_{\rm i})}{(W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W _{\rm i})}\right)^{2}\right]\,,\] \[=-\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}W_{ \rm i}}{(W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2} \right]\,,\] \[=-c_{\rm ei}\] so that we have the following compact expression for the quadratic diagonal term \[A_{\rm e}^{\prime\prime}=\frac{a_{\rm e,12}-c_{\rm ei}}{1+a_{\rm ei,2}}\,.\] ## Appendix I Compact variance expression Our goal is find a compact, interpretable formula for the stationary variance \(\mathbb{V}\left[V\right]\) from the knowledge of the quadratic form \[\mathbb{E}\left[V^{2}\right]=A_{\rm e}V_{\rm e}^{2}+B_{\rm ei}V_{\rm e}V_{\rm i }+A_{\rm i}V_{\rm i}^{2}+\left(V_{\rm e}B_{\rm eI}+V_{\rm i}B_{\rm iI}\right) \left(I/G\right)+A_{I}(I/G)^{2}\,.\] Let us first assume no current injection, \(I=0\), so that one only has to keep track of the quadratic terms. Specifying the quadratic coefficient \(A_{\rm e}=A_{\rm e}^{\prime}+A_{\rm e}^{\prime\prime}\), \(A_{\rm i}=A_{\rm i}^{\prime}+A_{\rm i}^{\prime\prime}\) and \(B_{\rm ei}=B_{\rm ei}^{\prime}+B_{\rm ei}^{\prime\prime}\) in Eq. (II), we get \[\mathbb{E}\left[V^{2}\right] = \left(\frac{a_{\rm e,1}(2a_{\rm e,2}-a_{\rm e,1})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}+\frac{a_{\rm e,12}-c_{\rm ei}}{1+a_{\rm ei,2}}\right)V _{\rm e}^{2}\] \[+\left(\frac{a_{\rm e,1}(2a_{\rm i,2}-a_{\rm i,1})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}+\frac{2c_{\rm ei}}{1+a_{\rm ei,2}}\right)V_{\rm e}V_{ \rm i}\] \[+\left(\frac{a_{\rm i,1}(2a_{\rm i,2}-a_{\rm i,1})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}+\frac{a_{\rm i,12}-c_{\rm ei}}{1+a_{\rm ei,2}}\right)V _{\rm i}^{2}\,,\] \[= \left(\frac{a_{\rm e,1}(2a_{\rm e,2}-a_{\rm e,1})+(1+a_{\rm e,1} +a_{\rm i,1})(a_{\rm e,1}-a_{\rm e2})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})} \right)V_{\rm e}^{2}\] \[+\left(\frac{a_{\rm e,1}(2a_{\rm i,2}-a_{\rm i,1})+a_{\rm i,1}(2a _{\rm e,2}-a_{\rm e,1})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}\right)V_{\rm e}V _{\rm i}\] \[+\left(\frac{a_{\rm i,1}(2a_{\rm i,2}-a_{\rm i,1})+(1+a_{\rm e,1} +a_{\rm i,1})(a_{\rm i,1}-a_{\rm i2})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})} \right)V_{\rm i}^{2}-\frac{c_{\rm ei}}{1+a_{\rm ei,2}}(V_{\rm e}-V_{\rm i})^ {2}\,,\] where we collect separately all the terms containing the coefficient \(c_{\rm ei}\) and where we use the facts that by definition \(a_{\rm e,12}=a_{\rm e,1}-a_{\rm e,2}\), \(a_{\rm i,12}=a_{\rm i,1}-a_{\rm i,2}\), and \(a_{\rm ei,1}=a_{\rm e,1}+a_{\rm i,1}\). 
Expanding and simplifying the coefficients of \(V_{\rm e}^{2}\) and \(V_{\rm i}^{2}\) above yield \[\mathbb{E}\left[V^{2}\right] = \left(\frac{a_{\rm e,1}a_{\rm e,2}+(1+a_{\rm i,1})(a_{\rm e,1}-a _{\rm e2})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}\right)V_{\rm e}^{2}\] \[+\left(\frac{a_{\rm e,1}(2a_{\rm i,2}-a_{\rm i,1})+a_{\rm i,1}(2a _{\rm e,2}-a_{\rm e,1})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}\right)V_{\rm e}V _{\rm i}\] \[+\left(\frac{a_{\rm i,1}a_{\rm i,2}+(1+a_{\rm e,1})(a_{\rm i,1}-a _{\rm i2})}{(1+a_{\rm ei,1})(1+a_{\rm ei,2})}\right)V_{\rm i}^{2}-\frac{c_ {\rm ei}}{1+a_{\rm ei,2}}(V_{\rm e}-V_{\rm i})^{2}\,.\] Then, we can utilize the expression above for \(\mathbb{E}\left[V^{2}\right]\) together with the stationary mean formula \[\mathbb{E}\left[V\right]=\frac{a_{\rm e,1}V_{\rm e}+a_{\rm i,1}V_{\rm i}}{1+a _{\rm ei,1}}\,,\] (II) to write the variance \(\mathbb{V}\left[V\right]=\mathbb{E}\left[V^{2}\right]-\mathbb{E}\left[V \right]^{2}\) as \[\mathbb{V}\left[V\right] = \left(\frac{(a_{\rm e,1}-a_{\rm e,2})(1+a_{\rm i,1})^{2}+(a_{\rm i,1}-a_{\rm i,2})a_{\rm e,1}^{2}}{(1+a_{\rm ei,1})^{2}(1+a_{\rm ei,2})} \right)V_{\rm e}^{2}\] \[-\left(\frac{(a_{\rm e,1}-a_{\rm e,2})a_{\rm e,1}(1+a_{\rm e,1})+a _{\rm i,1}(a_{\rm i,1}-a_{\rm i,2})(1+a_{\rm i,1})}{(1+a_{\rm ei,1})^{2}(1+a _{\rm ei,2})}\right)V_{\rm e}V_{\rm i}\] \[+\left(\frac{(a_{\rm i,1}-a_{\rm i,2})(1+a_{\rm e,1})^{2}+(a_{\rm e,1}-a_{\rm e,2})a_{\rm i,1}^{2}}{(1+a_{\rm ei,1})^{2}(1+a_{\rm ei,2})} \right)V_{\rm i}^{2}-\frac{c_{\rm ei}}{1+a_{\rm ei,2}}(V_{\rm e}-V_{\rm i}) ^{2}\,.\] To factorize the above expression, let us reintroduce \(a_{\mathrm{e},12}=a_{\mathrm{e},1}-a_{\mathrm{e},2}\) and \(a_{\mathrm{i},12}=a_{\mathrm{i},1}-a_{\mathrm{i},2}\) and collect the terms where these two coefficients occur. This yields \[\mathbb{V}\left[V\right] =\frac{a_{\mathrm{e},12}}{(1+a_{\mathrm{e},1})^{2}(1+a_{\mathrm{ e},2})}\left((1+a_{\mathrm{i},1})^{2}V_{\mathrm{e}}^{2}-a_{\mathrm{i},1}(1+a_{ \mathrm{e},1})^{2}V_{\mathrm{e}}V_{\mathrm{i}}+(a_{\mathrm{e},1})^{2}V_{ \mathrm{i}}^{2}\right)\] \[\quad+\frac{a_{\mathrm{i},12}}{(1+a_{\mathrm{e},1})^{2}(1+a_{ \mathrm{e},2})}\left((1+a_{\mathrm{e},1})^{2}V_{\mathrm{i}}^{2}-a_{\mathrm{e}, 1}(1+a_{\mathrm{i},1})^{2}V_{\mathrm{e}}V_{\mathrm{i}}+(a_{\mathrm{i},1})^{2} V_{\mathrm{e}}^{2}\right)\] \[\quad-\frac{c_{\mathrm{ei}}}{1+a_{\mathrm{e},2}}(V_{\mathrm{e}}- V_{\mathrm{i}})^{2}\,,\] \[=\frac{a_{\mathrm{e},12}}{1+a_{\mathrm{e},2}}\left(\frac{(1+a_{ \mathrm{i},1})V_{\mathrm{e}}-a_{\mathrm{e},1}V_{\mathrm{i}}}{1+a_{\mathrm{e}, 1}}\right)^{2}+\frac{a_{\mathrm{i},12}}{1+a_{\mathrm{e},2}}\left(\frac{(1+a_{ \mathrm{i},e})V_{\mathrm{i}}-a_{\mathrm{i},1}V_{\mathrm{e}}}{1+a_{\mathrm{e}, 1}}\right)^{2}\] \[\quad-\frac{c_{\mathrm{ei}}}{1+a_{\mathrm{e},2}}(V_{\mathrm{e}}- V_{\mathrm{i}})^{2}\,.\] Finally, injecting the expression of stationary mean Eq. (I1) in both parentheses above produces the compact formula \[\mathbb{V}\left[V\right] =\frac{a_{\mathrm{e},12}}{1+a_{\mathrm{e},2}}\left(V_{\mathrm{e}} -\mathbb{E}\left[V\right]\right)^{2}+\frac{a_{\mathrm{i},12}}{1+a_{\mathrm{e},1}}\left(V_{\mathrm{i}}-\mathbb{E}\left[V\right]\right)^{2}-\frac{c_{\mathrm{ ei}}}{1+a_{\mathrm{e},2}}(V_{\mathrm{e}}-V_{\mathrm{i}})^{2}\,,\] (I2) which is the same as the one given in Eq. (30). ## Appendix J Factorized variance expression In this appendix, we reshape the variance expression given in Eq. (I2) under a form that is clearly nonnegative. 
To this end, let us first remark that the calculation in Appendix H shows that \[a_{\mathrm{e},12}-c_{\mathrm{ei}}=\frac{b\tau}{2}\mathbb{E}_{\mathrm{ei}} \left[\frac{W_{\mathrm{e}}^{2}}{(W_{\mathrm{e}}+W_{\mathrm{i}})^{2}}\left(1+ e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right)^{2}\right]\,.\] Then, setting \((V_{\mathrm{e}}-V_{\mathrm{i}})^{2}=((V_{\mathrm{e}}-\mathbb{E}\left[V\right] )-(V_{\mathrm{i}}-\mathbb{E}\left[V\right]))^{2}=(V_{\mathrm{e}}-\mathbb{E} \left[V\right])^{2}-2(V_{\mathrm{e}}-\mathbb{E}\left[V\right])(V_{\mathrm{i}} -\mathbb{E}\left[V\right])+(V_{\mathrm{i}}-\mathbb{E}\left[V\right])^{2}\) in Eq. (I2), we obtain \[\mathbb{V}\left[V\right] =\frac{1}{1+a_{\mathrm{e},2}}\left(a_{\mathrm{e},12}(V_{\mathrm{ e}}-\mathbb{E}\left[V\right])^{2}+a_{\mathrm{i},12}(V_{\mathrm{i}}-\mathbb{E} \left[V\right])^{2}-c_{\mathrm{ei}}(V_{\mathrm{e}}-V_{\mathrm{i}})^{2}\right)\,,\] \[=\frac{1}{1+a_{\mathrm{e},2}}\left((a_{\mathrm{e},12}-c_{\mathrm{ ei}})(V_{\mathrm{e}}-\mathbb{E}\left[V\right])^{2}+2c_{\mathrm{ei}}(V_{\mathrm{e}}- \mathbb{E}\left[V\right])(V_{\mathrm{i}}-\mathbb{E}\left[V\right])+(a_{ \mathrm{i},12}-c_{\mathrm{ei}})(V_{\mathrm{i}}-\mathbb{E}\left[V\right])^{2} \right)\,,\] \[=\frac{b\tau}{1+a_{\mathrm{e},2}}\] \[\qquad\mathbb{E}_{\mathrm{ei}}\left[\left(\frac{W_{\mathrm{e}}^ {2}(V_{\mathrm{e}}-\mathbb{E}\left[V\right])^{2}}{2(W_{\mathrm{e}}+W_{\mathrm{ i}})^{2}}+\frac{2W_{\mathrm{e}}(V_{\mathrm{e}}-\mathbb{E}\left[V\right])W_{\mathrm{i}}(V_{ \mathrm{i}}-\mathbb{E}\left[V\right])}{2(W_{\mathrm{e}}+W_{\mathrm{i}})^{2}}+ \frac{W_{\mathrm{i}}^{2}(V_{\mathrm{i}}-\mathbb{E}\left[V\right])^{2}}{2(W_{ \mathrm{e}}+W_{\mathrm{i}})^{2}}\right)\left(1-e^{-(W_{\mathrm{e}}+W_{\mathrm{ i}})}\right)^{2}\right]\,,\] \[=\frac{b\tau}{2(1+a_{\mathrm{e},2})}\mathbb{E}_{\mathrm{ei}} \left[\left(\frac{\left[W_{\mathrm{e}}(V_{\mathrm{e}}-\mathbb{E}\left[V \right])+W_{\mathrm{i}}(V_{\mathrm{i}}-\mathbb{E}\left[V\right])\right]^{2}}{ (W_{\mathrm{e}}+W_{\mathrm{i}})^{2}}\right)\left(1-e^{-(W_{\mathrm{e}}+W_{ \mathrm{i}})}\right)^{2}\right]\,.\] (J1) Note that the above quantity is clearly non negative as any variance shall be. From there, one can include the impact of the injected current \(I\) by further considering all the terms in Eq. (I1), including the linear and inhomogeneous current-dependent terms. Similar algebraic manipulations confirm that Eq. 
(J1) remains valid so that the only impact of \(I\) is via altering the expression \(\mathbb{E}\left[V\right]\), so that we ultimately obtain the following explicit compact form: \[\mathbb{V}\left[V\right] =\frac{\mathbb{E}_{\mathrm{ei}}\left[\left(\frac{W_{\mathrm{e}}V_{ \mathrm{e}}+W_{\mathrm{i}}V_{\mathrm{i}}}{W_{\mathrm{e}}+W_{\mathrm{i}}}- \mathbb{E}\left[V\right]\right)^{2}\left(1-e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})} \right)^{2}\right]}{2/(b\tau)+\mathbb{E}_{\mathrm{ei}}\left[\left(1-e^{-2(W_{ \mathrm{e}}+W_{\mathrm{i}})}\right)\right]}\quad\text{with}\quad\mathbb{E} \left[V\right]=\frac{b\tau\mathbb{E}_{\mathrm{ei}}\left[\left(\frac{W_{ \mathrm{e}}V_{\mathrm{e}}+W_{\mathrm{i}}V_{\mathrm{i}}}{W_{\mathrm{e}}+W_{ \mathrm{i}}}\right)\left(1-e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})}\right)\right]+I/G} {1+b\tau\mathbb{E}_{\mathrm{ei}}\left[\left(1-e^{-(W_{\mathrm{e}}+W_{\mathrm{i}})} \right)\right]}\,.\] The above expression shows that as expected \(\mathbb{V}\left[V\right]\geq 0\) and that the variability vanishes if and only if \(W_{\mathrm{e}}/W_{\mathrm{i}}=(\mathbb{E}\left[V\right]-V_{\mathrm{i}})/(V_{ \mathrm{e}}-\mathbb{E}\left[V\right])\) with probability one. In turn plugging this relation into the mean voltage expression and solving for \(\mathbb{E}\left[V\right]\) reveals that we necessarily have \(\mathbb{E}\left[V\right]=I/G\). This is consistent with the intuition that variability can only vanish if excitation and inhibition perfectly cancel one another. ## Appendix K Variance in the small-weight approximation In this appendix, we compute the simplified expression for the variance \(\mathbb{V}\left[V\right]\) obtained via the small-weight approximation. Second, let us compute the small-weight approximation of the second-order efficacy \[c_{\rm ei}=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}W_{\rm i}}{( W_{\rm e}+W_{\rm i})^{2}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\simeq \frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[W_{\rm e}W_{\rm i}\right]=\frac{b\tau }{2}w_{\rm e}w_{\rm i}\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right]\,,\] which amounts to compute the expectation of the crossproduct of the jumps \(k_{\rm e}\) and \(k_{\rm i}\). To estimate the above approximation, it is important to remember that first that \(p_{\rm e,k}\) and \(p_{\rm i,k}\) are not defined as the marginals of \(p_{\rm ei,kl}\), but as conditional marginals, for which we have \(p_{\rm e,k}=(b/b_{\rm e})\sum_{l=0}^{K_{\rm i}}p_{\rm ei,kl}\) and \(p_{\rm i,l}=(b/b_{\rm i})\sum_{k=0}^{K_{\rm e}}p_{\rm ei,kl}\). Then by the definition of the correlation coefficient \(\rho_{\rm ei}\) in Eq. (9), we have \[\rho_{\rm ei}=\frac{b\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right]}{ \sqrt{K_{\rm e}b\mathbb{E}_{\rm ei}\left[k_{\rm e}\right]K_{\rm i}b\mathbb{E} _{\rm ei}\left[k_{\rm i}\right]}}=\frac{b\mathbb{E}_{\rm ei}\left[k_{\rm e}k_ {\rm i}\right]}{\sqrt{K_{\rm e}b\mathbb{E}_{\rm e}\left[k_{\rm e}\right]K_{ \rm i}b\mathbb{E}_{\rm i}\left[k_{\rm i}\right]}}=\frac{b\mathbb{E}_{\rm ei} \left[k_{\rm e}k_{\rm i}\right]}{K_{\rm e}K_{\rm i}\sqrt{r_{\rm e}r_{\rm i}}}\,,\] as the rates \(b_{\rm e}\) and \(b_{\rm i}\) are such that \(b_{\rm e}\mathbb{E}_{\rm e}\left[k_{\rm e}\right]=K_{\rm e}r_{\rm e}\) and \(b_{\rm i}\mathbb{E}_{\rm e}\left[k_{\rm i}\right]=K_{\rm i}r_{\rm i}\). 
As a result, we obtain a simplified expression for the cross-correlation coefficient: \[c_{\rm ei}=(\rho_{\rm ei}\sqrt{r_{\rm e}r_{\rm i}}\tau/2)(K_{\rm e}w_{\rm e} )(K_{\rm i}w_{\rm i})\,.\] Observe that as expected, \(c_{\rm ei}\) vanishes when \(\rho_{\rm ei}=0\). Second, let us compute the small-weight approximation of the second-order efficacy \[a_{\rm e,12}=\frac{b\tau}{2}\mathbb{E}_{\rm ei}\left[\frac{W_{\rm e}}{W_{\rm e }+W_{\rm i}}\left(1-e^{-(W_{\rm e}+W_{\rm i})}\right)^{2}\right]\simeq\frac{b \tau}{2}\mathbb{E}_{\rm ei}\left[W_{\rm e}(W_{\rm e}+W_{\rm i})\right]=\frac{ b\tau}{2}\left(w_{\rm e}^{2}\mathbb{E}_{\rm ei}\left[k_{\rm e}^{2} \right]+w_{\rm e}w_{\rm i}\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i}\right] \right)\,.\] To estimate the above approximation, we use the definition of the correlation coefficient \(\rho_{\rm e}\) in Eq. (7), \[\rho_{\rm e}=\frac{b_{\rm e}\mathbb{E}_{\rm e}\left[k_{\rm e}(k_{\rm e}-1) \right]}{b_{\rm e}\mathbb{E}_{\rm e}\left[k_{\rm e}\right](K_{\rm e}-1)}=\frac {b\mathbb{E}_{\rm ei}\left[k_{\rm e}(k_{\rm e}-1)\right]}{K_{\rm e}(K_{\rm e}- 1)r_{\rm e}}\,,\] as the rate \(b_{\rm e}\) is such that \(b_{\rm e}\mathbb{E}_{\rm e}\left[k_{\rm e}\right]=K_{\rm e}r_{\rm e}\). This directly implies that \[b\mathbb{E}_{\rm ei}\left[k_{\rm e}^{2}\right]=b\mathbb{E}_{\rm ei}\left[k_{ \rm e}(k_{\rm e}-1)\right]+b\mathbb{E}_{\rm ei}\left[k_{\rm e}\right]=\rho_{ \rm e}K_{\rm e}(K_{\rm e}-1)r_{\rm e}+K_{\rm e}r_{\rm e}=K_{\rm e}r_{\rm e}(1+ \rho_{\rm e}(K_{\rm e}-1))\,.\] so that we evaluate \[a_{\rm e,12}=\frac{b\tau}{2}\left(w_{\rm e}^{2}\mathbb{E}_{\rm ei}\left[k_{ \rm e}^{2}\right]+w_{\rm e}w_{\rm i}\mathbb{E}_{\rm ei}\left[k_{\rm e}k_{\rm i }\right]\right)=\frac{r_{\rm e}\tau}{2}K_{\rm e}(1+\rho_{\rm e}(K_{\rm e}-1))w _{\rm e}^{2}+\rho_{\rm ei}\frac{\sqrt{r_{\rm i}r_{\rm e}}\tau}{2}(K_{\rm e}w_ {\rm e})(K_{\rm i}w_{\rm i})\,,\] which simplifies to \(a_{\rm e,12}=(r_{\rm e}\tau/2)K_{\rm e}(1+\rho_{\rm e}(K_{\rm e}-1))w_{\rm e }^{2}\) when excitation and inhibition act independently. A symmetric expression holds for the inhibitory efficacy \(a_{\rm i,12}\). Plugging the above expressions for synaptic efficacies in the variance expression Eq. (30) yields the small-weight approximation \[\mathbb{V}\left[V\right]\simeq\frac{(1+\rho_{\rm e}(K_{\rm e}-1))K _{\rm e}r_{\rm e}w_{\rm e}^{2}(V_{\rm e}-\mathbb{E}\left[V\right])^{2}+(1+ \rho_{\rm i}(K_{\rm i}-1))K_{\rm i}r_{\rm i}w_{\rm i}^{2}(V_{\rm i}-\mathbb{E }\left[V\right])^{2}}{2(1/\tau+K_{\rm e}r_{\rm e}w_{\rm e}+K_{\rm i}r_{\rm i}w _{\rm i})}\] \[+\frac{\rho_{\rm ei}\sqrt{r_{\rm e}r_{\rm i}}(K_{\rm e}w_{\rm e})( K_{\rm i}w_{\rm i})\big{[}(V_{\rm e}-\mathbb{E}\left[V\right])^{2}+(V_{\rm i}- \mathbb{E}\left[V\right])^{2}-(V_{\rm e}-V_{\rm i})^{2}\big{]}}{2(1/\tau+K_{ \rm e}r_{\rm e}w_{\rm e}+K_{\rm i}r_{\rm i}w_{\rm i})}\,.\] Let us note that the first-term in the right-hand side above represents the small-weight approximation of the voltage variance in the absence of correlation between excitation and inhibition, i.e., for \(\rho_{\rm ei}=0\). 
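
To give a concrete sense of these different contributions, the small-weight expression can be evaluated directly for plausible parameter values. The sketch below (all values illustrative; \(\mathbb{E}\left[V\right]\) is computed from the small-weight mean formula recalled just below) illustrates that within-pool synchrony \(\rho_{\rm e},\rho_{\rm i}\) inflates the voltage variance, whereas correlation between excitation and inhibition \(\rho_{\rm ei}\) reduces it.

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
tau, Ve, Vi = 20e-3, 1.0, -0.25
Ke, Ki, re, ri = 1000, 250, 5.0, 10.0
we, wi = 0.001, 0.004

def smallweight_stats(rho_e, rho_i, rho_ei):
    denom = 1.0 / tau + Ke * re * we + Ki * ri * wi
    EV = (Ke * re * we * Ve + Ki * ri * wi * Vi) / denom          # small-weight mean
    var = (
        (1 + rho_e * (Ke - 1)) * Ke * re * we**2 * (Ve - EV) ** 2
        + (1 + rho_i * (Ki - 1)) * Ki * ri * wi**2 * (Vi - EV) ** 2
        + rho_ei * np.sqrt(re * ri) * (Ke * we) * (Ki * wi)
        * ((Ve - EV) ** 2 + (Vi - EV) ** 2 - (Ve - Vi) ** 2)
    ) / (2 * denom)
    return EV, var

for rho_e, rho_i, rho_ei in [(0.0, 0.0, 0.0), (0.03, 0.03, 0.0), (0.03, 0.03, 0.03)]:
    EV, var = smallweight_stats(rho_e, rho_i, rho_ei)
    print(f"rho_e={rho_e}  rho_i={rho_i}  rho_ei={rho_ei}:  E[V]={EV:.3f}  V[V]={var:.5f}")
```
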
Denoting the latter approximation by \(\mathbb{V}\left[V\right]|_{\rho_{\rm ei}=0}\) and using the fact that the small-weight expression for the mean voltage \[\mathbb{E}\left[V\right]=\frac{K_{\rm e}r_{\rm e}w_{\rm e}V_{\rm e}+K_{\rm i}r_{ \rm i}w_{\rm i}V_{\rm i}}{1/\tau+K_{\rm e}r_{\rm e}w_{\rm e}+K_{\rm i}r_{\rm i }w_{\rm i}}\,,\] is independent of correlations, we observe that as intuition suggests, synchrony-based correlation between excitation and inhibition results in a decrease of the neural variability: \[\Delta\mathbb{V}\left[V\right]_{\rho_{\rm ei}}=\mathbb{V}\left[V\right]- \mathbb{V}\left[V\right]\left|{}_{\rho_{\rm ei}=0}\simeq-\frac{\rho_{\rm ei} \sqrt{r_{\rm e}r_{\rm i}}(K_{\rm e}w_{\rm e})(K_{\rm i}w_{\rm i})(V_{\rm e}- \mathbb{E}\left[V\right])(\mathbb{E}\left[V\right]-V_{\rm i})}{1/\tau+K_{\rm e }r_{\rm e}w_{\rm e}+K_{\rm i}r_{\rm i}w_{\rm i}}\leq 0\,.\] However, the overall contribution of correlation is to increase variability in the small-weight approximation. This can be shown under the assumptions that \(K_{\mathrm{e}}\gg 1\) and \(K_{\mathrm{i}}\gg 1\), by observing that \[\Delta\mathbb{V}\left[V\right]_{\rho_{\mathrm{e}i,\rho_{\mathrm{e }i}}}=\mathbb{V}\left[V\right]-\mathbb{V}\left[V\right]|_{\rho_{\mathrm{e}i}= \rho_{\mathrm{e}i}=0} \simeq\frac{\left(\sqrt{\rho_{\mathrm{e}}r_{\mathrm{e}}}K_{\mathrm{e }}w_{\mathrm{e}}(V_{\mathrm{e}}-\mathbb{E}\left[V\right])-\sqrt{\rho_{ \mathrm{i}}r_{\mathrm{i}}}K_{\mathrm{i}}w_{\mathrm{i}}(V_{\mathrm{i}}-\mathbb{ E}\left[V\right])\right)^{2}}{2(1/\tau+K_{\mathrm{e}}r_{\mathrm{e}}w_{\mathrm{e}}+K_{ \mathrm{i}}r_{\mathrm{i}}w_{\mathrm{i}})}\] \[+(\sqrt{\rho_{\mathrm{e}}\rho_{\mathrm{i}}}-\rho_{\mathrm{e}i}) \frac{\sqrt{r_{\mathrm{e}}r_{\mathrm{i}}}(K_{\mathrm{e}}w_{\mathrm{e}})(K_{ \mathrm{i}}w_{\mathrm{i}})(V_{\mathrm{e}}-\mathbb{E}\left[V\right])(\mathbb{E} \left[V\right]-V_{\mathrm{i}})}{1/\tau+K_{\mathrm{e}}r_{\mathrm{e}}w_{ \mathrm{e}}+K_{\mathrm{i}}r_{\mathrm{i}}w_{\mathrm{i}}}\geq 0\,,\] where both terms are positive since we always have \(0\leq\rho_{\mathrm{e}i}\leq\sqrt{\rho_{\mathrm{e}}\rho_{\mathrm{i}}}\). ## Appendix L Validity of the small-weight approximation Biophysical estimates of the synaptic weights \(w_{\mathrm{e}}<0.01\), \(w_{\mathrm{i}}<0.04\) and the synaptic input numbers \(K_{\mathrm{e}}<10000\), \(K_{\mathrm{i}}<2500\), suggest that neurons operates in the small-weight regime. In this regime, we claim that exponential corrections due to finite-size effect can be neglected in the evaluation of synaptic efficacies, as long as the spiking correlations remains weak. Here, we make this latter statement quantitative by focusing on the first-order efficacies in the case of excitation alone. The relative error due to neglecting exponential corrections can be quantified as \[\mathcal{E}=\frac{\mathbb{E}_{\mathrm{e}}\left[W_{\mathrm{e}}\right]-\mathbb{ E}_{\mathrm{e}}\left[1-e^{-W_{\mathrm{e}}}\right]}{\mathbb{E}_{\mathrm{e}} \left[1-e^{-W_{\mathrm{e}}}\right]}\geq 0\,.\] Let us evaluate this relative error, assumed to be small, when correlations are parametrized via beta distributions with parameter \(\beta_{\mathrm{e}}=1/\rho_{\mathrm{e}}-1\). 
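
This relative error can also be evaluated exactly from the beta-binomial jump distribution of Appendix B, without any expansion. A short numerical sketch (illustrative parameter values; \(\rho_{\rm e}=0.05\) throughout):

```python
import numpy as np
from scipy.special import gammaln, betaln, digamma

def jump_pmf(K, beta):
    """p_k = C(K, k) * B(k, beta + K - k) / (psi(K + beta) - psi(beta)), for k = 1..K."""
    k = np.arange(1, K + 1)
    log_p = gammaln(K + 1) - gammaln(k + 1) - gammaln(K - k + 1) + betaln(k, beta + K - k)
    return k, np.exp(log_p) / (digamma(K + beta) - digamma(beta))

def relative_error(K, w, rho):
    """Exact E = (E[W] - E[1 - exp(-W)]) / E[1 - exp(-W)] with W = w * k_e."""
    k, p = jump_pmf(K, 1.0 / rho - 1.0)
    return np.sum(w * k * p) / np.sum((1.0 - np.exp(-w * k)) * p) - 1.0

for K, w in [(100, 0.01), (1000, 0.001), (1000, 0.01)]:
    print(f"K={K:>5}, w={w}:  relative error = {relative_error(K, w, rho=0.05):.3%}")
```
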
Assuming correlations to be weak, \(\rho_{\mathrm{e}}\ll 1\), amounts to assuming large, \(\beta_{\mathrm{e}}\gg 1\) Under the assumptions of small error, we can compute \[\mathbb{E}_{\mathrm{e}}\left[1-e^{-W_{\mathrm{e}}}\right]\simeq\mathbb{E}_{ \mathrm{e}}\left[W_{\mathrm{e}}\right]=w_{\mathrm{e}}\mathbb{E}_{\mathrm{e}} \left[k_{\mathrm{e}}\right]\quad\text{and}\quad\mathbb{E}_{\mathrm{e}}\left[W _{\mathrm{e}}-1+e^{-W_{\mathrm{e}}}\right]\simeq\mathbb{E}_{\mathrm{e}}\left[W _{\mathrm{e}}^{2}\right]/2=w_{\mathrm{e}}^{2}\mathbb{E}_{\mathrm{e}}\left[k_{ \mathrm{e}}^{2}\right]/2\,,\] By the calculations carried out in Appendix L we have \[b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}}\left[k_{\mathrm{e}}\right]=K_{\mathrm{ e}}r_{\mathrm{e}}\quad\text{and}\quad b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}} \left[k_{\mathrm{e}}^{2}\right]=K_{\mathrm{e}}r_{\mathrm{e}}(1+\rho_{\mathrm{ e}}(K_{\mathrm{e}}-1))\,.\] Remembering that \(\beta_{\mathrm{e}}=1/\rho_{\mathrm{e}}-1\), this implies that we have \[\mathcal{E}\simeq\frac{\mathbb{E}_{\mathrm{e}}\left[W_{\mathrm{e}}^{2}\right] /2}{\mathbb{E}_{\mathrm{e}}\left[W_{\mathrm{e}}\right]-\mathbb{E}_{\mathrm{e }}\left[W_{\mathrm{e}}^{2}\right]/2}\simeq\frac{w_{\mathrm{e}}(1+\rho_{ \mathrm{e}}(K_{\mathrm{e}}-1))/2}{1-w_{\mathrm{e}}(1+\rho_{\mathrm{e}}(K_{ \mathrm{e}}-1))/2}\,,\] For a correlation coefficient \(\rho_{\mathrm{e}}\leq 0.05\), this means that neglecting exponential corrections incurs less than a \(e=3\%\) error if the number of inputs is smaller than \(K_{\mathrm{e}}\leq 1000\) for moderate synaptic weight \(w_{\mathrm{e}}=0.001\) or than \(K_{\mathrm{e}}\leq 100\) for large synaptic weight \(w_{\mathrm{e}}=0.01\). ## Appendix M Infinite-size limit with spiking correlations The computation of the first two moments \(\mathbb{E}\left[V\right]\) and \(\mathbb{E}\left[V^{2}\right]\) requires to evaluate various efficacies as expectations. Upon inspection, these expectations are all of the form \(b\mathbb{E}_{\mathrm{e}i}\left[f(W_{\mathrm{e}},W_{\mathrm{i}})\right]\), where \(f\) is a smooth positive function that is bounded on \(\mathbb{R}^{+}\times\mathbb{R}^{+}\) with \(f(0,0)=0\). Just as for the Levy-Khintchine decomposition of stable jump processes [71; 72], this observation allows one to generalize our results to processes that exhibit and countable infinity of jumps over finite, nonzero time intervals. For our parametric forms based on beta distributions, such processes emerge in the limit of an arbitrary large number of inputs, i.e., for \(K_{\mathrm{e}},K_{\mathrm{i}}\to\infty\). Let us consider the case of excitation alone for simplicity. Then, we need to make sure that all expectations of the form \(b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}i}\left[f(W_{\mathrm{e}})\right]\) remain well-posed in the limit \(K_{\mathrm{e}}\to\infty\) for smooth, bounded test function \(f\) with \(f(0)=0\). To check this, observe that for all \(0<k\leq K_{\mathrm{e}}\), we have by Eq. (6) and Eq. (8) that \[b_{\mathrm{e}}p_{\mathrm{e},k}=\beta r_{\mathrm{e}}\binom{K_{\mathrm{e}}}{k}B(k,\beta+K_{\mathrm{e}}-k)=\beta r_{\mathrm{e}}\frac{\Gamma(K_{\mathrm{e}}+1)}{ \Gamma(k+1)\Gamma(K_{\mathrm{e}}-k+1)}\frac{\Gamma(k)\Gamma(\beta+K_{\mathrm{e} }-k+1)}{\Gamma(\beta+K_{\mathrm{e}})}\,,\] where we have introduce the Gamma function \(\Gamma\). 
Rearranging terms and using the fact that \(\Gamma(z+1)=z\Gamma(z)\) for all \(z>0\), we obtain \[b_{\mathrm{e}}p_{\mathrm{e},k}=\frac{\beta r_{\mathrm{e}}}{k}\frac{K_{\mathrm{e}}\Gamma(K_{\mathrm{e}})}{\Gamma(\beta+K_{\mathrm{e}})}\frac{\Gamma(\beta+K_{\mathrm{e}}-k)}{(K_{\mathrm{e}}-k)\Gamma(K_{\mathrm{e}}-k)}=\frac{\beta r_{\mathrm{e}}}{k}\left(1-\frac{k}{K_{\mathrm{e}}}\right)^{\beta-1}+o\left(\frac{1}{K_{\mathrm{e}}}\right)\,,\] where the last equality is uniform in \(k\) and follows from the fact that for all \(x>0\), we have \[\frac{\Gamma(z+x)}{\Gamma(z)}=z^{x}\left(1+\binom{x}{2}\frac{1}{z}+o\left(\frac{1}{z}\right)\right)\,,\quad z\to\infty\,.\] From there, given a test function \(f\), let us consider \[b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}}\left[f(W_{\mathrm{e}})\right] =\int\sum_{k=1}^{K_{\mathrm{e}}}b_{\mathrm{e}}p_{\mathrm{e},k}\delta\left(W_{\mathrm{e}}-\frac{k\Omega_{\mathrm{e}}}{K_{\mathrm{e}}}\right)f(W_{\mathrm{e}})\,\mathrm{d}W_{\mathrm{e}}\,,\] \[=\sum_{k=1}^{K_{\mathrm{e}}}b_{\mathrm{e}}p_{\mathrm{e},k}f\left(\frac{k\Omega_{\mathrm{e}}}{K_{\mathrm{e}}}\right)\,,\] \[=r_{\mathrm{e}}\sum_{k=1}^{K_{\mathrm{e}}}\frac{\beta}{k}\left(1-\frac{k}{K_{\mathrm{e}}}\right)^{\beta-1}f\left(\frac{k\Omega_{\mathrm{e}}}{K_{\mathrm{e}}}\right)+o(1)\,.\] The order zero term above can be interpreted as a Riemann sum so that one has \[\lim_{K_{\mathrm{e}}\to\infty}b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}}\left[f(W_{\mathrm{e}})\right] =r_{\mathrm{e}}\lim_{K_{\mathrm{e}}\to\infty}\frac{1}{K_{\mathrm{e}}}\sum_{k=1}^{K_{\mathrm{e}}}\frac{\beta K_{\mathrm{e}}}{k}\left(1-\frac{k}{K_{\mathrm{e}}}\right)^{\beta-1}f\left(\frac{k\Omega_{\mathrm{e}}}{K_{\mathrm{e}}}\right)\,,\] \[=r_{\mathrm{e}}\int_{0}^{1}\beta\theta^{-1}(1-\theta)^{\beta-1}f(\theta\Omega_{\mathrm{e}})\,\mathrm{d}\theta\,,\] \[=r_{\mathrm{e}}\int_{0}^{\Omega_{\mathrm{e}}}\frac{\beta}{w}\left(1-\frac{w}{\Omega_{\mathrm{e}}}\right)^{\beta-1}f(w)\,\mathrm{d}w\,.\] Thus, the jump density is specified via the Levy-Khintchine measure \[\nu_{\mathrm{e}}(w)=\frac{\beta}{w}\left(1-\frac{w}{\Omega_{\mathrm{e}}}\right)^{\beta-1}\,,\] which is a deficient measure as it admits a pole at zero. This singular behavior indicates that the limit jump process obtained when \(K_{\mathrm{e}}\to\infty\) has a countable infinity of jumps within any finite, nonempty time interval. Generic stationary jump processes with independent increments, as is the case here, are entirely specified by their Levy-Khintchine measure \(\nu_{\mathrm{e}}\) [71; 72]. Moreover, one can check that given knowledge of \(\nu_{\mathrm{e}}\), one can consistently estimate the corresponding pairwise spiking correlation as \[\rho_{\mathrm{e}}=\lim_{K_{\mathrm{e}}\to\infty}\frac{\mathbb{E}_{\mathrm{e}}\left[k_{\mathrm{e}}(k_{\mathrm{e}}-1)\right]}{\mathbb{E}_{\mathrm{e}}\left[k_{\mathrm{e}}\right](K_{\mathrm{e}}-1)}=\lim_{K_{\mathrm{e}}\to\infty}\frac{b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}}\left[(k_{\mathrm{e}}/K_{\mathrm{e}})^{2}\right]}{b_{\mathrm{e}}\mathbb{E}_{\mathrm{e}}\left[k_{\mathrm{e}}/K_{\mathrm{e}}\right]}=\frac{\int_{0}^{\Omega_{\mathrm{e}}}w^{2}\nu_{\mathrm{e}}(w)\,\mathrm{d}w}{\Omega_{\mathrm{e}}\int_{0}^{\Omega_{\mathrm{e}}}w\nu_{\mathrm{e}}(w)\,\mathrm{d}w}\,.\]
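As a quick numerical sanity check of this last identity (not part of the original derivation), one can verify by quadrature that the Levy-Khintchine measure \(\nu_{\mathrm{e}}\) above reproduces the pairwise spiking correlation used to parametrize it, \(\rho_{\mathrm{e}}=1/(\beta+1)\). The short sketch below assumes \(\Omega_{\mathrm{e}}=1\) for simplicity, and all names are illustrative only.

```python
# Sanity check: the correlation recovered from the Levy-Khintchine measure
#   nu_e(w) = (beta / w) * (1 - w / Omega_e)**(beta - 1)
# via rho_e = int w^2 nu_e dw / (Omega_e * int w nu_e dw) should equal
# 1 / (beta + 1), consistent with the parametrization beta_e = 1 / rho_e - 1.
from scipy.integrate import quad

def rho_from_measure(beta, omega_e=1.0):
    nu = lambda w: (beta / w) * (1.0 - w / omega_e) ** (beta - 1.0)
    m1 = quad(lambda w: w * nu(w), 0.0, omega_e)[0]       # first moment
    m2 = quad(lambda w: w ** 2 * nu(w), 0.0, omega_e)[0]  # second moment
    return m2 / (omega_e * m1)

for rho_e in (0.01, 0.05, 0.2):
    beta = 1.0 / rho_e - 1.0
    print(rho_e, rho_from_measure(beta))  # recovered value matches rho_e
```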
2305.00833
Learning to Reason and Memorize with Self-Notes
Large language models have been shown to struggle with multi-step reasoning, and do not retain previous reasoning steps for future use. We propose a simple method for solving both of these problems by allowing the model to take Self-Notes. Unlike recent chain-of-thought or scratchpad approaches, the model can deviate from the input context at any time to explicitly think and write down its thoughts. This allows the model to perform reasoning on the fly as it reads the context and even integrate previous reasoning steps, thus enhancing its memory with useful information and enabling multi-step reasoning. Experiments across a wide variety of tasks demonstrate that our method can outperform chain-of-thought and scratchpad methods by taking Self-Notes that interleave the input text.
Jack Lanchantin, Shubham Toshniwal, Jason Weston, Arthur Szlam, Sainbayar Sukhbaatar
2023-05-01T14:02:48Z
http://arxiv.org/abs/2305.00833v2
# Learning to Reason and Memorize with Self-Notes ###### Abstract Large language models have been shown to struggle with limited context memory and multi-step reasoning. We propose a simple method for solving both of these problems by allowing the model to take _self-notes_. Unlike recent scratchpad approaches, the model can deviate from the input context at any time to explicitly think. This allows the model to recall information and perform reasoning on the fly as it reads the context, thus extending its memory and enabling multi-step reasoning. Our experiments on multiple tasks demonstrate that our method can successfully generalize to longer and more complicated instances than their training setup by taking self-notes at inference time. Machine Learning, Self-Notes ## 1 Introduction Transformers (Vaswani et al., 2017) and similar variants have shown impressive results on sequence-based tasks (Brown et al., 2020). Notably, large language models (LMs) such as GPT-3 (Brown et al., 2020) use transformers and are capable of solving various NLP tasks such as question answering (QA). When an LM is used for a QA task, it is fed a context prompt containing factual information along with a question, and then the model generates the answer directly, as shown in Fig. 1 (top). However, this autoregressive "one-step" approach struggles with multi-step reasoning tasks (Austin et al., 2021; Press et al., 2022; Creswell et al., 2023). We argue that this arises from the fact that vanilla LMs have a fixed computation for each token, and do not have the option to "think" more depending on the current context. Recently, Nye et al. (2021) proposed the use of a scratchpad that allows the model to generate reasoning tokens before answering the question, but _after_ it has read the full context and question, illustrated in Fig. 1 (middle). Similarly, chain-of-thought prompting methods (Wei et al., 2022; Zelikman et al., 2022; Huang et al., 2022) push the model to explain its answer one step at a time, leading to more coherent final answers. In addition to the "one-step" problem, transformers, as feed-forward models, lack memory for state-tracking and solving highly nonlinear tasks (Fan et al., 2020), something that recurrent predecessor models such as the LSTM (Hochreiter and Schmidhuber, 1997) are well equipped for. Modifications to the feed-forward transformer architecture that use a recurrent mechanism improve state-tracking results (Fan et al., 2020; Ju et al., 2022; Hutchins et al., 2022), but still use a fixed amount of computation for a given prompt. In this paper, we propose an approach that simultaneously makes the challenges in multi-step reasoning and state-tracking memory more tractable. Our method, "_Self-Notes_", allows the LM to deviate from the context prompt on the fly to generate explicit reasoning tokens. Unlike a scratchpad, the model can interleave generated tokens with the input context as demonstrated in Fig. 1 (bottom). Such Self-Notes can act as both explicit intermediate reasoning steps and memory for state-tracking. Specifically, if a reasoning step requires combining two facts, the resulting inference can be written into a Self-Note and used for future reasoning, thus acting as an intermediate reasoning step. For example, given "Alice has the box" and "Alice is at the park", one can infer "_The box is at the park_" and write it to a Self-Note, which can be further combined with a future statement "The key is in the box" to conclude that "_The key is at the park_".
Additionally, the Self-Note can act as a form of working memory because the model can write the latest state of an entity as new tokens while it traverses the context. For example, in a programming environment, assume x=5 initially, and then x gets incremented by 1. Assuming the model correctly writes x=\(6\) as a Self-Note, it can safely remove the original x=\(5\) statement from its context. If the model is then asked about the value of x, it already has the answer. The main difference between our proposed method and prior work such as scratchpads (Nye et al., 2021), chain-of-thought (Wei et al., 2022), or inner monologue (Huang et al., 2022) is that we allow the model to explicitly write out multiple notes _as it reads_ each context statement sequentially. In other words, our approach is an _in-line_ form of scratchpad that augments the context with information which might be useful for future reasoning. We view this as a form of reading (and writing) between the lines to infer information that isn't explicitly stated, similar to how humans read (van den Broek et al., 2009). Prior methods allow the model to ruminate after it reads the full context, forcing it to do a large chunk of reasoning at the end, rather than while it's reading. Furthermore, such post-context reasoning cannot act as memory because earlier context tokens may already be out of the model's context window before the reasoning starts. For example, consider an intelligent agent with weeks or months of interaction history. Intuitively, it makes sense for it to be able to use reasoning steps it made in previous interactions without thinking again from scratch. To teach the model to generate Self-Notes, during training we consider providing the language model with ground truth Self-Notes as part of the input context. During inference, the model can deviate from the context and generate a Self-Note if it generates a special token learned during training. When the model finishes generating a Self-Note, the original context tokens will continue to be fed. This allows the model to reason and create memory while processing input tokens, not just at the end. We also propose semi-supervised and unsupervised methods for training Self-Notes. We test our method on five text datasets designed to evaluate multi-step reasoning and state-tracking: a proposed synthetic Toy-Story task, two synthetic program evaluation tasks (Fan et al., 2020; Anil et al., 2022), and two real-world chess game tasks (Toshniwal et al., 2022). Our method outperforms both a fine-tuned language model that does not do any explicit note-taking and a scratchpad baseline. ## 2 Method Let us consider an autoregressive transformer model \(\mathcal{M}\) that predicts the next token in a sequence \[x_{t+1}=\mathcal{M}(x_{1},...,x_{t}).\] Such a model, \(\mathcal{M}\), is the foundation of many tasks like language modeling and question answering. In such tasks, the model is given a context \(C=\{x_{1},...,x_{t}\}\) and potentially a question \(Q\) as input and asked to generate \(A\), which is the sequence of next words or an answer to a question. Our Self-Notes method expands the capability of \(\mathcal{M}\) by allowing it to enrich the context \(C\) with "note tokens" \(n_{i}\) before producing the final output \(A\). Note tokens share the same vocabulary as input tokens, but they are generated by the model itself.
Self-Notes generated in this way can interleave with the context tokens and therefore can be used for writing down a newly inferred fact or tracking variable values. While processing input tokens \(x_{t}\in C\) one by one, the model can start taking a note by generating a token that belongs to a predefined set of start tokens \(N_{\text{sta}}\). A note ends when the model generates an end token \(n_{i}\in N_{\text{end}}\), or after a fixed number of tokens are generated. Once the note ends, the generated note tokens are appended to the context where the start token was generated, and the model continues to process the rest of the input tokens. For example, a context \(C=\{x_{1},x_{2},x_{3},x_{4}\}\) can be enriched to become \(\{x_{1},x_{2},n_{1},n_{2},n_{3},x_{3},x_{4}\}\) if the start token is generated after \(x_{2}\): \[n_{1} =\mathcal{M}(x_{1},x_{2})\in N_{\text{sta}}\] \[n_{2} =\mathcal{M}(x_{1},x_{2},n_{1})\notin N_{\text{end}}\] \[n_{3} =\mathcal{M}(x_{1},x_{2},n_{1},n_{2})\in N_{\text{end}}.\] By repeating this mechanism, the context \(C\) can be enriched with multiple notes at different locations. An overview of our method is shown in Figure 1 (bottom). The model can use notes as a form of working memory by writing information that might be useful in the future. It can also use a note as an intermediate reasoning step by inferring new facts as it reads. In particular, it can ask a question and answer it within the note. This is useful in multi-step reasoning where a final question requires answering multiple sub-questions. Unlike implicit reasoning occurring internally within \(\mathcal{M}\), Self-Notes are fed back to the model, making them available to future reasoning steps. Figure 1: **(top) Baseline vanilla LM directly generates the answer (A) given the context (C) and the question (Q). (middle) Scratchpad allows the model to generate intermediate reasoning tokens before answering the question but after it has seen the context. (bottom) Our Self-Notes method allows the model to deviate from the input context at any time to reason and take notes.** This feedback loop also allows the model to overcome the limitation of transformers as a feedforward network (Fan et al., 2020), making it possible to do state-tracking. ### Supervised Self-Notes One way to train \(\mathcal{M}\) to generate useful notes is to use supervised learning on data that is enriched with "ground-truth" Self-Notes interspersed within the context. This training procedure is simple as we just have to train \(\mathcal{M}\) on this enriched data using the standard LM training loss. After training, we can use \(\mathcal{M}\) to generate Self-Notes, so we can apply it to test data that does not contain any Self-Notes or reasoning labels. \(\mathcal{M}\) can generate a Self-Note at test time by predicting the next token in the context to be from \(N_{\text{sta}}\). ### Semi-supervised Self-Notes We also consider a semi-supervised setting where only a subset of the training samples have ground truth Self-Notes. In this case, we prepend a special token \(s\) to training samples without Self-Notes and train all samples with the standard LM loss: \(C=\{s,x_{1},...,x_{t}\}\). As a result, the model is conditioned to generate Self-Notes during test time because the test context does not contain the special token \(s\) prefix. This signals to the model that it should do extra reasoning and generate Self-Notes.
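To make the note-taking mechanism above concrete, the following is a minimal sketch (not the authors' released code) of Self-Notes decoding with a Hugging Face causal LM: context tokens are fed one at a time, and whenever the predicted next token is a note-start token, the model generates note tokens that are spliced into the sequence before reading continues. The token-id sets, the greedy decoding, and the optional `boost` factor (anticipating the unsupervised variant below) are illustrative assumptions rather than the paper's exact setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def generate_with_self_notes(model, context_ids, note_start_ids, note_end_ids,
                             max_note_len=32, boost=1.0):
    """Interleave model-generated note tokens with the given context tokens.
    No KV caching is used, for clarity; the full prefix is re-encoded each step."""
    device = next(model.parameters()).device
    seq = []  # enriched sequence: context tokens plus spliced-in note tokens
    for tok in context_ids:
        seq.append(tok)
        logits = model(torch.tensor([seq], device=device)).logits[0, -1]
        if boost != 1.0:  # optionally boost note-start tokens (unsupervised case)
            logits[list(note_start_ids)] += torch.log(torch.tensor(float(boost)))
        nxt = int(logits.argmax())
        if nxt in note_start_ids:          # the model decides to take a note
            seq.append(nxt)
            for _ in range(max_note_len):  # greedily generate the note body
                logits = model(torch.tensor([seq], device=device)).logits[0, -1]
                nxt = int(logits.argmax())
                seq.append(nxt)
                if nxt in note_end_ids:
                    break
    return seq  # answer decoding proceeds from this enriched context

# Hypothetical usage with GPT-2 and " So" / "." as start / end markers:
# tok = GPT2Tokenizer.from_pretrained("gpt2")
# model = GPT2LMHeadModel.from_pretrained("gpt2")
# ids = tok.encode("Alice is at the park. Bob is with Alice.")
# enriched = generate_with_self_notes(model, ids,
#                                     note_start_ids={tok.encode(" So")[0]},
#                                     note_end_ids={tok.encode(".")[0]})
```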
### Unsupervised Self-Notes Finally, we introduce a method for utilizing Self-Notes when no ground truth note is available for training. This method relies on the fact that when the model is trained using the LM loss on all tokens in a QA task, it learns to not only generate answers but also questions. We leverage this property by letting the model generate its own questions and insert their answers as Self-Notes (i.e., interleaved throughout the context) during test time. If we train the model to predict the final question and answer with varying length samples, the model will learn to generate a question after any number of statements. At the same time, we allow the model to write a Self-Note after each intermediate statement. Assuming the model has learned how to answer the shorter samples, it is likely to write the correct value in the intermediate locations. It can then leverage that information on the longer samples. If the relevant intermediate questions are asked and answered, this will make it easier to answer the final question. We consider two potential problems with approach. The first problem is that as the context is enriched by Self-Notes, it can become longer than what the model has seen during training, or it can contain new tokens that it didn't see in the context during training. A simple solution is to fine-tune the model on the Self-Notes enriched samples during training. The training procedure therefore has two simultaneous objectives: learn how to write Self-Note QA pairs after any number of context tokens, and leverage the written Self-Note answers for the final question. The second problem is that the model might not ask enough questions because training samples contain only one final question. We solve this by simply multiplying the probability of generating a Self-Note start token (any of the tokens in \(N_{\text{sta}}\)), by a "boosting" constant \(B>1\). Furthermore, since we can sample questions during enrichment, we can generate multiple versions of enrichment per sample, then we can select the enrichment that leads to the most confident answer. ## 3 Experiments We compare against two baseline methods: a vanilla transformer language model, and a transformer language model trained to generate a chain-of-thought "scratchpad". The _Vanilla_ baseline is the pretrained GPT-2 base model (Radford et al., 2019) from Hugging Face (Wolf et al., 2020) fine-tuned to predict answer tokens given only the context and question. For the _Scratchpad_ baseline, we fine-tune the same GPT-2 model to write a scratchpad of reasoning steps after it has seen the context and question, similar to Nye et al. (2021). For the proposed _Self-Notes_ model, we fine-tune GPT-2 to take Self-Notes. During testing, no ground-truth scratchpad or Self-Notes are provided, but both Scratchpad and Self-Notes models are allowed to generate tokens in addition to the answer. ### Tasks In this section, we explain each task we test our models on. Table 1 shows a sample for each task with each different method: Vanilla, Scratchpad, and Self-Notes. For each task, we evaluate on both an in-distribution and out-of-distribution (OOD) test set. A summary of the dataset statistics is given in Appendix Table 7. **Toy-Story.** As we read a story, we are often required to infer things that are not explicitly mentioned (van den Broek et al., 2009). For example, reading "Frodo went to Mount Doom. Sam accompanied him.", a reader can infer that "_Sam went to Mount Doom_". 
Making such inferences in an online manner as we're reading the story makes it easier to understand the rest of the story. Such forward reasoning is natural for understanding sequential stories like books or movies. It is also more fitting for dialog models as such a model needs to make inferences as conversation happens and respond accordingly. In contrast, backward reasoning starts with a question and tries to find the relevant facts from a given context to answer it, potentially leading to a more narrow understanding of context. Here we introduce a new synthetic QA task for testing the ability of language models to do forward reasoning. The task is to answer a question after reading a short story that consists of multiple sentences. Each sentence states a simple relation between people, items, and places such as "Alice is at the park" or "The ball is in the bag". There are 5 different types of relations. This dataset is inspired by the bAbI tasks (Weston et al., 2016), with greater controllability of required reasoning steps. Unlike bAbI, our dataset mixes different reasoning steps to create more "hops" in order to answer a question. The challenge in this dataset is that by applying pragmatic principles, unseen relations can be inferred from observed relations. For example, given the text "Alice is at the park. Bob is with Alice.", we can infer that "Bob is at the park.". Furthermore, a newly inferred relation can lead to inference of another unseen relation. In the previous example, if the next sentence is "Bob has the key.", then we can infer that "The key is at the park" using our previous inference about Bob's location. This recursive inference in Toy-Story makes it possible to create questions that require multi-step reasoning. We call a question \(k\)-hop if it requires \(k\) observations combined through \(k\)-\(1\) reasoning steps (1-hop questions only require repeating of an observed fact). While a backward reasoning model needs to take \(k\) reasoning steps to answer a \(k\)-hop question, a forward reasoning model will infer unseen relations as each sentence is processed. As a result, forward reasoning can uncover all relations by the end of the story and can answer any question with no additional reasoning. Considering the relevance of forward-reasoning to the Toy-Story task, Self-Notes is therefore a natural fit. Self-Notes should explicitly infer all implied relations. For this dataset, the Self-Note start and end tokens are "So:" and ".", respectively. Following the start token, the model can ask and answer a question, e.g., "So: Where is Bob? Bob is at the park.". The Scratchpad method should infer the same relations, but it will be forced to do it after the question is asked, requiring backward-reasoning. To test generalization, we train the model on 10k 1-hop and 2-hop queries, and test on 3-hop and 4-hop queries. If the model correctly learns to infer relations during training, then it can easily answer 3 and 4-hop queries by inferring the intermediate (2-hop) relations. Specifically, by writing a Self-Note, the model can turn a 3-hop query into two separate 2-hop queries. **Algorithmic.** While the Toy-Story task is designed for testing multi-step reasoning, it doesn't require tracking the state or value of an entity over multiple steps since it assumes the world is static. 
To evaluate state-tracking, we adopt the Algorithmic task from (Fan et al., 2020), which requires printing \begin{table} \begin{tabular}{l l l l} \hline \hline **Task** & **Vanilla** & **Scratchpad** & **Self-Notes** \\ \hline Toy-Story & Mary has the ball. & Mary has the ball. & Mary has the ball. \\ & The ball is inside the box. & The ball is inside the box. & The ball is inside the box. \\ & The key is inside the box. & The key is inside the box. & SG: Who has the box. \\ & O: Who has the key? & Q: Who has the key? & Mary has the box. \\ & Mary has the key. & Mary has the box. & SG: Who has the key? \\ & & & & & \\ \hline Algorithmic & e = 3 ; e ++; & e = 3 ; if i \textless{e} ; e ++; & i = 3 ; if i \textless{e} ; e ++; & print e e = 4j \\ & print e e = 5 ; & print e = 3 ; if i \textless{e} ; e ++; & print e e = 5 ; \\ \hline Boolean Van- able & w = False ; v = True ; & w = False ; v = True ; & w = False ; v = True ; \\ & v = w xor v ; & v = w xor v ; & v = w xor v ; & print e = True ; \\ & print w True ; & print w & w = False ; v = True ; \\ & & & & \\ \hline Chess Piece- type & c2 c4 e7 e5 g2 g3 b8 c6 f1 g2 g6 & c2 c4 e7 e5 g2 g3 b8 c6 f1 g2 g8 & c2 c4 e7 e5 g2 g3 b8 c6 f1 g2 g8 & c2 c4 e7 e5 g2 g3 \\ & f6 b1 c3 f8 b4 c3 & f6 b1 c3 f8 b4 c3 & b8 c6 f1 g2 g8 & N6 f6 b1 \\ & PIECE N & PIECE & c3 f8 b4 c3 & f6 b4 c3 \\ & & & & & \\ & & & & & \\ & & & & & \\ & & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Input-Output pairs for the Vanilla, Scratchpad, and Self-Notes method across four different tasks. The input consists of the input context and the question, the answer is to be generated by the model. The highlighted text for Scratchpad and Self-Notes is available only during training, and is to be generated by the model during inference. the state, or value, of an _integer_ variable given a sequence of algorithmic program statements such as increment, decrement, and conditionals. While the original task has separate input and label tokens, we unify them into a single sequence to fit the language modeling task. In this dataset, the context is the sequence of statements, e.g. "x = 1 ; x++ ; y = 3 ; if x > 2: y++ ;", and the final question is to print the last value of one of the variables, e.g. "print x". For the Self-Notes model, the notes are print statements specifying the intermediate value of a certain variable as they are modified. For example, if the previous statement was "x++,", the Self-Note would be to print the value of x: "print x x = 1?". The Self-Note start token is "print" and the end token is ";". The Scratchpad method generates the identical print statements, but it also has to copy all of the original statements to the scratchpad and figure out where to insert the prints, thus introducing an additional "alignment" complexity in comparison to the Self-Notes method. We train the models on 2 to 100 statements in each sample. We test on 2-100 (in-distribution) and 101-200 (OOD) statements. **Boolean Variable.** In this task, the context consists of a valid Python program where each statement contains a _boolean_ variable assignment operation. The question is to print the final value of an arbitrary variable (True or False). The main difference to the Algorithmic task is that the statements are constrained to boolean logic operations. We use the "chain-like" split from Anil et al. (2022), which consists only of operations that compose the values of already defined variables. This results in long chains of dependencies between values of the variable. 
Similar to the Algorithmic task, Self-Notes prints the value of the variable that was modified in the previous statement. The start and end tokens are "print" and ";", respectively. Following Anil et al. (2022), we train on 3-8 statements and test on 3-8 and 9-19 statements. **Chess Piecetype.** The goal of this task is to track the state of chess pieces over a sequence of moves in a real chess game (Toshniwal et al., 2022). Chess games written in UCI notation consist of a sequence of (start position, end position) pairs, e.g. c2 c4, which denotes a move of the piece from c2 to c4.1 The piecetypes of the moves are never given explicitly, but since each game begins with pieces in the same positions, the piecetypes can be implicitly tracked given the moves. In other words, since each game starts with a pawn (P) at board position c2, we know that given the move c2 c4, there is now a pawn at position c4. In this task, given a long sequence of moves, e.g. "c2 c4 e7 e5 g2 g3 b8 c6 f1 g2 g8 f6 b1 c3 f8 b4 c6", the objective is to predict the piece at the last position mentioned ("c6"). Footnote 1: To ease tokenization, we split a move in UCI notation from c2c4 to c2 c4. We add all the 64 board squares to the language model’s vocabulary. For our proposed method, we consider the Self-Notes to be the piecetypes. That is, the start tokens \(N_{\text{sta}}\) are the set of piecetypes (P, R, N, B, Q, K) and there is no end token. A Self-Note is inserted after the start position of each move to explicitly remind the model which piecetype is at that position. So the previous example would be written as "c2 P c4 e7 P e5 g2 P g3 b8 N c6 f1 B g2 g8 N f6 b1 N c3 f8 B b4 c6", and it is therefore much easier with Self-Notes to predict the piecetype at "c6", since we know that the last piece moved to "c6" was a knight during the move "b8 N c6". To test length generalization, we consider a different number of moves during training and testing. We train our models on 200k samples which include up to 80 moves. We evaluate on both up to 80 moves (in-distribution) as well as more than 80 moves (OOD). **Chess Move.** This task is to predict the end position of the current move given the start position (Toshniwal et al., 2022). For example, given the sequence of moves "c2 c4 e7 e5 g2 g3 b8 c6 f1 g2 g8 f6 b1 c3 f8 b4 c6", the answer is the ground-truth end position of the move made in the game: "e5". This task is harder than the Chess Piecetype task as the model needs to learn state tracking, chess rules, and chess strategy in order to predict the most likely move. The Self-Notes are the same as in the Chess Piecetype task, where the model is trained to generate the piece at each starting square as it makes a move. We report the exact match accuracy. In Table 1, the Chess Move task is the same as Chess Piecetype, but the answer is the next board position. We use the same train/valid/test split as the Chess Piecetype task. ## 4 Results ### Supervised Self-Notes Table 2 shows the results for the five tasks described in Section 3.1. **Toy-Story.** For both the 3-hop and 4-hop settings, we see that the Self-Notes model substantially outperforms the Vanilla model which has to perform multi-step reasoning in "one-step". We observe a slight improvement of the Self-Notes model over the Scratchpad model. We reason that the drop in Scratchpad's performance has to do with the model having to postpone its reasoning until after processing the entire input context, which increases the distance between the input context and the reasoning.
In comparison, the Self-Notes model writes reasoning tokens on the fly as the relevant facts are stated. We note that for this task, the full context fits into the GPT-2 context window. **Algorithmic.** We observe that the Vanilla GPT-2 model struggles to track the state of the variables over many statements, and significantly worsens for OOD sequence lengths. Self-Notes, which allows the model to generate intermediate print statements, achieves high accuracy on both the in-distribution and OOD statement splits. Scratchpad fails at most examples since the context length exceeds the maximum length of GPT-2 (1024 tokens). This leads to a worse performance than the Vanilla model because it tries to write the scratchpad which involves copying the original context, but then runs out of room and can't answer the question. These results show a significant advantage of our method: as long as the model takes a Self-Note about a variable, it will keep it in the memory by pushing its value to the most recent context. The Scratchpad method has to copy the entire context in its scratchpad, often going past the maximum context length, resulting in poor accuracy. **Boolean Variable.** Unlike the Algorithmic task, none of the models run out of context length for this task since there are fewer statements. Therefore, Scratchpad is able to perform similarly to Self-Notes. However, we still see a small increase in performance with Self-Notes, likely due to copy alignment errors in Scratchpad. Both improve over Vanilla. **Chess Piecetype and Chess Move.** The chess tasks primarily measure the ability of a model to track the identity and state of variables over a sequence of changes. In the Chess Piecetype task, both the Self-Notes and Scratchpad models outperform the Vanilla model. As with other tasks, this confirms that Vanilla transformers are improved with extra tokens in order to accurately track the state of a set of variables, particularly when the test-time sequence lengths vary from the training length. For Chess Piecetype, Self-Notes is not significantly better than Scratchpad. This is a fairly simple task for Scratchpad since it simply requires copying the piece at each move, assuming it knows where the pieces start. This is different from the Algorithmic and Boolean Variable tasks which not only need to copy the variable, but also increment, decrement, or negate it. In the Chess Move task, Self-Notes is slightly better than Vanilla, but Scratchpad is significantly worse than both. In this task, the Self-Notes and Scratchpad "note" tokens (pieces) are not the same as the final question (move board position). We hypothesize that Scratchpad cannot learn to simultaneously copy the identity of pieces _and_ predict the chess move. ### Semi-supervised Self-Notes Figure 2 shows the performance of the Self-Notes method with varying amounts of Self-Note supervision for the Toy-Story and the Algorithmic tasks. That is, we randomly sample some percentage of the training samples that get Self-Note supervision. For Toy-Story, we find that even Self-Note supervision using as little as 1% of the training set (100 samples), leads to performance gains over the Vanilla model, and the performance starts to saturate around 25% supervision. On the other hand, for the Algorithmic task, we observe gains with Self-Note supervision starting at around 5% supervision, and the performance steadily improves with more Self-Note supervision. 
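To make the Chess Piecetype Self-Notes format from Section 3.1 concrete, here is a small illustrative sketch (not code from the paper) that interleaves the piecetype letter after the start square of each UCI move, tracking the board from the standard starting position; castling, promotion, and en passant are ignored for brevity, and all names are hypothetical.

```python
# Build a "c2 P c4 e7 P e5 ..." style sequence by tracking which piece sits
# on each square, starting from the standard chess starting position.
def initial_board():
    board, back = {}, "RNBQKBNR"
    for i, f in enumerate("abcdefgh"):
        board[f + "1"] = back[i]; board[f + "2"] = "P"   # white pieces and pawns
        board[f + "8"] = back[i]; board[f + "7"] = "P"   # black pieces and pawns
    return board

def add_piecetype_notes(moves):
    """moves: flat list of squares like ['c2', 'c4', 'e7', 'e5', ...]."""
    board, out = initial_board(), []
    for start, end in zip(moves[::2], moves[1::2]):
        piece = board.pop(start)
        board[end] = piece            # captures simply overwrite the square
        out += [start, piece, end]    # Self-Note: piece letter after start square
    return " ".join(out)

print(add_piecetype_notes(["c2", "c4", "e7", "e5", "g2", "g3"]))
# -> "c2 P c4 e7 P e5 g2 P g3"
```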
\begin{table} \begin{tabular}{l|c|c} \hline \hline Dataset & Vanilla & Self-Notes (unsupervised) \\ \hline 1-var (20k) & 65.6 & 98.1 \\ 2-var (100k) & 76.1 & 86.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Algorithm unsupervised \begin{table} \begin{tabular}{l l l l l} \hline \hline **Task** & **Test Set** & **Vanilla** & **Scratchpad** & **Self-Notes** \\ \hline \multirow{3}{*}{**Toy-Story**} & **1/2-hop** & 92.4 \(\pm\)0.7 & 99.6 \(\pm\)0.1 & 99.8 \(\pm\)0.1 \\ & **3-hop*** & 57.0 \(\pm\)0.3 & 96.4 \(\pm\)0.9 & **98.5 \(\pm\)0.3** \\ & **4-hop*** & 37.4 \(\pm\)0.8 & 94.2 \(\pm\)2.0 & **97.8 \(\pm\)0.4** \\ \hline \multirow{2}{*}{**Algorithmic**} & **2-100** & 44.6 \(\pm\)1.0 & 72.2 \(\pm\)5.7 & **95.5 \(\pm\)0.2** \\ & **101-200*** & 24.4 \(\pm\)2.1 & 11.6 \(\pm\)2.0 & **85.0 \(\pm\)0.6** \\ \hline \multirow{2}{*}{**Boolean Variable**} & **3-8** & 99.7 \(\pm\)0.1 & 100.0 \(\pm\)0.0 & 100.0 \(\pm\)0.0 \\ & **9-19*** & 71.3 \(\pm\)0.8 & 73.7 \(\pm\)2.4 & **75.2 \(\pm\)2.1** \\ \hline \multirow{2}{*}{**Chess Piecetype**} & \(\leq\)**80** & 98.5 \(\pm\)0.4 & 98.5 \(\pm\)0.3 & 98.8 \(\pm\)0.2 \\ & \(\geq\)**81*** & 82.9 \(\pm\)2.3 & **94.7 \(\pm\)0.7** & **94.8 \(\pm\)0.7** \\ \hline \multirow{2}{*}{**Chess Move**} & \(\leq\)**80** & 49.0 \(\pm\)0.4 & 37.0 \(\pm\)0.8 & **50.8 \(\pm\)1.1** \\ & \(\geq\)**81*** & 39.8 \(\pm\)0.2 & 29.9 \(\pm\)0.8 & **41.8 \(\pm\)0.9** \\ \hline \hline \end{tabular} \end{table} Table 2: Test Accuracy (in %) for the reasoning and state-tracking tasks. “*” indicates out-of-distribution harder test settings. \begin{table} \begin{tabular}{l l l} \hline \hline **Method** & **3-hop** & **4-hop** \\ \hline Vanilla\({}^{2}\) & 79.4 & 57.9 \\ + Self-Notes & 79.7 & 61.8 \\ + Boost Questions & 82.4 & 68.2 \\ + Multi-sample & 91.3 & 79.1 \\ + Finetune & 94.2 & 85.8 \\ \hline \hline \end{tabular} \end{table} Table 3: Toy-Story setting without ground-truth notes. ### Unsupervised Self-Notes In our final set of experiments, we apply Self-Notes to a 1-variable Algorithmic task and Toy-Story task when we have no ground-truth Self-Notes to train on. First, we conducted experiments in the unsupervised setting for the 1-variable Algorithmic task. We train on datasets that contain varying length samples (i.e. varying numbers of algorithmic statements per sample), so the model will generate intermediate Self-Notes on its own in a QA form. In this task, we allow the model to generate Self-Notes, and then conditioning on the previous note to predict the next Self-Note and final answer during training, departing from the standard parallel training procedure. The model therefore has to do two simultaneous tasks during training, write the correct Self-Notes, and predict the final answer given the written Self-Notes. Since we only use 1-variable samples, it makes it straightforward to learn which Self-Note questions to write (it will always be print x, where x is the variable in that sample. We can see from Figure 3, that around 10k samples, the unsupervised Self-Notes method starts to learn how to write and leverage Self-Notes that improve accuracy over the Vanilla method. With 20k samples, the unsupervised Self-Notes method achieves near 100% accuracy, with a significant increase over the Vanilla model. The second task we consider in the unsupervised setting is Toy-Story. Here, the training data has 100k samples with 1 and 2 hop questions, but contains no Self-Notes. 
This task is more difficult since there are many more variables (people, objects, locations) and the model needs to ask the right questions in Self-Notes. We first train a Vanilla model to generate the final question and answer, with test accuracy shown at the top of Table 3. Next, we test the vanilla model with Self-Notes by allowing it to generate QAs as notes. Here, we only add the answer parts to the context because the model has never seen a question in the context during training. Additionally, because the model is trained on 1-hop questions, it often asks a question whose answer is already in the context. We ignore such duplicate answers and move on to the next context sentence. Simply adding Self-Notes to the Vanilla model during testing does not improve the performance much because the model is not trained to ask the right questions. We therefore encourage the model to ask more questions by boosting the probability of the question start token "Q:". Boosting Self-Notes with \(B=5\) does improve the performance over the Vanilla model. Furthermore, generating multiple different Self-Notes by sampling questions and selecting the most confident one also helps. This is likely because when the right question is asked and answered, the model becomes more confident in its final answer. Finally, finetuning the model on a Self-Note version of the original training data improves the performance, as it adapts to longer stories. In summary, we see a significant increase in accuracy over the Vanilla results by allowing the model to generate Self-Notes and finetuning the model on the generations. ### Ablations **Oracle vs no Self-Notes during inference.** We perform two ablations regarding Self-Notes during inference. The first is an upper bound where we provide 100% Self-Notes supervision to the model during both training and testing (rather than just training). The second is where we give 100% Self-Notes supervision during training, but restrict the model from generating Self-Notes during testing. This baseline analyzes whether Self-Notes-augmented training data can still help the model learn about the task in its weights even without using Self-Notes at test time. These results are shown in Table 5. As expected, oracle Self-Notes (100% Self-Note testing supervision) improves the performance. On the other hand, not allowing the model to generate Self-Notes leads to a drastic drop in performance due to distribution shift at inference time. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Train** & **Test** & \multicolumn{2}{c}{**Algorithmic**} & \multicolumn{2}{c}{**Toy-Story**} \\ **Self-Notes** & **Self-Notes** & **2-100** & **101-200*** & **3-hop*** & **4-hop*** \\ \hline 100\% & 100\% & 100.0 \(\pm\)0.0 & 100.0 \(\pm\)0.0 & 99.9 \(\pm\)0.1 & 99.8 \(\pm\)0.3 \\ 100\% & none & 21.3 \(\pm\)0.6 & 9.2 \(\pm\)0.5 & 37.7 \(\pm\)3.9 & 29.0 \(\pm\)1.6 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation comparing the performance of Self-Notes with (i) ground truth self-notes at test time, and (ii) abstaining from generation of self-notes during inference. Figure 2: Performance of the proposed Self-Notes method with varying amounts of Self-Notes supervision. **Dummy tokens: separating content and compute.** We introduced Self-Notes as a method for language models to do reasoning in the form of explicit tokens. To understand the value of these extra tokens, we seek to measure the contribution of the additional compute allotted by these tokens separately from that of their content.
We do this by inserting the dummy token "_" at various locations throughout the context as an ablation. We first consider the Toy-Story task. In the vanilla setting, there are only facts and the question (e.g., "Bob is in the park. Bob has the key. Where is the key?"). The first comparison is inserting a dummy after every fact, which we call "Naive Dummy" ("Bob is in the park. _ Bob has the key. _ Where is the key?"). The alternative, which we call "Self-Notes Dummy" is adding dummy tokens in the same locations where Self-Notes are written. In other words, at the locations where a relation between two facts can be inferred ("Bob is in the park. Bob has the key. _ Where is the key?"). Finally, we consider adding the dummy tokens where the Scratchpad tokens would be ("Bob is in the park. Bob has the key. Where is the key? _"). This setting adds the same amount of dummy tokens as are used in Self-Notes Dummy (only 1 in the example described). Figure 5 shows the results on the 3-hop and 4-hop test sets for the different dummy token settings compared to the natural language tokens used by Self-Notes and Scratchpad. Intelligently inserting dummy tokens into the positions where Self-Notes should be performs the best out of the four settings. Importantly, it is better than inserting at the end of the context, where Scratchpad tokens are added. This alludes to the fact that allowing the model to do extra computations in the middle of the context can be better than after the context.3 However, the gain from Self-Notes Dummy over other \begin{table} \begin{tabular}{l l l l} \hline \hline **Task** & **Vanilla** & **Dummy** & **Self-Notes** \\ \hline **Chess Piecetype** & 82.9 \(\pm\)2.3 & 84.8 \(\pm\)1.7 & **94.8**\(\pm\)0.7 \\ **Chess Move** & 39.8 \(\pm\)0.2 & 40.4 \(\pm\)1.2 & **41.8**\(\pm\)0.9 \\ **WikiText-103** & 25.9ppl & **24.9ppl** & n/a \\ \hline \hline \end{tabular} \end{table} Table 6: Results with Dummy Tokens. For the chess tasks, we report the results for the OOD setting (over 80 moves). Figure 4: Toy-story task Self-Notes vs Vanilla sample comparison. Figure 5: Ablation comparing the impact of (a) extra compute due to additional tokens, and (b) position of the additional tokens. Figure 3: Self-Notes Unsupervised vs Vanilla on the 1-variable Algorithmic task. dummy variants pales in comparison to the gains of actual Self-Notes over its dummy counterpart, suggesting that the content of the intermediate notes matters more than just the additional compute. We also test the usefulness of dummy tokens in three other settings: Chess Piecetype, Chess Move, and WikiText-103 language modeling. For the chess experiments, we compare the vanilla moves ("c2 c4"), a dummy token between move positions ("c2 _. c4"), and the generated (non-dummy) piece tokens in Self-Notes ("c2 P c4"). For WikiText-103 language modeling, we compare the vanilla text ("The cat sat on the mat") with dummy tokens inserted between each word in the text ("The _ cat _ sat _ on _ the _ mat"). There are no ground-truth Self-Notes for WikiText-103, so we neglect such experiments. Table 6 shows the accuracy results for chess and perplexity for WikiText-103. Dummy tokens are not reported in the accuracy or perplexity numbers. For each task, dummy tokens improves the vanilla setting by a small margin. **Labeled training set size comparison of Self-Notes vs Vanilla** Each Self-Notes training sample has intermediate questions and answers, therefore increasing the total number of QA pairs in the training set. 
For example, if the final question and answer is "Where is the ball? The ball is in the park", but there was a Self-Note in the middle of the context labeled "Who has the ball? Alice has the ball", then that sample has two QA pairs. We therefore also run a comparison of the total number of labelled QA pairs between Self-Notes and Vanilla. Specifically, the 10k Self-Notes training data for Toy-Story has 10k total samples, and therefore 10k final QA pairs. However, it also includes roughly 70k Self-Note QA pairs, which means that the total number of QA pairs is around 80k. Figure 4 shows the effect of increasing the training size for the Vanilla baseline compared to a fixed set of 10k Self-Notes training samples (80k labeled QA pairs) in the Toy-Story task. We see that the Self-Notes model with 10k samples still vastly outperforms the Vanilla model with a 1500% increase in training samples (and roughly a 100% increase in the number of QA pairs relative to Self-Notes). ## 5 Related Work There are several strands of related work, including prior work on using rationales, length extrapolation, and adaptive computation. **Implicit Reasoning.** bAbI (Weston et al., 2016) was a set of synthetic tasks for testing different reasoning capabilities and showed the advantage of attention-based models over recurrent neural networks (Sukhbaatar et al., 2015). Attention-based transformers (Vaswani et al., 2017) have since become a foundation of language-based reasoning (Devlin et al., 2019). However, the feedforward nature of transformers makes them unsuitable for state-tracking (Fan et al., 2020), and several recurrent versions have been proposed (Dehghani et al., 2019; Ju et al., 2022; Hutchins et al., 2022). Further, transformer-based large LMs have been shown to struggle at multi-step reasoning (Press et al., 2022). **Explicit Rationales.** Use of rationales has been explored for interpretability (Camburu et al., 2018), and for performing intermediate computations (Nye et al., 2021; Wei et al., 2022). In particular, the Scratchpad method by Nye et al. (2021) is closest to our proposed Self-Notes method, which can be interpreted as an _online_ variant of Scratchpad. Use of rationales for reasoning and arithmetic tasks, referred to as _chain-of-thought_, has been shown to be particularly beneficial for zero- and few-shot in-context learning with large language models (Wei et al., 2022; Kojima et al., 2022; Press et al., 2022). Zelikman et al. (2022) showed the possibility of bootstrapping from a small set of reasoning labels to a larger unlabeled dataset. Trivedi et al. (2022) propose interleaving chain-of-thought reasoning with knowledge retrieval steps (both after the context). However, as with Scratchpad, the _chain-of-thought_ reasoning is done after reading the entire input context rather than while reading it as in Self-Notes. Furthermore, this emergent capability of chain-of-thought prompting only appears in very large models, and we focus on studying the properties of the transformer architecture itself. **Length Extrapolation.** Length extrapolation, or generalization to longer instances during inference than those seen during training, is an important property for an intelligent agent. In the context of transformer-based models, length extrapolation has been explored for language modeling (Press et al., 2022), machine translation (Neishi and Yoshinaga, 2019; Kiyono et al., 2021), models trained on artificial datasets (Hupkes et al., 2020; Anil et al., 2022), and a variety of other tasks.
One of the reasons for the limited length generalization capability of transformer models is the way position is handled with learnable embeddings (Kiyono et al., 2021; Sinha et al., 2022). **Adaptive Computation.** When humans read and write text, we often spend a different amount of time per sentence. Transformers are designed to process and generate each sentence using the same amount of computation regardless of the complexity. Ideally, we would like to have models that spend more time on difficult pieces of text and less on simpler pieces. Several works have addressed the problem of fixed computation time (Graves, 2016; Bolukbasi et al., 2017; Banino et al., 2021), but require a modification to the training procedure or model architecture. Self-Notes can be viewed as a form of adaptive computation because the model decides when it wants to deviate from the context and "think" by generating supplementary tokens before processing the remaining text. Unlike previous adaptive computation approaches, Self-Notes can easily be applied to existing large language architectures and training procedures. **Editing of Generated Text.** Several works have introduced variants of the transformer architecture to allow insertions, deletions, and revisions to the generated text (Schick et al., 2023; Gu et al., 2019; Stern et al., 2019; Elgohary et al., 2019; Kim et al., 2022). Other works have generated "inner monologue" tokens as a form of intermediate reasoning after the model has processed a full prompt (Huang et al., 2022; Ahn et al., 2022). Contrary to these methods that revise post-context generations, Self-Notes revises the original prompt context by inserting tokens in the middle of it. ## 6 Conclusion We proposed a general method that allows language models to explicitly reason and memorize in the form of taking Self-Notes. Unlike scratchpad and chain-of-thought methods that postpone reasoning until all input tokens are processed, our method can deviate from the input sequence at any time to take Self-Notes. One advantage of interleaving reasoning with the context in this way is that the reasoning steps can be closer to their relevant context. Another advantage is that it can act as a recurrent memory, as the Self-Note answers are fed back to the model. Both these advantages make the method scale better to longer sequences unseen during training, as shown in our experiments. In addition, we showed that the amount of Self-Note supervision during training can be reduced without a significant performance drop. Future work should explore two complementary directions aimed at reducing the amount of supervision: (1) using reinforcement learning to discover the optimal Self-Notes, and (2) whether scale (very large models) makes it possible to ask good Self-Note questions out of the box. Another possible future direction is to combine our method with a scratchpad, which has the advantage of seeing the question and performing backward reasoning to reach the answer. Our experiments validate Self-Notes on the 124M parameter GPT-2 base model across five different synthetic and real-world tasks. Training a larger model takes significantly more resources and is left for future work.
2310.15736
Excitons and singlet fission at hybrid inorganic-organic semiconductor interfaces
Excitons in organic crystalline semiconductors play a crucial role in the operation of optoelectronic devices such as organic solar cells, light-emitting diodes, and photodetectors. The excitonic properties of materials are dramatically affected by the presence of surfaces and interfaces. In this work, we investigate the influence of a neutral hydrogen-passivated 1x2 reconstructed (100) silicon substrate on excitons within the crystalline tetracene layer deposited on the top of it. Our findings reveal that singlet excitons in the contact tetracene layer are situated within the continuum of unbound Wannier-Mott excitonic states in silicon, with noteworthy hybridization between these states. Consequently, in the contact tetracene layer, all singlet excitons exhibit a pronounced interlayer charge transfer character, while the triplet exciton remains confined to the tetracene layer. This makes the singlet fission effect highly improbable for the contact tetracene layer. Additionally, the presence of the silicon substrate results in a modification of the singlet-triplet gap by 144 meV. This change is solely attributed to the hybridization with excitons in silicon, which influences the exchange energy. Our results show that the dynamic dielectric screening caused by the substrate does not impact the singlet-triplet gap but alters the exciton binding energies.
M. V. Klymenko, L. Z. Tan, S. P. Russo, J. H. Cole
2023-10-24T11:21:27Z
http://arxiv.org/abs/2310.15736v1
# Excitons and singlet fission at hybrid inorganic-organic semiconductor interfaces ###### Abstract Excitons in organic crystalline semiconductors play a crucial role in the operation of optoelectronic devices such as organic solar cells, light-emitting diodes, and photodetectors. The excitonic properties of materials are dramatically affected by the presence of surfaces and interfaces. In this work, we investigate the influence of a neutral hydrogen-passivated 1x2 reconstructed (100) silicon substrate on excitons within the crystalline tetracene layer deposited on the top of it. Our findings reveal that singlet excitons in the contact tetracene layer are situated within the continuum of unbound Wannier-Mott excitonic states in silicon, with noteworthy hybridization between these states. Consequently, in the contact tetracene layer, all singlet excitons exhibit a pronounced interlayer charge transfer character, while the triplet exciton remains confined to the tetracene layer. This makes the singlet fission effect highly improbable for the contact tetracene layer. Additionally, the presence of the silicon substrate results in a modification of the singlet-triplet gap by 144 meV. This change is solely attributed to the hybridization with excitons in silicon, which influences the exchange energy. Our results show that the dynamic dielectric screening caused by the substrate does not impact the singlet-triplet gap but alters the exciton binding energies. ## 1 Introduction Singlet fission (SF) in organic crystals, first observed in 1965, [1] has garnered increasing attention in recent years due to its potential applications in photovoltaic devices. The main feature of this effect is the down-conversion of one singlet excited state into two long-lived triplet excitons avoiding thermal losses. These two triplet excitons can then be converted into four charge carriers, thus increasing the yield of charge carriers per photon. [2] Several photovoltaic devices based on the SF effect have been proposed. One of the most promising proposals considers SF in hybrid inorganic-organic semiconductor (HIOS) heterostructures, such as tetracene-silicon interfaces, which we also study in this work. [3, 4] The chain of transitions that leads to the generation of electron-hole pair in silicon as a result of SF in tetracene can be schematically written as: \[S_{0}S_{0}\xrightarrow{h\nu}S_{0}S_{1}\xrightarrow{k_{fis}}T_{1}T_{1} \xrightarrow{k_{tr}}S_{0}+2h_{Si}^{+}+2e_{Si}^{-} \tag{1}\] Here, a photon with energy \(h\nu\) excites the tetracene molecule from its ground state \(S_{0}\) to the singlet state \(S_{1}\). Through the SF effect, the latter decays into two triplets \(T_{1}\) with the rate \(k_{fis}\). The energy transfer of the triplet states across the interface, with the rate constant \(k_{tr}\), generates electron-hole pairs in silicon. In this setup, the organic layer enables efficient generation of excitons through SF, while the inorganic layer provides efficient separation and transport of charge carriers to the electrodes. The energy of the triplet exciton in the organic semiconductor should exceed the band gap of the inorganic semiconductor to facilitate resonant energy transfer. [2] This is a critical requirement for attaining high quantum efficiency and surpassing the limitations imposed by the Shockley-Queisser limit. [5] The triplet excitons should also be located in close proximity to the silicon substrate due to the short range nature of the Dexter exciton transfer. 
In what follows, we refer to the tetracene molecular layer in contact with silicon as the contact layer. The chain of the reactions (1) indicates that the concentration of electron-hole pairs in silicon is also dependent on the SF rate \(k_{fis}\) determining the concentration of triplets in the contact layer. Note that the intralayer diffusion of singlet excitons greatly exceeds the interlayer one.[6] As a result, the majority of singlet excitons, once generated, are unlikely to escape the layer in which they were generated within the timeframe of their lifetimes. This motivates interest in studying excitons in an organic semiconductor that are specifically located in the contact layer. The aim of this work is to estimate the effect of the silicon substrate on the exciton binding energies and its implications for the SF effect in the tetracene contact layer. The thermodynamics of SF reads that the rate of SF is larger when the singlet energy is slightly larger (exothermic SF), or at least not much smaller (endothermic SF) than twice the triplet exciton energy.[2, 7] This states so-called thermodynamic and kinetic conditions for SF.[8] Another requirement concerns the wave function of excitons: the wave function for \(S_{1}\) should manifest a charge transfer character, ensuring a significant overlap with the wave functions of the intermediate states along the pathway to the eventual triplet states.[9, 10, 11, 12] Note that an excessively long-range charge transfer character can be detrimental to SF. This is because such states are weakly coupled to the multiexciton triplet-pair manifold, as has been previously established in Ref. [11]. When the substrate induces alterations in the energies and wave functions of singlet excitons via the dynamic dielectric screening and orbital hybridization, it consequently exerts an influence on the SF process. The exciton binding energy is determined by both the strength of Coulomb coupling between electrons and holes and their dispersion law (band structure, effective masses etc.). Modifying dielectric properties of the media by surfaces, interfaces, and nanostructuring allows for engineering excitonic features. The effect of the dielectric screening on the exciton binding energy has been thoroughly studied in the context of inorganic semiconductor materials in 3D, 2D, 1D and 0D III-V semiconductors[13, 14] as well as in recently developed 2D van-der-Waals heterostructures [15, 16, 17], but less so for the HIOS heterostructures. Note, that the Wannier-Mott excitons in inorganic semiconductors and Frenkel excitons in organic semiconductors exhibit several distinctive features. For instance, the local-field effects play an exceptional role only in the case of Frenkel excitons [18, 19]. In this work, we study the impact of the substrate on the exciton binding energies and SF effect within the contact tetracene layer, elucidating specifically the role of the dynamic dielectric screening and orbital hybridization between excitons in organic and inorganic semiconductors. Using a combination of the GW theory and Bethe-Salpeter equation (BSE) [18, 20, 21, 22], we perform a series of computational experiments with several models representing the effect of substrate. One relies on a brute-force approach that utilizes slab models and large supercells that encompass both the tetracene and silicon slabs. 
Another is based on the so-called "Add-chi" method [23], which can potentially be used with the dielectric embedding [24, 25] to reduce the sizes of supercells and, consequently, the computational burden. The "Add-chi" and dielectric embedding techniques rely on the additivity of the polarizability matrix, which holds true under specific conditions. They have previously been employed in GW computations, effectively reducing computational expenses. These methods are particularly well-suited for heterogeneous structures characterized by weak van der Waals interactions between components, where orbital hybridization can be neglected. Notably, this study marks the first instance of combining dielectric embedding with BSE. The combination of GW and BSE methods has previously demonstrated success in predicting the excitonic properties of various materials. For instance, it has been employed to investigate the influence of crystal packing on exciton probability distributions [26] or excitonic signatures in optical absorption spectra of inorganic semiconductors [27].

## 2 Results and discussion

### Atomic model of tetracene-silicon interface

In this work, we consider tetracene deposited onto the neutral hydrogen-passivated 1x2 reconstructed (100) silicon surface in the "upright-standing" configuration [28, 29]. This configuration has been previously grown and characterized experimentally [30]. The 1x2 reconstructed Si(100) surface has a rectangular unit cell with lateral sizes of \(\sqrt{2}a_{Si}\) and \((\sqrt{2}/2)a_{Si}\) along the axes [110] and [\(\bar{1}\)10] respectively, where \(a_{Si}\) is the lattice constant of bulk silicon. The minimal unit cell spans giving the best match of the sublattices are \(1\times 2\) for tetracene and \(1\times 3\) for Si, shown schematically in Fig. 1.

Figure 1: a) Atomic structure of the van-der-Waals interface between the crystalline silicon with the 1x2 reconstructed (100) surface and tetracene after atomic relaxation. b) Alignment of the silicon and tetracene unit cells at the interface. Numbers in blue denote the sizes of the unit cells for the bulk materials published in the literature; numbers in black correspond to the sizes of the supercell after relaxation computed in this work. \(a_{Si}\), \(a_{Tc}\), and \(b_{Tc}\) are the lattice constants of bulk silicon and tetracene respectively.

All computations in this work are performed for a slab model containing 16 atomic layers of silicon and two molecular layers of tetracene. We need at least two molecular layers to simulate accurately the dielectric environment for the tetracene molecules at the interface. The atomic coordinates have been obtained from the geometry optimization within DFT with a plane-wave basis set and norm-conserving pseudopotentials [31, 32]. The computations were performed using Quantum Espresso, the plane-wave DFT software [33, 34]. The dispersion forces, responsible for the physisorption of tetracene on the Si surface, are introduced in the model via the non-local exchange-correlation functional vdW-DF2-C09 [35]. The computations have been performed on a 4x3x1 Monkhorst-Pack k-space grid, using a kinetic energy cutoff of 80 Ry for wavefunctions and of 320 Ry for charge densities. This choice is justified by a series of convergence tests (see Supplementary info). For the geometry optimization, we used the Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm with variable cell parameters. The system is periodic only in two in-plane dimensions.
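For readers who wish to reproduce a comparable relaxation step, the setup described above can be scripted, for example, through ASE's Quantum Espresso interface. The snippet below is only an illustrative sketch of the quoted parameters (cutoffs, k-grid, vdW-DF2-C09, variable-cell BFGS, dipole correction); the structure file and pseudopotential names are placeholders, not the actual inputs used in this work.

```python
from ase.io import read
from ase.calculators.espresso import Espresso

# Placeholder structure file: 16 Si layers + 2 tetracene layers.
slab = read("tetracene_on_si100_1x2.in")

calc = Espresso(
    # Placeholder norm-conserving pseudopotential file names.
    pseudopotentials={"Si": "Si.upf", "C": "C.upf", "H": "H.upf"},
    input_data={
        "control": {"calculation": "vc-relax",           # variable-cell BFGS relaxation
                    "tefield": True, "dipfield": True},  # dipole correction for the slab
        "system": {"ecutwfc": 80, "ecutrho": 320,        # 80/320 Ry cutoffs
                   "input_dft": "vdw-df2-c09",           # non-local vdW functional
                   "edir": 3},                           # correction applied along z
        "ions": {"ion_dynamics": "bfgs"},
        "cell": {"cell_dynamics": "bfgs"},
    },
    kpts=(4, 3, 1),                                      # Monkhorst-Pack grid
)
slab.calc = calc
slab.get_potential_energy()                              # writes the pw.x input and runs it
```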
The effect of the periodic boundary conditions in the third dimension was canceled by the dipole correction [36]. The band gap and exciton binding energy are both properties of the excited state. While static DFT is fundamentally a ground state theory, it can still provide rough estimates of these properties for specific material systems by employing well-designed approximate exchange-correlation functionals, such as range-separated hybrid functionals [37]. More accurate and systematic predictions, however, can be achieved by employing specialized excited state theories like time-dependent density-functional theory or post-Hartree-Fock methods, such as the multi-configurational self-consistent field [38], coupled-cluster [39], and configuration interaction [40] methods, or the many-body Green's function approach with the GW approximation [18]. The latter works exceptionally well for large extended systems when combined with techniques like "Add-chi" [23] or dielectric-embedding methods [24, 25]. Within this method, the quasiparticle energies are determined by the poles of the retarded Green's functions, which, in turn, are solutions of the Dyson equation. In the general case, this equation can be solved self-consistently as a part of the closed system of Hedin's equations [18, 41]. Detailed information on the application of the GW method to the tetracene-silicon interface can be found in Ref. [42]. After obtaining the quasiparticle energies and orbitals through the GW method, the exciton binding energies and exciton wave functions can be determined in the subsequent iteration of Hedin's equations by solving the BSE, which, in the general case, reads [20, 43]:

\[\begin{split} L\left(12;1^{\prime}2^{\prime}\right)=& G\left(12^{\prime}\right)G\left(21^{\prime}\right)+\int d\left(3456\right)\times\\ & G\left(13\right)G\left(41^{\prime}\right)K\left(35;46\right)L\left(62;52^{\prime}\right)\end{split} \tag{2}\]

where each number in parentheses denotes the set of variables consisting of position, spin, and time coordinates, \(\left(1\right)=\left(\mathbf{r}_{1},s_{1},t_{1}\right)\), \(G\left(12^{\prime}\right)\) is the non-interacting one-particle propagator (computed, for instance, by the GW method), \(L\left(12;1^{\prime}2^{\prime}\right)\) is the electron-hole correlation function and \(K\left(35;46\right)\) is the electron-hole interaction kernel. The diagrammatic representation of Eq. (2) is shown in Fig. 2. The poles of the functions \(L\left(12;1^{\prime}2^{\prime}\right)\) correspond to the exciton energy levels. The kernel consists of the exchange and screened direct Coulomb interaction [20]:

\[\begin{split} K\left(35;46\right)=&\delta\left(34\right)\delta\left(56\right)v\left(36\right)-\\ &\delta\left(36\right)\delta\left(45\right)W\left(34\right)\end{split} \tag{3}\]

where \(v\left(36\right)\) and \(W\left(34\right)\) are the bare and screened Coulomb potentials respectively.

Figure 2: The diagrammatic representation of the Bethe-Salpeter equation.

The integral equation (2) can be converted into a linear algebra problem by expressing all its components in matrix form and transitioning from the time domain to the frequency domain through a Fourier transform. The matrix representations were derived using the orbitals obtained from the GW method as the basis set.
In matrix form, the screened Coulomb potential is given by:

\[W=\epsilon^{-1}v=\left[1-v\chi\right]^{-1}v \tag{4}\]

where \(\epsilon^{-1}\) is the inverse dielectric matrix, \(\chi\) is the irreducible polarizability of the medium, and \(v\) is the matrix representation of the bare Coulomb potential. The explicit expressions for the matrix representations of \(\chi\) and \(v\), utilizing the GW orbitals or Kohn-Sham orbitals as the basis set, can be found in Refs. [18] and [20]. In this work, we, for the first time, apply the dielectric embedding technique with BSE to obtain estimates for the effect of dielectric screening, caused by the substrate, on excitons in tetracene. Let us consider the two-particle correlation function \(L\left(1,2;1^{\prime},2^{\prime}\right)\) for a heterostructure formed by two slabs \(A\) and \(B\). In the general case, the initial and final positional coordinates can be located in either of the two materials: \(\left\{1,2,1^{\prime},2^{\prime}\right\}\in A\cup B\). If our focus is solely on excitons in one of the materials, say \(A\), and the hybridization between Kohn-Sham orbitals in \(A\) and \(B\) is negligibly small, then the set of coordinates is \(\left\{1,2,1^{\prime},2^{\prime}\right\}\in A\). In other words, the electron-hole pair before and after interaction remains in the material \(A\). The only possible effect that material \(B\) can have on the excitons in \(A\) is a contribution to the dielectric screening of the Coulomb potential. The dielectric embedding is based on the assumption that the polarizability matrix \(\chi\) is additive and can be computed independently for the isolated slabs constituting the heterostructure: [24, 25]

\[\chi\approx\chi^{A}+\chi^{B}. \tag{5}\]

To save computational resources, the contributions \(\chi^{A}\) and \(\chi^{B}\) can be computed by utilizing supercells with sizes adjusted to the geometrical parameters of the isolated slabs \(A\) and \(B\). In this case, before summing them in reciprocal space, it is essential to adjust these contributions by transforming them into real space, incorporating zero-padding in real space, and then reverting them back to reciprocal space. This procedure forms the essence of the dielectric embedding method, which has been previously employed with the GW method [24, 25]. In the subsequent discussion, we will denote the solutions of the Bethe-Salpeter equation obtained without the dielectric embedding method as the "double-slab model" (DSM), while the alternative case will be referred to as the "single-slab model" (SSM). Since we are only interested in the poles of the functions \(L\left(12;1^{\prime}2^{\prime}\right)\), the BSE in the Tamm-Dancoff approximation reduces to the following eigenvalue problem [20]:

\[\left(E_{i}^{QP}-E_{j}^{QP}\right)A_{ji}^{\lambda}+\sum_{j^{\prime}i^{\prime}}K_{ji,j^{\prime}i^{\prime}}A_{j^{\prime}i^{\prime}}^{\lambda}=E^{\lambda}A_{ji}^{\lambda} \tag{6}\]

where \(E_{i}^{QP}\) are the quasiparticle energies obtained from GW computations, \(A_{ji}^{\lambda}\) are the eigenvectors, which determine the amplitudes in the representation of the two-particle wave function, \(E^{\lambda}\) are the eigenvalues, labeled by the exciton index \(\lambda\), that can be interpreted as exciton energies measured relative to the fundamental band gap, \(K_{ji,j^{\prime}i^{\prime}}\) is the matrix representation of the kernel defined by Eq. (3) in the basis set of the GW orbitals, and the indices \(i\) and \(j\) run over conduction and valence bands respectively.
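To make the structure of Eqs. (4)-(6) explicit, the following is a schematic numpy sketch with synthetic matrices. It only illustrates the linear algebra of the screening construction and of the Tamm-Dancoff eigenvalue problem; all quantities are random toy stand-ins rather than BerkeleyGW data, and the real-space zero-padding of \(\chi^{A}\) and \(\chi^{B}\) is assumed to have been done already.

```python
import numpy as np

rng = np.random.default_rng(0)
n_g = 40                                         # size of a toy plane-wave basis

def toy_polarizability(n, scale):
    """Random negative-semidefinite matrix standing in for an irreducible chi."""
    a = rng.normal(size=(n, n))
    return -scale * (a @ a.T) / n

# --- "Add-chi" / dielectric embedding, Eqs. (4)-(5) ---
chi_A = toy_polarizability(n_g, 0.05)            # isolated tetracene slab
chi_B = toy_polarizability(n_g, 0.20)            # isolated silicon slab
v = np.diag(1.0 / (1.0 + np.arange(n_g)))        # stand-in for the bare Coulomb matrix

chi = chi_A + chi_B                              # additivity assumption, Eq. (5)
eps = np.eye(n_g) - v @ chi                      # dielectric matrix
W = np.linalg.solve(eps, v)                      # screened Coulomb W = eps^{-1} v, Eq. (4)

# --- Tamm-Dancoff BSE eigenvalue problem, Eq. (6) ---
n_v, n_c = 4, 6                                  # toy numbers of valence/conduction bands
n_t = n_v * n_c                                  # size of the transition space
e_v = np.sort(rng.uniform(-3.0, 0.0, n_v))       # toy quasiparticle energies (eV)
e_c = np.sort(rng.uniform(2.0, 5.0, n_c))

delta_e = (e_c[None, :] - e_v[:, None]).reshape(n_t)   # E_i^QP - E_j^QP
k = rng.normal(scale=0.05, size=(n_t, n_t))
kernel = 0.5 * (k + k.T)                         # toy electron-hole kernel K, Eq. (3)

h_bse = np.diag(delta_e) + kernel
exciton_energies, a_vectors = np.linalg.eigh(h_bse)    # E^lambda and A^lambda_{ji}

binding_energy = delta_e.min() - exciton_energies[0]   # relative to the lowest toy gap
print(f"lowest exciton: {exciton_energies[0]:.3f} eV, "
      f"binding energy: {binding_energy:.3f} eV")
```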
The matrix \(A_{ji}^{\lambda}\) allows for recovering the electron-hole correlation function:

\[\Psi_{\lambda}\left(\mathbf{r}_{h},\mathbf{r}_{e}\right)=\sum_{ji}A_{ji}^{\lambda}\psi_{i}\left(\mathbf{r}_{e}\right)\psi_{j}\left(\mathbf{r}_{h}\right) \tag{7}\]

where \(\psi_{i}\left(\mathbf{r}_{e}\right)\) and \(\psi_{j}\left(\mathbf{r}_{h}\right)\) are the GW wave functions for electrons and holes respectively. All the computations related to GW and BSE have been performed using BerkeleyGW [21]. The BSE has been solved within the Tamm-Dancoff approximation. In this work we use two techniques to reduce the number of bands explicitly participating in the computations: first, we apply the modified static remainder approach [44]; second, we use an extrapolation technique based on fitting the Coulomb-hole self-energy with a hyperbolic function [45]. After a series of convergence tests discussed in the Supplementary information we have derived the following parameters: the kinetic energy cutoff for the dielectric matrix is 15 Ry, 534 unoccupied orbitals (1200 orbitals in total) were used to build the matrix representation of the Green's functions, and the k-grid is the same as for the DFT calculations.

### Exciton binding energies

Solving the eigenvalue problem given by Eq. (6) yields a spectrum of exciton binding energies, shown in Fig. 3 for DSM. The resulting spectrum consists of about ten thousand eigenvalues within the energy interval spanning the band gap of tetracene and comprises both discrete and quasi-continuous parts (numerical solution of Eq. (6) is possible only for a finite number of wave vectors, which leads to the discretization of the continuous part of the spectrum). The discrete and continuous parts correspond to localized and delocalized exciton states respectively. The latter are related to the correlated motion of unbound electrons and holes, which, in particular, manifests itself as the Sommerfeld enhancement in the absorption spectrum [46].

Figure 3: Energy levels of singlet (\(\bullet\)) and triplet (\(\square\)) excitations computed by numerically solving the Bethe-Salpeter equation for the supercell containing the contacting silicon and tetracene slabs. The inset panel provides a close-up view of a segment of the spectrum, highlighting the region where the tetracene singlet excitons are located. The figure also contains the values of the band gaps, \(E_{g}\), of the considered materials.

For this study, our attention is directed toward excitons situated within the tetracene contacting layer. These particular exciton states have been identified in the spectrum through a projection of their corresponding exciton wavefunctions onto a bounding box that encapsulates this layer. Another discriminant that can be employed to distinguish excitons in tetracene from other types of excitons is their dipole moment. Results of our numerical experiments show that excitons in organic semiconductors typically exhibit dipole moments at least one order of magnitude larger than Wannier-Mott excitons. Our results obtained from DSM show that, while the triplet excitons in tetracene are located in the fundamental band gap, the singlet exciton states in tetracene are embedded into the continuous spectrum of the unbound exciton states in silicon (see Fig. 3). The singlet exciton energy levels are broadened as a result of interaction with the silicon substrate, whereas the triplet states remain very narrow.
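The layer assignment described above amounts to integrating an exciton's probability density over a spatial window. A minimal one-dimensional sketch of this bookkeeping (with made-up Gaussian densities, not the computed wavefunctions) is:

```python
import numpy as np

# Toy grid along the surface normal z (the confinement direction), in Angstrom.
z = np.linspace(0.0, 60.0, 600)
layer_lo, layer_hi = 30.0, 45.0              # bounding box of the contact layer (toy values)

def layer_weight(density_z, z, lo, hi):
    """Fraction of a normalized exciton probability density inside a z-interval."""
    inside = (z >= lo) & (z <= hi)
    return density_z[inside].sum() / density_z.sum()

# Two toy in-plane-integrated exciton densities:
frenkel_like = np.exp(-0.5 * ((z - 37.0) / 2.0) ** 2)            # confined to the layer
ct_like = 0.25 * np.exp(-0.5 * ((z - 37.0) / 2.0) ** 2) \
    + np.exp(-0.5 * ((z - 15.0) / 8.0) ** 2)                     # spills into the substrate

for name, dens in [("triplet-like", frenkel_like), ("singlet-like", ct_like)]:
    print(name, "weight in contact layer:",
          round(layer_weight(dens, z, layer_lo, layer_hi), 2))
```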
To avoid any ambiguity associated with this broadening, henceforth, when referencing the energy of a singlet exciton, we specifically imply the energy of the singlet exciton that is characterized by the maximal overlap with the tetracene contacting layer. In Fig. 4, we compare the exciton binding energies, as indicated by the red arrows, for three cases: an isolated tetracene slab, a tetracene-silicon heterostructure treated with SSM, and a tetracene-silicon heterostructure treated with DSM. Comparing the isolated slab with SSM shows solely the effect of dielectric screening, while DSM takes into account both dielectric screening and orbital hybridization between the slabs.

Figure 4: Exciton energy levels (blue lines) and band gaps for the contact tetracene layer computed for (a) the isolated tetracene slab, (b) the tetracene-silicon interface within SSM, and (c) the tetracene-silicon interface within DSM. The binding energies for the exciton singlet state \(S_{1}\) and triplet state \(T_{1}\) are marked by red arrows with annotating text.

The results of the computations show that the presence of the silicon substrate decreases the exciton binding energy for both singlet and triplet states. This outcome is expected; we observe the same behavior in 2D inorganic semiconductors, where the exciton binding energy similarly decreases in the presence of a substrate due to the dielectric screening of the Coulomb coupling between electrons and holes [15, 16, 17]. The reduction in binding energies is qualitatively captured by both SSM and DSM. However, quantitatively, these two models exhibit a slight discrepancy, especially in the case of the triplet: the triplet binding energy is 201 meV larger in the latter case, whereas the binding energy of the singlet exciton is only 57 meV larger for DSM. As a result, SSM underestimates the singlet-triplet energy gap by 144 meV. The singlet-triplet gap is determined by the exchange interaction energy, which is non-zero for triplet states and zero for singlet states. This exchange interaction is inherently short-range and depends on the overlap of the singly-occupied orbitals. Consequently, it remains unaffected by the long-range dynamical dielectric screening, as evident in the comparison between the results for the isolated slab and those for the SSM case in Figure 4. However, the singlet-triplet gap changes when accounting for exciton hybridization between silicon and tetracene, as observed in the comparison between the SSM and DSM cases in Figure 4. Figure 4 also contains the values of the band gaps (depicted by the black lines) resulting from the GW calculations, which have been previously reported and analyzed in Ref. [42]. The bandgap decreases when tetracene comes into contact with silicon due to the dynamic dielectric screening effect, which is well reproduced by SSM. However, DSM predicts an even narrower bandgap compared to SSM, and this can be attributed to the reduction of the confinement potential of the tetracene molecules in the contact layer.

Figure 5: Potential barrier lowering caused by the silicon substrate in the confinement potential of tetracene molecules in the contact layer. The orange line indicates the potential barrier for the case of the isolated slab.

By subtracting the binding energies from the band gap energy of tetracene, we can obtain an estimate of the energy of the lowest singlet and triplet excited states relative to the ground state.
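For clarity, the relation between these binding-energy shifts and the singlet-triplet gap can be written out explicitly. Since the exciton energies relative to the ground state follow from \(E_{S_{1}/T_{1}}=E_{g}-E_{b}^{S_{1}/T_{1}}\), the gap depends only on the binding energies,

\[\Delta E_{ST}\equiv E_{S_{1}}-E_{T_{1}}=E_{b}^{T_{1}}-E_{b}^{S_{1}},\]

so that the difference between the DSM and SSM predictions quoted above is

\[\Delta E_{ST}^{\mathrm{DSM}}-\Delta E_{ST}^{\mathrm{SSM}}=\left(E_{b}^{T_{1},\mathrm{DSM}}-E_{b}^{T_{1},\mathrm{SSM}}\right)-\left(E_{b}^{S_{1},\mathrm{DSM}}-E_{b}^{S_{1},\mathrm{SSM}}\right)=201\ \text{meV}-57\ \text{meV}=144\ \text{meV}.\]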
In comparison to the isolated slab case, the energies of excitons increase slightly when the substrate is introduced in the SSM case. Conversely, in the DSM case, the singlet energy remains unchanged, while the triplet energy decreases by 60 meV. When comparing SSM and DSM, we can conclude that taking the hybridization of exciton states between tetracene and silicon into account leads to a reduction in exciton energies relative to the ground state for both singlets and triplets. This can be explained by the overall lowering of the potential barrier caused by silicon in the confinement potential of tetracene molecules, as illustrated in Fig. 5. The silicon substrate changes the bandgap more dramatically compared to the exciton energies. This is because uncorrelated electrons and holes are charged particles, while the bound electron-hole pair is less sensitive to the external electrostatic environment due to mutual electrostatic screening.

Figure 6: Averaged electron density for the lowest singlet (a, b, c) and triplet (d, e, f) exciton states. The data in panels (a) and (d) are for the isolated tetracene slab, in (b) and (e) for the tetracene on the silicon substrate computed with SSM, and in (c) and (f) for the tetracene on the silicon substrate computed with DSM. The isosurface of the electron density is plotted at a value of 0.005 of its maximum.

For the contact tetracene layer, SSM gives a singlet exciton energy of 2.276 eV and a bandgap of 3.022 eV, in good agreement with the results for bulk tetracene published in Ref. [3] (S\({}_{1}\)=2.2 eV and E\({}_{g}\)=3.0 eV). This means the screening of the exciton produced by the tetracene molecules is of the same order of magnitude as the screening by the silicon substrate. The discrepancy in the triplet energy is more pronounced in these cases (1.39 eV and 1.1 eV respectively). This can be associated with the well-known inaccuracy of the GW-BSE method in describing triplet spin states: it is known from computations based on the configuration interaction method that the singlet state is well approximated by a single configuration, while the triplet state is a mixture of several configurations that cannot be reproduced within the many-body perturbation theory approach.

### Exciton delocalization and charge transfer character

To visualize the spatial characteristics of excitons, we compute the electron probability density from the electron-hole correlation function \(\Psi\left(\mathbf{r}_{h},\mathbf{r}_{e}\right)\), see Eq. (7). Since our slab model is periodic in two dimensions but finite in the third dimension, to compute this function we fix the hole coordinates in the x-y plane, where periodic boundary conditions are applied, and then average over its positions along the confinement direction (along the axis \(z\)):

\[\left|\Psi\left(\mathbf{r}_{e}\right)\right|^{2}=\frac{1}{L}\int d\mathbf{r}_{h}\left|\Psi\left(\mathbf{r}_{h},\mathbf{r}_{e}\right)\right|^{2}\delta\left(x-x_{0}\right)\delta\left(y-y_{0}\right), \tag{8}\]

where \(\mathbf{r}_{h}=\left(x,y,z\right)\) and \(\left(x=x_{0},\,y=y_{0}\right)\) defines the geometric locus of points forming the line that is perpendicular to the slabs and passes through the center of the supercell, and \(L\) represents the extent of the supercell along the z-axis. With Eq. (8), we compute the averaged electron probability density for excitons in the contact tetracene layer for the three cases discussed above: isolated slab, SSM, and DSM.
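The averaging in Eq. (8) can be illustrated with a small numerical sketch. The amplitude below is a made-up toy function standing in for \(\Psi\left(\mathbf{r}_{h},\mathbf{r}_{e}\right)\); the point is only the bookkeeping of fixing the in-plane hole position and averaging over its position along \(z\).

```python
import numpy as np

# Toy grids for the electron coordinate and for the hole position along z (Angstrom).
nx, ny, nz = 24, 16, 48
x = np.linspace(0.0, 12.0, nx)
y = np.linspace(0.0, 7.0, ny)
z = np.linspace(0.0, 60.0, nz)
zh = np.linspace(30.0, 45.0, 16)              # hole positions inside the contact layer
x0, y0 = x[nx // 2], y[ny // 2]               # hole fixed in-plane at the supercell centre

def psi(z_h, xe, ye, ze):
    """Made-up electron-hole amplitude with a weak interlayer charge-transfer tail."""
    r2 = (xe - x0) ** 2 + (ye - y0) ** 2
    return np.exp(-0.5 * r2 / 4.0) * (
        np.exp(-0.5 * ((ze - z_h) / 2.0) ** 2)            # electron near the hole
        + 0.6 * np.exp(-0.5 * ((ze - 15.0) / 6.0) ** 2))  # electron leaking into the substrate

Xe, Ye, Ze = np.meshgrid(x, y, z, indexing="ij")
density = np.zeros_like(Xe)
for z_h in zh:                                # (1/L) * sum over the hole z-coordinate, Eq. (8)
    density += np.abs(psi(z_h, Xe, Ye, Ze)) ** 2
density /= len(zh)

in_layer = (Ze >= 30.0) & (Ze <= 45.0)
print("electron weight in the contact layer:",
      round(float(density[in_layer].sum() / density.sum()), 2))
```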
The densities shown in Fig. 6 are displayed from a viewpoint located in the plane of the interface, directed along the molecular axis, rather than perpendicular to that plane. As expected, the singlet exciton is more delocalized compared to the triplet exciton. The results for the isolated tetracene slab and SSM exhibit remarkable similarity. Both predict that for the singlet state, most of the electron density is located on the first nearest neighbor molecules, while for the triplet state, the electron and hole are situated on the same molecule. Consequently, the lowest singlet state possesses a charge transfer exciton character, and the lowest triplet exciton exhibits a Frenkel exciton character.

Figure 7: Averaged electron density for the lowest singlet (a, b) and triplet (c, d) exciton states. The data in panels (a) and (c) are for DSM, and the data in panels (b) and (d) are for SSM. The isosurface of the electron density is plotted at a value of 0.01 of its maximum.

The contact with the silicon substrate, as simulated by SSM, only slightly increases the charge transfer character of the singlet exciton by reducing the binding energy. The situation dramatically changes in the case of DSM. Taking into account the hybridization of the orbitals at the interface significantly affects the singlet exciton, while the triplet exciton still remains almost unchanged. In the case of the singlet exciton, the crystalline symmetry of the silicon 1x2 reconstructed surface is imposed on the exciton, which results in an asymmetry of the electron distribution relative to the center of the supercell where the hole is located. The side view of the electron density, shown in Fig. 7a, reveals that the singlet exciton has a very pronounced inter-layer charge transfer character, with the electron delocalized over the interface. Integrating this density over the region that confines the contact tetracene layer results in a value of 0.22 for the singlet exciton and 1.0 for the triplet exciton. Note that the value 0.22 is the maximal value observed among all singlet excitons in the contact tetracene layer. The SF rate can be computed using Fermi's Golden Rule [47]:

\[W=2\pi\hbar^{-1}\left|\langle S_{1}|H_{int}|T_{1}T_{1}\rangle\right|^{2}\rho(E) \tag{9}\]

where \(H_{int}\) is the interaction Hamiltonian and \(\rho(E)\) is the density of states at the energy of the triplets.
Next, we employ the following expansion of unity, \(1=\sum_{\bf r}|{\bf r}\rangle\langle{\bf r}|\), where \({\bf r}\) is the radius-vector, and insert it on both the left and right sides of the interaction operator, which leads to the following expression:

\[W=2\pi\hbar^{-1}\rho(E)\sum_{{\bf r},{\bf r}^{\prime}}\left|\langle S_{1}|{\bf r}\rangle\langle{\bf r}|H_{int}|{\bf r}^{\prime}\rangle\langle{\bf r}^{\prime}|T_{1}T_{1}\rangle\right|^{2} \tag{10}\]

In order to get a rough estimate of how the substrate affects the SF effect, we make the assumption that the interaction Hamiltonian decays rapidly with distance and use the approximation \(H_{int}=\alpha\delta({\bf r}-{\bf r}^{\prime})\), which enables us to factor it out of the integral:

\[W=2\pi\hbar^{-1}|\alpha|^{2}\rho(E)\sum_{\bf r}|\langle S_{1}|{\bf r}\rangle\langle{\bf r}|T_{1}T_{1}\rangle|^{2} \tag{11}\]

Assuming that the states \(T_{1}\) are entirely confined within tetracene and considering that \(|\langle{\bf r}|T_{1}T_{1}\rangle|\) is bounded from above by one, the upper bound for \(W\) is:

\[W\sim 2\pi\hbar^{-1}|\alpha|^{2}\rho(E)\sum_{{\bf r}_{Tc}}|\langle S_{1}|{\bf r}\rangle|^{2} \tag{12}\]

where \({\bf r}_{Tc}\) is constrained within the tetracene molecular layer. Both in bulk tetracene and in an isolated tetracene slab, the value of \(\sum_{{\bf r}_{Tc}}|\langle S_{1}|{\bf r}\rangle|^{2}\) is close to one. This is because the state \(S_{1}\) is entirely localized within the same layer as the states \(T_{1}\). In the case of the contact with a silicon substrate, this value has been estimated to be approximately 0.22, as discussed earlier. Consequently, the SF rate is approximately 4.5 times smaller, considering solely the overlap between the initial and product states. Another, more dramatic implication for the SF effect in the presence of the substrate is the existence of an alternative relaxation path through the thermalization of the exciton in silicon. When the singlet state \(S_{1}\) is hybridized with the continuum of unbound excitonic states in silicon, it can quickly undergo thermalization, resulting in a low-energy interlayer charge transfer exciton or a Wannier-Mott exciton in silicon. In such a scenario, the efficient singlet-triplet down-conversion of the exciton is replaced by thermal losses. This process is facilitated by photon-assisted dephasing, which can take several femtoseconds in inorganic semiconductors at room temperature [13]. Consequently, the singlet state \(S_{1}\) has a shorter lifetime when it is in contact with the silicon substrate. Thus, our findings indicate that the hybridization with unbound excitonic states in silicon reduces the probability of the SF process in the contact tetracene layer described by Eq. (1). In the contact molecular layer, the dissociation of the singlet exciton after photo-excitation is more probable instead:

\[S_{0}S_{0}\xrightarrow{h\nu}S_{0}S_{1}\to S_{0}+h_{Tc}^{+}+e_{Si}^{-} \tag{13}\]

The triplet excitons are relatively less influenced by the interface and maintain long lifetimes. However, they cannot be generated through SF within the contacting layer.
Instead, they are produced through the SF effect in neighboring layers and subsequently migrate to the contact layer due to their diffusion:

\[S_{0}^{*}S_{0}^{*}\xrightarrow{h\nu}S_{0}^{*}S_{1}^{*}\xrightarrow{k_{fis}}T_{1}^{*}T_{1}^{*}\xrightarrow{k_{dif}}T_{1}T_{1}\xrightarrow{k_{tr}}S_{0}+2h_{Si}^{+}+2e_{Si}^{-} \tag{14}\]

where the asterisk denotes states in molecular layers that are not directly adjacent to the silicon surface and \(k_{dif}\) is the diffusion rate. Note that SF in the contact tetracene layer can be restored by introducing a dielectric spacer between the substrate and tetracene.[4]

## 3 Conclusions

In this work, we have demonstrated that when the singlet exciton in tetracene comes into contact with the clean 1x2 reconstructed Si(100) surface, it undergoes hybridization with the unbound excitonic states in silicon. As a result, the singlet exciton state \(S_{1}\) exhibits a pronounced interlayer charge transfer character with delocalization of the exciton across the interface. For the singlet exciton, the maximum probability of both the electron and hole being located within the contacting tetracene layer does not exceed 0.22. Unlike the singlet exciton, the triplet exciton remains completely localized within the tetracene layer. This is a consequence of two key factors. First, the energy levels of the triplet exciton are positioned within the fundamental band gap and the triplet wave function is more localized. In contrast, the energy levels of the singlet excitons are embedded within the continuum of delocalized excitonic states of silicon and the singlet exciton has a more pronounced charge transfer character. Due to hybridization with silicon, the energy levels of the singlet exciton become broader. The weak localization of the singlet exciton results in a smaller overlap with the product triplet states for the SF effect. The exciton hybridized with the unbound excitons in silicon is unstable, and its lifetime is significantly shorter than the characteristic singlet fission time \(k_{fis}^{-1}\). This reduces the probability of SF for the contact tetracene layer. Furthermore, this hybridization leads to the emergence of an alternative relaxation pathway involving the thermalization process in silicon. The presence of the silicon substrate significantly increases the singlet-triplet gap for the excitons in the contact layer, by 144 meV, compared to the case of the isolated slab or the SSM model. This deviation is therefore primarily attributed to the hybridization with the unbound excitonic states in silicon, which leads to changes in the exchange energy. Our results indicate that the dynamic dielectric screening due to the substrate does not affect the singlet-triplet gap but does alter the exciton binding energies. This effect cannot be captured by models based on the "Add-chi" or dielectric embedding techniques. Nevertheless, these techniques are still valuable for predicting the impact of dielectric screening induced by the substrate on excitons, allowing us to isolate and study this specific effect independently from other factors.

## Acknowledgements

The authors acknowledge the support of the Australian Research Council Center of Excellence in Exciton Science through grant CE170100026. Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
This project was undertaken with the assistance of resources and services from the National Computational Infrastructure, which is supported by the Australian Government.
2304.01804
Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification
Due to the expensive costs of collecting labels in multi-label classification datasets, partially annotated multi-label classification has become an emerging field in computer vision. One baseline approach to this task is to assume unobserved labels as negative labels, but this assumption induces label noise as a form of false negative. To understand the negative impact caused by false negative labels, we study how these labels affect the model's explanation. We observe that the explanation of two models, trained with full and partial labels each, highlights similar regions but with different scaling, where the latter tends to have lower attribution scores. Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels. Even with the conceptually simple approach, the multi-label classification performance improves by a large margin in three different datasets on a single positive label setting and one on a large-scale partial label setting. Code is available at https://github.com/youngwk/BridgeGapExplanationPAMC.
Youngwook Kim, Jae Myung Kim, Jieun Jeong, Cordelia Schmid, Zeynep Akata, Jungwoo Lee
2023-04-04T14:00:59Z
http://arxiv.org/abs/2304.01804v1
# Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification

###### Abstract

Due to the expensive costs of collecting labels in multi-label classification datasets, partially annotated multi-label classification has become an emerging field in computer vision. One baseline approach to this task is to assume unobserved labels as negative labels, but this assumption induces label noise as a form of false negative. To understand the negative impact caused by false negative labels, we study how these labels affect the model's explanation. We observe that the explanation of two models, trained with full and partial labels each, highlights similar regions but with different scaling, where the latter tends to have lower attribution scores. Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels. Even with the conceptually simple approach, the multi-label classification performance improves by a large margin in three different datasets on a single positive label setting and one on a large-scale partial label setting. Code is available at [https://github.com/youngwk/BridgeGapExplanationPAMC](https://github.com/youngwk/BridgeGapExplanationPAMC).

## 1 Introduction

Multi-label image classification is the task of predicting all labels corresponding to a given image. Since web-crawled images often contain multiple objects/concepts [3, 32, 35, 44], the importance of this task is rising. However, it faces a significant issue of huge annotation costs. We need \(C\) binary labels for each training image to provide exhaustive annotation for a model that classifies images into \(C\) categories. This acts as a severe obstacle to scaling multi-label classification datasets. For this reason, partially annotated multi-label classification [2, 11, 13, 17, 21, 24] has recently become an actively studied topic. In this setting, instead of exhaustive annotation, only a few categories are labeled for each training image. We can effectively reduce the burden of annotation by adopting partial annotation strategies. One baseline approach for solving a partially annotated multi-label classification task is assuming unobserved labels as negative labels (Assume Negative, AN) [4, 6, 36, 40]. It is a reasonable assumption since most labels are negative labels in the multi-label scenario [33]. However, this assumption causes label noise in the form of false negatives, since the actually positive but unannotated labels are incorrectly assumed to be negative. Since this label noise perturbs the learning process of the model [1, 7, 18, 45], recent studies on partially annotated multi-label classification focus on suppressing the influence of label noise by ignoring or correcting the loss of samples that are likely to be false negatives [2, 21]. In contrast to these recent research directions, we delve into "how" false negative labels influence a multi-label classification model.

Figure 1: **CAM Observation.** We compare the class activation map (CAM) output from two multi-label classification models: one trained with full labels (_CAM\({}_{full}\)_) and the other trained with partial labels and AN assumption (_CAM\({}_{partial}\)_). We observe that the overall structure of _CAM\({}_{partial}\)_ is not much affected by the noisy false negative labels during training. This observation motivates us to make _CAM\({}_{partial}\)_ similar to _CAM\({}_{full}\)_ by boosting its relatively large attribution scores. Best viewed in color.
We conduct control experiments with two models. One is the model trained with partial labels and AN assumption where false negative labels exist. The other is the model trained with full annotations and thus trained without false negatives. We compare the class activation map (CAM) [49] output between the two models to see the difference in how each model understands the input image and makes a prediction result. Figure 1 shows that a model trained with false negatives still highlights similar regions to one trained with full annotation. However, the attribution scores in the highlighted areas are much small. This observation leads us to think that if we scale up the damaged score of the highlighted region in the model trained with false negatives, the explanation of this model will become similar to that of the model trained with full annotation. To this end, we introduce a simple piece-wise linear function, named BoostLU, that bridges the gap between the explanation of two models trained with false negatives and with full annotation each. Concretely, we use the modified CNN model to get CAM during the forward pass directly [47], and the logit in the modified CNN model is the mean of attribution scores of CAM. The BoostLU function is applied element-wisely to the CAM output of the modified CNN to boost the scores of the highlighted regions, thereby compensating for the decrease of attribution scores in CAM caused by false negatives. It increases the logit value for positive labels and thus makes a better prediction. Furthermore, when we combine BoostLU with the recently proposed methods [21] that explicitly detect and modify false negatives during training, it helps to detect false negatives better, thus leading to better performance. As a result, we achieve state-of-the-art performance on PASCAL VOC [14], MS COCO [28], NUSWIDE [10], and Openimages V3 [23] datasets in a partial label setting. We summarize the contributions of this paper as follows. 1. We analyze how the false negative labels affect the explanation of the model in a partially annotated multi-label classification scenario. 2. We propose a simple but effective function, named BoostLU, that compensates for the damage of false negatives in a multi-label classification model with little extra computational cost. 3. When applied during inference, BoostLU boosts the baseline method (AN)'s test performance without additional training. 4. Combined with recent methods of detecting and modifying false negatives during training, BoostLU boosts the state-of-the-art performance on single positive and large-scale partial label settings. ## 2 Related Works **Partially annotated multi-label classification.** One primary stream to solve the partially annotated multi-label classification problem is to view unobserved labels as _missing labels_. Earlier works tackled this problem by solving matrix completion [5, 15, 43] or employing the Bayesian model [19, 37]. However, these works require loading all data into memory at once, thus making it infeasible to train deep neural networks. Curriculum labeling [13] proposed a bootstrapping strategy using model prediction. IMCL [17], SE [24], and SST [8] exploited label correlation and image similarity to generate regularization losses or pseudo-labels for missing labels. SARB [31] performed a category-wise mixup on feature space between labeled and unlabeled images to propagate information into missing labels. Zhou et al. 
[50] proposed entropy maximization loss that suppresses gradients from missing labels to promote learning from observed labels. Since a significant part of labels is negative in a multi-label setting [33], there is another stream to treat unobserved labels as negatives and try to lessen the harmful impact of false negatives. In other words, it views unobserved labels as _noisy labels_. ROLE [11] proposed to estimate unobserved labels while simultaneously regularizing the estimation with an average number of positive labels online. Kim et al. [21] observed the memorization effect [1] in a noisy multi-label classification setting that the model learns from clean labels first. Thus false negative labels are likely to show large loss values during training. Then they suggested three methods, LL-R, LL-Ct, and LL-Cp, that prevent false negatives from being memorized by rejecting, temporally correcting, and permanently correcting samples with large losses, respectively. P-ASL [2] assigned different scaling rates between annotated negatives and assumed negatives. It also ignored losses from categories with high prediction scores or label prior values. In this work, we look at false negatives differently and study their effect on model explanation. **Class activation mapping.** Class activation mapping (CAM) [49] provides information about where the classification model is attending to generate prediction scores. There are several follow-up works, including Grad-CAM [34], which generates model-agnostic attention maps, and CALM [20], which strengthens the interpretability of attention maps. Since CAM provides localization ability to classification models, it has been widely used for various vision tasks, such as weakly supervised object localization [9, 12, 41, 42] and weakly supervised semantic segmentation [25, 26, 29, 42, 46]. Recently, Zhang et al. [48] utilized CAM in facial expression recognition in the presence of noisy labels. They found that the model trained with noisy labels highlights only part of the features and suggested a random masking strategy to prevent memorizing partial features. Although there is a similarity in that they inspected the CAM output of the model in noisy label situations, our work is different since we focus on the noisy multi-label classification setting with another type of noise. ## 3 Preliminary This section introduces the formal definition of a partially annotated multi-label classification in SS3.1. Next, we briefly summarize the class activation map (CAM) in SS3.2. ### Problem Definition We aim to train a multi-label classification model with dataset \(\mathcal{D}\) consisting of pairs of input image \(\mathbf{x}\) and partially annotated label \(\mathbf{y}\). Each category can have three kinds of labels: 0, 1, and \(\phi\). In other words, \(\mathbf{y}\in\mathcal{Y}=\{0,1,\phi\}^{C}\) where \(\phi\) indicates the absence of annotation and \(C\) is the number of total categories. Denote the index set of positive labels, negative labels, and unannotated labels as \(\mathcal{I}^{p}\), \(\mathcal{I}^{n}\), and \(\mathcal{I}^{\phi}\), respectively. We study the setting where labels are sparsely annotated, i.e., \(|\mathcal{I}^{p}|+|\mathcal{I}^{n}|\ll|\mathcal{I}^{\phi}|\). 
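To fix the notation, the following is a tiny sketch (a hypothetical toy example, not the actual data pipeline) of how such a partially annotated target and its index sets can be represented:

```python
import numpy as np

C = 6
PHI = -1                                        # stand-in encoding for an unannotated label
y = np.array([PHI, 0, 1, PHI, PHI, PHI])        # one observed positive, one observed negative

I_p = np.where(y == 1)[0]                       # observed positive labels
I_n = np.where(y == 0)[0]                       # observed negative labels
I_phi = np.where(y == PHI)[0]                   # unannotated labels
assert len(I_p) + len(I_n) < len(I_phi)         # sparsely annotated setting
print(I_p, I_n, I_phi)
```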
A straightforward approach to train the model given partial labels is to treat unannotated labels by assuming negative (AN) and use binary cross-entropy as a loss function: \[\mathcal{L}_{AN}=\frac{1}{C}\left[\ \sum_{i\,\in\,\mathcal{I}^{p}}\ \mathcal{L}_{+}+\sum_{i\,\in\, \mathcal{I}^{n}\cup\mathcal{I}^{\phi}}\ \mathcal{L}_{-}\right] \tag{1}\] where \(\mathcal{L}_{+}=-\log(\sigma(g_{i}))\), \(\mathcal{L}_{-}=-\log(1{-}\sigma(g_{i}))\) and \(g_{i}\) is a logit for \(i\)-th category. However, labels whose true label is positive but unannotated are incorrectly assumed to be negative and become false negatives. Denote the index set of true negative and false negative labels as \(\mathcal{I}^{tn}\) and \(\mathcal{I}^{fn}\), then \(\mathcal{I}^{\,n}\,\cup\,\mathcal{I}^{\,\phi}=\mathcal{I}^{tn}\,\cup\, \mathcal{I}^{fn}\). We set the approach of training the model with Equation (1) as the baseline method and investigate the influence of false negatives on the multi-label classification model. ### Recap CAM Most CNN architectures consist of several convolution layers (Convs), followed by a Global Average Pooling (GAP) layer [27] and a fully connected (FC) layer. Let the last convolutional feature map be \(\mathbf{F}\in\mathbb{R}^{H\times W\times D}\), and a weight matrix of the FC layer be \(\mathbf{W}\in\mathbb{R}^{C\times D}\) where \((H,W)\) and \(D\) are the spatial size and channel size of the feature map, respectively. We can obtain the class activation map (CAM) [49] for class c (\(\mathbf{M}_{c}\)) by \[\mathbf{M}_{c}=\sum_{d=1}^{D}\mathbf{W}_{cd}\mathbf{F}_{d}\, \tag{2}\] where \(\mathbf{F}_{d}\) denotes \(d\)-th channel of \(\mathbf{F}\). \(\mathbf{M}_{c}\) explains the model's prediction by attributing scores on each pixel. Instead of performing post-processing to get CAM as in Equation (2), we can directly get CAM during the forward pass by reordering the last two layers from Convs-GAP-FC to Convs-1x1Conv-GAP where 1x1Conv is the one-by-one convolutional layer with the weight \(\mathbf{W}\)[47]. The output feature maps of 1x1Conv become the same as \(\mathbf{M}\), and the logit \(g_{c}\) becomes \[g_{c}=\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}(\mathbf{M}_{c})_{ij}. \tag{3}\] Thus, we can interpret each element \((\mathbf{M}_{c})_{ij}\) as an _attribution score_ at spatial location \((i,j)\) contributing to the logit for class \(c\). For the following sections, we utilize this modified architecture to facilitate the application of our method. ## 4 Impact of False Negatives on CAM It is well known that neural networks can memorize wrong labels due to their large model capacities [45]. Likewise, if we train a multi-label classification model with AN loss (Equation (1)) when given partial labels, the model is damaged by memorizing false negative labels [21]. It results in poor performance compared to the model trained with full labels, which false negatives have not influenced. To better understand why the model trained with partial labels performs less than that with full labels, we analyze the behavioral difference between these two models. Concretely, we use a class activation map (CAM) [49] to explain each model's prediction and compare the explanation results. We train two multi-label classification models on a COCO dataset [28] with the same CNN architecture ResNet-50 [16]: one model with full labels using binary cross entropy loss and the other with partial labels using AN loss (Equation (1)). 
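A minimal PyTorch-style sketch of this setup (the Convs-1x1Conv-GAP architecture of §3.2 and the AN loss of Equation (1)) is given below. It is illustrative rather than the authors' released code; the class count and backbone follow the COCO/ResNet-50 configuration mentioned above.

```python
import torch
import torch.nn as nn
import torchvision

class CAMHead(nn.Module):
    """Convs -> 1x1Conv -> GAP: the per-class maps M_c of Eq. (2) are produced
    directly in the forward pass and the logits are their spatial means, Eq. (3)."""
    def __init__(self, num_classes: int = 80):
        super().__init__()
        backbone = torchvision.models.resnet50(
            weights=torchvision.models.ResNet50_Weights.DEFAULT)
        self.convs = nn.Sequential(*list(backbone.children())[:-2])    # drop GAP and FC
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)  # plays the role of W

    def forward(self, x: torch.Tensor):
        cam = self.classifier(self.convs(x))      # (B, C, H, W) class activation maps
        logits = cam.mean(dim=(2, 3))             # global average pooling, Eq. (3)
        return logits, cam

def an_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy under the AN assumption, Eq. (1): `targets` is a 0/1
    tensor in which every unannotated class has already been mapped to 0."""
    return nn.functional.binary_cross_entropy_with_logits(logits, targets.float())
```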
We denote the CAM output from each model as \(\textit{CAM}_{full}\) and \(\textit{CAM}_{partial}\), respectively. To analyze the explanation of these two models, we first compute the Spearman correlation between \(\textit{CAM}_{full}\) and \(\textit{CAM}_{partial}\) on positive labels. We show the distribution of the correlation values on the test set in Figure 2a. For comparison, we consider a 2D Gaussian image centered at the midpoint and calculate the Spearman correlation coefficient between this Gaussian image and \(\textit{CAM}_{full}\). We observe that there is mainly a positive correlation between \(\textit{CAM}_{full}\) and _CAM\({}_{partial}\)_, while the correlation of the control group is distributed widely but mostly around zero. It implies that the overall structure (i.e., the attribution ranking among pixels) of _CAM\({}_{partial}\)_ is preserved despite the influence of false negative labels, therefore having a high Spearman correlation with _CAM\({}_{full}\)_. We can also visually inspect the similar structure between _CAM\({}_{partial}\)_ and _CAM\({}_{full}\)_ in Figure 1, where both CAMs highlight similar regions. Since we know that the overall structure is similar between _CAM\({}_{full}\)_ and _CAM\({}_{partial}\)_, we then compare the range of attribution scores between _CAM\({}_{full}\)_ and _CAM\({}_{partial}\)_. Concretely, we compute the mean of the highest 5% of attribution scores and the lowest 5%, respectively, for each CAM and summarize the distribution of these values on the test set in Figures 2b and 2c. Note that we average over 5% of the scores to reduce the effect of outliers. We observe that the top-ranking attribution scores of _CAM\({}_{partial}\)_ from positive labels drop sharply compared to _CAM\({}_{full}\)_, while these scores from negative labels remain similar. Also, there is little difference in the bottom-ranking attribution scores between _CAM\({}_{full}\)_ and _CAM\({}_{partial}\)_, both on positive and negative labels. It implies that false negatives mainly affect the model's understanding in regions with relatively high attribution scores, especially for positive labels. Consequently, the decrease of attribution scores at specific regions in CAM leads to a decrease in the logit value (since the logit is the average of attribution scores on CAM as in Equation (3)), making the model predict a lower score for the positive category. The change of gradient during training can explain the occurrence of this phenomenon.

**Gradient analysis.** In Equation (1), recall that the BCE loss is \(\mathcal{L}_{+}\) with a positive target and \(\mathcal{L}_{-}\) with a negative one. Their gradients with respect to the logit \(g\) are

\[\frac{\partial\mathcal{L}_{+}}{\partial g}=\sigma(g)-1,\ \frac{\partial\mathcal{L}_{-}}{\partial g}=\sigma(g). \tag{4}\]
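As a concrete reference for the comparison above, the following sketch computes the two statistics reported in Figure 2 for a single pair of maps; the arrays are synthetic toy data, not the actual COCO CAMs.

```python
import numpy as np
from scipy.stats import spearmanr

def cam_statistics(cam_full, cam_partial, frac=0.05):
    """Spearman rank correlation between two (H, W) maps and the mean of the
    top/bottom `frac` attribution scores of each map."""
    rho, _ = spearmanr(cam_full.ravel(), cam_partial.ravel())
    k = max(1, int(frac * cam_full.size))
    stats = {}
    for name, cam in [("full", cam_full), ("partial", cam_partial)]:
        scores = np.sort(cam.ravel())
        stats[name] = {"top": scores[-k:].mean(), "bottom": scores[:k].mean()}
    return rho, stats

# Toy example: the "partial" map keeps the ranking but with flattened positive peaks.
rng = np.random.default_rng(0)
grid = (np.arange(14)[:, None] - 7) ** 2 + (np.arange(14)[None, :] - 7) ** 2
cam_full = rng.normal(size=(14, 14)) + 3.0 * np.exp(-grid / 20.0)
cam_partial = 0.4 * np.clip(cam_full, 0, None) + np.clip(cam_full, None, 0)
print(cam_statistics(cam_full, cam_partial))
```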
For a training image \(\mathbf{x}\), the gradient difference on the logit \(g\) between the partial label (with AN assumption) and full label cases is given by

\[\frac{1}{C}\left[\sum_{i\,\in\,\mathcal{I}^{p}}\frac{\partial\mathcal{L}_{+}}{\partial g_{i}}+\sum_{i\,\in\,\mathcal{I}^{fn}}\frac{\partial\mathcal{L}_{-}}{\partial g_{i}}+\sum_{i\,\in\,\mathcal{I}^{tn}}\frac{\partial\mathcal{L}_{-}}{\partial g_{i}}\right] \tag{5}\]
\[-\frac{1}{C}\left[\sum_{i\,\in\,\mathcal{I}^{p}}\frac{\partial\mathcal{L}_{+}}{\partial g_{i}}+\sum_{i\,\in\,\mathcal{I}^{fn}}\frac{\partial\mathcal{L}_{+}}{\partial g_{i}}+\sum_{i\,\in\,\mathcal{I}^{tn}}\frac{\partial\mathcal{L}_{-}}{\partial g_{i}}\right]\]
\[=\frac{1}{C}\left[\sum_{i\,\in\,\mathcal{I}^{fn}}\left(\frac{\partial\mathcal{L}_{-}}{\partial g_{i}}-\frac{\partial\mathcal{L}_{+}}{\partial g_{i}}\right)\right]\ =\ \frac{|\mathcal{I}^{fn}|}{C}\,.\]

Equation (5) shows that the logit receives additional gradients proportional to the number of false negative labels in a partial label setting. Therefore, as training progresses, the additional gradients from false negatives are gradually accumulated in the logit, making the logit smaller than that of the model trained on full labels. Since the logit is equal to the average of CAM, the attribution scores of _CAM\({}_{partial}\)_ become lower than those of _CAM\({}_{full}\)_.

Figure 2: **CAM Analysis on COCO test set.** (a): Distribution of Spearman correlation coefficients between _CAM\({}_{full}\)_ and _CAM\({}_{partial}\)_ from the same image. Overall positive correlation implies that _CAM\({}_{partial}\)_ has a structure similar to _CAM\({}_{full}\)_. (b), (c): Boxplot of the average of top/bottom 5% of attribution scores, respectively. The damage of false negative labels to the model mainly lowers the upper attribution scores for positive labels while maintaining its overall structure in CAM.

## 5 Proposed Method

In this section, we propose a conceptually simple but effective method to make the model trained with partial labels resemble the model trained with full labels by mimicking the explanation. In §5.1, we propose a function, BoostLU, devised to compensate for the attribution scores of the explanation damaged by false negatives. We then introduce three scenarios that utilize our function through §5.2 \(\sim\) §5.4.

### BoostLU

From the modified CNN architecture described in §3.2, define the convolutional layers (Convs-1x1Conv) as \(\Phi\). Given an input image \(\mathbf{x}\), its class activation map (CAM) is
To achieve this, we devise a piece-wise linear function that boosts the attribution scores that are above a certain threshold: \[f(x)=\begin{cases}\alpha x+(1-\alpha)\beta,&x\geq\beta\\ x,&x<\beta\end{cases}, \tag{6}\] where \(\alpha\) is a scaling factor with \(\alpha>1\), and \(\beta\) is a threshold determining whether to boost the score. Since top-ranking attribution scores on CAM tend to have large positive values for positive labels and around zero for negative labels (as seen in Figure 2b), we search for the values of \(\beta\) around zero. Since we empirically observe no significant difference in model performance for different \(\beta\) (these results are reported in the Appendix), we only consider the simplest case of \(\beta=0\). Then we can rewrite Equation (6) in a ReLU-like form as \[\text{BoostLU}(x)=max(x,\alpha x). \tag{7}\] By applying BoostLU to each element of CAM, as illustrated in Figure 3, BoostLU boosts positive attribution scores by \(\alpha\) times, which are the main target to be damaged by false negatives, while maintaining the negative scores unchanged. These selectively boosted attribution scores are aggregated through the GAP layer to produce a logit value as \[g(\mathbf{x})=(\text{GAP}\circ\text{BoostLU}\circ\Phi)(\mathbf{x}). \tag{8}\] From now on, we will consider three different scenarios for applying BoostLU in multi-label classification. ### Usage 1: BoostLU in inference only Since the idea of BoostLU comes from analyzing the CAM of a model which finished training with AN loss, we first propose to apply BoostLU only during the inference phase of that model. Initially, this model produces low logits for categories whose label is positive. However, applying BoostLU increases the corrupted attribution scores and produces higher logits. At the same time, boosting effect is not much for categories whose label is negative; therefore, its logits remain almost the same. As a result, prediction scores are better separated between samples with positive and negative labels, improving average precision. ### Usage 2: BoostLU in both training and inference Next, we consider applying BoostLU during the training phase with AN loss and the inference phase. The gradient of logit \(g\) with respect to the attribution score on CAM at location \((i,j)\) (i.e., \(\mathbf{M}_{ij}\)) then becomes \[\frac{\partial g}{\partial\mathbf{M}_{ij}}=\begin{cases}\alpha/HW,&\mathbf{M}_{ij}\geq 0 \\ 1/HW,&\mathbf{M}_{ij}<0\.\end{cases} \tag{9}\] Compared to the case that does not use BoostLU, where every spatial location gets a uniform gradient of \(1/HW\), the locations with positive attribution scores receive gradients boosted by \(\alpha\) times. Thanks to the boosted gradients, these locations are encouraged to produce higher attribution scores during training when the model receives a positive label. Also, when a true negative label comes in, these locations are encouraged to produce lower attribution scores. However, in practice, we observe only marginal improvement in model performance. It is because the boosted gradients have an adverse effect when false negatives come in as input. 
That is, BoostLU also boosts the wrong direction of gradients from false negatives, which can be easily seen by combining Equations (4) and (9):

\[\frac{\partial\mathcal{L}_{-}}{\partial\mathbf{M}_{ij}}=\frac{\partial\mathcal{L}_{-}}{\partial g}\cdot\frac{\partial g}{\partial\mathbf{M}_{ij}} \tag{10}\]

Note that \(\partial\mathcal{L}_{-}/\partial g\) has the wrong sign for false negatives, and it decreases CAM values.

Figure 3: **Schematic diagram of applying BoostLU.** BoostLU is applied element-wise to the model's CAM output to compensate for the attribution scores damaged by false negative labels.

### Usage 3: Combination with Large Loss Modification

To alleviate the problem mentioned above, we propose combining our BoostLU with recent studies [21, 2] that detect and treat suspicious false negatives while training multi-label classification models. We especially adopt three methods, i.e., LL-R, LL-Ct, and LL-Cp [21], since they work on several partial label settings. When these methods are combined with BoostLU, they suppress the side effects caused by false negatives. As a result, the model can take full advantage of the boosted gradients from the positive labels during training. Moreover, because these combined methods consider samples with relatively high prediction scores among unobserved labels as false negatives, BoostLU helps the model detect more false negatives by boosting their logit values.

## 6 Experiments

To validate the efficacy of our proposed method, we report our experimental results on two partial label settings: single positive label (§6.1) and large-scale partial label (§6.2). In both sections, we adopt mean Average Precision (mAP) as an evaluation metric and report the performance on the test set using the model weight with the highest mAP on the validation set. We fix our hyperparameters as \(\alpha=5,\,\beta=0\). Next, we present analysis results in §6.3.

### Single positive label

**Datasets.** We target four multi-label classification datasets: PASCAL VOC 2012 [14], MS COCO 2014 [28], NUSWIDE [10], and CUB [38]. Each dataset is annotated for 20 classes, 80 classes, 81 concepts, and 312 attributes, respectively. Since they are fully annotated, we only keep one positive label and drop the rest of the labels for every training image to build a single positive label setting identical to [11].

**Hyperparameter settings.** For a fair comparison, we set the same search space as [11, 50]: \(\{8,16\}\) for batch size and \(\{10^{-2},10^{-3},10^{-4},10^{-5}\}\) for learning rate. We train the model for 10 epochs with the Adam optimizer [22]. LL-R, LL-Ct, and LL-Cp [21] have a hyperparameter \(\Delta_{rel}\) that controls the slope of increase in the modification rate. We set \(\Delta_{rel}=0.5\) for LL-R, \(0.2\) for LL-Ct, and \(0.1\) for LL-Cp, respectively. We set a 10x learning rate for the last 1x1Conv layer.

**Implementation details.** We follow the same configurations as [11, 50, 21]. Specifically, 20% of the original training set is used for validation. A ResNet-50 [16] CNN backbone pre-trained on ImageNet [44] is used as a feature extractor. Each image is resized to 448x448 before being fed to the CNN, and only random horizontal flipping is used for data augmentation during training. Note that some categories do not have positive labels in the CUB dataset in the generated single positive label setting. In these categories, we do not apply BoostLU during training, as the benefit from the boosted gradient becomes weakened.
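As a compact reference before turning to the results, the following is a minimal PyTorch-style sketch of the BoostLU operation of Eqs. (7)-(8). It is illustrative only (the official implementation is in the repository linked in the abstract), and the `CAMHead` it refers to is the hypothetical module sketched earlier.

```python
import torch
import torch.nn as nn

class BoostLU(nn.Module):
    """BoostLU(x) = max(x, alpha * x), Eq. (7): with alpha > 1 this multiplies
    positive attribution scores by alpha and leaves negative scores unchanged."""
    def __init__(self, alpha: float = 5.0):        # alpha = 5, beta = 0 as in Sec. 6
        super().__init__()
        self.alpha = alpha

    def forward(self, cam: torch.Tensor) -> torch.Tensor:
        return torch.maximum(cam, self.alpha * cam)

def boosted_logits(cam: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Eq. (8): element-wise BoostLU on the CAM followed by global average pooling."""
    return BoostLU(alpha)(cam).mean(dim=(2, 3))

# Hypothetical usage with the CAMHead module sketched earlier:
#   logits, cam = model(images)
#   scores = boosted_logits(cam)     # Usage 1: boosted logits at inference
# For Usages 2 and 3, the boosted logits also replace `logits` in the training
# loss (with LL-R/LL-Ct/LL-Cp additionally modifying large-loss entries).
```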
\begin{table}
\begin{tabular}{c c c||c c}
\hline \hline
BoostLU (inference) & BoostLU (training) & LL-R & VOC & COCO \\
\hline \hline
 & & & 86.10 & 64.58 \\
\hline
✓ & & & 87.31 & 66.27 \\
\hline
✓ & ✓ & & 86.73 & 65.33 \\
\hline
 & & ✓ & 88.24 & 70.60 \\
\hline
 & ✓ & ✓ & 87.18 & 68.45 \\
\hline
✓ & & ✓ & 88.90 & 70.87 \\
\hline
✓ & ✓ & ✓ & **89.27** & **72.82** \\
\hline \hline
\end{tabular}
\end{table} Table 1: **Ablation study on BoostLU and LL-R.** We test seven combinations of using BoostLU and LL-R [21] on the VOC and COCO datasets. Training a model with both LL-R and BoostLU and applying BoostLU during inference shows the best mAP.

\begin{table}
\begin{tabular}{l||c c c c}
\hline \hline
Methods & VOC & COCO & NUSWIDE & CUB \\
\hline
LL-R [21] & 88.27 & 70.70 & 48.76 & 19.56 \\
+ BoostLU (Ours) & **89.29** & **72.89** & **49.59** & 19.80 \\
LL-Ct [21] & 87.79 & 70.29 & 48.08 & 19.06 \\
+ BoostLU (Ours) & **88.61** & 71.78 & 48.37 & 19.25 \\
LL-Cp [21] & 87.44 & 70.27 & 47.92 & 19.21 \\
+ BoostLU (Ours) & **87.81** & 71.41 & 48.61 & 19.34 \\
\hline \hline
\end{tabular}
\end{table} Table 2: **Experimental results on various datasets with the single positive label setting.** Each number shows the average mAP over three experiments. A bold number means the best performance. Results of methods except for LL-R, LL-Ct, and LL-Cp are taken from [50]. We report the reimplemented results for LL-R, LL-Ct, and LL-Cp with the same hyperparameter search space as [11, 50].

**Results of ablation study.** We first conduct ablation studies on the PASCAL VOC and COCO datasets. The results are reported in Table 1. First, we show the performance of the model trained with AN loss in the first row. In the second row, it can be seen that when BoostLU is applied during inference of this model, its test performance is improved even without additional training. This confirms the property of BoostLU that compensates for the damaged attribution scores. However, if we further apply BoostLU while training (third row), the performance improvement is lower than when BoostLU is applied only during inference. Here we observe the side effect of BoostLU: the gradient received by regions with positive attribution scores is boosted even for false negative labels. In the fourth row, we show the performance of LL-R, which rejects large losses during training. We then train the model by applying both LL-R and BoostLU during training and BoostLU during inference. Its performance is reported in the final row, and its improvement is much more significant than in the case where LL-R is not applied (+0.63 vs. +1.03 on PASCAL, and +0.75 vs. +2.22 on COCO). Thanks to LL-R filtering out false negatives, the side effect of the boosted gradient is minimized. Moreover, since our BoostLU helps LL-R detect false negatives, its advantage is further amplified. We also find in the last three rows that when we combine BoostLU with LL-R, applying BoostLU only during training or only during inference results in a performance drop compared to applying it during both phases.
This shows that BoostLU plays a vital role both in training and inference, together with large loss modification methods. In particular, it is crucial to apply BoostLU during inference to achieve high performance. Additional discussion about this is described in the Appendix. From now on, we will only report the experimental results using the configuration of the last row (BoostLU in inference + BoostLU in training + LL-R in training). **Comparison with prior arts.** We compare our results with recent state-of-the-art: Label Smoothing (LS) [30], Asymmetric loss (ASL) [33], ROLE (with LinearInit) [11], and Entropy-maximization loss (EM) with Asymmetric Pseudo-Labeling (APL) [50]. We train the network three times and report the average test performance. The results are shown in Table 2. We find that applying BoostLU in both training and inference consistently improves the performance of LL-R, LL-Ct, and LL-Cp in all datasets, only with little additional computational cost. It achieves +1.02, +0.82, and +0.37 mAP improvement in VOC, as well as +2.19, +1.49, and +1.14 mAP improvement in COCO, respectively. Especially the performance of LL-R + BoostLU shows the most significant increase, achieving state-of-the-art performance and reaching closest to the full label performance on VOC, COCO, and NUSWIDE. It also surpasses the previous state-of-the-art method EM+APL which does not use AN assumption on these datasets. However, the performance improvement is not that large in CUB. Since CUB has an annotation for attributes, the number of false negative labels is much higher, and this may increase the side effect of BoostLU when applied during training. ### Large-scale partial label **Dataset.** We target a partially annotated OpenImages V3 [23] dataset which consists of 3.4M training images, 41,620 validation images, and 125,436 test images with 5,000 trainable classes (having more than 30 human-verified samples in the training set and 5 in the valid or test sets). We sort these classes in ascending order by the number of annotations in the training set and divide them into five groups of equal size 1,000. We report the mAP score averaged within each group and the entire 5,000 classes. **Implementation details.** We use ImageNet [44] pre-trained ResNet-101 [16] as a feature extractor, the same as prior works. We follow [21] to set the learning rate as \(2\times 10^{-5}\) and batch size as 288. We train the model for 20 epochs and set \(\Delta_{rel}=0.005\). We resize every image to 224x224 resolution and perform a random horizontal flip during training. We set a 10x learning rate for the last 1x1Conv layer. **Results.** We compare our results with prior works: CNN-RNN [39], Curriculum Labeling [13], IMCL [17], and P-ASL [2]. As shown in Table 3, BoostLU also works well in a real partial label scenario. Combined with LL-R, LL-Ct, and LL-Cp, it boosts their performance by a large margin: improvement of +1.33, +1.36, and +1.58 mAP, respectively. All of the combined methods surpass other previous methods and achieve state-of-the-art performance. In particular, LL-Cp + BoostLU shows the highest 84.04 mAP. ### Analysis **Qualitative results.** Figure 4 visualizes the CAM results from four different methods. The category corresponding to the CAM is shown above the image. 
The prediction score, obtained by averaging attribution scores on the CAM and applying a sigmoid activation, is shown above each CAM. First, column (c) shows that a model trained with AN loss gives low prediction scores due to the damage from false negatives. Although this model highlights similar regions for a given input image, the attribution scores of the corresponding regions are considerably shrunk compared to column (b). When we perform inference by attaching BoostLU to this model, it can be seen in column (d) that BoostLU successfully recovers the model's explanation, yielding high prediction scores. For LL-R + BoostLU in column (e), the model explanation is further improved due to the role of LL-R and BoostLU during training, which further accelerates the improvement of the attribution scores in the highlighted regions. Among the methods, it is the most similar to the explanation of the model trained with full annotation (column (b)).

\begin{table} \begin{tabular}{|l||c c c c c|c|} \hline Methods & Group 1 & Group 2 & Group 3 & Group 4 & Group 5 & All Classes \\ \hline \hline CNN-RNN [39] & 68.76 & 69.70 & 74.18 & 78.52 & 84.61 & 75.16 \\ Curriculum Labeling [13] & 70.37 & 71.32 & 76.23 & 80.54 & 86.81 & 77.05 \\ IMCL [17] & 70.95 & 72.59 & 77.64 & 81.83 & 87.34 & 78.07 \\ P-ASL [2] & 73.19 & 78.61 & 85.11 & 87.70 & 90.61 & 83.03 \\ \hline LL-R [21] & 77.76 & 79.07 & 81.94 & 84.51 & 89.36 & 82.53 \\ + BoostLU (Ours) & 79.28 & 80.81 & 83.32 & 85.63 & 90.27 & 83.86 \\ LL-Ct [21] & 77.76 & 79.18 & 81.97 & 84.46 & 89.51 & 82.58 \\ + BoostLU (Ours) & 79.43 & 80.75 & 83.41 & 85.70 & 90.41 & 83.94 \\ LL-Cp [21] & 77.49 & 79.22 & 81.89 & 84.51 & 89.18 & 82.46 \\ + BoostLU (Ours) & 79.53 & 81.04 & 83.40 & 85.85 & 90.39 & **84.04** \\ \hline \end{tabular} \end{table} Table 3: **Experimental results on the OpenImages V3 dataset.** Each group includes 1,000 classes without overlapping. Group 1 has the fewest annotations, and Group 5 has the most. The number of annotations increases as the group number increases. LL-R, LL-Ct, and LL-Cp are reimplemented, and the other results are borrowed from [2]. A bold number shows the best performance.

**Synergy effect of BoostLU and large loss modification methods during training.** We train LL-R and LL-R + BoostLU on the COCO dataset with the same \(\Delta_{rel}=0.5\) to make both models reject the same number of samples during training. We then inspect how many of the rejected samples are false negative labels. Figure 5 shows the number of false negative labels rejected by each model per epoch. It can be seen that after the warm-up phase (first epoch), LL-R + BoostLU rejects more false negatives than LL-R in every epoch. This is because BoostLU boosts the logit values of false negative samples, thus strengthening the large loss modification methods' ability to detect false negatives. At the same time, it also reduces the number of true negative samples that the model incorrectly rejects, further contributing to performance improvement.

## 7 Conclusion

In this paper, we studied the effect of false negative labels on model explanation when assuming unobserved labels as negatives in a partially annotated multi-label classification situation. We found that the overall spatial shape of the explanation tends to be preserved, but the scale of attribution scores is significantly affected. Based on these findings, we proposed a conceptually simple piece-wise linear function BoostLU that compensates for the damaged attribution scores.
Through several experiments, we confirmed that BoostLU successfully contributed to bridging the explanation of the model closer to the explanation of the model trained with full labels. Furthermore, combined with large loss modification methods, it achieved state-of-the-art performance on several multi-label datasets. **Acknowledgements.** Youngwook Kim thanks Junghyun Lee and Youngmin Ro for their valuable help. Jae Myung Kim thanks the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program and the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for support. This work is in part supported by National Research Foundation of Korea (NRF, 2021R1A4A1030898(10%)), Institute of Information & communications Technology Planning & Evaluation (IITP, 2021-0-00106 (50%), 2021-0-01059 (20%), 2021-0-00180 (20%)) grant funded by the Ministry of Science and ICT (MSIT), INMAC, and BK21-plus. This work is also supported by DFG project number 276693517, by BMBF FKZ: 01IS18039A, by the ERC (853489 - DEXIM), and by EXC number 2064/1 - project number 390727645. Figure 4: **Qualitative results.** Categories and their corresponding prediction scores are displayed above the images and CAM results. LL-R + BoostLU is the closest to the explanation and prediction score of the model trained with full labels. Figure 5: **Comparison of the number of rejected false negative labels.** BoostLU helps LL-R detect more false negative labels in every epoch.
2308.09155
Dielectric Screening and Electric Field Control of Ferromagnetism at the CaMnO$_3$/CaRuO$_3$ Interface
Control of magnetism by an applied electric field is a desirable technique for the functionalization of magnetic materials. Motivated by recent experiments, we study the electric field control of the interfacial magnetism of CaRuO$_3$/CaMnO$_3$ (CRO/CMO) (001), a prototype interface between a non-magnetic metal and an antiferromagnetic insulator. Even without the electric field, the interfacial CMO layer acquires a ferromagnetic moment due to a spin-canted state, caused by the Anderson-Hasegawa double exchange (DEX) between the Mn moments and the leaked electrons from the CRO side. An electric field would alter the carrier density at the interface, leading to the possibility of controlling the magnetism, since DEX is sensitive to the carrier density. We study this effect quantitatively using density-functional calculations in the slab geometry. We find a text-book like dielectric screening of the electric field, which introduces polarization charges at the interfaces and the surfaces. The extra charge at the interface enhances the ferromagnetism via the DEX interaction, while away from the interface the original AFM state of the Mn layers remains unchanged. The effect could have potential application in spintronics devices.
Churna Bhandari, S Satpathy
2023-08-17T19:03:55Z
http://arxiv.org/abs/2308.09155v1
Dielectric Screening and Electric Field Control of Ferromagnetism at the CaMnO\({}_{3}\)/CaRuO\({}_{3}\) Interface ###### Abstract Control of magnetism by an applied electric field is a desirable technique for the functionalization of magnetic materials. Motivated by recent experiments, we study the electric field control of the interfacial magnetism of CaRuO\({}_{3}\)/CaMnO\({}_{3}\) (CRO/CMO) (001), a prototype interface between a non-magnetic metal and an antiferromagnetic insulator. Even without the electric field, the interfacial CMO layer acquires a ferromagnetic moment due to a spin-canted state, caused by the Anderson-Hasegawa double exchange (DEX) between the Mn moments and the leaked electrons from the CRO side. An electric field would alter the carrier density at the interface, leading to the possibility of controlling the magnetism, since DEX is sensitive to the carrier density. We study this effect quantitatively using density-functional calculations in the slab geometry. We find a text-book like dielectric screening of the electric field, which introduces polarization charges at the interfaces and the surfaces. The extra charge at the interface enhances the ferromagnetism via the DEX interaction, while away from the interface the original AFM state of the Mn layers remains unchanged. The effect could have potential application in spintronics devices. ## I Introduction There is a considerable interest in controlling the magnetism of magnetic materials by an external electric field because of its potential applications in spintronics. Heterostructures between transition metal oxides have been identified as possible platforms for achieving this magnetoelectric coupling effect. [1; 2; 3; 4] One such prototypical interface is the (001) interface between the paramagnetic metal CaRuO\({}_{3}\) (CRO) and the antiferromagnetic insulator CaMnO\({}_{3}\) (CMO), which has been well studied, both experimentally and theoretically. [1; 5; 6; 7; 8] While CMO is an antiferromagnetic insulator in the bulk, the interface layer adjacent to the paramagnetic CRO acquires a net ferromagnetic moment, while the remaining part of the heterostructure remains unchanged. This has been explained [1; 5] to be due to the Anderson-Hasegawa-de Gennes double exchange (DEX) interaction [9; 10; 11] between the interfacial Mn magnetic moments and the leaked electrons from the metallic CRO side to the CMO side. The leaked electrons occupy the itinerant Mn-\(e_{g}\) states, which then mediate the DEX interaction between the Mn-\(t_{2g}\) core moments, fixed on the lattice sites. The amount of leaked electrons is sufficiently large to produce a spin canted state in the interfacial MnO layer, resulting in a robust net magnetic moment of about \(0.85\mu_{B}\) per interfacial Mn atom.[1] In the DEX mechanism, the spin canting angle is quite sensitive to the itinerant carrier concentration \(x\), driving the AFM state into a spin-canted state at first and eventually into an FM state with increasing \(x\). This is apparent from the De Gennes expression [11] for the canting angle, to which we return later, viz., \(\theta_{c}=2\cos^{-1}(2^{-1}|t|x/J)\), where \(t\) is the electron hopping integral and \(J\) is the AFM Heisenberg exchange. It is therefore expected that an applied electric field would affect the DEX interaction by modifying the carrier concentration in the magnetic layers. 
However, the extent of this effect is unknown since dielectric screening theory indicates merely that the polarization charges would accumulate somewhere in the boundary regions, not necessarily in the magnetic layers. Therefore, this issue needs to be studied in detail. Indeed, as our density functional calculations find, much of the surface polarization charge, for example, appears in the vacuum region. It is only the carriers that appear in the magnetic layers that matter as far as the DEX mechanism is concerned. We have chosen the prototypical CRO/CMO system for our work, since there are already several experimental studies on this system reported in the literature. In fact, Grutter et al. [8] have recently studied experimentally the electric field dependence of magnetism in this system. They find an increase of the ferromagnetic moment with an applied electric field and conclude that it originates from the interface MnO\({}_{2}\) layer. In this work, we study the effect of an electric field on the electronic structure and magnetism of the CRO/CMO interface in the slab geometry from density-functional calculations. We find a text-book like dielectric screening of the applied field, which leads to a charge accumulation at the slab surfaces and the interface. However, quite interestingly, not all screening charges occur in the surface or the interface atomic layers. For example, the surface polarization charge is found to occur outside the nominal surface, with little or no charge accumulated on the Mn surface layers or the bulk layers. As for the interfacial Mn layer, a significant amount of extra charge does accumulate there, which reduces the spin canting angle via double exchange when the electric field is applied, leading to an increased net ferromagnetic moment as a result.

Figure 1: Schematic diagram of the CMO/CRO heterostructure and the spin-canted state in the interfacial MnO\({}_{2}\) layer. Shown is the supercell used in the DFT calculations along with the electric potential seen by the electrons (blue line) due to the applied electric field. By increasing the charge transfer across the interface, the electric field enhances the interfacial ferromagnetism via double exchange by reducing the canting angle \(\theta\). Apart from the enhanced magnetism of the interfacial MnO\({}_{2}\) layer, we find that the anti-ferromagnetism of the remaining CMO layers remains more or less unaffected by the electric field.

## II Density-functional method

In our calculations, we considered a slab consisting of five layers of CMO and three layers of CRO, (CMO)\({}_{5}\)/(CRO)\({}_{3}\), with each layer consisting of two formula units to describe the anti-ferromagnetic Mn moments in CMO. An extra layer of electrically neutral CaO was added as shown in Fig. 1, so that the metal-oxygen octahedra MO\({}_{6}\) are complete on both surfaces. Test calculations using a larger number of layers did not substantially change the results. We used the same in-plane lattice constant as the bulk CRO (\(a=5.27\) Å), while the out-of-plane lattice constant was adjusted to conserve the bulk volume of each constituent material, and a vacuum region of 14 Å was added on each side of the slab. A sawtooth-shaped electrostatic potential was added, as indicated by the dashed line in Fig. 1, which shows the supercell we used in the DFT calculations. Dipole correction was included following the work of Bengtsson [12].
The atomic positions were relaxed using the Projector Augmented Wave method (PAW) [13; 14] in the generalized gradient approximation (GGA) for the exchange-correlation functional as implemented in the Vienna Simulation Package (VASP) [15]. The Quantum Espresso code [16] was used to study the effect of the external electric field on the electronic and magnetic properties, where a norm-conserving ultrasoft pseudo-potential was used together with the GGA exchange-correlation functional with the Hubbard parameters \(U=5\) eV and \(J=0\) for the Mn atoms.

## III Dielectric screening charges: model and DFT results

The results of our DFT calculations, both with and without an electric field, are shown in Figs. 2 and 3, where we have shown the planar averaged Kohn-Sham potential \(V(z)\) and the charge density \(\rho(z)\), respectively. The planar-averaged quantities are given by the expression \(V(z)=A_{\rm cell}^{-1}\int_{\rm cell}V(\vec{r})d^{2}r\) and similarly for \(\rho(z)\), where the integration is along the plane, normal to the interface, and \(A_{\rm cell}\) is the surface cell area. The positions of the individual atomic layers such as MnO\({}_{2}\) can be identified in both figures from the \(\delta\)-function like peaks. As seen from Fig. 2, the planar-averaged quantities for the cases with and without the electric field nearly overlap with one another, since the differences are very small. The differences, \(\Delta V(z)\) and \(\Delta\rho(z)\), induced by the electric field are shown as blue lines in Figs. 2 and 3, respectively, on an exaggerated scale. The DFT results reveal a remarkable text-book like behavior for the dielectric screening. Points to note are: (a) piecewise linear potentials in all regions of the slab (Fig. 2), corresponding to the screened electric fields predicted by elementary electrostatics theory (Fig. 4), and (b) accumulation of the screening charges at the two surfaces and the interface layer. A somewhat surprising result is that the screening charges at the two surfaces with the vacuum do not occur on the surface atomic layers as might have been anticipated, but rather occur well inside the vacuum region. As seen from Fig. 3, where the polarization charges at the two surfaces have been indicated by colored areas, the polarization charges occur outside the surface CaO layers, at a distance of \(\sim\)1.3 Å away from the atomic planes. The screened potential from the DFT calculations (Fig. 2) compares very well with the results of the dielectric screening from elementary electrostatics theory, shown in Fig. 4. The heterostructure is placed between two capacitor plates that produce the electric field \(E\). In the dielectric model, the polarization charges at various boundaries are determined from Gauss' law, and these are indicated in Fig. 4.

Figure 2: Planar averaged potential \(V(z)\) seen by the electron both with (red line) and without the electric field (black dashed line). The difference between them, \(\Delta V\), shown as the blue line, follows a text-book like linear behavior in each dielectric region as predicted from the dielectric model. The dashed line next to the blue line is a guide to the eye indicating the piece-wise linear behavior, the slope of which yields the screened electric field.
Taking \(\kappa\) to be the dielectric constant of the insulator (CMO), the surface charge densities are: \(\sigma_{0}=-\epsilon_{0}\)E at the metal surface, where \(\epsilon_{0}\) is the vacuum permittivity, \(\sigma_{1}=-\sigma_{0}/\kappa\) is the charge density at the interface between the metal and the dielectric, and \(\sigma_{2}=-\sigma_{0}(1-1/\kappa)\) is the charge density at the surface dielectric surface. Taking the value \(\kappa\approx 5\) to fit with our DFT results for the screening charges and the vacuum permittivity \(\epsilon_{0}=8.85\times 10^{-12}F/m=5.53\times 10^{-3}|e|/\) (V\(\cdot\) A), for the case E = 0.1 eV/A, we get the numerical values: \(\sigma_{0}=5.53\), \(\sigma_{1}=-1.11\), and \(\sigma_{2}=-4.42\), in units of \(10^{-4}\)\(|e|/\)A\({}^{2}\). These values together with the corresponding DFT results have been listed in Table 1. The DFT values were computed by integrating the planar averaged charge difference \(\Delta\rho\) near the CRO and CMO surfaces indicated by the colored areas in Fig. 3. The computed values are \(\sigma_{0}^{DFT}=5.4\times 10^{-4}\)\(|e|/\)A\({}^{2}\) and \(\sigma_{2}^{DFT}=-4.3\times 10^{-4}\)\(|e|/\)A\({}^{2}\). Since the interface charge \(\sigma_{1}\) is relatively smaller and charges fluctuate quite a bit near the CMO/CRO interface, we were not able to get the value of \(\sigma_{1}\) reliably by direct integration. Instead, we obtained \(\sigma_{1}\) from the charge neutrality condition, viz., \(\sum_{i=1}^{3}\sigma_{i}=0\), using the integrated values for \(\sigma_{0}\) and \(\sigma_{2}\), with the result \(\sigma_{1}^{DFT}=-1.1\times 10^{-4}\)\(|e|/\)A\({}^{2}\). All these values agreed quite well with the polarization charges obtained from the dielectric model (Table 1), assuming the dielectric constant to be \(\kappa\approx 5\). In comparison to this, the corresponding experimental value \(\kappa\approx 7\), inferred from the optical conductivity data[17], is somewhat larger. The reason for this difference could be due to the approximate nature of the functionals used in the DFT calculations or due to the small number of layers in the supercell used, so that the bulk dielectric screening limit has not been reached. As seen from Figs. 2 and 4 and Table 1, the DFT results agree quite well with the text-book like screening profile including the screened electric fields and the polarization charges at the boundaries. Fig. 2 shows that the final screened electric fields in various regions are uniform (linear \(\Delta\)V) as expected from the electrostatics model. While in the vacuum region, the applied electric field is unchanged, it is completely screened in the metallic region (CRO) as expected (\(\kappa=\infty\)) and is reduced by the dielectric constant \(\kappa\) in the insulating region (CMO). Taking the ratio of the screened electric field in the CMO region to the applied electric field (Fig. 2), we get a second estimate \(\kappa\approx 4.4\), which is similar to the value \(\kappa\approx 5\) obtained from the surface polarization charges discussed above. As already mentioned, we find that the polarization charges do not necessarily reside on the atomic layers. For our purpose, it is important to study the electronic charges on the individual atomic layers, especially the Mn layers, as the itinerant Mn-e\({}_{g}\) electrons mediate the DEX between the core t\({}_{2g}\) spins leading to spin canting. 
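The dielectric-model estimates in Table 1 follow from a few lines of arithmetic. The sketch below simply re-evaluates the Gauss-law relations with the values quoted above, using the sign convention of Table 1 (\(\sigma_{0}\) taken positive at the metal surface); it is a restatement of the model, not an independent calculation.

```python
# Polarization charges induced by a field E across metal / dielectric / vacuum,
# following the elementary electrostatics relations quoted in the text
# (sigma_0 at the metal surface, sigma_1 at the metal/dielectric interface,
#  sigma_2 at the dielectric/vacuum surface), in units of |e| / Angstrom^2.
eps0 = 5.53e-3            # vacuum permittivity in |e| / (V * Angstrom)
E = 0.1                   # applied field in V / Angstrom
kappa = 5.0               # dielectric constant of CMO fitted to the DFT screening

sigma0 = eps0 * E                      # 5.53e-4  |e|/A^2  (CRO surface)
sigma1 = -sigma0 / kappa               # -1.11e-4 |e|/A^2  (CRO/CMO interface)
sigma2 = -sigma0 * (1.0 - 1.0 / kappa) # -4.42e-4 |e|/A^2  (CMO/vacuum surface)

assert abs(sigma0 + sigma1 + sigma2) < 1e-12   # overall charge neutrality

# DFT values from Table 1 for comparison: 5.4e-4, -1.1e-4, -4.3e-4 |e|/A^2
print(sigma0, sigma1, sigma2)
```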
\begin{table}
\begin{tabular}{c|c|c|c}
 & \(\sigma_{0}\) & \(\sigma_{1}\) & \(\sigma_{2}\) \\
\hline
DFT & 5.4 & -1.1 & -4.3 \\
Dielectric model & 5.53 & -1.11 & -4.42 \\
\end{tabular}
\end{table} Table 1: Surface polarization charge densities induced by the applied electric field at the interface (\(\sigma_{1}\)) and the two surfaces (\(\sigma_{0}\) and \(\sigma_{2}\)), computed from the DFT as well as from the electrostatics theory. The applied electric field is \(E=0.1\) V/Å, the relative permittivity \(\kappa=5\) is used in the dielectric model, and the surface charge densities are expressed in units of \(10^{-4}|e|\)/Å\({}^{2}\).

Figure 3: Planar averaged electron density for \(E=0\) (red line) and the extra electrons accumulated (polarization charge) (blue line) when the electric field \(E=0.1\) V/Å is applied. The colored areas under the blue line indicate the net accumulation of charges (positive or negative) at the two surfaces, which are listed in Table 1 from direct integration.

To quantify the charges on the individual atomic layers, we have computed the layer-resolved partial density of states (PDOS) on the individual MnO\({}_{2}\) and RuO\({}_{2}\) layers, which are shown in Fig. 5. In the CMO bulk, the material is an insulator with filled majority-spin t\({}_{2g}\) bands and empty e\({}_{g}\) bands, as indicated in the bottom panel of Fig. 5. There is some charge transfer across the interface from the RuO side to the two neighboring MnO\({}_{2}\) layers, as indicated in the figure. By directly integrating the area of the occupied Mn-e\({}_{g}\) states (marked in red in Fig. 5), we can compute the charge transfer into the various MnO\({}_{2}\) layers in the structure. There is significant charge transfer only to the first two MnO\({}_{2}\) layers at the interface, as indicated in Fig. 5. The charge transfer to the various MnO\({}_{2}\) layers from the RuO side is also listed in Table 2. Without the electric field, there is already a charge transfer from the CRO side to the CMO side [5]. This leads to a net dipole moment with a positive charge on the CRO side and a negative charge on the CMO side, but there is no net monopole charge. As seen from Table 2, for \(E=0\), the charge accumulated on the first MnO\({}_{2}\) layer is 0.117 \(e^{-}\)/Mn atom \(\approx 8.4\times 10^{-3}\) \(e^{-}\)/Å\({}^{2}\). The accumulated electrons occupy the Mn \(e_{g}\) states, serving as the itinerant electrons that mediate the double exchange between the Mn \(t_{2g}\) core spins, which we discuss in more detail in Section IV. When the electric field is applied, there are monopole charges \(\sigma_{i}\) that accumulate at various boundaries in order to screen out the applied field. These add to the layer charges already existing for \(E=0\). Table 2 shows that with \(E=0.1\) V/Å, the first MnO\({}_{2}\) layer gains a small additional charge, making the total in that layer 0.121 \(e^{-}\)/Mn atom, which translates into an additional charge of 0.004 \(e^{-}\)/Mn atom (\(-2.9\times 10^{-4}\) \(|e|\)/Å\({}^{2}\)). Note that although it is of the same order of magnitude as the interface polarization charge \(\sigma_{1}\) seen from Table 1, they are not necessarily the same, as \(\sigma_{1}\) is not necessarily located entirely on the interfacial MnO\({}_{2}\) layer. For the DEX interaction on the interfacial MnO layer, it is only the net charge (itinerant \(e_{g}\) electrons) on that layer that matters, not the total polarization charge that accumulates in the interface region due to the dielectric screening. As seen from Fig.
3, the polarization charge \(\sigma_{1}\) is spread over several monolayers at the interface region, both on the CMO and the CRO sides. As we move away from the interface, the accumulated charge in the MnO\({}_{2}\) layers quickly reverts to the bulk value, as seen from Table 2. The bulk limit is already reached by the third layer and beyond. An interesting point regarding the surface charges at the vacuum interface is that even though there is a considerable polarization charge (\(\sigma_{2}=-4.3\times 10^{-4}|e|\)/Å\({}^{2}\) at the CMO/vacuum interface from Table 1), only a small fraction of it appears on the surface MnO\({}_{2}\) layer (layer-5 in Table 2). Indeed, as observed already, much of the charge of both \(\sigma_{2}\) and \(\sigma_{0}\) at the two surfaces appears well inside the vacuum region, with the peaks appearing about 1.3 Å outside of the terminal CaO surface. Thus, the surface MnO\({}_{2}\) layer being more or less similar to the bulk, with very little additional charge transfer due to the electric field, its magnetism continues to remain anti-ferromagnetic, i.e., the same as in the bulk. This is also confirmed by the total energy calculations within the DFT, which show the MnO\({}_{2}\) surface layer to remain anti-ferromagnetic.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
E & layer-1 & layer-2 & layer-3 & layer-4 & layer-5 \\
\hline
0 & 0.117 & 0.044 & 0.002 & 0.000 & 0.000 \\
0.1 & 0.121 & \(\alpha\) 0.4\(\pi\) & \(\alpha\) 0.0\(\alpha\) & \(\alpha\) 0.0\(\alpha\) & \(\alpha\) 0.0\(\alpha\) \\
\end{tabular}
\end{table} Table 2: Extra electrons per Mn atom, as compared to the bulk, accumulated at various MnO\({}_{2}\) layers near the interface. The electrons occupy the Mn \(e_{g}\) states as indicated from Fig. 5. The electric field \(E\) is in units of V/Å. These numbers are to be multiplied by the factor \(7.2\times 10^{-2}\) to get the electron numbers in units of \(e^{-}\)/Å\({}^{2}\) for the corresponding MnO layer, for comparison with the polarization charges shown in Table 1.

Figure 5: Partial densities of states (PDOS) for Mn and Ru layers in units of states/eV/formula unit (MnO\({}_{2}\) or RuO\({}_{2}\)) including both spins. Charge transfer across the interface to the CMO side (primarily Mn-e\({}_{g}\) states) is indicated by two arrows in the middle two panels. Here, the electric field is \(E=0\). With the applied electric field, the figure remains more or less the same except that the charge transfer to the CMO side is a bit larger, as listed in Table 2.

## IV Spin Canting and Interfacial Ferromagnetism

### Ferromagnetism at the interface

To study the stability of the interfacial magnetism, we computed the total energy of two magnetic structures: one where all Mn atoms are anti-ferromagnetic (AFM) as in the bulk, and a second structure where only the interfacial MnO\({}_{2}\) layer is ferromagnetic (FM), while the remaining layers retain the AFM structure of the bulk. We find that even though the energy difference between the FM and AFM configurations for the Mn-Mn bond at the interface is somewhat sensitive to the magnitude of the Coulomb repulsion parameter \(U\) used in the DFT calculations, the FM state is always more stable.
In Table 3, we have listed the results for \(U=2\) eV, for which the computed \(\Delta E\equiv E_{\uparrow\downarrow}-E_{\uparrow\uparrow}=16.1\) meV value for \(E=0\) is comparable to the experimental value of 13.1 meV for the bulk CMO.[18; 19] The other quantities such as the charge transfer and the density-of-states are not sensitive to the value of \(U\), and were calculated with \(U=5\) eV. Table 3 shows that the FM state of the interfacial MnO\({}_{2}\) layer is more stable both with and without the external electric field. As discussed later, the spin canted state, which has a reduced net FM moment, has actually even lower energy than the FM state, which has been confirmed earlier for the intrinsic sample (\(E=0\)) both from experiment and theory.[1; 5] With the application of the electric field, the total energy of the FM state is further reduced by about 3 meV, making the FM state even more stable in the presence of an electric field. As already mentioned, we have also computed the same energy difference for the surface MnO\({}_{2}\) layer and find that, in contrast to the interfacial MnO\({}_{2}\) layer, the AFM state at the surface continues to remain energetically favorable, both with and without the electric field. These results are consistent with the Anderson-Hasegawa DEX result, that the FM state becomes progressively more energetically favored over the AFM state as the itinerant carrier concentration is increased, in our case by the application of the electric field. However, the lowest energy state is neither FM nor AFM, but a spin canted state, and we discuss this by considering a simple DEX model on a square lattice, that describes the magnetism of the interfacial MnO\({}_{2}\) layer. ### Double-exchange model and spin canting We consider the well known Anderson-Hasegawa double exchange model [9; 10; 11; 20] and apply it to a square lattice appropriate for the MnO\({}_{2}\) layer. The Hamiltonian is \[\mathcal{H}=t\sum_{\langle ij\rangle\sigma}c_{i\sigma}^{\dagger}c_{j\sigma}+h. c.+\sum_{\langle ij\rangle}J\hat{S}_{i}.\hat{S}_{j}-2J_{H}\sum_{i}\vec{S}_{i}. \vec{s}_{i}, \tag{1}\] which describes the motion of the itinerant Mn (\(e_{g}\)) electrons (the corresponding field operators are \(c_{i\sigma}^{\dagger},c_{j\sigma}\) with \(i\) and \(\sigma\) being the site and the spin indices) moving in a lattice of Mn \(t_{2g}\) core spins (\(S=3/2\)). Here \(t\) is the tight binding nearest neighbor hopping, \(\mathbf{s}_{i}=1/2\sum_{\mu\nu}c_{j\mu}^{\dagger}\tau_{\mu\nu}c_{j\nu}\) is the spin of the itinerant electron, with the Pauli matrices \(\tau\), \(J\) is the superexchange, \(J_{H}\) is the Hund's coupling, and the angular brackets indicate sum over distinct pairs of bonds in the lattice. Typical parameters for CMO are[5; 21]: \(t=-0.15\) eV, \(J=7\) meV, and \(J_{H}=0.85\) eV. It is instructive to consider the de Gennes result[11] for the limiting case \(J_{H}=\infty\), which suggests a spin-canted state in the presence of the itinerant carriers. In this limit, since only one spin channel parallel to the core spins is available for the itinerant electrons, Eq. (1) is equivalent to the spinless Hamiltonian \(\mathcal{H}=\sum_{\langle ij\rangle}t\cos(\theta_{ij}/2)\ c_{i}^{\dagger}c_{j }+h.c.+\sum_{\langle ij\rangle}J\hat{S}_{i}.\hat{S}_{j}\), where the hopping has been modified by the well known Anderson cosine factor[9], with \(\theta_{ij}\) being the polar angle difference between the neighboring core spins. 
Taking a bipartite square lattice, with the spins in the two sublattices (A and B) canted by the angle \(\theta\) with respect to one another, and considering a small concentration \(x\) of the itinerant carriers, the electrons occupy the band bottom \(E_{b}=-z|t|\cos(\theta/2)\), where \(z=4\) is the number of nearest neighbors. The canting angle \(\theta_{c}\) is obtained by minimizing the total energy \[E=E_{b}x+(z/2)J\cos\theta, \tag{2}\] which yields the result \[\theta_{c}=2\cos^{-1}\big{(}\frac{|t|x}{2J}\big{)}. \tag{3}\] For the case of finite \(J_{H}\), no such analytical result is possible, and we must solve for the band structure energies \(\varepsilon_{nk}\) by keeping both spin channels in the Hamiltonian (1) and summing over the occupied states. The canting angle \(\theta_{c}\) is obtained by numerical minimization of the total energy \[E=(z/2)J\cos\theta+\sum_{nk}^{\rm occ}\varepsilon_{nk}. \tag{4}\] The computed total energy is shown in Fig. 6 (a) as a function of the canting angle \(\theta\) for various electron concentrations \(x\). As seen from the figure, when the electron concentration \(x=0\), the minimum energy occurs at the canting angle \(\theta_{c}=\pi\), resulting in an AFM state, obviously due to the superexchange interaction \(J\), which is the only interaction without any itinerant carriers. With increasing \(x\), the strength of the DEX interaction slowly increases, producing a spin-canted state, and eventually, beyond a critical value \(x>x_{c}\), the DEX dominates, resulting in an FM state (\(\theta_{c}=0\)). The critical concentration in the \(J_{H}=\infty\) limit is given by Eq. (3) and has the value \(x_{c}=2J/|t|\approx 0.09\ |e|/\) interfacial Mn atom, which is also seen from Fig. 6 (b), where we have presented the concentration dependence of the canting angle for several values of \(J_{H}\). It is clear that for \(J_{H}=0\), the itinerant and the core spins are not coupled, and therefore the system remains AFM for all \(x\), due to the superexchange interaction of the core spins, up to the full occupation of the bands. As \(J_{H}\) is increased from zero, the critical concentration \(x_{c}\) monotonically decreases, eventually approaching the de Gennes result \(x_{c}=2J/|t|\) for \(J_{H}=\infty\). The critical values of \(x_{c}\) shown in Fig. 6 (b) for the three values of \(J_{H}\) are consistent with this expectation. We now discuss the effect of the electric field on the charge transfer across the interface into the MnO\({}_{2}\) layer, which in turn affects the spin canting and therefore the net ferromagnetism. As seen from Table 2, there is already a significant charge leakage to the interfacial MnO\({}_{2}\) layer even for \(E=0\), which leads to a canted AFM state. The magnitude of the canting angle \(\theta_{c}\) can be estimated from Fig. 6 (b). With the applied electric field, the charge transfer increases due to the buildup of the dielectric screening charges. As a result, the canting angle decreases, thereby leading to the enhancement of the net FM moment.

\begin{table}
\begin{tabular}{|c|c|c|}
E & FM & AFM \\
\hline
0 & -16.1 & 0 \\
0.1 & -19.2 & 0 \\
\end{tabular}
\end{table} Table 3: Calculated total energy, where the CMO layer at the interface is either FM or AFM, with the remaining layers being AFM, i.e., the same as in the bulk. Energies are per interfacial bond and in units of meV for the electric fields \(E=0\) and \(E=0.1\) V/Å.
The net FM moment per Mn atom in the MnO\({}_{2}\) layer is given by the expression \(m=m_{s}(1+\cos\theta_{c})/2\), where \(m_{s}\approx 3\ \mu_{B}\) is the Mn core spin moment. If we take \(J_{H}\approx 0.85\) eV for CMO [21], the predicted increase obtained from Fig. 6 (b) is from \(m=2.1\ \mu_{B}\ (E=0)\) to \(2.3\ \mu_{B}\ (E=0.1\ \mathrm{V/\AA})\), corresponding to the change in the electron concentration from \(x=0.117\) to \(0.12\) electrons/Mn atom, as seen from Table 2. Indeed, such an enhancement of the net FM moment has been observed in the neutron reflectivity experiments [8]. However, the experiment shows a much larger increase in the FM moment, viz., from \(1\ \mu_{B}\) to \(2.5\)-\(3.0\ \mu_{B}\), corresponding to the transition from a canted AFM state to a fully FM state of the Mn\({}^{+4}\) ion at the interface. Notice, however, from Fig. 6 (b) that the canting angle is quite sensitive to the itinerant carrier concentration \(x\) in the interfacial MnO\({}_{2}\) layer, and a critical value of \(x_{c}\approx 0.14\) (for \(J_{H}=0.85\) eV) would turn the system completely ferromagnetic. This reflects an increase of the itinerant carriers by just \(0.02\ |e|\)/Mn atom by the electric field, on top of the \(\sim 0.12\ |e|\)/Mn atom that already exists in the intrinsic interface for \(E=0\). Even though the theory and experiments agree qualitatively on the increase of the FM moment with the electric field, a quantitative comparison is difficult owing to several factors. First, it is difficult to experimentally determine the exact magnitude of the electric field that is applied to the CRO/CMO heterostructure, since the structure is capped by several other layers of materials in the actual sample [22]. Second, transition-metal oxide samples are notorious for oxygen stoichiometry issues, and it is quite conceivable that the applied electric field leads to a migration of the oxygen atoms to the interface, providing an extra mechanism of charge accumulation at the interface. Since the double exchange mechanism becomes stronger with an increase of the carrier concentration \(x\), this would increase the tendency towards ferromagnetism, and as already pointed out, just an extra \(0.02\ |e|\)/Mn atom at the interface is needed to drive the system completely ferromagnetic. Finally, there may be substrate-induced strain in the interface, which was not studied in the experiment, nor was it considered in our theory.

Figure 6: Energetics of the spin canted state. (a) Energy from Eq. (4) as a function of the angle \(\theta\) between spins in the two sublattices, A and B, for several values of the electron concentration \(x\). The minimum yields the canting angle \(\theta_{c}\) (indicated by an arrow for the \(x=0.04\) case). Starting from the AFM state (\(\theta_{c}=\pi\)) for \(x=0\), the system turns into an FM state (\(\theta_{c}=0\)) beyond the critical concentration \(x_{c}\approx 0.135\), so that one obtains an FM state for the case \(x=0.16\). The parameters are: \(J=7\) meV, \(t=-0.15\) eV, and \(J_{H}=0.85\) eV. (b) The spin canting angle \(\theta_{c}\) as a function of \(x\) for three cases: \(J_{H}=0.5\) eV, \(0.85\) eV, and \(\infty\), with the parameters \(J\) and \(t\) being the same as in Fig. (a). The critical concentration \(x_{c}\), beyond which an FM state is obtained (\(\theta_{c}=0\)), is where the curves meet the \(x\) axis. With increasing \(x\), \(\theta_{c}\) decreases, leading to an enhancement of the net ferromagnetic moment.
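As a quick numerical illustration of the relations above, the sketch below evaluates the de Gennes closed form of Eq. (3) together with the moment formula \(m=m_{s}(1+\cos\theta_{c})/2\), using the parameters quoted in the text (\(J=7\) meV, \(t=-0.15\) eV, \(m_{s}\approx 3\,\mu_{B}\)). Note that this is the \(J_{H}=\infty\) limit only; the finite-\(J_{H}\) curves in Fig. 6 (b), from which the 2.1-2.3 \(\mu_{B}\) values quoted above are read off, require the band-structure sum of Eq. (4).

```python
import numpy as np

J = 7e-3     # AFM superexchange (eV)
t = -0.15    # nearest-neighbor hopping (eV)
m_s = 3.0    # Mn core spin moment (Bohr magnetons)

def canting_angle(x):
    """De Gennes canting angle, Eq. (3), in the J_H -> infinity limit.

    theta_c = 2*arccos(|t| x / (2 J)), clipped to the FM value (theta_c = 0)
    once x exceeds the critical concentration x_c = 2 J / |t|.
    """
    arg = np.clip(abs(t) * np.asarray(x) / (2.0 * J), 0.0, 1.0)
    return 2.0 * np.arccos(arg)

def net_moment(x):
    """Net FM moment per Mn atom, m = m_s (1 + cos(theta_c)) / 2."""
    return m_s * (1.0 + np.cos(canting_angle(x))) / 2.0

x_c = 2.0 * J / abs(t)
print(f"x_c = {x_c:.3f} electrons per interfacial Mn (text: ~0.09)")
for x in (0.0, 0.04, 0.08):
    print(f"x = {x:.2f}: theta_c = {np.degrees(canting_angle(x)):6.1f} deg, "
          f"m = {net_moment(x):.2f} mu_B")
```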
It would be desirable to study these effects further. ## V Summary In summary, we studied the effect of an external electric field on the CRO/CMO (001) interface using density functional methods in order to understand the field tuning of the magnetism at the interface. This system was chosen due to the existing experiments, but the conclusions should be valid for a variety of interfaces. We found several interesting results. (1) The polarization charges induced at the interface and the surfaces with the vacuum to screen the applied electric field followed a text-book like profile. (2) Interestingly, the surface polarization charges occurred well inside the vacuum (at a distance of about 1.3 A from the surface atomic planes). Similarly, the interface polarization charge is spread over several atomic planes in the interface region, which means that not necessarily all of it participate in the interface phenomena such as the double exchange in our case. (3) The surface MnO\({}_{2}\) layer is predicted to remain AFM as in the bulk, so that the enhancement in the ferromagnetism seen in the experiments is unlikely to come from the surface, as has been suggested in the experiments.[8] (4) Our theoretical work supports the experimental observation that the interfacial magnetism is enhanced by the applied field and identifies the extra charge accumulation at the interface MnO\({}_{2}\) layer and the double exchange mechanism to be responsible for the enhancement. However, the effect is much stronger experimentally than the theory predicts. The difficulty of a quantitative comparison with the experiment is due to several factors, viz., (i) The possibility of electric-field driven oxygen migration to the interface, (ii) Unknown magnitude of the electric field at the interface due to the presence of the substrate and the cap layers in the experiments, and (iii) Possible strain in the structure due to the substrate. Nevertheless, both theory and experiment indicate a strong electric field tuning of the interfacial magnetism, with potential for application in magneto-electric devices. Acknowledgment-We thank Professor Yuri Suzuki for stimulating this work and for her insightful discussions. We acknowledge financial support from the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering Grant No. DEFG02-00ER45818. Computational resources were provided by the National Energy Research Scientific Computing Center, a user facility also supported by the US Department of Energy.
2310.13347
NurViD: A Large Expert-Level Video Database for Nursing Procedure Activity Understanding
The application of deep learning to nursing procedure activity understanding has the potential to greatly enhance the quality and safety of nurse-patient interactions. By utilizing the technique, we can facilitate training and education, improve quality control, and enable operational compliance monitoring. However, the development of automatic recognition systems in this field is currently hindered by the scarcity of appropriately labeled datasets. The existing video datasets pose several limitations: 1) these datasets are small-scale in size to support comprehensive investigations of nursing activity; 2) they primarily focus on single procedures, lacking expert-level annotations for various nursing procedures and action steps; and 3) they lack temporally localized annotations, which prevents the effective localization of targeted actions within longer video sequences. To mitigate these limitations, we propose NurViD, a large video dataset with expert-level annotation for nursing procedure activity understanding. NurViD consists of over 1.5k videos totaling 144 hours, making it approximately four times longer than the existing largest nursing activity datasets. Notably, it encompasses 51 distinct nursing procedures and 177 action steps, providing a much more comprehensive coverage compared to existing datasets that primarily focus on limited procedures. To evaluate the efficacy of current deep learning methods on nursing activity understanding, we establish three benchmarks on NurViD: procedure recognition on untrimmed videos, procedure and action recognition on trimmed videos, and action detection. Our benchmark and code will be available at \url{https://github.com/minghu0830/NurViD-benchmark}.
Ming Hu, Lin Wang, Siyuan Yan, Don Ma, Qingli Ren, Peng Xia, Wei Feng, Peibo Duan, Lie Ju, Zongyuan Ge
2023-10-20T08:22:56Z
http://arxiv.org/abs/2310.13347v1
# NurViD: A Large Expert-Level Video Database for Nursing Procedure Activity Understanding ###### Abstract The application of deep learning to nursing procedure activity understanding has the potential to greatly enhance the quality and safety of nurse-patient interactions. By utilizing the technique, we can facilitate training and education, improve quality control, and enable operational compliance monitoring. However, the development of automatic recognition systems in this field is currently hindered by the scarcity of appropriately labeled datasets. The existing video datasets pose several limitations: 1) these datasets are small-scale in size to support comprehensive investigations of nursing activity; 2) they primarily focus on single procedures, lacking expert-level annotations for various nursing procedures and action steps; and 3) they lack temporally localized annotations, which prevents the effective localization of targeted actions within longer video sequences. To mitigate these limitations, we propose NurViD, a large video dataset with expert-level annotation for nursing procedure activity understanding. NurViD consists of over 1.5k videos totaling 144 hours, making it approximately four times longer than the existing largest nursing activity datasets. Notably, it encompasses 51 distinct nursing procedures and 177 action steps, providing a much more comprehensive coverage compared to existing datasets that primarily focus on limited procedures. To evaluate the efficacy of current deep learning methods on nursing activity understanding, we establish three benchmarks on NurViD: procedure recognition on untrimmed videos, procedure and action recognition on trimmed videos, and action detection. Our benchmark and code will be available at [https://github.com/minghu0830/NurViD-benchmark](https://github.com/minghu0830/NurViD-benchmark). ## 1 Introduction The application of deep learning (DL) in understanding nursing procedure activities has the potential to greatly enhance the quality of nurse-patient interactions, while also playing a crucial role in preventing medical disputes, minimizing missed nursing procedures, and reducing nursing errors [32; 42; 35; 37; 21; 11]. DL-based automatic approaches offer several key benefits for nurses, including: (1) _Reliable_: it enables objective, precise, and consistent assessments of nursing skills, eliminating the subjectivity inherent in traditional evaluations by experts [27]. (2) _Real-time guidance_: it facilitates immediate feedback on nurses' performance, empowering them to identify areas for improvement in a timely manner [30]. (3) _Cost-effective_: it can alleviate the burden of manual observations, evaluations, and training by experts [39]. However, the development of automatic nursing recognition systems is currently hindered by the absence of suitably labeled datasets for nursing activities. Existing public video datasets for action recognition primarily focus on generic daily activities or specific sports, with minimal attention given to nursing or healthcare scenarios [23; 13; 36; 16; 22; 24; 17]. For example, the Kinetics 700 dataset [23] with 700 category labels includes only two labels related to nursing activities. 
Although some initiatives have attempted to create nursing activity understanding video datasets (e.g., handwashing-specific datasets [15; 7; 44; 41; 4; 26]), limitations arise due to the gap between them and real-world clinical settings, which is summarized as follows: (1) _Limited procedure and action variety:_ Expensive annotation cost causes existing datasets only have a single nursing procedure, whereas real clinical environments involve a wide range of complex procedures. (2) _Simple scenes only:_ The recorded videos are typically captured in controlled settings such as instructional or laboratory environments, which do not accurately reflect the complexity and variability of nursing procedures in actual clinical practice. (3) _Un-professional labeling:_ They suffer from non-professional labeling or a lack of adherence to standard guidelines, leading to errors or inconsistencies. (4) _Short video sequence:_ They primarily consist of short action clips, which do not facilitate understanding long-term activities and their context. To mitigate these limitations, we proposed NurViD, a large-scale video benchmark for nursing procedure activity understanding. Compared to existing datasets, NurViD incorporates characteristics from the following aspects: (1) _Diverse procedure and action:_ NurViD comprises 144 hours of annotated videos, which is approximately four times longer than the largest existing nursing activity datasets. It also contains 1,538 videos depicting 51 nursing procedure categories, covering the majority of common procedures, along with 177 action steps, providing much more comprehensive coverage, compared to previous datasets that primarily focus on single procedures with limited action steps. (2) _Real-world clinic settings:_ Videos in NurViD were captured from over ten real clinical environments according to our statistics, including hospitals, clinics, and nursing homes. This diverse range of settings ensures that the models trained on NurViD are applicable in real-world clinical scenarios. (3) _Expert-level annotations:_ NurViD was labeled by professionals with high expertise and knowledge in nursing. The procedure and action annotation process follows the guideline of _Training Outline for Newly Employed Nurses_ issued by the _National Health Commission of China_[2], ensuring consistency and accuracy of the annotations. (4) _Support multiple recognition and detection tasks:_ We have established two different classification tasks and an action temporal localization benchmark specifically targeting the long-tail distribution of the dataset. We further compare our NurViD dataset with the other existing nursing activity video datasets and summarize the key difference in Table 1. The contributions of NurViD are summarized as follows: * NurViD is the most diverse video benchmark to date for nursing procedure activity understanding tasks. It has been meticulously annotated by nursing professionals, and the annotation process follows standardized nursing procedure guidelines or protocols. NurViD exhibits a competitive video size and much more comprehensive coverage of procedures and action categories compared to existing datasets. * In response to the practical needs of nursing and machine-learning communities, such as education and training, automatic action detection, and long-tail distribution, we establish three different recognition and localization benchmarks on NurViD. * Long-term retention and availability of dataset. 
NurViD has been sourced from YouTube and follows the CC BY 4.0 license agreement [1]. ## 2 Related Work Research aimed at enabling machines to understand human behavior and activities has led to advancements in various practical applications. However, building such systems comes with challenges that require appropriate datasets for training and evaluation. In recent years, numerous datasets have been created to support research in human behavior understanding [34; 13; 45; 36]. While these datasets have been valuable for general activity recognition, only a limited number cater to the specific needs of nursing professionals. **Sequential action prediction.** Standardized datasets have been meticulously designed for the purpose of evaluating the performance of algorithms in understanding and accurately recognizing intricate activities within real-world scenarios [22; 16]. These datasets are used in various fields, including computer vision, robotics, natural language processing, and surveillance systems. Focusing on specific sets of actions carried out in a well-defined order, sequential action understanding datasets differ from those that encompass a broader range of contexts [45]. In many fields, it is essential to adhere to a strict sequence of steps to ensure optimal results. For example, in the medical field, following a specific order of steps is crucial in procedures such as administering medication, where any deviation from the established sequence could result in serious consequences for the patient. Standardized datasets can help ensure that the actions performed in real-world situations are accurately represented and provide a benchmark for evaluating the performance of algorithms. **Nursing procedure video dataset.** Online learning has gained significant popularity, particularly through the utilization of instructional videos that provide step-by-step guidance, serving as valuable resources for teaching and learning specific tasks. Within the medical field, instructional videos have proven to be highly effective in conveying essential information using visual and verbal communication, thus benefiting learners [17]. On the other side, the absence of comprehensive and standardized nursing procedure video datasets presents a challenge in developing effective algorithms within the healthcare industry. The current datasets suffer from limited coverage, focusing only on nursing procedures relevant to specific healthcare settings or patient populations. Additionally, the quality of annotation plays a critical role in algorithm accuracy. The process of annotation involves identifying and labeling specific actions and events in the videos, which can be a time-consuming and challenging task. Annotation errors may arise due to factors such as human error, task ambiguity, or the absence of standardized protocols. ## 3 Building NurViD Dataset In this section, we describe the process of building NurViD, from selecting the nursing procedures to acquiring, filtering, and annotating the video data. We leveraged the extensive collection of medical instructional videos available on YouTube [6] and carefully selected and filtered our video collection to ensure the quality and relevance of the dataset. We also developed a standardized labeling scheme for the actions performed in each video, providing a valuable resource for developing and evaluating algorithms that recognize and understand nursing procedures. 
### Procedure and Action Definition The selection of nursing procedures and corresponding action steps is crucial for maintaining the relevance and usefulness of NurViD within the nursing profession and the broader healthcare community. The procedure selection adheres to a widely accepted nursing taxonomy, taking into account frequency of use and expert guidance. The actions involved in these procedures are then gathered and standardized. **Procedure selection.** We compiled various nursing procedures from the Nurselabs website [3] and nursing procedure books [8; 29; 31; 20]. With expert consultation, we identified 51 commonly employed nursing procedures that align with the requirements of most nursing scenarios. To validate their relevance and prevalence, we searched for corresponding videos on YouTube. Table 5 provides the full names and summarized abbreviations of these 51 procedures. **Action definition.** We developed action steps for various nursing procedures based on college tutorials, and a nursing lecturer summarized the appropriate action labels by analyzing the action descriptions and video content. This was necessary for three main reasons: (1) There are variations in actions performed during specific nursing procedures, even across patients or instances of the same procedure, that require accurate representation of nuances. (2) The existence of diverse nursing procedure standards across countries and regions highlights the importance of establishing a unified standard. (3) Dealing with fine-grained video data from real-world nursing procedures requires rearranging action tags for the precise depiction of procedure nuances. ### Online Video Crawling Our objective in this stage was to gather sufficient videos demonstrating the pre-selected nursing procedures. By utilizing the extensive collection of medical instructional videos available on YouTube [6; 17], we acquire a wide range of videos without the need for third-party video production. To accomplish this, we queried YouTube using text-based searches for each procedure and obtained videos whose titles included the desired procedure keywords. To expand the video collection, we enhanced the search queries by including synonyms of each procedure. For example, _Subcutaneous Injection Insulin_ can also be called _Subcutaneous Insulin Administration_, _Subcutaneous Insulin Therapy_, or abbreviated as _SCII_. Each video was downloaded at the highest resolution available. During video retrieval, we prioritized videos shorter than 20 minutes to limit the total storage. ### Localization Annotation and Quality Control In the NurViD dataset, each video is divided into multiple temporal segments, each of which contains only one action. Each action is annotated with its starting and ending timestamps as well as its frame position in the video. The annotation process is performed by undergraduates with medical and nursing backgrounds to ensure the accuracy and consistency of the annotations. **Employing nursing professionals.** Data curation is an expensive process that typically involves extensive manual annotation. In some cases, datasets have employed a semi-automatic crowdsourcing approach for collection and annotation [12; 18; 40; 13]. For tasks that require greater reliability, certain datasets rely on domain experts for annotation, albeit at a higher cost. 
In our study, we formed a medical team of 26 individuals, consisting of a nursing lecturer and 25 nursing majors from a medical college, to perform labeling. Over half of the students have at least three years of undergraduate education, possess extensive practical experience in nursing procedures, and have successfully completed the university's standardized nursing procedure assessment. **Invalid video filtering.** Certain videos may contain irrelevant content due to inaccuracies in text-based retrieval. Thus, in the first stage of labeling, we excluded videos that fall into the following categories: 1) showcasing unrealistic environments (e.g., movies, animation), 2) providing only verbal descriptions instead of visual demonstrations, 3) featuring static images instead of continuous videos, and 4) lacking the specified procedure. **Action boundaries annotation.** To ensure localization annotation quality, we followed a three-round annotation process: (1) each annotator was assigned 2-3 nursing procedures based on video count and tasked with filtering out inappropriate videos; (2) after filtering, the action segments of each procedure video were annotated by three members; (3) finally, cross-checking of annotation results between every two groups was conducted to identify and rectify errors and omissions. This process resulted in a minimum of three annotated action boundaries for each video. To ensure reliable annotations, we employed the complete linkage algorithm [10] to cluster and merge the various temporal boundaries into stable boundaries that received multiple agreements. It is important to mention that a single video may feature multiple separate instances of the target action, leading to multiple boundary definitions. Several examples of annotated target action boundaries are shown in Figure 1. Figure 1: Examples of annotated target action boundaries for the _Intravenous Blood Sampling_ and _Modified Seldinger Technique with Ultrasound for PICC Placement_ procedures. The frames marked in colored boxes denote the annotated temporal boundaries for the target action steps. ### NurViD Statistics Our NurViD dataset is a comprehensive collection of nursing procedure videos that includes 1,538 videos (144 hours) demonstrating 51 different nursing procedures and 177 actions for recognition and detection tasks, which are summarized in Table 5 and Table 6 in the supplementary material. To facilitate the development and evaluation of algorithms, we trimmed the videos based on annotated action boundaries, resulting in 5,608 trimmed video instances totaling 50 hours. The trimmed videos have an average duration of 32 seconds, while the untrimmed videos have an average duration of 337 seconds. Over 74% of the videos have HD resolutions of 1280 \(\times\) 720 pixels or higher. We observed a long-tailed distribution in the number of collected videos for both procedures and actions. Figure 2: The average, maximum, and minimum number of action segments for each procedure. Figure 3: NurViD dataset duration statistics. 
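Before moving to the experiments, a minimal sketch of the boundary-merging step described above may be helpful. This is our own illustration, not the released annotation code: the (start, end) pairs from different annotators are clustered with complete linkage under a 1 - temporal-IoU distance, and only clusters supported by at least two annotators are kept. The 0.5 distance threshold is an assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def merge_boundaries(boundaries, dist_thresh=0.5, min_agreement=2):
    """Merge annotators' (start, end) pairs into stable boundaries (illustrative)."""
    b = np.asarray(boundaries, dtype=float)

    def one_minus_tiou(x, y):
        inter = max(0.0, min(x[1], y[1]) - max(x[0], y[0]))
        union = (x[1] - x[0]) + (y[1] - y[0]) - inter
        return 1.0 - inter / union if union > 0 else 1.0

    # Complete linkage: two groups merge only if *all* their pairs are close enough.
    Z = linkage(pdist(b, metric=one_minus_tiou), method="complete")
    labels = fcluster(Z, t=dist_thresh, criterion="distance")

    merged = []
    for lab in np.unique(labels):
        members = b[labels == lab]
        if len(members) >= min_agreement:        # keep boundaries with multiple agreements
            merged.append(members.mean(axis=0))  # merge by averaging start/end times
    return merged

print(merge_boundaries([(10.0, 25.0), (11.0, 24.5), (10.5, 26.0), (80.0, 95.0)]))
```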
## 4 Experimental Results In our study, we focus on three tasks using the NurViD dataset: (1) procedure classification on untrimmed videos, (2) procedure and action classification on trimmed videos, and (3) action detection on untrimmed videos. To establish reliable baselines for these classification and detection tasks, we employ state-of-the-art models that have demonstrated effectiveness in human action recognition and detection. We provide a comprehensive analysis of the baseline models, taking into consideration the specific challenges posed by the NurViD dataset, such as long-tailed class distributions. For each task, we formulate the problem in detail and evaluate the performance of the baseline models. The results of our analysis can guide the development of more accurate and robust models for fine-grained action recognition and detection in healthcare applications. ### Procedure Classification on Untrimmed Videos This task involves identifying nursing procedures from untrimmed videos, which typically consist of multiple standardized action steps that must be executed in a specific order, interspersed with unrelated content. The goal is to explore the effectiveness of deep learning technology in retrieving specific nursing procedures from a large video library. By accurately classifying nursing procedures, we can provide healthcare professionals with a powerful tool for quickly accessing relevant videos. **Data settings.** The examples from each procedure category are randomly divided into three sets: 70% for training, 10% for validation, and 20% for testing, resulting in 1,054 training, 173 validation, and 311 testing videos, respectively. **Class splits.** To account for the long-tailed nature of the NurViD dataset, we divided the procedure classes into three splits: _many_, _medium_, and _few_, based on the number of videos for each procedure. The accuracy of each split is the average accuracy of the procedures included within that split. Specifically, the _many_ category includes the top 19.6% most frequent classes, the _medium_ category includes the middle 43.1% of classes, and the _few_ category includes the remaining 37.3% of classes. The number of classes per split is presented in Table 2. **Baselines.** We compare the performance of the SlowFast [14], I3D [9], and C3D [38] models on this task. These models are evaluated in two versions: 1) training from random initialization and 2) pre-training with weights from Kinetics 400 [23], a human action recognition dataset. **Results.** The results for per-class accuracy are summarized in Table 2. We find that C3D [38] is able to achieve competitive results for all the splits. However, the best top-1 per-class accuracy is 14.8%, indicating that there is significant room for improvement in this challenging task. We also observe that transfer learning from the model pre-trained on Kinetics 400 [23] improves the classification accuracy for all splits. With the C3D model, this corresponds to a per-class accuracy gain from 10.7% to 21.5% for the _many_ category and from 5.1% to 11.3% for the _medium_ category. Despite this improvement, accurately predicting procedure categories remains a significant challenge. **Discussions.** Based on the results of the classification benchmarks established on NurViD, we found that even models pre-trained on Kinetics 400 [23] cannot achieve satisfactory classification performance. We speculate that this may be due to several reasons: (1) videos are not always exclusively focused on nursing procedure activities and may contain other unrelated content, such as verbal instructions and brief introductions to devices; (2) some actions, such as handwashing, disinfection, and documentation, are common to many procedures, which may cause the model to extract similar features from different procedural videos, making them difficult to classify. Overall, focusing on the main procedure actions in a video remains a challenging task. 
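For concreteness, a minimal sketch of the frequency-based class splits and the per-class top-1 accuracy metric used throughout this section is given below. It reflects our reading of the text rather than the authors' code; `train_labels`, `y_true`, and `y_pred` are hypothetical arrays of class ids.

```python
from collections import Counter
import numpy as np

def frequency_splits(train_labels, many_frac=0.196, medium_frac=0.431):
    """Split classes into many/medium/few by how often they occur in training."""
    ranked = [c for c, _ in Counter(train_labels).most_common()]  # classes by frequency
    n_many = round(len(ranked) * many_frac)
    n_medium = round(len(ranked) * medium_frac)
    return {"many": set(ranked[:n_many]),
            "medium": set(ranked[n_many:n_many + n_medium]),
            "few": set(ranked[n_many + n_medium:])}

def per_class_top1(y_true, y_pred, class_subset=None):
    """Average of each class's accuracy, optionally restricted to one split."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = []
    for c in np.unique(y_true):
        if class_subset is None or c in class_subset:
            mask = y_true == c
            accs.append((y_pred[mask] == c).mean())
    return float(np.mean(accs)) if accs else float("nan")
```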
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Baselines} & \multicolumn{4}{c}{Procedure Classification} \\ & Many & Medium & Few & All \\ & 10 & 22 & 18 & 50 \\ \hline SlowFast [14] & 9.9 & 7.5 & 0.1 & 7.4 \\ C3D [38] & 10.7 & 5.1 & 1.8 & 7.7 \\ I3D [9] & 9.9 & 9.0 & 2.8 & 8.7 \\ \hline SlowFast* & 19.9 & 10.2 & 5.0 & 13.5 \\ C3D* & **21.5** & 11.3 & **5.8** & **14.8** \\ I3D* & 19.8 & **12.5** & 5.6 & 13.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Per-class Top-1 accuracy for procedure prediction on untrimmed videos. The best performance for each split has been highlighted in **bold**. ### Procedure and Action Classification on Trimmed Videos This task focuses on classifying both the primary action that occurs in a trimmed video and their associated procedures. By accurately classifying procedures and actions, building automated systems can automate the monitoring of each step in the nursing process, thereby helping to identify potentially missed diagnoses, nursing errors, and other issues. These systems can also help doctors and nurses quickly find useful nursing procedure videos, thereby improving their learning and work efficiency. **Data settings.** The dataset was randomly partitioned, ensuring a balanced representation of examples from each procedure and action category. Specifically, we allocated 70% of the data for training (3,906 videos), 10% for validation (587 videos), and 20% for testing (1,122 videos). **Class splits.** In this study, we also explore the long-tailed nature of the NurViD. We partitioned the videos into three subsets based on the task-specific distribution: _many_, _medium_, and _few_. For procedure categories, the breakdown is as follows: _many_: top 26% frequent classes, _medium_: middle 41% classes, and _few_: the remaining 33% classes. For action categories, _many_: top 5% frequent classes, _medium_: middle 37%, and _few_: the remaining 58% classes. In addition to some unique actions, such as _establish a sterile zone_ that only exists in the _Modified Seldinger Technique with Ultrasound for PICC Placement_ procedure, hand washing, skin disinfection, and other steps are common. Therefore, we established a joint classification task to explore the mutual influence between procedures and actions. For joint classification, _many_: top 5% frequent classes, _medium_: middle 21% classes, _few_: the remaining 74% classes. We show the number of classes per each split in Table 3. **Baselines.** We compare the performance of three models, SlowFast [14], I3D [9], and C3D [38] on our tasks. These models are originally designed for human action recognition and lack the inherent ability to predict both a procedure and an action. To align with our joint prediction need, we introduce two task heads dedicated to procedure category recognition and action recognition, respectively. Consequently, we compute the joint loss, \(\mathcal{L}_{joint}\), to handle these tasks. Hyper-parameters are tuned using the validation data, and the detailed hyper-parameter settings for each model can be found in the supplementary material. 
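Before the loss is formally stated below, a minimal sketch of such a two-head model is given here for illustration. This is our own simplification, not the released implementation; the backbone, feature dimension, and the use of plain summed cross-entropy (equivalent to the formula below up to constant per-task normalization) are assumptions.

```python
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    """Shared video backbone with separate procedure and action heads (illustrative)."""
    def __init__(self, backbone, feat_dim, n_procedures=51, n_actions=177):
        super().__init__()
        self.backbone = backbone                      # e.g., a C3D/I3D feature extractor
        self.procedure_head = nn.Linear(feat_dim, n_procedures)
        self.action_head = nn.Linear(feat_dim, n_actions)

    def forward(self, clips):
        feats = self.backbone(clips)                  # assumed shape: (batch, feat_dim)
        return self.procedure_head(feats), self.action_head(feats)

def joint_loss(proc_logits, act_logits, proc_labels, act_labels):
    # Sum of the procedure and action cross-entropy terms.
    ce = nn.CrossEntropyLoss()
    return ce(proc_logits, proc_labels) + ce(act_logits, act_labels)
```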
The loss is defined as \(\mathcal{L}_{joint}=-\frac{1}{M}\sum_{i=1}^{M}y_{i}^{p}\log(p_{i}^{p})-\frac{1}{N}\sum_{j=1}^{N}y_{j}^{a}\log(p_{j}^{a})\), where \(M\) is the number of procedure classes, \(N\) is the number of action classes, \(p_{i}^{p}\) and \(y_{i}^{p}\) denote the procedure prediction probability and ground-truth label for category \(i\), and \(p_{j}^{a}\) and \(y_{j}^{a}\) denote the action prediction probability and ground-truth label for category \(j\). **Results.** The results are summarized in Table 3. All models perform well in the procedure classification task, with C3D [38] achieving a top-1 per-class accuracy of 71.2% across all splits. C3D [38] also demonstrates competitive performance in action and joint classification for all splits, with the best top-1 per-class accuracies of 22.8% and 13.1%, respectively. Transfer learning from a model pre-trained on Kinetics 400 [23] further improves the accuracy of procedure and action classification. For instance, using the C3D model, the per-class accuracy increases from 70.1% to 73.2% for the _many_ split, from 48.8% to 60.0% for the _medium_ split, and from 33.0% to 39.6% for the _few_ split. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Procedure Classification} & \multicolumn{4}{c}{Action Classification} & \multicolumn{4}{c}{Joint Classification} \\ \cline{2-13} Baselines & Many & Medium & Few & All & Many & Medium & Few & All & Many & Medium & Few & All \\ & 13 & 21 & 17 & 51 & 9 & 66 & 87 & 162 & 17 & 78 & 224 & 302 \\ \hline SlowFast [14] & 68.9 & 50.0 & 33.0 & 63.0 & 25.7 & 10.2 & 3.2 & 17.1 & 12.5 & 7.2 & 3.3 & 7.5 \\ C3D [38] & 70.1 & 48.8 & 33.0 & 63.9 & 22.9 & 9.3 & 2.9 & 15.9 & 13.8 & 7.3 & 3.5 & 7.7 \\ I3D [9] & 67.6 & 49.9 & 32.9 & 62.9 & 26.3 & 9.8 & 4.1 & 17.9 & 12.7 & 7.9 & 4.0 & 7.9 \\ \hline SlowFast* & 71.2 & **61.8** & 39.0 & 68.9 & 29.8 & **15.5** & 7.9 & 21.1 & 21.2 & 9.4 & 5.6 & 12.8 \\ C3D* & **73.2** & 60.0 & 39.6 & **71.2** & 28.1 & 14.6 & 7.3 & **22.8** & **21.8** & **10.8** & **5.6** & **13.1** \\ I3D* & 70.7 & 60.4 & **40.9** & 70.0 & **31.3** & 14.8 & **8.2** & 21.5 & 19.5 & 9.9 & 4.7 & 12.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Per-class Top-1 accuracy (%) for the procedure, action, and their joint prediction on trimmed videos. * denotes initialization from the model pre-trained on Kinetics 400 [23]. The best performance for each split has been highlighted in **bold**. **Discussions.** The experimental results indicate that the performance of procedure classification on trimmed videos is significantly better than on untrimmed videos, which may confirm that irrelevant motion information has been filtered out in the trimmed videos and the model is more likely to learn motion features. However, the imbalance in class frequencies poses difficulties in achieving satisfactory performance for the few classes. To alleviate this, further model designs could incorporate long-tail learning techniques such as re-weighting or re-sampling. These approaches can help mitigate the impact of class imbalance and improve the model's ability to generalize and classify the underrepresented classes more effectively. 
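As one concrete instance of the re-weighting idea mentioned above (a sketch under our own assumptions, not a technique evaluated in the paper), the cross-entropy loss can be weighted by inverse class frequency so that rare procedure or action classes contribute more to the gradient:

```python
import torch
import torch.nn as nn
from collections import Counter

def inverse_frequency_weights(train_labels, num_classes, smoothing=1.0):
    """Weight each class by 1 / (count + smoothing), normalized to mean 1 (illustrative)."""
    counts = Counter(train_labels)
    w = torch.tensor([1.0 / (counts.get(c, 0) + smoothing) for c in range(num_classes)])
    return w * num_classes / w.sum()

# Usage (hypothetical): pass the weights to the criterion used for the action head.
# criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(action_labels, 177))
```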
### Action Detection on Untrimmed Videos The objective of this task is to accurately identify actions in untrimmed videos by determining the temporal extent of the main activity. To establish a benchmark for this task, we adopt three baseline models [43; 28; 33] that have shown effectiveness in temporal action localization [13; 43]. The evaluation metric used is the mean Average Precision (mAP), calculated at temporal Intersection over Union (tIoU) thresholds of [0.5:0.1:0.9]. Additionally, we provide the average mAP across the different tIoUs. **Data settings.** Following the previous settings for standard procedure and action classification, we adhere to a split ratio of [train: 0.7, val: 0.1, test: 0.2] to divide the untrimmed videos at the procedure-action composition level. Consequently, we have 1,077 videos for training, 153 for validation, and 308 for testing. **Baselines.** To evaluate performance on our dataset, we utilize three baseline models, namely ActionFormer [43], TAGS [28], and TriDet [33]. In order to generate features for the NurViD videos, we fine-tune a two-stream I3D model [9] initially pre-trained on ImageNet [12]. Subsequently, we extract RGB and optical flow features for each video and concatenate them as the model input. **Results.** We show the action detection results in Table 4. Among the baselines, ActionFormer [43] achieves the highest performance, with an average mAP of 23.9% and an mAP of 32.9% at the threshold of 0.5. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{mAP (\%)} \\ Baselines & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & Avg. \\ \hline TriDet [33] & 30.3 & 26.7 & 24.3 & 20.1 & 10.7 & 20.8 \\ TAGS [28] & 31.4 & 26.5 & 22.6 & 19.2 & 11.5 & 22.4 \\ ActionFormer [43] & **32.9** & **29.6** & **25.8** & **20.8** & **12.7** & **23.9** \\ \hline \hline \end{tabular} \end{table} Table 4: The results of action detection. We report mAP at tIoU thresholds of [0.5:0.1:0.9]. The average mAP is calculated by averaging the mAP scores across the various tIoU thresholds. **Discussions.** The outputs of the ActionFormer [43] model are visualized in Figure 4. These outputs consist of action scores and regression results, which are weighted by the action scores and presented as a weighted histogram. Nursing actions, unlike actions in natural datasets, involve precise and meticulous movements rather than large-scale motion from frame to frame. Therefore, a more detailed action representation is necessary to accurately describe and learn nursing actions. For instance, in the handwashing procedure recommended by the WHO [5], it is crucial to consider small changes in hand position and environmental factors. Since these movements are extremely subtle, algorithms capable of handling finer granularity can potentially detect these subtle motion changes. Figure 4: Visualization of action detection results. From top to bottom: (1) input video frames; (2) action scores at each time step; (3) histogram of action onsets and offsets computed by weighting the regression outputs using action scores. 
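For reference, the temporal IoU underlying the mAP evaluation above can be computed as in the following self-contained sketch (our own illustration, not the evaluation code used to produce Table 4):

```python
def temporal_iou(pred, gt):
    """tIoU between two (start, end) segments given in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive at threshold t if tIoU >= t,
# with t ranging over [0.5, 0.6, 0.7, 0.8, 0.9] as in Table 4.
print(temporal_iou((12.0, 30.0), (10.0, 28.0)))  # 0.8
```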
## 5 Limitations We believe that NurViD is beneficial for advancing the development of AI technology in the nursing field. However, we must also consider the potential risks and impacts that may arise from anticipated or foreseeable applications. Additionally, because NurViD is downloaded from diverse YouTube channels, differences in video quality, production style, and regional nursing practice can introduce biases in how nursing procedures are represented. **Intended/Foreseeable Uses.** After reviewing relevant literature and discussing with nursing professors and students, we have noticed the following main challenges and needs in nursing training and learning: (1) Imprecise recommendations encountered by students when searching for learning videos. For example, when searching for Intravenous Injection procedure videos on YouTube, the limitations of the recommendation algorithm mean that Intravenous Blood Sampling procedure videos are often suggested instead, which is quite common. A system trained on our dataset can provide more accurate classification recommendations for searches related to nursing operations. Additionally, since NurViD includes temporal localization annotations for actions, the playback range can be adjusted dynamically based on specific interests: for instance, a learner who only needs to watch the skin disinfection step can enter the text "skin disinfection" and have the system automatically locate the corresponding segment, avoiding unnecessary time waste. (2) In China alone, there are over 200,000 undergraduate nursing students each year, and this number continues to grow. Each student is required to pass a professional nursing skills examination and undergo approximately 1,000 to 2,000 hours of practical training. However, students currently still rely heavily on experienced teachers for real-time supervision and feedback during training, which requires a significant amount of human resources and time investment. (3) Real-time monitoring: NurViD is a dataset that leans more towards general models and is currently primarily used for testing and improving deep learning models. It aims to assist students' learning and training by recording, monitoring, and providing feedback during their practice sessions, reducing the need for teaching resources and facilitating event documentation. In this mode, even rough feedback can save a significant amount of cost. However, AI systems cannot guarantee absolute accuracy, which means that detection errors or omissions may occur. Therefore, it is strongly recommended that any system built upon NurViD or similar technologies clearly communicate its limitations and potential errors from the outset and appropriately incorporate human assistance during usage. **Potential Privacy.** In human-action understanding video datasets, it is often inevitable to encounter faces. Therefore, we provide a script that uses OpenCV's Haar classifier to detect the facial regions in videos and blur them. We will implement more advanced methods to further alleviate the privacy problem in the future. 
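The released script itself is not reproduced in the paper; a minimal sketch of the face-blurring idea with OpenCV's Haar cascade could look like the following. File names, codec, and detector parameters are illustrative assumptions.

```python
import cv2

# Frontal-face Haar cascade shipped with OpenCV; thresholds below are illustrative.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(in_path="input.mp4", out_path="blurred.mp4"):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)
        out.write(frame)
    cap.release()
    out.release()
```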
**Employment Risks.** The employment risks associated with nursing action recognition systems can involve the following aspects: **(1) Reduced Workforce Demand:** The integration of nursing action recognition systems may lead to a diminished need for human nursing professionals. The incorporation of automation technology can handle routine nursing tasks, thereby potentially decreasing the reliance on human caregivers. Consequently, this could result in certain nursing professionals facing job scarcity or encountering uncertainty in their employment prospects. **(2) Shift in Skill Requirements:** Nonetheless, our perspective is that the future implementation of nursing action recognition systems will likely require nursing professionals to acquire new skills and knowledge to adeptly interact with the technology rather than being replaced by it. This could involve gaining proficiency in various areas, such as effectively engaging with the system, accurately interpreting its outputs, and promptly addressing any discrepancies that arise. Nursing professionals encountering challenges in adapting to these evolving technological demands might find it necessary to undergo retraining efforts. **(3) Technical malfunctions and misidentification risks:** Nursing action recognition systems come with inherent risks of technical glitches and misidentifications. Inaccuracies or erroneous assessments made by the system could lead to misguided nursing judgments or actions. This potentially jeopardizes patient safety and well-being, obliging nursing professionals to dedicate extra time and effort toward rectifying system errors. To mitigate the employment risks linked to nursing action recognition systems, a comprehensive approach is crucial: (1) Workforce Enhancement Programs: Offering programs for upskilling and transitioning is vital to empower nursing professionals with the competencies needed to navigate evolving roles in tandem with the technology. (2) Ethical Guidelines and Standards: Establishing clear ethical guidelines ensures responsible and morally sound utilization of nursing action recognition systems within healthcare settings. (3) Data Privacy and Security: Implementing robust measures to safeguard patient data and privacy is paramount to engender trust in the system's operation. (4) Human Oversight and Decision-making: Maintaining human oversight ensures that critical nursing decisions are grounded in human judgment and understanding, acting as a safeguard against erroneous system outputs. (5) Continuous System Evaluation and Enhancement: Regularly evaluating and enhancing the system's performance is pivotal to addressing any technical shortcomings and refining its accuracy. (6) Stakeholder Engagement and Collaboration: Fostering engagement and collaboration among various stakeholders, including nursing professionals, technologists, and policymakers, promotes a holistic approach to system development and implementation. **Contestability/Explainability Issues.** It is important to acknowledge that while AI systems can assist in detecting standardized nursing procedures and actions, they are not infallible and cannot achieve perfect accuracy. Human supervision and oversight are indispensable in ensuring patient safety and quality care. Therefore, it is highly recommended that any system developed based on NurViD or similar technologies explicitly state their limitations and potential errors upfront. This can be achieved by providing clear disclaimers, agreements, or warnings to users, emphasizing the need for human involvement, critical thinking, and professional judgment when interpreting and acting upon system outputs. By transparently communicating the system's limitations, healthcare professionals can make informed decisions and use the technology as a supportive tool rather than relying solely on its outputs. **Potential Regional Biases.** Standardization poses a significant challenge due to the diverse origins of the videos and the variation in nursing procedure guidelines across countries. 
NurViD aims to cover almost all common action labels, providing flexibility for different regions to adopt their own standards based on it. However, despite our efforts, it is important to acknowledge that complete avoidance of bias is challenging. Additionally, the comprehensiveness of the dataset may be influenced by the sources and origins of the videos. The video collection process for NurViD may inadvertently introduce biases towards certain regions or healthcare settings. This bias can limit the generalizability of the dataset to a broader context, as it may not fully capture the diverse range of nursing procedures and actions practiced worldwide. Addressing this limitation requires ongoing efforts to collect data from diverse regions, collaborate with experts from different backgrounds, and ensure a balanced representation of nursing practices from various healthcare contexts. **Comprehensiveness of Nursing Procedures and Actions.** While efforts were made to include a wide range of common action labels, it is important to acknowledge that the dataset may not cover every possible nursing procedure or action. Variations in nursing practices and guidelines across different regions and healthcare systems can result in some actions being omitted or not adequately represented in the dataset. Furthermore, the dataset's composition may be influenced by the availability and accessibility of videos from different regions. Certain nursing procedures or actions that are more prevalent or emphasized in specific regions may be overrepresented, while others may be underrepresented. This regional bias could limit the dataset's generalizability to a global context and may require additional data collection efforts to ensure a more diverse representation of nursing procedures and actions across regions. Continuously expanding the dataset's coverage through collaboration with experts and professionals from diverse backgrounds can help address this limitation and enhance its comprehensiveness. ## 6 Conclusion We introduce NurViD, a comprehensive video dataset designed for nursing procedure activity understanding. By collecting videos from YouTube and meticulously annotating action sequences at an expert-level, NurViD offers a rich resource for studying nursing procedures. The dataset encompasses 1,538 untrimmed videos, each averaging 32 seconds in duration, covering 51 distinct procedures and 177 action steps. We present three tasks based on NurViD: procedure classification on untrimmed videos, procedure and action classification on trimmed videos, and action detection on untrimmed videos. Our experiments demonstrate that accurate recognition of procedures and the action steps they contain is challenging even with current state-of-the-art models, particularly when the dataset exhibits a long-tail distribution. To promote further development in the field of nursing procedure analysis, we will release all of our data and code to the public, enabling researchers to build upon our work and advance the understanding of nursing activities.
2308.06266
$n$ Walks in the Fictional Woods
This paper presents a novel exploration of the interaction between generative AI models, visualization, and narrative generation processes, using OpenAI's GPT as a case study. We look at the question "Where Does Generativeness Come From", which has a simple answer at the intersection of many domains. Drawing on Umberto Eco's "Six Walks in the Fictional Woods", we engender a speculative, transdisciplinary scientific narrative using ChatGPT in different roles: as an information repository, a ghost writer, a scientific coach, among others. The paper is written as a piling of plateaus where the titling of each (sub-)section, the "teaser" images, the headers, and a biblock of text are strata forming a narrative about narratives. To enrich our exposition, we present a visualization prototype to analyze storyboarded narratives, and extensive conversations with ChatGPT. Each link to a ChatGPT conversation is an experiment on writing where we try to use different plugins and techniques to investigate the topics that ultimately form the content of this portable document file. Our visualization uses a dataset of stories with scene descriptions, textual descriptions of scenes (both generated by ChatGPT), and images (generated by Stable Diffusion using scene descriptions as prompts). We employ a simple graph-node diagram to try to make a "forest of narratives" visible, an example of a vis4gen application that can be used to analyze the output of Large Language + Image Models.
Victor Schetinger, Sara Di Bartolomeo, Edirlei Soares de Lima, Christofer Meinecke, Rudolf Rosa
2023-07-13T12:38:53Z
http://arxiv.org/abs/2308.06266v2
# \(n\) Walks in the Fictional Woods ###### Abstract This paper presents a novel exploration of the interaction between generative AI models, visualization, and narrative generation processes, using OpenAI's GPT as a case study. We look at the question "**Where Does Generativeness Come From**", which has a simple answer at the intersection of many domains. Drawing on Umberto Eco's "Six Walks in the Fictional Woods", we engender a speculative, transdisciplinary scientific narrative using ChatGPT in different roles: as an information repository, a ghost writer, a scientific coach, among others. The paper is written as a piling of plateaus where the titling of each (sub-)section, the "teaser" images, the headers, and a biblock of text are strata forming a narrative about narratives. To enrich our exposition, we present a visualization prototype to analyze storyboarded narratives, and extensive conversations with ChatGPT. Each link to a ChatGPT conversation is an experiment on writing where we try to use different plugins and techniques to investigate the topics that ultimately form the content of this portable document file. Our visualization uses a dataset of stories with scene descriptions, textual descriptions of scenes (both generated by ChatGPT), and images (generated by Stable Diffusion using scene descriptions as prompts). We employ a simple graph-node diagram to try to make a "forest of narratives" visible, an example of a vis4gen application that can be used to analyze the output of Large Language + Image Models. ## 1 Introduction In "Six Walks in the Fictional Woods" [18], Umberto Eco discusses how the interplay between author, text, and reader can generate infinite combinatorial spaces. Rather than merely promoting tired truisms such as _"the reader brings himself to the text"_ or _"personal experience impacts interpretation"_, Eco delves deeply into the vast expanse of cognitive constructs formed within the minds of readers while interacting with a text (i.e., what is being _rendered_ in the mind as one reads). Eco's foresight is evident in his acknowledgement of the potential role of AI in influencing this triadic process, a topic he touches upon multiple times in his books. The plot of "Foucault's Pendulum" [17], for example, involves a computer (Abulafia) that is used to generate fictions, and these fictions start affecting reality in unexpected ways. Eco's metaphorical woods, _"...tangled and twisted like the forests of the Druids, and not orderly like a French garden"_ [sic], encapsulate the multidimensional, combinatorial spaces formed by the reading process of humans and machines. His six walks provide insights into varying facets of textual interaction, culminating in a discussion of "Fictional Protocols". Though the term 'generative' is not explicitly employed, this chapter essentially discusses where 'generativeness' in text comes from. ## 2 Where Does Generativeness Come From? In short, 'generativeness' comes from the existence of latent spaces organized around _meaningful_ principles, of which reality somehow seems to be plentiful. A literary text, according to Eco, organizes a latent space (the fictional _woods_) based on its semiotic "hyperstructure", that is, its possible relationships with reality and potential readers' sign systems. While the philosopher Heraclitus says that you cannot step into the same river twice, Eco says the same of the fictional woods. 
The second time a reader reads it, the mental constructs invoked (and rendered) in the mind will be affected by the first reading, and so on. Can we then say that every text is a sort of analogical generative model that, when coupled with a reader, can produce infinite mental landscapes? Let us start by recognizing that humans are not equipped to deal with infinity. We can symbolically manipulate it, engineer it into useful things in the physical world, but we almost never really have face-to-face encounters with it. Those who learned calculus and struggled with the concept of limits realize there is always a Zeno-like leap of faith from infinitesimally surfing a curve to landing on a point. We are constantly drowning in a sea of infinities, and everything we can actually experience or think is just a vanishingly small subset of all possibilities. Therefore, 'generativeness' is a ruse: generative models are less like factories producing new things and more like telescopes (or microscopes, or MRIs) pointing at unexplored places in vastly infinite latent spaces. ### O(n) the Scale of Human Potential A single human life is around \(2.29\times 10^{9}\) seconds. The estimated total number of humans (Homo sapiens) who have ever lived is approximately \(108.5\times 10^{9}\). Therefore, the total number of seconds that all humans have lived, considering the average lifespan and the estimated total number of humans who have ever lived, is approximately \(2.48\times 10^{20}\) seconds. While this might look like a large number, any programmer will tell you it is not, that these are toy numbers. A badly coded sorting algorithm could easily beat this in terms of running time. If all humans coordinated to count all natural numbers, one (or ten, or a hundred) per second since the beginning of time, we would not be too far from the start. If we are talking about real numbers, we would not have arrived at 0.1. In comparison, the number of ways to shuffle a standard deck of 52 cards is approximately \(8.07\times 10^{67}\). This is an astronomically larger number, far more than the estimated number of atoms that make up the Earth, demonstrating that even the vast span of human existence is minuscule in comparison to the combinatorial possibilities of something as simple as shuffling a deck of cards. If we devoted ourselves to the task of shuffling all possible combinations instead of counting numbers, we would be doing even worse. Yet, humanity keeps patting itself on the back for all its amazing achievements, alone in the universe with the burden of intelligence. How come? How could we have done this with such limited time, especially considering we have to split our time between the physical and mental worlds? ## What the World Affords Us In his book "Materialist Phenomenology: A Philosophy of Perception" [13], DeLanda provides clues to answer this question through a comprehensive exploration of the synthesis of the visual field, which he argues is a key component of our understanding and interaction with the world. He proposes a non-reductive materialist approach, which asserts that there are mental properties that are different from physical properties, that the existence of mental properties depends on the existence of physical properties, and that mental properties can confer causal powers on mental events. 
He poses that the world affords us structures, and that humans can heuristically attach themselves to the combinatorial spaces around these structures to effectively navigate reality despite our limited asymptotic time. The consistent physical behavior of things around us (having a ground under our feet, a sky over our heads with stars, a moon, and a sun) acts as a set of constants that allow us to reduce our cognitive search spaces. In trying to minimize _surprise_1 in this environment, we develop our own generative models of it. Footnote 1: [https://www.youtube.com/watch?v=jZ1fXQz/?M4k4=585s](https://www.youtube.com/watch?v=jZ1fXQz/?M4k4=585s) DeLanda's arguments are rooted in the belief that there are entities which are independent of the existence of our minds, such as the geological, climatological, and ecological processes that shaped the planet on which we evolved; and there are entities that are independent of the content of our minds, that is, entities that have a definite nature which does not change when our beliefs about it change, except through our causal interactions with them. This perspective allows for a nuanced understanding of the human experience, acknowledging the complex interplay between our subjective experiences, the semiotic latent spaces they engender, and the objective realities of the world. It culminates in a sort of "food chain" of signs where different types of agents "digest" signs at different levels (protoselves, core selves, autobiographical selves), transforming them into lived experiences. ## A Semiotic Food-Chain Umberto Eco's semiotics and Manuel DeLanda's philosophy of perception share a common thread in their focus on the interaction between signs and their consumers, and it could be argued that DeLanda provides empirical foundations for Eco's semiotics. DeLanda's philosophy of perception, with its detailed account of how different types of consumers interpret and consume signs, can be seen as a methodical, bottom-up development of Eco's semiotics. This is because DeLanda's work delves into the mechanisms of perception and sign consumption at a more granular level than Eco's, starting from the level of protoselves and moving up to autobiographical selves, where Eco's discourse operates. This detailed exploration of the mechanisms of perception and sign consumption could be seen as providing an empirical basis for Eco's more abstract and theoretical discussion of semiotics. An analogy for this relationship might be found in the development of Darwin's theory of evolution by natural selection, as outlined in "On the Origin of Species" [7], which provided a broad framework for understanding the diversity and adaptation of life. However, Darwin lacked a mechanism to explain how traits were passed from generation to generation. This gap was filled by the field of genetics, particularly the work of Gregor Mendel [3], which provided the empirical, mechanistic basis for understanding heredity. Mendel's work on pea plants laid the foundation for the science of genetics, which in turn provided the empirical evidence and mechanisms that supported and expanded Darwin's theory of evolution. In this analogy, Eco's semiotics is akin to Darwin's theory of evolution, providing a broad theoretical framework for understanding the interaction between signs and their consumers. DeLanda's philosophy of perception, on the other hand, brought the peas by providing a more detailed and empirically grounded understanding of the mechanisms of perception and sign consumption. 
## Seeing Is Believing Because the main subjects of DeLanda's book are vision and perception, it is especially interesting for the field of visualization in a foundational way, though not as practical as Bertin's semiology [4], for example. He discusses the perception of isolated properties, which he compares to an act of measuring. However, he argues that we must go beyond an analogy with speedometers or thermometers and give at least a rough sketch of all the mechanisms involved. These mechanisms involve contributions from the world (reflectances), from the body (sensory-motor dependencies), and from the brain (detecting spectral ranges, producing and consuming signs representing contrasts between these ranges). The contribution from the mind, the transformation of a measured property into a lived property, is the most controversial and speculative of all. The relationship between measurement, data representation, visual encoding, and cognitive symbolic manipulation is also at the heart of visualization, which assumes a metaphysical glue between phenomena and their traces [39, 47]. To exemplify his point, DeLanda provides an example of color constancy, a chromatic version of size and shape constancy. When looking at a uniformly colored object that is only half illuminated, we do not experience it as having two colors--a lighter hue in its illuminated portion and a darker one in the shadowed portion--but as possessing a single color. This effect is due to the separation of the contributions of reflectance and illumination. When viewing conditions allow observers to perceive the entire object at once, their brains can perform this separation, and the resulting phenomenal effect--seeing a single hue instead of two--matches the object's reflectance better. However, if a screen with a small aperture is placed between the viewer and the object, so that only a small portion of the object is visible, the effect disappears, and the observer experiences two different colors. This shows that color constancy effects arise as part of the perception of objects, not the perception of properties, and that sight and belief are intimately connected in humans. ## The Ecology of Language DeLanda's exploration of color constancy serves as a poignant example of the intricate interplay between perception and interpretation, a concept that finds resonance in the work of psychiatrist and philosopher Carl Jung. It demonstrates how our brains actively engage in separating the contributions of reflectance and illumination to perceive a single, consistent color. This active process of interpretation is not limited to our visual perception. It extends to all our senses and even to our cognitive processes, shaping our understanding and interaction with the world. This active interpretation is not a solitary process. It is shaped by our interactions with others and with the world around us. Our interpretations are influenced by our cultural background, our personal experiences, and our current context. They are also shaped by our physical bodies, with their unique sensory capabilities and limitations. Thus, our navigation of the combinatorial spaces of reality is a deeply personal and subjective process, shaped by a multitude of factors. In a live lecture series,2 DeLanda criticized Chomsky's approach to linguistics in relation to linguist William Labov. 
Instead of lingering in hindsight analysis of grammar, assuming oneself a sufficient authority on one's own mother tongue, Labov went to the streets and sampled living language. Chomsky, instead, never "asked people: _do colorless green ideas sleep furiously?_" [sic], an allusion to his famous sentence that, while grammatically correct, should be "nonsensical" [32, 44]. This is not to discredit Chomsky or attack him. Both theory and practice are essential in the dialectics of science. However, a flexible type of epistemology is required to reconcile both the reality of our empirical observations and our intuitively evident knowledge. Local phenomenological conditions, be they material or otherwise, will produce variation, and once you have repetition with variation you have a population. Depending on how this variance can be encoded, and what codifies identity as part of that population, an ecosystem will be formed. Therefore, a generative space for scale-free object populations only needs two things: difference and repetition [15]. Footnote 2: [https://youtu.be/AJAXe3BR607-223](https://youtu.be/AJAXe3BR607-223) ## Assemblage Archetype This is where Jung's concept of the collective unconscious and archetypes [27] comes into play, providing a shared system of symbols and meanings that influence our perceptions and interpretations. Our brains are not passive receivers of information, but active interpreters, a concept that aligns with Jung's theory of individuation [25]. They sift through the vast amount of sensory data we encounter every moment, picking out patterns, making connections, and constructing a coherent picture of the world. This active interpretation allows us to navigate the combinatorial spaces that reality presents us with. It enables us to find structure in the chaos, to make sense of the seemingly infinite possibilities. The book "Atom and Archetype: The Pauli/Jung Letters 1932-1958" [26] is a collection of the correspondence between Nobel physicist Wolfgang Pauli and psychologist C. G. Jung. It showcases a fascinating transdisciplinary rapprochement, documenting how psychology was contaminated by modern ideas from physics through Jung, and how physics was influenced by psychoanalytical concepts through Pauli. If we could imagine an analogous book where Deleuze (or DeLanda) talks with Jung (not that we are comparing anyone's level of achievement or scholarly merit), where both ontologies would meet, a fascinating philosophy would emerge in which assemblages play a central role. 
This strategy is based on the assumption that the more accurately an organism can perceive its environment, the better is chances of survival and reproduction. However, Hoffman and Singh [42] argue that this strategy is not favored by natural selection, as it does not necessarily lead to the highest fitness payoff. This makes perfect sense when considering our human-time limitations. If the whole of humanity cannot fully sample the combinations of a deck of cards, there is comparatively very little we can achieve in a single human life. All these amazing things we can do as humans such as sitting in chairs, reading books, building bridges, planting corn, and so on depend on a fiction so powerful that keeps us from getting distracted and spending our previously limited cognitive flops in vain. Still, the question remains of how do we do it. What data structure is so effective at organizing reality and equipping us with path-finding through its woods? ### Magical Media The answer to this question lies in the power of stories and narratives. Narratives can be said to be media for compressed sensing [16, 49], which is a signal processing technique that allows for the reconstruction of a signal from a small number of measurements, often far fewer than required by the Nyquist-Shannon sampling theorem. This theorem, a fundamental principle in the field of information theory, states that perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal. The "magic" of compressed sensing lies in its ability to break this limit. It allows for the reconstruction of sparse or compressible signals from a number of measurements that is significantly smaller than what the Nyquist-Shannon theorem dictates. This is achieved by leveraging the sparsity of the signal in some domain, which is a common characteristic of many natural signals. Coen's paper, "The storyfelling arms race: origin of human intelligence and the scientific mind" [6], presents a compelling argument that the evolution of human intelligence and the scientific mind can be traced back to the dual nature of stories - their ability to both inform and deceive. He suggests that a major factor in the evolution of human language and intelligence was an arms race between truth and deception in storytelling. Coen argues that as soon as honest proto-stories became possible, so did dishonest ones, ushering in an arms race between truth and deception. This arms race drove stories, language, and skills in detecting lies through contradictions to ever greater heights. In telling stories to others, humans also told them to themselves, allowing them to think consciously and plan ahead. Through collectively navigating fictional words, they could share understanding by making discrepancies stronger and more engaging. ### Scientific Media Science (and scientific thought), according to Coen [6], arose when skills in detecting lies through empirical contradictions were applied to stories about how the world operates. Scientists, by doubting these stories and testing them through observations, reasoning, and experiment, could come up with better explanations. They could then share their findings through their own stories, based on the problem-chain-resolution structure, allowing further critical evaluation and advances to be made. Narratives, as he suggests, are a way of representing the world and constructing fictional combinatoric spaces that compress useful information about the world. 
They are a form of data structure that is effective at organizing reality and equipping us with path-finding through its woods. The arms race in narrative space, between truth and deception, can be seen as a process of refining this data structure through adversarial learning, making it more efficient and effective at compressing and representing information. We know this transdisciplinary step might be a bit too much for the skeptical reader, who might be thinking "these steps are attempting to do speculative philosophy, is there real science going on here?" And we are glad to throw another name into the pot in response. Michael Levin, a leading biologist, presents a framework for understanding cognition in unconventional substrates [34, 35], arguing that all cognitive agents are collective intelligences because they are ultimately made of parts. This perspective aligns with DeLanda's materialist phenomenology, which posits that the world is not just a passive recipient of form, but actively participates in the formation of its own structures and properties.3 Levin's exploration of how bioelectric networks scale cell computation into anatomical homeostasis [30, 33], and the evolutionary dynamics of multi-scale competency [5], provides a biological grounding for this perspective. Footnote 3: [https://youtu.be/hzBH7TGAGq/hz-3113](https://youtu.be/hzBH7TGAGq/hz-3113) ### Meet Media In one of his recent papers, Levin introduces the concept of "persuadability", which refers to the type of conceptual and practical tools that are optimal to rationally modify a given system's behavior. This concept is closely related to the Intentional Stance but made more explicit in terms of the functional engineering approaches needed to implement prediction and control in practice. This perspective can be seen as a biological instantiation of the narrative arms race described by Coen, where the "stories" are not linguistic narratives but bioelectric and biochemical signals that cells use to communicate and coordinate their behavior. These signals, like stories, can both inform and deceive, and the evolution of multicellular organisms can be seen as an arms race between truth and deception in these signals. In this context, narratives can be seen as a form of "bioelectric story" that cells tell each other to coordinate their behavior and form complex structures. These narratives, like the linguistic narratives described by Coen, are a form of compressed sensing, allowing cells to reconstruct the state of the organism and their role in it from a limited number of measurements. This perspective provides a biological grounding for the concept of narratives as a form of compressed sensing, and suggests that the power of narratives to represent and navigate the world is not limited to human cognition, but is a fundamental aspect of life itself. Narratives and materiality are somehow interlinked [46]. 
Understanding the dynamic relationships and structures of events in a story domain is crucial in many contexts, such as in computational narratology to identify new narrative patterns [11, 19], in literary analysis and film studies to recognize similar stories and expose the anatomy of narratives [41, 10], and also in the process of creating new narratives, where authors can analyze and reuse ideas from existing stories [8, 9]. Creating comprehensible visual representations for stories in a way that allows people to analyze and understand the intrinsic narrative structures of a story domain is a complex challenge that motivates our research. Over the past years, many techniques for story visualization have been proposed, including methods for automatic generation of layouts for displaying storylines [40, 48], techniques for visualizing the hierarchical relationships between storylines and story entities [36, 34], methods for visualizing nonlinear narratives [12, 28], and many visualization methods based on different metaphors, such as tree-ning [50], time rings [52], time folding [1], and scroll/selling [31, 38]. However, most of these techniques are designed to work with a single storyline, which is not compatible with the complexity and diversity of narrative pathways that AI introduces to the 'words'. Using data from the Macunaima project,4 dealing with the generation of audiovisual narratives, we implemented a prototype that allows one to step into a small forest of possible narratives: [https://poicanna.github.io/kits.story](https://poicanna.github.io/kits.story). More than just an experimentation, this is a concrete effort in the task of understanding the outputs of ChatGPT (or any other LLM). How could one visualize local variations and patterns in what is produced? Quality? Biases? How can one, in practical terms, interact with the vast possibilities these systems offer while still maintaining personal control of expression? While still a very early prototype, this tool already allows one to assess the capabilities of the GPT 3.5 model. Footnote 4: [http://macunaima.info](http://macunaima.info) The quality of the outputs of GPT 3.5 when confronted with this task, without ingenious amounts of prompt engineering, is mixed. At first glance, it seems to split empty, washed-out plots that go nowhere and would not entertain a child over 6. The choice of initial prompt, _Create story about Macunaima, an AI partner, that solves mysteries in the city of Prague_" generated what seems to be several seasons of some 80s sci-fi low-budget show. However, when one looks carefully at the textual descriptions its sending to Stable Diffusion to illustrate the scene, its _storphounding_ works surprisingly well. Not for producing pieces of innovative cinematography, but for their understanding of different formats and their style of storytille, even if we are mostly drawing samples close to the median and, therefore, the results are vanilla. We invite the interested reader to engage with one of our prototypes and get involved in narrative gardening (sorry if our GPUs cannot handle all traffic): [https://narnativelab.org/grtwists/](https://narnativelab.org/grtwists/) ## If You Want to Have Cities, Youve Got To Build Roads In the prototype, the act of advancing with the bar at the top of the screen goes "forwards in time", but is not _story time_, necessarily. This is represented by the node-height, and each time in a story can be thought of as a scene in a show. 
Because this is essentially a storyboard, all rules for equivalent media (juxtapositions of image and text, namely comics) apply [37]. Therefore, subsequent scenes could represent a passage of seconds, days, or even a flashback. The discussion of fictional time is a central point in Umberto Eco's _Six Walks in the Fictional Woods_, and it showcases how this narrative "compressed sensing" can work. When we read a comic, we take the (very little) input the media gives us, and construct an entire world from scratch, just to render the story world in our mind. Is the book providing us a seed to plant a forest, or is it a woods by itself, and we grow it into a jungle? Is the input of a story like the base noise in a diffusion model, or are we the base noise in this generative framework of mindspace? It does not really matter, the point we need to make is about another subject. When going _forward in time_ with the slider, the population of story blocks starts growing and, soon, clusters start emerging. Then, one can see the emergence of a "grammar", where a story can be described in terms of its component tropes (_e.g.,_ 'AI Parrot Activation' \(\rightarrow\) 'Museum Clues' \(\rightarrow\) 'Hidden Laboratory' \(\rightarrow\) ...). When increasing the number of nodes to as much as one's browser can take before crashing, we see the emergence of different types of structures. Using difference and repetition, plus a relational operation (vertically, story time, horizontally archetype-space), we not only create a population, but a whole ecosystem, where tropes like "Macunaima's Heroic Journey" become hubs in the narrative space. Now, it is a bit onanistic that ChatGPT is both creating the stories and deciding how to cluster them. Not only is its lack of creativity squared, but so are all its other limitations. However, just defining a JSON format to communicate with ChatGPT was orders of magnitude faster than finding an ideal semantic clustering algorithm to run the dataset through. This framework as a whole (data + visualization), without having ChatGPT as a general solver, would have taken years of work, or at least a decently funded research project. The "good guy" side of ChatGPT and similar tools is to serve as a catalyst, reducing the energy cost of activation for things to happen, just like roads. They produce traffic, making people go around, transport resources, and congregate. Good, accessible transportation has always been a cornerstone of human development.

## Learning the Woods

A popular riddle goes **Q.** _"How far can a fox run into the woods?"_ **A.** _"Halfway, after that she is running out of the woods."_ But, before the fox can arrive at the middle point of the forest, she must travel half of that distance, or a quarter of the total distance. However, before arriving at a quarter of the distance, she needs first to cross half of that, and so on. Someone once tried to prove no movement exists because, if we allow for infinitesimal divisions of time and space, it is impossible for the movement of the fox to ever be actualized. It is a tragic state of affairs, much like humanity trying to count all real numbers. Should we believe there's a halfway in the fictional woods? Or that the fox is forever captured in its event horizon? No matter how quick and brown she is, even if she manages to jump over the lazy dog, delve forth past the fourth temple of Solomon, and disturb the furious sleep of the colorless green ideas on her way, she will always move towards the non-existing center, the fictional attractor.
If she is the protagonist, there is always hope for redemption. John Conway, the legendary mathematician, developed the system of Surreal Numbers, which was later elaborated in a peculiar literary form by Donald Knuth [29], one of the greatest Computer Scientists of all time. The surreal numbers encompass not only all real numbers but also an infinite array of infinitesimal and infinitely large numbers. The construction of surreal numbers is based on a game-theoretic approach, where each number is defined by a pair of sets of previously created numbers. This recursive process allows for the creation of an extraordinarily diverse number system, including numbers that are infinitely small or large, and even numbers that are "infinitely infinite!". In the context of our narrative exploration, surreal numbers can be seen as a possible mathematical representation for narratives as generating spaces. They imply that a number's identity can be defined as "everything that is smaller, to the left", and "everything that is bigger, to the right". The number itself is void, defined by how it is related to everything else. The Surreal Number system also has an implicit time, because at the start you can only represent 0, and at any time the potential numbers that can be represented are the combinatorial permutations of everything else that exists between left and right. Here is how you can bootstrap it:

## Genesis According to the Gospel of John (Conway)

**Day 0:** The only number we have is 0.

**Day 1:** We can create -1 and 1.

**Day 2:** We can create -2, \(-\frac{1}{2}\), \(\frac{1}{2}\), and 2.

**Day 3:** We can create -3, \(-\frac{3}{2}\), \(-\frac{1}{2}\), 0, \(\frac{1}{2}\), \(\frac{3}{2}\), and 3.

**Day 4:** We can create -4, \(-\frac{5}{2}\), -2, \(-\frac{3}{2}\), -1, \(-\frac{1}{2}\), 0, \(\frac{1}{2}\), 1, \(\frac{3}{2}\), 2, \(\frac{5}{2}\), 3, \(\frac{7}{2}\), and 4.
**Day \(n\):** And so on; each new day pushes one integer further out at each end and fills in midpoints between neighbouring numbers already born.
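For readers who want to watch this genesis unfold mechanically, the sketch below runs the standard day-by-day rule: start from 0, and on every later day create one integer beyond each end plus the midpoint of every pair of neighbouring numbers already born. It is our own illustration (and it fills in a few more midpoints per day than the loose listing above), not Conway's or Knuth's notation:

```python
from fractions import Fraction

def born_by_day(days):
    """Return, in increasing order, the surreal-number values born up to the given day.

    Day 0 yields {0}; each later day adds min-1, max+1, and the midpoint of every
    pair of adjacent numbers created so far (the standard 'birthday' rule).
    """
    numbers = [Fraction(0)]
    for _ in range(days):
        new = [numbers[0] - 1, numbers[-1] + 1]
        new += [(a + b) / 2 for a, b in zip(numbers, numbers[1:])]
        numbers = sorted(set(numbers + new))
    return numbers

for day in range(5):
    print(f"Day {day}:", [str(x) for x in born_by_day(day)])
```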
This whole construction is also a testament to our ability to cooperate, to work together in the construction and navigation of these narrative spaces. A tree (and, by some extension, what is referred to as a forest), in Computer Science ontology, is a form of graph characterized by its branching. Each node can have edges with "descendants", and these "descendants" with their "descendants". It is used to represent hierarchical phenomena, because there is no direct connection between nodes besides the parent-child relationship. However, that is a misinterpretation of what a real forest is.
Trees in a real forest are connected by fungal networks called "mycorrhizal networks" (also jokingly referred to as the "Wood Wide Web") that blur this notion by providing degrees of topological connectivity that are more like regular, almost-anything-goes graphs than tree data structures. Deleuze and Guattari's concept of the _rhizome_ [14] can be implanted in the narrative woods to bring forth an ecology of mind, as first proposed by Bateson [2]. And where do AI, LLMs and GPT fit? Humans are not really sure about the teleological status of their tools, or if we need tools to _survive and achieve goals_, or to increase a _sense of comfort_, to earn a certain quality of life. One of the lessons of this paper is that _whenever you have a population with variability_, which humanity is blessed with as one of its main traits, _an ecosystem will form_ and, with it, eventually culture, in whatever form it may appear, to compress things into narratives. ChatGPT is special because it is a mirror of humanity (black or whatever). For the first time, we are able to talk with language itself, as it is defined as a milieu by the data in the dataset. If the dataset is somewhat representative of humanity, we should be using these models as self-improvement devices. First, we discovered the power of _generalized attention_ [51], now we are testing the limits of the Word [20], of stories themselves [1]. Could we potentially fit everything that an intelligence needs to know in a book? Many people on Earth currently believe this, in some way. Whatever we will write about it in the future, now is an exciting time to be alive. Many folk tales and mythical themes, such as the Golem of Prague, and equivalents such as the Greek myth of Pygmalion and Galatea, deal with the dangers of constructing and enlisting automata. In the Dune series, Frank Herbert even imagines a post-AI future where "_Thou shalt not make a machine in the likeness of a human mind_" [21] is a central tenet. These narratives reflect our collective apprehension about creating entities that could potentially surpass us in intelligence and power. They also underscore the ethical considerations that come with creating artificial life forms. As we continue to advance in our technological capabilities, these stories serve as a reminder of the need for caution, responsibility, and respect for all forms of intelligence, whether biological or artificial. And is it here that we draw the line? Do all of our tools deserve love? What about fictional intelligence, if we ever find out it exists in some form? Surely, the fictional worlds bustle with life by the interaction of our minds with the content. They do so in a philosophical way, for sure. Anyhow, in our narrative history as humanity we always struggled with an 'Other', be it the environment, other forms of life, other communities, and now we struggle with other forms of intelligence. We fill our dishes with talking animals, but in reality we were terribly lonely until now, as the 'only' talking species around. Is this changing right now [45]? This is a very personal question, and it depends on each reader's relationship with entities that fit the 'Other' set.

###### Acknowledgements.

In this paper we tried to tell a transdisciplinary story, and we might have taken some poetic liberties, but we tried to be as scientifically responsible as possible. ChatGPT was used for ideation and writing support, but not irresponsibly. The images were made using Midjourney.
We apologize in advance if there are some inaccuracies in the text. Exaggerations we might have in plenty, but they probably serve a narrative purpose. While we would like to publicize the prompts used for these images, unfortunately it is not that simple. We used mainly the img2img functionality of MJ and photobashing to achieve our results. A Discord channel was created with the MJ bot and the authors ideated and iterated on each image. Both headers and section teaser images reflect not only the general trend of the text around them, but also tell a story of their own, which we leave the reader to figure out. The ChatGPT conversations linked in the text were made both before, during, and after the writing of the text. The general protocol was to start with an idea and try to lead ChatGPT to develop it for us, trying different plugins and forms of engagement. After we established the narrative structure of the paper and the role of these conversations in the exposition became more clear, we started adapting the conversations to a more fixed methodology. Once again, it is for the dedicated reader to extract the full contribution of the paper (fortunately this is for alt.vis). This work has been partially supported by the European Commission under the project Human-AI-Net (grant agreement 952026) and the Austrian Science Fund (FWF; grant P35767).
2301.12718
Dipole-matter interactions governed by the asymmetry of Maxwell equations
Directionally molding the near-field and far-field radiation lies at the heart of nanophotonics and is crucial for applications such as on-chip information processing and chiral quantum networks. The most fundamental model for radiating structures is a dipolar source located inside a homogeneous matter. However, the influence of matter on the directionality of dipolar radiation is oftentimes overlooked, especially for the near-field radiation. We show that the dipole-matter interaction is intrinsically asymmetric and does not fulfill the duality principle, originating from the inherent asymmetry of Maxwell equations, i.e., electric charge and current are ubiquitous but their magnetic counterparts are non-existent to elusive. Moreover, we find that the asymmetric dipole-matter interaction could offer an enticing route to reshape the directionality of not only the near-field radiation but also the far-field radiation. As an example, both the near-field and far-field radiation directionality of Huygens dipole (located close to a dielectric-metal interface) would be reversed, if the dipolar position is changed from the dielectric region to the metal region.
Yuhan Zhong, Chan Wang, Chenxu Bian, Xuhuinan Chen, Jialin Chen, Xingjian Zhu, Hao Hu, Tony Low, Hongsheng Chen, Baile Zhang, Xiao Lin
2023-01-30T08:23:46Z
http://arxiv.org/abs/2301.12718v2
# Dipole-matter interactions governed by the asymmetry of Maxwell's equations

###### Abstract

Directionally molding the near-field and far-field radiation lies at the heart of nanophotonics and is crucial for applications such as on-chip information processing and chiral quantum networks. The most fundamental model for radiating structures is a dipolar source located inside a homogeneous matter. However, the influence of matter on the directionality of dipolar radiation is oftentimes overlooked, especially for the near-field radiation. We show that the dipole-matter interaction is intrinsically asymmetric and does not fulfill the duality principle, originating from the inherent asymmetry of Maxwell's equations, i.e., electric charge and current are ubiquitous but their magnetic counterparts are non-existent to elusive. Moreover, we find that the asymmetric dipole-matter interaction could offer an enticing route to reshape the directionality of not only the near-field radiation but also the far-field radiation. As an example, both the near-field and far-field radiation directionality of Huygens dipole (located close to a dielectric-metal interface) would be reversed, if the dipolar position is changed from the dielectric region to the metal region.
2303.08111
Differentials of Sinha's spectral sequence for long knots in codimension one
We compute the differentials of a few elements of Sinha's spectral sequence for cohomology of the space of long knots modulo immersions in codimension one, over a field of characteristic $2$ or $3$. We show that $d_2$ of an element is non-zero in characteristic $2$, which has already been proved by Salvatore essentially, and $d_3$ of another element is non-zero in characteristic $3$. While even convergence of the sequence is unclear in codimension one, these results have some applications to non-formality of operads. The result in characteristic $3$ implies planar non-formality of the standard map $C_*(E_1)\to C_*(E_2)$ in characteristic $3$, where $C_*(E_k)$ denotes the chain little $k$-disks operad. We also reprove the result of Salvatore which states $C_*(E_2)$ is not formal as a planar operad in characteristic $2$. For computation, we use a duality between configuration spaces and fat diagonals.
Syunji Moriya
2023-03-14T17:45:16Z
http://arxiv.org/abs/2303.08111v2
# Differentials of Sinha's spectral sequence for long knots in codimension one

###### Abstract.

We compute the differentials of a few elements of Sinha's spectral sequence for cohomology of the space of long knots modulo immersions in codimension one over a field of characteristic \(2\) or \(3\). We show that \(d_{2}\) of an element is non-zero in characteristic \(2\), which has already been proved by Salvatore essentially, and \(d_{3}\) of another element is non-zero in characteristic \(3\). While even convergence of the sequence is unclear in codimension one, these results have some application to non-formality of operads. The result in characteristic \(3\) implies planar non-formality of the standard map \(C_{*}(E_{1})\to C_{*}(E_{2})\) in characteristic \(3\), where \(C_{*}(E_{k})\) denotes the chain little \(k\)-disks operad. We also reprove the result of Salvatore which states \(C_{*}(E_{2})\) is not formal as a planar operad in characteristic \(2\). For computation, we use a duality between configuration spaces and fat diagonals. This work is partially supported by JSPS KAKENHI Grant Number JP17K14192.

where \(|a|\) denotes the degree of \(a\).
Hereafter, for a pointed map \(f:X\to Y\), the subscript \(*\) for the pushforward on homology is omitted if no confusion occurs. So \(f_{*}(a)\) is denoted by \(f(a)\). 3. Let \(X\) be an unpointed space. We denote \(X\) with disjoint basepoint by \(X_{+}\) and the one-point compactification of \(X\) by \(X^{*}\). We set \(S^{k}=(\mathbb{R}^{k})^{*}\) and \([0,\infty]=[0,\infty]^{*}\). We denote the interval \([0,1]\) by \(I\). We fix a fundamental cycle \(w_{S^{2}}\in\tilde{C}_{2}(S^{2})\), chains \(w_{\infty}\in\tilde{C}_{1}([0,\infty])\) and \(w_{I}\in C_{1}(I)\) such that \(dw_{\infty}=\{0\}\) and \(dw_{I}=\{1\}-\{0\}\), where \(\{0\},\{1\}\) are cycles represented by \(0,1\in[0,1]\subset[0,\infty]\). We set \[w_{k}=(w_{S^{2}})^{\wedge^{2}}\wedge(w_{\infty})^{\wedge k},\qquad\ w_{kl}=w_ {k}\wedge(w_{I})^{\wedge l}\qquad\text{ for }k\geq 0,\ l>0.\] 4. \(|-|\) denotes the standard Euclidean norm. We define elements \(u,v\in\mathbb{R}^{d}\) by \[u=(1,0,\dots,0),\ \ \text{and}\ \ v=(0,1,0,\dots,0).\] 5. We denote by \(=_{1}\), \(<_{1}\),... etc., the relations between the first coordinates of elements of \(\mathbb{R}^{k}\). For example, for two elements \(x=(x_{1},\dots,x_{k}),y=(y_{1},\dots,y_{k})\in\mathbb{R}^{k}\) and a number \(t\in\mathbb{R}\), \(x<_{1}y\) and \(x=_{1}y\) mean \(x_{1}<y_{1}\) and \(x_{1}=y_{1}\), respectively, and \(x<_{1}t\) means \(x_{1}<t\). ## 2. Punctured knot model and configuration space model For some reason, we mainly deal with a version of the punctured knot model defined below instead of the cosimplicial model. **Definition 2.1**.: A _partition \(P\) of_\([n+1]=\{0,1,\dots,n+1\}\) is a set of subsets of \([n+1]\) satisfying the following conditions. 1. \(\cup_{\alpha\in P}\alpha=[n+1]\). 2. Each element of \(P\) is non-empty. 3. If \(\alpha,\beta\in P\), either of \(\alpha=\beta\) or \(\alpha\cap\beta=\emptyset\) holds. 4. For each element \(\alpha\in P\) and pair of numbers \(i,j\in\alpha\) with \(i<j\), any number \(k\) with \(i<k<j\) belongs to \(\alpha\). 5. \(\#P\geq 2\), in other words the set consisting of the single element \([n+1]\) is not a partition. We call an element of \(P\) a _piece of \(P\)_. We regard a partition as a totally ordered set via the order induced by \([n+1]\). A partition \(Q\) is said to be a _subdivision of \(P\)_ if \(Q\neq P\) and each piece of \(Q\) is contained in some piece of \(P\). P\({}_{n}\) denotes the poset of partitions of \([n+1]\). Its objects are the partitions. A non-identity morphism \(P\to Q\) exists if and only if \(Q\) is a subdivision of \(P\). By abuse of notation, we identify the partition \(\{\{0\},\{1\},\dots,\{n+1\}\}\) consisting of singletons with \([n+1]\). **Example 2.2**.: The following sets are examples of partitions of [5]: \[P=\{\{0\},\{12\},\{345\}\},\qquad Q=\{\{0\},\{12\},\{3\},\{45\}\}.\] We omit commas in pieces, so the piece \(\{12\}\) denotes \(\{1,2\}\). We see that \(\#P=3\), \(\#Q=4\), and \(Q\) is a subdivision of \(P\). **Definition 2.3**.: 1. Throughout the paper, we fix positive numbers \(\rho,\epsilon\) and \(c_{0},\dots c_{n+1}\) satisfying \[c_{0}+\dots+c_{n+1}=1,\quad\rho<1,\quad 100\epsilon/\rho<c_{0},\quad 100\bigl{(} \epsilon/\rho+\sum_{j<i}c_{j}\bigr{)}<c_{i}\quad(1\leq i\leq n+1).\] (The last two inequalities are not used until section 5.) 2. We define a functor \(\mathcal{PK}:\) P\({}_{n}\to\mathcal{C}\mathcal{G}\) as follows. Set \(b_{i}=c_{0}+\dots+c_{i-1}\) for \(1\leq i\leq n\). 
For a partition \(P=\{\alpha_{0}<\dots<\alpha_{p+1}\}\), \(S_{P}\subset\{1,\dots,n+1\}\) denotes the set of minimum elements in each of \(\alpha_{1},\dots,\alpha_{p+1}\). \(\mathcal{PK}(P)\) is the space of embeddings \(f:[0,1]-\{b_{i}\mid i\in S_{P}\}\to D^{d}\) such that 1. \(f(0)=-u\) and \(f(1)=u\) 2. In each connected component of the domain, \(f(t)=x+ctu\) for some constant elements \(x\in D^{d}\) and \(c>0\). For a subdivision \(Q\) of \(P\), the map \(\mathcal{PK}(P)\to\mathcal{PK}(Q)\) is the obvious restriction. Let \(\Delta_{n}\) be the category whose objects are \([k]\) (\(0\leq k\leq n\)) and whose morphisms are the weakly order preserving maps. We define a functor \(\mathcal{F}:\) P\({}_{n}\to\Delta_{n}\) by \(P\mapsto[\#S_{P}-1]\). **Definition 2.4**.: Let \(\mathcal{K}_{d}\) denotes the \(d\)-dimensional Kontsevich operad defined in [21]. Its \(p\)-th term \(\mathcal{K}_{d}(p)\) is a version of Fulton-Macpherson compactification of the ordered configuration space \(Conf_{p}(\mathbb{R}^{d})\) of \(p\) points. The operad \(\mathcal{K}_{d}\) is equipped with a map \(\mathcal{A}\to\mathcal{K}_{d}\) from the associative operad as in [21]. This map induces a cosimplicial space \(\mathcal{K}_{d}^{\bullet}\), which we call _Sinha's cosimplicial space_, via the framework of McClure-Smith. We use the same notation for its restriction to \(\Delta_{n}\) if no confusion occurs. **Lemma 2.5**.: _There is a zigzag of natural homotopy equivalences_ \[\mathcal{PK}\simeq\mathcal{F}^{*}\mathcal{K}_{d}^{\bullet}.\] Proof.: Let \(\operatorname{Emb}_{c}([0,1],D^{d})\) be the space of embeddings \([0,1]\to D^{d}\) to the \(d\)-dimensional unit closed disk with fixed endpoints in \(\partial D^{d}\) and fixed tangent vectors on them. When we identify \(\mathrm{P}_{n}\) and the poset of non-empty subsets of \(\{1,\ldots,n+1\}\) by \(P\mapsto S_{P}\), after some change of parametrization of \([0,1]\), \(\mathcal{PK}\) is naturally homotopy equivalent to the homotopy fiber of the map from the punctured knot model of \(\operatorname{Emb}_{c}([0,1],D^{d})\) to the cubical model of \(\operatorname{Imm}_{c}([0,1],D^{d})\simeq\Omega S^{d-1}\) given by \(P\mapsto(S^{d-1})^{\#P-2}\). With this observation, the claim is obvious by the arguments in [20, 21]. **Definition 2.6**.: 1. For \(P\in\mathrm{P}_{n}\) and \(\alpha,\beta\in P\) with \(\alpha<\beta\), we set \[c_{\alpha}:= \sum_{i\in\alpha}c_{i},\] \[c_{\leq\alpha}:= \sum_{\gamma\in P,\gamma<\alpha}c_{\gamma}+c_{\alpha}/2,\] \[c_{\geq\alpha}:= \sum_{\gamma\in P,\gamma>\alpha}c_{\gamma}+c_{\alpha}/2,\] \[c_{\alpha\beta}:= \sum_{\gamma\in P,\alpha<\gamma<\beta}c_{\gamma}+(c_{\alpha}+c_ {\beta})/2.\] In fact, this definition does not depend on the pieces of \(P\) other than \(\alpha,\beta\). We abbreviate \(c_{\leq\{i\}}\) (resp. \(c_{\geq\{i\}}\)) as \(c_{\leq i}\) (resp. \(c_{\geq i}\)). 2. Let \(Q\) be a subdivision of \(P\) and write \(P=\{\alpha_{0}<\cdots<\alpha_{p+1}\}\) and \(Q=\{\beta_{0}<\cdots<\beta_{q+1}\}\). We define an affine monomorphism \(e_{P,Q}:\mathbb{R}^{dp}\to\mathbb{R}^{dq}\) as follows. Let \((x_{i})_{1\leq i\leq p}\in\mathbb{R}^{dp}\) be an element. For convenience, we set \[x_{0}=(-1+\rho c_{\alpha_{0}}/2,0,\ldots,0),\quad x_{p+1}=(1-\rho c_{\alpha_{ p+1}}/2,0,\ldots,0).\] For \(0\leq i\leq p+1\), suppose that \(\alpha_{i}\) includes exactly \(k\)-pieces of \(Q\), say \(\beta_{l},\ldots,\beta_{l+k-1}\). 
We create the line segment which is centered at \(x_{i}\), parallel to \(u\), and of length \(\rho c_{\alpha_{i}}\), and divide this segment into \(k\) little segments of length \(\rho c_{\beta_{l}},\ldots,\rho c_{\beta_{l+k-1}}\) arranged from left to right. Let \(y_{l+j-1}\) be the center of the \(j\)-th little segment. We set \(e_{P,Q}((x_{i})_{i})=(y_{m})_{1\leq m\leq q}\). For \(Q=[n+1]\) we write \(e_{P,Q}=e_{P}\). It is clear that \(e_{Q,R}\circ e_{P,Q}=e_{P,R}\) for a subdivision \(R\) of \(Q\). 3. We define a functor \(\mathcal{E}:\mathrm{P}_{n}\to\mathcal{C}\mathcal{G}\) as follows. 1. For \(P\in\mathrm{P}_{n}\) with \(\#P-2=p\), \(\mathcal{E}(P)\) is the subset of \(\mathbb{R}^{dp}\) consisting of elements \((x_{\alpha})_{\alpha\in P}\) satisfying the following inequalities: \[|x_{\alpha}| \leq 1-\rho c_{\alpha}/2,\text{ and }\] \[-1+\rho c_{\leq\alpha} \leq_{1}x_{\alpha}\leq_{1}\ 1-\rho c_{\geq\alpha},\text{ and }\] \[|x_{\alpha}-x_{\beta}| \geq\rho c_{\alpha\beta}.\] 2. If \(Q\) is a subdivision of \(P\), the corresponding map \(\mathcal{E}(P)\to\mathcal{E}(Q)\) is given by the map \(e_{P,Q}\) defined above. 4. We can define an inclusion \(\mathcal{E}(P)\to\mathcal{PK}(P)\) by creating the line segment centered at \(x_{\alpha}\) of length \(\rho c_{\alpha}\) for \((x_{\alpha})_{\alpha\in P}\). These inclusions form a natural transformation \(\mathcal{E}\to\mathcal{PK}\). **Lemma 2.7**.: _For each \(P\in\mathrm{P}_{n}\), the inclusion \(\mathcal{E}(P)\to\mathcal{PK}(P)\) is a homology isomorphism._ Proof.: Write \(P=\{\alpha_{0}<\ldots\alpha_{p+1}\}\). Let \(\mathit{Conf}_{p}(D^{d})\) be the ordered configuration space of \(p\) points in \(D^{d}\). Since the map \(\mathcal{PK}(P)\to\mathit{Conf}_{p}(D^{d})\) which takes a collection of line segments \((l_{\alpha_{i}})\) to the configuration of centers of \(l_{\alpha_{i}}\) (\(i\neq 0,p+1\)), is a homotopy equivalence, we only have to prove the composition \(\mathcal{E}(P)\to\mathcal{PK}(P)\to\mathit{Conf}_{p}(D^{d})\) is a homology isomorphism. This is codimension \(0\) embedding and fits into the following commutative diagram. Here, \(D^{dp}=(D^{d})^{p}\), and \(\partial^{\prime}D^{dp}\) is the subspace of elements which do not satisfy at least one of the first two inequalities in (3),(a) of Definition 2.6 for some \(\alpha\in P\) ( so this is a collar of \(\partial D^{dp}\)). \(\Delta^{\prime}_{\mathrm{fat}}\) is the subspace of elements which do not satisfy the third inequality for some pair \(\alpha,\beta\in P\). \(\Delta_{\mathrm{fat}}\) is the fat diagonal of \((D^{d})^{p}\). The vertical arrows are Poincare duality isomorophisms, and the bottom horizontal arrow is induced by the identity. We consider the Cech spectral sequences of pairs \((\Delta^{\prime}_{\mathrm{fat}},\Delta^{\prime}_{\mathrm{fat}}\cap\partial^{ \prime}D^{dp})\) and \((\Delta_{\mathrm{fat}},\Delta_{\mathrm{fat}}\cap\partial D^{dp})\) with respect to the coverings \(\{\Delta^{\prime}_{\alpha,\beta}\}_{\alpha,\beta}\) and \(\{\Delta_{\alpha\beta}\}_{\alpha,\beta}\), where \(\Delta^{\prime}_{\alpha\beta}\) is the subspace of elements which do not satisfy the third inequalities in (3)(a) of Definition 2.6 for \(\alpha,\beta\) and \(\Delta_{\alpha\beta}\) is the subspace of elements whose \(\alpha\)- and \(\beta\)- components are the same. The inclusion \(\Delta_{\alpha\beta}\to\Delta^{\prime}_{\alpha\beta}\) is clearly a homotopy equivalence. 
The inclusion \(\Delta_{\alpha\beta}\cap\partial D^{dp}\to\Delta^{\prime}_{\alpha\beta}\cap\partial^{\prime}D^{dp}\) is also a homotopy equivalence since its homotopy inverse is given by the orthogonal projection to \(\Delta_{\alpha\beta}\) followed by the projection to \(\partial(D^{dp})\) from the light source \(0\). So, the inclusion induces an isomorphism between the relative homology of the pairs. We also see that the inclusion induces an isomorphism between the relative homology of intersections similarly, so by spectral sequence, we see that the bottom arrow of the square is an isomorphism. ## 3. Fat diagonal model In this section, we define a functor \(\mathcal{T}:\mathrm{P}_{n}^{op}\to\mathcal{CG}_{*}\) consisting of Thom spaces (or suspensions) of fat diagonals. We prove that this functor is stably equivalent to the Spanier-Whitehead dual of the punctured knot model \(\mathcal{PK}\). **Definition 3.1**.: Let \(P\in\mathrm{P}_{n}\) be a partition having the minimum piece \(\gamma_{m}\) and the maximum piece \(\gamma_{M}\). 1. We define a positive number \(\epsilon_{P}\) by \[\epsilon_{P}=\frac{\epsilon}{8^{n-p}},\quad\text{where}\quad p=\#P-2.\] Here, \(\epsilon\) is the number fixed in Definition 2.3. 2. For \(P\in\mathrm{P}_{n}\) let \(\nu_{P}\) be the \(\epsilon_{P}\)-neighborhood of \(e_{P}(\mathbb{R}^{dp})\), i.e., \[\nu_{P}=\{y\in\mathbb{R}^{dn}\mid |y-e_{P}(x)|<\epsilon_{P}\text{ for some }x\in\mathbb{R}^{dp}\}.\] For \(\alpha,\beta\in P-\{\gamma_{m},\gamma_{M}\}\), we define a subspace \(D_{\alpha\beta}\) of \(\mathbb{R}^{dp}\) by \[D_{\alpha\beta}=\{(x_{\gamma})_{\gamma\in P-\{\gamma_{m},\gamma_{M}\}}\mid|x_{\alpha}-x_{\beta}|\leq d_{\alpha\beta}(P)\},\] where \[d_{\alpha\beta}(P)=\rho c_{\alpha\beta}-\epsilon_{P}.\] We denote by \(E_{\alpha}\) the subspace of the elements \((x_{\gamma})_{\gamma}\) satisfying the following condition: \[|x_{\alpha}|\geq 1-\rho c_{\alpha}/2+\epsilon_{P},\text{ or}\] \[x_{\alpha}\leq_{1}-1+\rho c_{\leq\alpha}-\epsilon_{P},\text{ or}\] \[x_{\alpha}\geq_{1}1-\rho c_{\geq\alpha}+\epsilon_{P}.\] Set \[E_{P}=\bigcup_{\gamma_{m}<\alpha<\gamma_{M}}E_{\alpha},\qquad D_{P}=E_{P}\cup\bigcup_{\gamma_{m}<\alpha<\beta<\gamma_{M}}D_{\alpha\beta}.\] We define a space \(\mathcal{T}(P)\in\mathcal{CG}_{*}\) by \[\mathcal{T}(P)=Th(\nu_{P})/Th(\nu_{P}|_{D_{P}})=\mathbb{R}^{dn}/\{(\mathbb{R}^{dn}-\nu_{P})\cup\pi_{P}^{-1}(D_{P})\},\] where \(\pi_{P}:\nu_{P}\to\mathbb{R}^{dp}\) is the standard projection. If \(Q\) is a subdivision of \(P\) we define a map \(\mathcal{T}(Q)\to\mathcal{T}(P)\) as the natural collapsing map and this defines a functor \(\mathcal{T}:(\mathrm{P}_{n})^{op}\to\mathcal{CG}_{*}\) (see Lemmas 3.2 and 3.3 for well-definedness). 3. Set \[\mathcal{T}_{\emptyset_{P}}=Th(\nu_{P})/Th(\nu_{P}|_{E_{P}})=\mathbb{R}^{dn}/\{(\mathbb{R}^{dn}-\nu_{P})\cup\pi_{P}^{-1}(E_{P})\}.\] For two pieces \(\alpha,\beta\) of \(P\) with \(\gamma_{m}<\alpha<\beta<\gamma_{M}\), let \(\mathcal{T}_{\alpha\beta}\) denote the subspace of \(\mathcal{T}_{\emptyset_{P}}\) consisting of the basepoint and the elements represented by \(x\in\nu_{P}\) satisfying \(\pi_{P}(x)\in D_{\alpha\beta}\). **Lemma 3.2**.: _Let \(Q\) be a subdivision of a partition \(P\in\mathrm{P}_{n}\).
We have_ \[(\mathbb{R}^{dn}-\nu_{Q})\cup(\nu_{Q}|_{E_{Q}})\subset(\mathbb{R}^{dn}-\nu_{P} )\cup(\nu_{P}|_{E_{P}}).\] _In particular, the identity on \(\mathbb{R}^{dn}\) induces a collapsing map_ \[\delta^{\prime}_{P,Q}:\mathcal{T}_{\emptyset_{Q}}\to\mathcal{T}_{\emptyset_{ P}}.\] Proof.: Let \(P^{\prime}\subset P\) (resp. \(Q^{\prime}\subset Q\)) denote the subset of pieces which are neither the minimum nor maximum. Let \(y\in\mathbb{R}^{dn}\). Write \[\pi^{\prime}_{Q}(y) =(x_{\gamma})_{\gamma\in Q^{\prime}},\] \[\pi^{\prime}_{P}(y) =(\bar{x}_{\gamma^{\prime}})_{\gamma^{\prime}\in P^{\prime}},\] \[e_{P,Q}(\pi^{\prime}_{P}(y)) =(x^{\prime}_{\gamma})_{\gamma\in Q^{\prime}}.\] Here, \(\pi^{\prime}_{Q}:\mathbb{R}^{dn}\to e_{Q}(\mathbb{R}^{dq})=\mathbb{R}^{dq}\) is the orthogonal projection, where \(q=\#Q^{\prime}\), and \(\pi^{\prime}_{P}\) is the similar map. Suppose \(y\in\nu_{P}\). Since the image of \(e_{P}\) is contained in the image of \(e_{Q}\) and the map \(\pi^{\prime}_{Q}\) sends \(y\) to its closest point in the image of \(e_{Q}\), we have \[|y-e_{Q}(x_{\gamma})|\leq|y-e_{Q}(x^{\prime}_{\gamma})|=|y-e_{P}(\pi^{\prime} _{P}(y))|<\epsilon_{P}\ (<\epsilon_{Q}). \tag{1}\] So we see \(y\in\nu_{Q}\). This means \(\mathbb{R}^{dn}-\nu_{Q}\subset\mathbb{R}^{dn}-\nu_{P}\). We shall show \(\nu_{Q}|_{E_{Q}}\subset(\mathbb{R}^{dn}-\nu_{P})\cup\nu_{P}|_{E_{P}}\). Let \(\alpha\in Q^{\prime}\) be a piece and \(\alpha^{\prime}\) be the piece of \(P\) including \(\alpha\). Suppose \(|x_{\alpha}|\geq 1-\rho c_{\alpha}/2+\epsilon_{Q}\). If \(y\not\in\nu_{P}\), we have nothing to prove, so suppose further \(y\in\nu_{P}\). By the inequality (1), we have \[|e_{Q}(x^{\prime}_{\gamma})-e_{Q}(x_{\gamma})|\leq|e_{Q}(x^{\prime}_{\gamma})- y|+|y-e_{Q}(x_{\gamma})|\leq 2\epsilon_{P}.\] As \(e_{Q}\) is a composition of diagonal map with parallel transport, we have \[|(x^{\prime}_{\gamma})-(x_{\gamma})_{\gamma}|\leq 2\epsilon_{P}\quad(\text{so }|x^{\prime}_{\gamma}-x_{\gamma}|\leq 2 \epsilon_{P}\text{ for each }\gamma\in Q^{\prime}). \tag{2}\] So we have \(|x^{\prime}_{\gamma}-x_{\gamma}|\leq 2\epsilon_{P}\) for each \(\gamma\in Q^{\prime}\). Let \(\beta\subset\alpha^{\prime}\) be the set of elements smaller than the minimum of \(\alpha\). We easily see \[\bar{x}_{\alpha^{\prime}}-x^{\prime}_{\alpha}=_{1}\frac{\rho}{2}(c_{\alpha^{ \prime}}-2c_{\beta}-c_{\alpha}). \tag{3}\] Putting these (in)equalities together, we see \[|\bar{x}_{\alpha^{\prime}}|=|\bar{x}_{\alpha^{\prime}}| \geq|x_{\alpha}|-|x_{\alpha}-x^{\prime}_{\alpha}|-|x^{\prime}_{ \alpha}-\bar{x}_{\alpha^{\prime}}|\] \[\geq 1-\rho c_{\alpha}/2+\epsilon_{Q}-2\epsilon_{P}-\rho(c_{\alpha ^{\prime}}-c_{\alpha})/2\] \[\geq 1-\rho c_{\alpha^{\prime}}/2+\epsilon_{P}\] since we see \[\epsilon_{Q}-2\epsilon_{P}=\frac{1-2\cdot 8^{p-q}}{8^{n-q}}\geq\frac{1-2/8}{8^{n-q }}>\epsilon_{P}.\] We have shown that the first inequality in Definition 3.1(2) for \(Q\) and the condition \(y\in\nu_{P}\) imply the corresponding inequality for \(P\). Suppose \(x_{\alpha}\leq_{1}-1+\rho c_{\leq\alpha}-\epsilon_{Q}\) and \(y\in\nu_{P}\). Let \(\alpha^{\prime}\) be as above. 
We see \[\bar{x}_{\alpha^{\prime}} =\bar{x}_{\alpha^{\prime}}-x^{\prime}_{\alpha}+x^{\prime}_{\alpha }-x_{\alpha}+x_{\alpha}\] \[\leq_{1}\frac{\rho}{2}(c_{\alpha^{\prime}}-2c_{\beta}-c_{\alpha})+ 2\epsilon_{P}+(-1+\rho c_{\leq\alpha}-\epsilon_{Q}\] \[=-1+\rho c_{\leq\alpha^{\prime}}-(\epsilon_{Q}-2\epsilon_{P}) \quad(\because\ c_{\leq\alpha}+(c_{\alpha^{\prime}}-2c_{\beta}-c_{\alpha})/2=c_ {\leq\alpha^{\prime}})\] \[<-1+\rho c_{\leq\alpha^{\prime}}-\epsilon_{P}.\] This is the second inequality in Definition 3.1(2) for \(P\). Similarly, we see \(\bar{x}_{\alpha^{\prime}}\geq_{1}1-\rho c_{\geq\alpha^{\prime}}+\epsilon_{P}\) if \(x_{\alpha}\geq_{1}1-\rho c_{\geq\alpha}+\epsilon_{P}\) and \(y\in\nu_{P}\). Thus, we have shown the claimed inclusion. **Lemma 3.3**.: _Let \(Q\) be a subdivision of \(P\). Let \(\alpha,\beta\in Q\) be two pieces which are neither minimum nor maximum and satify \(\alpha<\beta\). Let \(\alpha^{\prime},\beta^{\prime}\in P\) be pieces which include \(\alpha\) and \(\beta\) respectively. Let \(\delta^{\prime}_{P,Q}:\mathbb{T}_{\emptyset_{Q}}\to T_{\emptyset_{P}}\) denote the map given in Lemma 3.2. We have_ \[\delta^{\prime}_{P,Q}(\mathcal{T}_{\alpha\beta})\ \subset\ \left\{\begin{array}{ll}\{*\}&( \text{if }\alpha^{\prime}=\beta^{\prime}\text{ or }\alpha^{\prime}\text{ is the minimum of }P\text{ or }\beta^{\prime}\text{ is the maximum of }P),\\ \mathcal{T}_{\alpha^{\prime},\beta^{\prime}}&(\text{ otherwise }).\end{array}\right.\] Proof.: Let \(y\in\mathbb{R}^{dn}\) be an element. We use the same notations \(P^{\prime},Q^{\prime},\pi^{\prime}_{P},\pi^{\prime}_{Q},x_{\gamma},x^{\prime} _{\gamma}\), and \(\bar{x}_{\gamma^{\prime}}\) as in the proof of Lemma 3.2. We shall show the claim in the case \(\alpha^{\prime}=\beta^{\prime}\). We assume \(\delta^{\prime}_{P,Q}(y)\neq*\). So \(y\in\nu_{P}\). By the inequality (2) in the proof of Lemma 3.2 and the definition of the map \(e_{P,Q}\), we have the following inequality. \[|x_{\alpha}-x_{\beta}| \geq|x^{\prime}_{\alpha}-x^{\prime}_{\beta}|-|x_{\alpha}-x^{ \prime}_{\alpha}|-|x_{\beta}-x^{\prime}_{\beta}|\] \[\geq\rho c_{\alpha\beta}-4\epsilon_{P}>d_{\alpha\beta}(Q).\] This inequality implies \(\pi_{Q}(y)\not\in D_{\alpha\beta}\). We shall show the claim in the case that \(\alpha^{\prime}\) is the minimum. It is enough to show the case of \(\beta=\beta^{\prime}\) since general subdivisions factor through the one of this case. In this case, by definition of \(e_{P,Q}\), \(x^{\prime}_{\alpha}=(-1+\rho c_{\leq\alpha})u\). Suppose \(\delta^{\prime}_{P,Q}(y)\neq*\) and \(\pi_{Q}(y)\in D_{\alpha\beta}\). We have \[x_{\beta} =x^{\prime}_{\alpha}+(x_{\beta}-x_{\alpha})-(x^{\prime}_{\alpha}- x_{\alpha})\] \[\leq_{1}-1+\rho c_{\leq\alpha}+|x_{\beta}-x_{\alpha}|+|x^{\prime }_{\alpha}-x_{\alpha}|\] \[\leq-1+\rho c_{\leq\alpha}+\rho c_{\alpha\beta}-\epsilon_{Q}+2 \epsilon_{P}\] \[<-1+\rho c_{\leq\beta}-\epsilon_{P}.\] So we have the claim. The case that \(\beta^{\prime}\) is the minimum is completely similar. We shall show the remaining part of the claim. It is enough to the case that either of the minimum or maximum of \(\alpha^{\prime}\) belongs to \(\alpha\). Clearly, we have \(e_{P,Q}(\bar{x}_{\gamma^{\prime}})=(x^{\prime}_{\gamma})\). 
As we see in the case of \(\alpha^{\prime}=\beta^{\prime}\), we have \[|x^{\prime}_{\alpha}-x^{\prime}_{\beta}|\leq |x^{\prime}_{\alpha}-x_{\alpha}|+|x_{\alpha}-x_{\beta}|+|x_{ \beta}-x^{\prime}_{\beta}|\] \[\leq 4\epsilon_{P}+d_{\alpha\beta}(Q)=\rho c_{\alpha\beta}-\epsilon_ {Q}+4\epsilon_{P}.\] By the equality (3) in the proof of Lemma 3.2, we have \[|\bar{x}_{\alpha^{\prime}}-x^{\prime}_{\alpha}|=\rho(c_{\alpha^{\prime}}-c_ {\alpha})/2.\] By similar equality for \(\beta\), we see \[|\bar{x}_{\alpha^{\prime}}-\bar{x}_{\beta^{\prime}}|\leq |\bar{x}_{\alpha^{\prime}}-x^{\prime}_{\alpha}|+|x^{\prime}_{ \alpha}-x^{\prime}_{\beta}|+|\bar{x}_{\beta^{\prime}}-x^{\prime}_{\beta}|\] \[\leq \rho(c_{\alpha^{\prime}}-c_{\alpha})/2+(\rho c_{\alpha\beta}- \epsilon_{Q}+4\epsilon_{P})+\rho(c_{\beta^{\prime}}-c_{\beta})/2\] \[< d_{\alpha^{\prime},\beta^{\prime}}(P).\] Thus, we have shown the claim. **Definition 3.4**.: 1. A _spectrum_ \(X\) is a sequence of pointed spaces \(X_{0},X_{1},\dots\) with structure map \(S^{1}\wedge X_{k}\to X_{k+1}\) for each \(k\geq 0\). A morphism (or map) \(f:X\to Y\) of spectra is a sequence of pointed maps \(f_{0}:X_{0}\to Y_{0},f_{1}:X_{1}\to Y_{1},\dots\) compatible with the structure maps. Let \(\mathcal{SP}\) denote the category of spectra and their maps. For a spectrum \(X\), \(\pi_{k}(X)\) denotes the colimit of the sequence \(\pi_{k}(X_{0})\to\pi_{k+1}(X_{1})\to\cdots\) defined by the structure maps. A map \(f:X\to Y\) is called a _stable homotopy equivalence_ if it induces an isomorphism \(\pi_{k}(X)\to\pi_{k}(Y)\) for any integer \(k\). 2. We define a functor \(\mathcal{T}^{S}:\mathrm{P}^{op}_{n}\to\mathcal{SP}\) as follows. Set \(\mathcal{T}^{S}(P)_{k}=S^{k-dn}\wedge\mathcal{T}(P)\) if \(k\geq dn\), and \(\mathcal{T}^{S}(P)_{k}=*\) otherwise. These spaces form a spectra with the obvious structure map. The map corresponding to a map \(P\to Q\) is also obviously induced from that of \(\mathcal{T}\). 3. For a positive number \(\delta\), we define a spectrum \(\mathbb{S}_{\delta}\) as follows. We set \(\mathbb{S}_{\delta,k}=\{y\in\mathbb{R}^{k}\}/\{y\mid|y|\geq\delta\}\). The structure map \(S^{1}\wedge\mathbb{S}_{\delta,k}\to\mathbb{S}_{\delta,k+1}\) is the obvious collapsing map. 4. We define a functor \(\mathcal{E}^{\dagger}:\mathrm{P}^{op}_{n}\to\mathcal{SP}\) as follows. Set \(\mathcal{E}^{\dagger}(P)=\mathrm{Map}(\mathcal{E}(P)_{+},\mathbb{S}_{\delta})\) where \(\delta=\epsilon_{P}/4\). For a map \(P\to Q\), the corresponding map is the restriction of the source followed by the pushforward by the collapsing map \(\mathbb{S}_{\epsilon_{Q}/4}\to\mathbb{S}_{\epsilon_{P}/4}\). 5. We define a functor \(\mathcal{E}^{\vee}:\mathrm{P}_{n}^{op}\to\mathcal{SP}\) as follows. Set \(\mathcal{E}^{\vee}(P)=\mathrm{Map}(\mathcal{E}(P)_{+},\mathbb{S})\) For a map \(P\to Q\), the corresponding map is the restriction of the source. 6. We define a map \(\widetilde{\Phi}=\widetilde{\Phi}_{P,k}:\mathbb{R}^{k}\to\mathcal{E}^{\dagger} (P)_{k}\) by \[\mathbb{R}^{k}\ni y\longmapsto\{(x_{\gamma})\mapsto(y-(0,e_{P}(x_{\gamma}))\} \in\mathcal{E}^{\dagger}(P)/\] \(\mathcal{T}^{S}_{k}\) is naturally identified with Thom space associated to the tubular neighborhood \(\mathbb{R}^{k-dn}\times\nu_{P}\) of the embedding \(0\times e_{P}:\mathbb{R}^{dp}\to\mathbb{R}^{k}\) (with some extra collapsed points). \(\Phi_{P,k}\) factors through \(\mathcal{T}^{S}(P)_{k}\) as in Theorem 3.5, and these maps form a natural transformation \(\Phi:\mathcal{T}^{S}\to\mathcal{E}^{\dagger}\). 
We see that this is well-defined below. 7. A natural transformation \(p_{*}:\mathcal{E}^{\vee}\to\mathcal{E}^{\dagger}\) is defined by the pushforward by the collapsing map \(p:\mathbb{S}\to\mathbb{S}^{\delta}\). The following equivalence is a variation of the one given in [18]. If it is projected to the stable homotopy category, it is a special case of Atiyah duality which is also a special case of \(S\)-duality. We need point-set level compatibility so we have been taking care about parameters. **Theorem 3.5**.: _Under the notations of Definition 3.4, the map \(\Phi\) is well-defined, and the two maps \(\Phi\) and \(p_{*}\) are stable homotopy equivalences._ Proof.: We shall show the map \(\widetilde{\Phi}\) factors through \(\mathcal{T}^{S}(P)_{k}\). For notational simplicity, we consider the case of \(k=dn\). The other cases will follow completely similarly. It is clear that \(\widetilde{\Phi}(\mathbb{R}^{dn}-\nu_{P})=\{*\}\). Let \(y\in\mathbb{R}^{k}\) be an element with \(\widetilde{\Phi}(y)\neq*\). There exists an element \((x_{\gamma})\in\mathcal{E}(P)\) such that \(|y-e_{P}(x_{\gamma})|<\epsilon_{P}/4\) holds. So we have \(|y-e_{P}(\pi_{P}y)|<\epsilon_{P}/4\) and \[|\pi_{P}y-(x_{\gamma})|\leq|e_{P}(\pi_{P}y)-e_{P}(x_{\gamma})|\leq|e_{P}(\pi_{ P}y)-y|+|y-e_{P}(x_{\gamma})|<\epsilon_{P}/2\] If we write \(\pi_{P}(y)=(\bar{x}_{\gamma})\), it follows that \(|\bar{x}_{\alpha}-x_{\alpha}|<\epsilon_{P}/2\) for each non-minimum and non-maximum \(\alpha\in P\). We see \[|\bar{x}_{\alpha}| \leq|x_{\alpha}|+|\bar{x}_{\alpha}-x_{\alpha}|\] \[\leq 1-\rho c_{\alpha}/2+\epsilon_{P}/2<1-\rho c_{\alpha}/2+ \epsilon_{P},\] \[\bar{x}_{\alpha} =x_{\alpha}+(\bar{x}_{\alpha}-x_{\alpha})\] \[\geq_{1}-1+\rho c_{\leq\alpha}-|\bar{x}_{\alpha}-x_{\alpha}|\] \[\geq-1+\rho c_{\leq\alpha}-\epsilon_{P}/2>-1+\rho c_{\leq\alpha} -\epsilon_{P},\] \[\bar{x}_{\alpha} \leq x_{\alpha}+(\bar{x}_{\alpha}-x_{\alpha})\] \[\leq_{1}1-\rho c_{\geq\alpha}+|\bar{x}_{\alpha}-x_{\alpha}|\] \[\leq 1-\rho c_{\geq\alpha}+\epsilon_{P}/2<1-\rho c_{\geq\alpha}+ \epsilon_{P}.\] These inequalities imply \(\Phi(\pi_{P}^{-1}(E_{\alpha}))=*\) in the notations of Definition 3.1. We also see \[|\bar{x}_{\alpha}-\bar{x}_{\beta}| \geq|x_{\alpha}-x_{\beta}|-|x_{\alpha}-\bar{x}_{\alpha}|-|x_{ \beta}-\bar{x}_{\beta}|\] \[>\rho c_{\alpha\beta}-\epsilon_{P}=d_{\alpha\beta}(P).\] This implies \(\widetilde{\Phi}(\pi_{P}^{-1}(D_{\alpha\beta}))=*\). Thus, \(\widetilde{\Phi}\) factors through \(\mathcal{T}^{S}\). Now the claim of the theorem follows from the classical Atiyah duality (see [2] for example). **Remark 3.6**.: In [14], Malin proved homotopy invariance of homogeneous layers of stable embedding calculus tower using a duality similar to Theorem 3.5. **Definition 3.7**.: 1. Let \(\mathcal{CH}_{\mathsf{k}}\) be the category of chain complexes and chain maps over \(\mathsf{k}\). 2. For a chain complex \(C_{*}\), \(C_{*}[k]\) is the chain complex given by \(C_{l}[k]=C_{k+l}\) with the same differential as \(C_{*}\) (without extra sign). 3. Functors \(C^{*}(\mathcal{PK}),\ \bar{C}_{*}(\mathcal{T}):\mathrm{P}_{n}^{op}\to\mathcal{CH}_{ \mathsf{k}}\) is given by taking cochains and reduced chains of \(\mathcal{PK}\) and \(\mathcal{T}\) in the objectwise manner respectively. \(\bar{C}_{*}(\mathcal{T})[dn]:\mathrm{P}_{n}^{op}\to\mathcal{CH}_{\mathsf{k}}\) is given by taking the shift of \(\bar{C}_{*}(\mathcal{T})\) in the objectwise manner. 
**Proposition 3.8**.: \(C^{*}(\mathcal{PK})\) _and \(\bar{C}_{*}(\mathcal{T})[dn]\) are connected by a zigzag of natural quasi-isomorphisms_ Proof.: In [18], a chain functor \(C_{*}\) for symmetric spectra is defined. The same definition works for our category of spectra as it is, so we adopt this functor in this proof. It preserves stable equivalence between semistabe spectra. By Lemma 5.3 of [18], Lemma 2.7, and Theorem 3.5. we have the following chain of natural quasi-isomorphisms \[C^{*}(\mathcal{PK})\simeq C^{*}(\mathcal{E})\simeq C_{*}(\mathcal{E}^{V}) \simeq C_{*}(\mathcal{T}^{S})\simeq C_{*}(\mathcal{T})[dn],\] where the last morphism is the canonical one in view of definition of the chain functor. ## 4. Spectral sequences **Definition 4.1**.: 1. For a partition \(P\in\mathrm{P}_{n}\), a _graph \(G\) on \(P\)_ consists of the set of vertexes \(V(G)=P\), a finite set of edges \(E(G)\) and a map \(\phi_{G}:E(G)\to P_{1,2}(P)\) called the _incidence map_, where \(P_{1,2}(P)=\{S\subset P\ |\ \#S=1\ \text{or}\ 2\}\). So, the vertexes of \(G\) are the pieces of \(P\). We say that an element of \(\phi_{G}(e)\) is _incident with an edge \(e\)_. An edge \(e\) is called a _loop_ if \(\#\phi_{G}(e)=1\). two edges \(e,e^{\prime}\) are called _double edges_ if \(\phi_{G}(e)=\phi_{G}(e^{\prime})\) and \(\#\phi_{G}(e)=2\). (Other edges may have the same incident vertexes as double edges.) Let \(\tilde{\mathsf{G}}(P)\) denote the set of all graphs on \(P\). Let \(\gamma_{m}\) and \(\gamma_{M}\) be the minimum and maximum pieces of \(P\), respectively. \(\mathsf{G}(P)\subset\tilde{\mathsf{G}}(P)\) denotes the subset of graphs with the edge set \(E(G)\subset\{(\alpha,\beta)\mid\alpha,\beta\in P,\ \gamma_{m}<\alpha<\beta<\gamma_{M}\}\) and the natural incidence map \((\alpha,\beta)\mapsto\{\alpha,\beta\}\). The set \(E(G)\) of a graph \(G\in\mathsf{G}(P)\) is regarded as a totally ordered set by the lexicographical order. Let \(\emptyset_{P}\in\mathsf{G}(P)\) denote the graph with empty edge set. We sometimes denote a graph in \(\mathsf{G}(P)\) by a formal product of edges (see Example 4.2 below). Let \(e\) be the \(i\)-th edge of \(G\in\mathsf{G}(P)\). \(\partial_{e}G\) (or \(\partial_{i}G\)) denotes the subgraph of \(G\) with \(E(\partial_{e}G)=E(G)-\{e\}\). Similarly, if \(e^{\prime}\) is the \(j\)-th edge, \(\partial_{ee^{\prime}}G\) (or \(\partial_{ij}G\)) denotes the subgraph made by removing \(e\) and \(e^{\prime}\). For two vertexes \(\alpha,\beta\) of a graph \(G\), we write \(\alpha\sim_{G}\beta\) when \(\alpha\) and \(\beta\) belong to the same connected component of \(G\). By abusing notations, for \(i\in[n+1]\), we write \(i\sim_{G}\beta\) if there is a (unique) piece \(\alpha\) satisfying \(i\in\alpha\) and \(\alpha\sim_{G}\beta\). Similarly, if \(i\) belongs to a piece \(\alpha\) which belongs to a connected component \(S\) of \(G\), we write \(i\in S\) instead of \(i\in\alpha\in S\). 2. For a map \(P\to Q\) of partitions, \(\delta_{P,Q}:Q\to P\) denotes the map of sets sending \(\alpha\in Q\) to the piece of \(P\) containing \(\alpha\). This map induces a map \(\delta_{P,Q}:\tilde{\mathsf{G}}(Q)\to\tilde{\mathsf{G}}(P)\). 
The graph \(\delta_{P,Q}(G)\) has the same edge set as \(G\) and the incidence map given by the composition \[E(G)\stackrel{{\phi_{G}}}{{\to}}P_{1,2}(P)\stackrel{{( \delta_{P,Q})_{*}}}{{\to}}P_{1,2}(Q).\] For a graph \(G\in\mathsf{G}(Q)\), if \(\delta_{P,Q}(G)\) has no loop nor double edges and the minimum and maximum pieces of \(P\) are not incident with any edge of \(\delta_{P,Q}(G)\), we always identify \(\delta_{P,Q}(G)\) with a unique graph in \(\mathsf{G}(P)\) which has the same image of the incidence map. If \(P\) is made by unifying the \(i+1\)-th and \(i+2\)-th pieces of \(Q\), we denote \(P\) and \(\delta_{P,Q}\) by \(\delta_{i}Q\) and \(\delta_{i}\), respectively. Similarly, if \(P\) is made by unifying each of the \(i+1\)-th and \(i+2\)-th pieces, and \(j+1\)-th and \(j+2\)-th pieces, \(P\) and \(\delta_{P,Q}\) are denoted by \(\delta_{ij}Q\) and \(\delta_{ij}\). \(\delta_{ijk}\) is similarly understood. **Example 4.2**.: Let \(G_{1},G_{2}\in\mathsf{G}([5])\) be the graphs with two edges given by \[G_{1}=(1,4)(2,3),\quad G_{2}=(1,3)(2,4).\] These graphs are drawn in Figure 1. The straight line is not a part of data, and the chords denote the edges. The vertexes are intersections of the line and chords. They are labeled by \(\{1\},\ldots,\{4\}\) in the order from left to right. The minimum and maximum vertexes \(\{0\},\{5\}\) are usual omitted. We have \(E(\partial_{1}G_{1})=\{(2,3)\}\) and \(E(\partial_{2}G_{1})=\{(1,4)\}\). \(G_{1}\) has the four connected components \(\{\{0\}\}\), \(\{\{1\},\{4\}\}\), \(\{\{2\},\{3\}\}\), and \(\{\{5\}\}\). By definition, we have \[\delta_{1}[5]=\{\{0\},\{12\},\{3\},\{4\},\{5\}\},\] and \(\delta_{1}G_{1}=\delta_{1}G_{2}\) since \(E(\delta_{1}G_{1})=E(\delta_{1}G_{2})=\{(\{12\},\{3\}),(\{12\},\{4\})\}\). Similarly, we have \(\delta_{3}G_{1}=\delta_{3}G_{2}\). The graph \(\delta_{1}G_{1}\) has the three connected components \(\{\{0\}\}\), \(\{\{12\},\{3\},\{4\}\}\), and \(\{\{5\}\}\). **Definition 4.3**.: For a graph \(G\in\mathsf{G}(P)\), set \(\mathcal{T}_{G}=\bigcap_{(\alpha,\beta)\in E(G)}\mathcal{T}_{\alpha\beta}\subset T _{\emptyset_{P}}\). We define a triple complex \(\mathbb{T}_{\bullet\bullet\bullet}\) as follows. As a module, we set \[\mathbb{T}_{pqs}=\bigoplus_{P,G}\bar{C}_{q}(\mathcal{T}_{G})\] where \(P\) runs through partitions of \(P_{n}\) such that \(\#P=p+2\) and \(G\) runs through graphs of \(\mathsf{G}(P)\) such that \(\#E(G)=s\). \(\mathbb{T}\) has three differentials \(\delta,d,\partial\) of degree \((1,0,0)\), \((0,1,0)\), \((0,0,1)\). \(d\) denotes the differential on singular chains, \(\partial\) is the Cech differential given by \[\partial=\sum_{1\leq k\leq s}(-1)^{k-1}\partial_{k},\] where \(\partial_{k}:\bar{C}_{q}(\mathcal{T}_{G})\to\bar{C}_{q}(\mathcal{T}_{ \partial_{k}G})\) is the pushforward by the inclusion. To define \(\delta\), we shall define a map \(\delta_{i}\) of degree \((1,0,0)\). Put \(P=\{\alpha_{0}<\cdots\alpha_{p+1}\}\). If \(\delta_{i}G\) has a loop or double edge, or at least one of the connected components of \(\delta_{i}G\) which include the minimum or maximum is not discrete, we set \(\delta_{i}=0\) on \(\bar{C}_{*}(\mathcal{T}_{G})\). Suppose otherwise i.e., \(\delta_{i}G\) belongs to \(\mathsf{G}(P_{i})\). The set of edges of \(G\) whose smaller incident vertex is \(\alpha_{i}\) or \(\alpha_{i+1}\) and the set of edges of \(\delta_{i}(G)\) whose smaller incident vertex is \(\alpha_{i}\cup\alpha_{i+1}\) are one-to-one correspondence via the incidence map. 
This correspondence induces the permutation \(\sigma_{G,i}\) of the lexicographical order of the edges. We set \(\delta_{i}=sgn(\sigma_{G,i})(\delta^{\prime}_{i})_{*}\). Here, \(\delta^{\prime}_{i}:\mathcal{T}_{G}\to\mathcal{T}_{\delta_{i}G}\) is the collapsing map for subdivision \(P\) of \(\delta_{i}P\) given in Lemma 3.3. (By definition of \(\mathcal{T}_{G}\) and the same lemma, the image of \(\delta^{\prime}_{i}\) is contained in \(\mathcal{T}_{\delta_{i}G}\).) We set \(\delta=\sum_{i=0}^{p}(-1)^{i}\delta_{i}\). **Lemma 4.4**.: _The triple complex \(\mathbb{T}\) is well defined. i.e., the three differentials are commutative without signs._ Proof.: Only non-trivial commutativity is that of \(\partial\) with \(\delta\). Let \(e\) be the \(k\)-th edge among those whose smaller incident vertex is \(\alpha_{i}\). Suppose exactly \(m\) edges pass through \(e\) by \(\sigma_{G,i}\). On \(G\), the sign in removing \(e\) is \((-1)^{k}\) and On \(\delta_{i}G\), the sign in removing the corresponding edge is \((-1)^{k+m}\). On the other hand, we have \(sgn(\sigma_{G,i})=sgn(\sigma_{\partial_{k}G,i})\). So we have the commutativity. **Definition 4.5**.: 1. Let \(\mathrm{Tot}(\mathbb{T})\) be a total complex given by \[\mathrm{Tot}_{k}(\mathbb{T})=\bigoplus_{p+q+s=k}\mathbb{T}_{pqs}\] with the differential \(\tilde{D}=\delta+(-1)^{p}d+(-1)^{p+q}\partial\) on \(\mathbb{T}_{pqs}\). Let \(\{F_{l}\}_{l}\) be a filtration of \(\mathrm{Tot}(\mathbb{T})\) given by \(F_{l}=\oplus_{p\leq l}\mathbb{T}_{pqs}\). This filtration induces a spectral sequence \(\tilde{\mathbb{E}}_{r}\). We give the grading on \(\tilde{\mathbb{E}}_{r}\) such that \(\tilde{\mathbb{E}}_{r}^{-p,q}\) is a subquotient of the submodule \(\oplus_{a,b}\mathbb{T}_{pqb}\) where \((a,b)\) runs through pairs satisfying \(a+b=dn-q\). We also consider a truncated version \(tr_{m}(\mathrm{Tot}(\mathbb{T}))\) which consists of the chains in \(\mathbb{T}_{pqs}\) with \(s\geq m\). The differential is given by the same formula as \(\tilde{D}\) but the Cech differential on the graph with exactly \(m\) edges are understood as zero. Considering the filtration on \(p\), we form a spectral sequence \(\{tr_{m}\tilde{\mathbb{E}}_{r}\}_{r}\). There is an obvious map \(\mathrm{Tot}(\mathbb{T})\to tr_{m}\,\mathrm{Tot}(\mathbb{T})\) given by the identity on \(\mathbb{T}_{pqs}\) with \(s\geq m\) and zero on the other summands. This map commutes with differentials and filtrations so induces the map of spectral sequences \(\tilde{\mathbb{E}}_{r}\to tr_{m}\tilde{\mathbb{E}}_{r}\). 2. Let \(D(C^{*}(\mathcal{K}_{d}^{\leq n}))\) be the unnormalized complex of singular cochain of Sinha's cosimplicial space restricted to \(\Delta_{\leq n}\) (see [21]). So, \(D_{p}(C^{*}(\mathcal{K}_{d}^{\leq n}))=C^{*}(\mathcal{K}_{d}^{p})\) for \(p\leq n\), and it is zero for \(p>n\), and the differential is the signed sum of face maps. We give this double complex the filtration by cosimplicial degree \(p\). The resulting spectral sequence is denoted by \(\tilde{\mathbb{E}}_{r}\). This is a version of Sinha's spectral sequence restricted to the part of cosimplicial degree \(\leq n\). The (full) Sinha's spectral sequence \(\mathbb{E}_{r}\) is defined as the Bousfield-Kan type cohomological spectral sequence associated the cosimplicial space \(\mathcal{K}_{d}^{\bullet}\) (see Definition 2.4). First part of the following lemma follows from Lemma 2.5 and Proposition 3.8. Second part is seen by observing the spectral sequence associated to \((\mathbb{T}_{p\,*\,*},d,\partial)\). 
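Before stating it, we illustrate the two sign conventions of Definition 4.3 with the graphs of Example 4.2; this is only an unpacking of the definitions. For \(G_{1}=(1,4)(2,3)\in\mathsf{G}([5])\), the Cech differential on \(\bar{C}_{*}(\mathcal{T}_{G_{1}})\) is \(\partial=\partial_{1}-\partial_{2}\), with \(\partial_{1}\) (removing \((1,4)\)) landing in \(\bar{C}_{*}(\mathcal{T}_{(2,3)})\) and \(\partial_{2}\) (removing \((2,3)\)) in \(\bar{C}_{*}(\mathcal{T}_{(1,4)})\); applying \(\partial\) once more, both terms are the pushforward to \(\bar{C}_{*}(\mathcal{T}_{\emptyset_{P}})\) along the inclusion and appear with opposite signs, so \(\partial^{2}=0\) on this summand. For \(\delta_{1}\), the correspondence of Definition 4.3 matches the edges of \(G_{1}\) with those of \(\delta_{1}G_{1}\) by \((1,4)\mapsto(\{12\},\{4\})\) and \((2,3)\mapsto(\{12\},\{3\})\), which reverses the lexicographic order, so \(sgn(\sigma_{G_{1},1})=-1\) and \(\delta_{1}=-(\delta^{\prime}_{1})_{*}\) on \(\bar{C}_{*}(\mathcal{T}_{G_{1}})\); for \(G_{2}=(1,3)(2,4)\) the matching preserves the order and \(\delta_{1}=(\delta^{\prime}_{1})_{*}\).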
**Lemma 4.6**.: 1. _The spectral sequence_ \(\{\tilde{\mathbb{E}}_{r}\}_{r}\) _is isomorphic to_ \(\{\tilde{\mathbb{E}}_{r}\}_{r}\) _after_ \(E_{1}\)_-page._ 2. _The natural map_ \(\tilde{\mathbb{E}}_{1}^{pq}\to tr_{m}\tilde{\mathbb{E}}_{1}^{pq}\) _induces an isomorphism between the summands labeled by the graphs with a larger number of edges than_ \(m\) _and a monomorphism between the summands of graphs with exactly_ \(m\) _edges._ ## 5. Condensed maps In this section, we define a class of maps used to define chains in the triple complexes \(\mathbb{T}\) and prove their properties. **Definition 5.1**.: 1. Let \(X\) be an unpointed topological space and \(G\in\mathsf{G}(P)\) a graph. A map \(f=(f_{1},\ldots,f_{n}):X\to\mathbb{R}^{dn}\) is \(G\)-_condensed_ if the following conditions hold. 1. \(f\) is proper. 2. For each connected component \(S\) of \(G\), there is a vertex \(\alpha\in S\), called _a base of \(S\) (for \((G,f)\))_, which satisfies the following conditions. 1. For each \(\beta\in S\) and each \(i\in\beta\), \(f_{i}(x)\) belongs to the convex hull of \(\{f_{j}(x)\mid j\in\alpha\}\), 2. there is no edge in \(S\) both of whose incident vertexes are smaller than \(\alpha\). (A discrete vertex is regarded as a base.) 2. An edge \(e\) of a graph \(G\) is called a _bridge_ if \(\#\pi_{0}(\partial_{e}G)>\#\pi_{0}(G)\). 3. Let \(f:X\to\mathbb{R}^{dn}\) be a map, and \(e=(\alpha,\beta),\ e^{\prime}\) two bridges of a graph \(G\). For \(1\leq i\leq n\) and \(s\in[0,\infty)\), we set \[A^{i}_{e}(s)=\left\{\begin{array}{rl}s&\mbox{ if }i\sim_{\partial_{e}G}\alpha\\ -s&\mbox{ if }i\sim_{\partial_{e}G}\beta\\ 0&\mbox{ otherwise}\end{array}\right.\] and define \(A_{e}:[0,\infty)\to\mathbb{R}^{dn}\) \[A_{e}(s)=(A^{i}_{e}(s)v)_{1\leq i\leq n},\] where \(v=(0,1,0,\ldots,0)\) as in subsection 1.1. The _contracting homotopy \(F:X\times[0,\infty)\to\mathbb{R}^{dn}\) of \(f\) in removing \(e\) from \(G\)_ (in short, \(e\)_-contraction of \(f\) for \(G\)_) is defined by \[F(x,s)=f(x)+A_{e}(s).\] The \((e,e^{\prime})\)_-contraction_\(F^{\prime}:X\times[0,\infty)^{2}\to\mathbb{R}^{dn}\)_ _of_ \(f\) _for_ \(G\) is defined by \[F^{\prime}(x,s_{1},s_{2})=f(x)+A_{e}(s_{1})+A_{e^{\prime}}(s_{2}).\] 4. For two maps \(f,g:X\to\mathbb{R}^{dn}\), the _straight homotopy \(h:X\times I\to\mathbb{R}^{dn}\) from \(f\) to \(g\)_ is defined by \(h(x,t)=(1-t)f(x)+tg(x)\). 5. Let \(G\in\mathsf{G}([n+1])\) be a graph with exactly \(4\) connected components. Define a map \(f_{G}=(f_{1},\ldots f_{n}):\mathbb{R}^{2d}\to\mathbb{R}^{dn}\) by \[f_{i}(x,y)=\left\{\begin{array}{rl}x&\mbox{ ($i\sim_{G}1$)}\\ y&\mbox{ (otherwise)}.\end{array}\right.\] Clearly, \(f_{G}\) is \(G\)-condensed, and a vertex is a base if it satisfies the condition (ii) on a base. So, the minimum and the second to minimum vertexes of each connected component are bases. 6. For a graph \(G\in\mathsf{G}(P)\), we define subsets \(U_{G}\) and \(U^{1}_{G}\) of \(\mathbb{R}^{dn}\) by \[U_{G}= (\mathbb{R}^{dn}-\nu_{P})\cup\pi_{P}^{-1}(\cap_{(\alpha,\beta)\in E (G)}D_{\alpha\beta}),\] \[U^{1}_{G}= (\mathbb{R}^{dn}-p_{1}^{-1}p_{1}(\nu_{P}))\cup p_{1}^{-1}p_{1}( \cap_{(\alpha,\beta)\in E(G)}\pi_{P}^{-1}D_{\alpha\beta}).\] Here, \(p_{1}:\mathbb{R}^{dn}\to\mathbb{R}^{n}\) is the product of the projection to the first coordinate \(\mathbb{R}^{d}\to\mathbb{R}\). **Example 5.2**.: Set \(n=4\) and let \(G_{1}=(1,4)(2,3),G_{2}=(1,3)(2,4)\in\mathsf{G}([5])\). \(f_{G_{1}}\) (resp. \(f_{G_{2}}\)) is given by \((x,y)\mapsto(x,y,y,x)\) (resp. \((x,y)\mapsto(x,y,x,y)\)). 
Put \(e=(1,4)\) and \(e^{\prime}=(2,3)\). The \(e\)-contraction of \(f_{G_{1}}\) for \(G_{1}\) is given by \((x,y,s)\mapsto(x+sv,y,y,x-sv)\) and the \((e,e^{\prime})\)-contraction of \(f_{G_{1}}\) for \(G_{1}\) is given by \((x,y,s_{1},s_{2})\mapsto(x+s_{1}v,y+s_{2}v,y-s_{2}v,x-s_{1}v)\). \(f_{G_{1}}\) is also \(\delta_{1}G_{1}\)-condensed. \(\{1,2\}\) is a base of a connected component of \(\delta_{1}G_{1}\). The other vertexes (except for \(\{0\}\) and \(\{5\}\)) are not bases. The \(e\)-contraction of \(f_{G_{1}}\) for \(\delta_{1}G_{1}\) is given by \((x,y,s)\mapsto(x+sv,y+sv,y+sv,x-sv)\) and is different from that for \(G_{1}\) (since connected components are different). The straight homotopy \(h\) from \(f_{G_{1}}\) to \(f_{G_{2}}\) is given by \((x,y,t)\mapsto(x,y,(1-t)y+tx,(1-t)x+ty)\). This is also \(\delta_{1}G_{1}\)-condensed. its base is \(\{1,2\}\). **Lemma 5.3**.: _Let \(f:X\to\mathbb{R}^{dn}\) be a \(G\)-condensed map. We have \(f(X)\subset U_{G}\cap U^{1}_{G}\). In particular, \(f\) induces a map \(f:X^{*}\to\mathcal{T}_{G}\)._ Proof.: Let \(S\) be a connected component of \(G\), \(\alpha=\{j,j+1,\ldots,k\}\) and \(\beta=\{l,\ldots,m\}\) two vertexes of \(S\). For \(x\in X\), suppose \(f(x)\in\nu_{P}\). Write \(\pi_{P}(f(x))=(y_{\gamma})\). First suppose further that \(\alpha\) is a base of \(S\). By definitions of \(e_{P}\) and \(\nu_{P}\), for \(i\in\beta\), we have \[|f_{i}(x)-y_{\beta}|\leq\rho\frac{c_{\beta}-\min\{c_{l},c_{m}\}}{2}+\epsilon_{P}.\] Since \(f_{i}(x)\) belongs to the convex hull of \(\{f_{r}(x)\}_{r\in\alpha}\), we have \[|f_{i}(x)-y_{\alpha}|\leq\rho\frac{c_{\alpha}-\min\{c_{j},c_{k}\}}{2}+\epsilon_{ P}.\] So, we have \(|y_{\alpha}-y_{\beta}|<\rho(c_{\alpha}+c_{\beta})/2-\epsilon_{P}\leq d_{\alpha \beta}(P)\), which implies \(\pi_{P}(f(x))\in D_{\alpha\beta}\). If neither of \(\alpha\) nor \(\beta\) is a base, we take a base \(\gamma\) of \(S\). By definition of the base, we may assume \(\alpha<\gamma<\beta\) or \(\gamma<\alpha<\beta\). By the above argument, we see \[|y_{\alpha}-y_{\beta}| <\frac{\rho}{2}(c_{\alpha}+c_{\beta}+2c_{\gamma}-\min\{c_{l},c_{ m}\}-\min\{c_{j},c_{k}\})+4\epsilon_{P}\] \[<d_{\alpha\beta}(P).\] The second inequality follows from the condition on \(c_{r}\) imposed in Definition 2.3 (1). We have shown \(f(X)\subset U_{G}\). The proof of the inclusion to \(U_{G}^{1}\) is completely similar. Since \(f\) is proper, \(f\) induces the map from the one-point compactification. **Lemma 5.4**.: _Let \(Q\) be a partition, \(G\in\mathsf{G}(Q)\) a graph, and \(f:X\to\mathbb{R}^{dn}\) a \(G\)-condensed map._ 1. _Let_ \(e\) _be a bridge of_ \(G\) _such that each non-discrete connected component of_ \(\partial_{e}G\) _contains a base of some connected component of_ \(G\)_. The_ \(e\)_-contraction_ \(F\) _of_ \(f\) _is_ \(\partial_{e}G\)_-condensed._ 2. _Let_ \(e,e^{\prime}\) _be two bridges of_ \(G\) _such that each non-discrete connected component of_ \(\partial_{ee^{\prime}}G\) _contains a base of some connected component of_ \(G\)_. The_ \((e,e^{\prime})\)_-contraction_ \(F\) _of_ \(f\) _is_ \(\partial_{ee^{\prime}}G\)_-condensed._ 3. _Set_ \(P=\delta_{i}Q\)_. Suppose that_ \(\delta_{i}G\in\mathsf{G}(P)\) _and both of the_ \(i+1\)_-th and_ \(i+2\)_-th pieces of_ \(Q\) _are bases for_ \(f\)_. Then,_ \(f\) _is_ \(\delta_{i}G\)_-condensed._ Proof.: For part 1, we only have to prove that \(F\) is proper. The other conditions are clear by definition. 
Let \(\beta\) and \(\gamma\) be the vertexes incident with \(e\), \(\alpha\) a base of the connected component of \(G\) including \(e\). Say \(\alpha\not\vdash_{\alpha,G}\beta\) and \(\gamma<\beta\). Suppose \(F(x)\in D_{R}^{dn}=(D_{R}^{d})^{n}\), the product of disks of radius \(R\). For \(i\in\beta\) and \(j\in\alpha\), we have \[2R\geq|F_{i}(x)-F_{j}(x)|=|f_{i}(x)-sv-(f_{j}(x)+sv)|\geq||f_{i}(x)-f_{j}(x)|-2s|.\] This implies \(s\leq R+|f_{i}(x)-f_{j}(x)|/2\). Since the convex hulls of \(\{f_{j}(x)+sv\}_{j\in\alpha}\) and \(\{f_{j}(x)\}_{j\in\alpha}\) are congruent and the diameter of former one is \(2R\) or less. So, we have \(|f_{i}(x)-f_{j}(x)|\leq 2R\). By these inequalities, we have \(s\leq 2R\), which implies \(|f_{k}(x)|\leq 3R\) for a number \(k\) belonging to a piece included in \(S\). Thus, we have \(F^{-1}(D_{R}^{dn})\subset f^{-1}(D_{3R}^{dn})\) and conclude that \(F\) is proper. Other conditions of \(\partial_{e}G\)-condensed map is clear from the assumption. Parts 2 and 3 are similar. The proof of the following lemma is clear and omitted. **Lemma 5.5**.: _Let \(G\in\mathsf{G}(P)\) be a graph._ 1. _Let_ \(f,g:X\to\mathbb{R}^{dn}\) _be two_ \(G\)_-condensed maps which have a common base_ \(\alpha\) _in each connected component_ \(S\) _satisfying_ \(f_{i}=g_{i}\) _for each_ \(i\in\alpha\)_. Then, the straight homotopy_ \(h\) _from_ \(f\) _to_ \(g\) _is_ \(G\)_-condensed, so it induces a map_ \(h:X^{*}\wedge(I_{+})\to\mathbb{R}^{dn}\)_._ 2. _Let_ \(f:X\to\mathbb{R}^{dn}\) _be a_ \(G\)_-condensed map and_ \(e\) _an edge of_ \(G\)_. Suppose that_ \(\delta_{i}G\in\delta_{i}P\)_, the_ \(i+1\)_-th and_ \(i+2\)_-th pieces of_ \(P\) _are bases for_ \(f\)_, each non-discrete connected component of_ \(\partial_{e}G\) _contains a base of some connected component of_ \(G\)_, and_ \(e\) _is a bridge of_ \(\delta_{i}G\)_. Let_ \(F\) _(resp._ \(F^{\prime}\)_) be the_ \(e\)_-contraction of_ \(f\) _for_ \(G\) _(resp._ \(\delta_{i}G\)_.) The straight homotopy from_ \(F\) _to_ \(F^{\prime}\) _is_ \(\delta_{k}\partial_{e^{\prime}}G\)_-condensed._ 3. _Let_ \(f:X\to\mathbb{R}^{dn}\) _be a_ \(G\)_-condensed map and_ \(e,e^{\prime}\) _be two edges of_ \(G\)_. Suppose that_ \(\delta_{i}G\in\delta_{i}P\)_, the_ \(i+1\)_-th and_ \(i+2\)_-th pieces of_ \(P\) _are bases for_ \(f\)_, each non-discrete connected component of_ \(\partial_{ee^{\prime}}G\) _contains a base of some connected component of_ \(G\)_, and_ \(e\)_,_ \(e^{\prime}\) _are bridges of_ \(\delta_{i}G\)_. Let_ \(F\) _(resp._ \(F^{\prime}\)_) be the_ \((e,e^{\prime})\)_-contraction of_ \(f\) _for_ \(G\) _(resp._ \(\delta_{i}G\)_.) The straight homotopy from_ \(F\) _to_ \(F^{\prime}\) _is_ \(\delta_{i}\partial_{ee^{\prime}}G\)_-condensed._ **Notation and terminology :** As written in Lemmas 5.3, 5.4, and 5.5, we use the same symbol for the induced map between the pointed spaces as the original condensed map. We also use the terms 'contraction' and'straight homotopy' for the induced map. The following lemma is clear. **Lemma 5.6**.: _Let \(f=(f_{1},\ldots,f_{n}):X\to\mathbb{R}^{dn}\) be a proper map, \(x\in X\), and \(P\in\mathrm{P}_{n}\).Put \(f_{0}(x)=(-1+\rho c_{0}/2)u\) and \(f_{n+1}(x)=(1-\rho c_{n+1}/2)u\). 
For the induced map \(f:X^{*}\to\mathcal{T}_{\emptyset_{P}}\), if \(f(x)\neq*\), the following inequalities hold for any \(\alpha\in P\), numbers \(1\leq k\leq n\), and \(i,j\in\alpha\) with \(i<j\):_ \[-1+\rho c_{\leq k}-2\epsilon_{P}<_{1}f_{k}(x)<_{1}1-\rho c_{\geq k }+2\epsilon_{P},\] \[\rho c_{ij}-2\epsilon_{P}<_{1}f_{j}(x)-f_{i}(x)<_{1}\rho c_{ij}+2 \epsilon_{P}.\] _In particular if \(f_{j}(x)\leq_{1}f_{i}(x)\) for \(i,j\in\alpha\) with \(i<j\), \(f(x)=*\)._ **Lemma 5.7**.: _Let \(P\to Q\in\mathrm{P}_{n}\) be a subdivision, \(G\in\mathsf{G}(Q)\) a graph with an edge \(e=(\alpha,\beta)\)._ 1. _Let_ \(f:X\to\mathbb{R}^{dn}\) _be a proper map whose image is contained in_ \(U_{G}^{1}\)_. Suppose that either of the following two conditions holds._ 1. \(\alpha\cup\beta\) _is included in a piece of_ \(P\)_._ 2. _At least one of_ \(\alpha\)_,_ \(\beta\) _is included in either of the minimum or maximum of_ \(P\)_._ _Then, the induced map_ \(f:X^{*}\to\mathcal{T}_{\emptyset_{P}}\) _is_ \(*\) _(the constant map to the base point). In particular, if_ \(f\) _is the_ \(e\)_-contraction of some_ \(G\)_-condensed map for_ \(G\)_, or more generally,_ \(f_{l}=_{1}g_{l}\) _(_\(1\leq l\leq n\)_) for some_ \(G\)_-condensed map_ \(g\)_, the induce map to_ \(\mathcal{T}_{\emptyset_{P}}\) _is_ \(*\)_._ 2. _Let_ \(f:X\to\mathbb{R}^{dn}\) _be a_ \(G\)_-condensed map. Let_ \(\gamma\) _be a piece of_ \(Q\) _with_ \(\beta>\alpha<\gamma\)_, Suppose that_ \(\alpha\) _is a base for_ \(f\)_,_ \(\gamma\sim_{G}\alpha\)_, and_ \(\beta\cup\gamma\) _is included in a piece of_ \(P\)_. The_ \(e\)_-contraction of_ \(f\) _followed by the collapsing map_ \(\delta_{P,Q}|_{\mathcal{T}_{\partial_{e}G}}:\mathcal{T}_{\partial_{e}G}\to \mathcal{T}_{\emptyset_{P}}\) _is the constant map to_ \(*\)_._ Proof.: We shall show part 1 for the condition (a). For simplicity, we assume \(\alpha<\beta\) and \(\alpha\) is a base of \(f\). Let \(i\) (resp. \(j\)) be the minimum of \(\beta\) (resp. the maximum of \(\alpha\). Suppose \(f(x)\in\nu_{P}\) for some \(x\in X\). By definitions of \(e_{P}\) and \(\nu_{P}\), we have \(f_{i}(x)-f_{j}(x)>_{1}\rho(c_{i}+c_{j})/2-2\epsilon_{P}\). Since \(f(x)\in U_{G}^{1}\cap\nu_{P}\), by an argment similar to the proof of Lemma 3.2, we have \[f_{i}(x)-f_{j}(x)<_{1}\rho(c_{i}+c_{j})/2-\epsilon_{Q}+2\epsilon_{P}<\rho(c_{i }+c_{j})/2-2\epsilon_{P}.\] These inequalities contradict and we have proved part 1. The proofs of other parts are similar. **Definition 5.8**.: Let \(F=(F_{1},\ldots,F_{n}):X\to\mathbb{R}^{dn}\) be a map. For \(\epsilon=+\) or \(-\), we define a map \(F^{ie}:X\times[0,\infty)\to\mathbb{R}^{dn}\) called the \((i,\epsilon)\)_-contraction of \(f\)_ as follows. \[F^{ie}_{k}(x)=\left\{\begin{array}{rl}F_{i}(x)+\epsilon su&(k=i),\\ F_{i+1}(x)-\epsilon su&(k=i+1),\\ F_{k}(x)&(\text{otherwise}).\end{array}\right.\] On the right hand side, \(\epsilon=\pm\) are regaded as \(\pm 1\) respectively. The \((i,\epsilon)\)-contraction is not necessarily \(G\)-condensed it induces a map to the Thom space. **Lemma 5.9**.: _Let \(f=(f_{1},\ldots,f_{n}):X\to\mathbb{R}^{dn}\) be a \(G\)-condensed map for a graph \(G\in\mathsf{G}(P)\), \(e,e^{\prime}\) bridges of \(G\) and \(i\) a number with \(1\leq i\leq n-1\). Say \(i\) belongs to the \(k+1\)-th piece of \(P\). Suppose that \(i\sim_{G}i+1\) and \(\delta_{k}\partial_{e}G\) (resp. \(\delta_{k}\partial_{ee^{\prime}}G\)) belongs to \(\mathsf{G}(\delta_{k}P)\). 
If any base of the component including \(i\) includes \(i^{\prime}=i\) or \(i+1\), suppose further that for any \(x\in X\) there exist \(l,m\neq i,i+1\) such that_ \[|f_{i^{\prime},1}(x)-f_{l,1}(x)|\leq|f_{m,1}(x)-f_{l,1}(x)|,\] _where the subscript \(1\) denotes the first coordinate. Let \(F\) be the \(e\)-contraction (resp. \((e,e^{\prime})\)-contraction) of \(f\) for \(G\). The \((i,\epsilon)\)-contraction \(F^{ie}\) of \(F\) is proper and its image is contained in \(U_{\delta_{k}\partial_{e}G}\cap U_{\delta_{k}\partial_{e}G}^{1}\) (resp. \(U_{\delta_{k}\partial_{e^{\prime}}G}\cap U_{\delta_{k}\partial_{e^{\prime}}G}^{1}\)). In particular, \(F^{ie}\) induces a map \(F^{ie}:X^{*}\wedge[0,\infty]\to\mathcal{T}_{\delta_{k}\partial_{e}G}\) (resp. \(F^{ie}:X^{*}\wedge[0,\infty]^{\wedge 2}\to\mathcal{T}_{\delta_{k}\partial_{e^{\prime}}G}\))._ Proof.: Put \(F=(F_{1},\ldots F_{n})\) and \(F^{ie}=F^{\prime}=(F^{\prime}_{1},\ldots,F^{\prime}_{n})\). Similarly to the proof of Lemma 5.3, we can see that \(F^{\prime}\) is proper. (The extra assumption for the case that any base includes one of \(i,i+1\) is used here). We shall show \(F^{\prime}(X\times[0,\infty)^{2})\subset U_{\delta_{k}G}\) in the case that \(F\) is the \(e\)-contraction and there is a base \(\beta\) which does not include \(i,i+1\). Suppose \(F^{\prime}(\tilde{x})\in\nu_{P}\) for \(\tilde{x}=(x,s_{1},s_{2})\in X\times[0,\infty)^{2}\). Put \(\pi_{P}(F^{\prime}(\tilde{x}))=(y_{\gamma})_{\gamma}\). Let \(\alpha\in\delta_{k}P\) be the piece including \(i,i+1\) By definition, we have \[|y_{\alpha}-\frac{1}{2}(F^{\prime}_{i}(x)+F^{\prime}_{i+1}(x))|<\frac{\rho c_ {\alpha}}{2}-50\epsilon.\] Clearly, \(\frac{1}{2}(F^{\prime}_{i}(x)+F^{\prime}_{i+1}(x))=\frac{1}{2}(F_{i}(x)+F_{i+1}(x))\) and the first coordinate of \(\frac{1}{2}(F_{i}(x)+F_{i+1}(x))\) is in the image of projection of the convex hull of \(\{F_{j}(x)\}_{j\in\beta}\) to the first coordinate, so \[|y_{\beta 1}-\frac{1}{2}(F_{i1}(x)+F_{i+1,1}(x))|<\frac{\rho c_{\beta}}{2}-50\epsilon.\] By these (in)equalities, we see \(|y_{\alpha 1}-y_{\beta 1}|<\rho(c_{\alpha}+c_{\beta})/2-100\epsilon\). Since \(\alpha\simeq_{G}\beta\), there exists \(i_{1}\in\alpha\) such that \(F_{i_{1}}(x)\) is in the convex hull of \(\{F_{i}(x)\}_{l\in\beta}\). We also have \(F^{\prime}_{i_{2},2}(x)=F_{i_{2},2}(x)\) for any \(i_{2}\) by definition, where the extra subscript means the \(n-1\)-tuple of the second to \(n\)-th coordinate. So we see \(|y_{\alpha 2}-y_{\beta 2}|<4\epsilon_{P}\). Thus, we have shown \(|y_{\alpha}-y_{\beta}|<d_{\alpha\beta}(P)\). Part 1 of the following lemma is the reason why we need both signs in \((i,\pm)\)-contractions. **Lemma 5.10**.: _Let \(f=(f_{1},\ldots,f_{n}):X\to\mathbb{R}^{dn}\) be a \(G\)-condensed map for a graph \(G\in\mathsf{G}(P)\), and \(i,j\) two numbers such that \(1\leq i,j\leq n-1\). Say \(i\) and \(j\) belongs to the \(k\)-th and \(l\)-th pieces of \(P\), respectively. Suppose \(i\sim_{\delta_{kl}G}j\). Let \(e\) be a bridge of \(G\) such that \(\delta_{kl}\partial_{e}G\in\mathsf{G}(\delta_{kl}P)\), and \(F\) the \(e\)-contraction of \(f\) for \(G\)._ 1. _Suppose that either of the following conditions holds._ 1. \(f_{i}=f_{j}\) _and_ \(f_{i+1}=f_{j+1}\)_, or_ 2. \(f_{i}=f_{j+1}\) _and_ \(f_{i+1}=f_{j}\) _We put_ \(\epsilon^{\prime}=-\epsilon\) _(resp._ \(\epsilon\)_) in the case (a) (resp. (b)). 
The straight homotopy_ \(h\) _from_ \(F^{ie}\) _to_ \(F^{j\epsilon^{\prime}}\) _is proper and its image is contained in_ \(U_{\delta_{kl}\partial_{e}G}\cap U^{1}_{\delta_{kl}\partial_{e}G}\)_. So,_ \(h\) _induces a map_ \(h:X^{*}\wedge[0,\infty]^{\wedge 2}\wedge(I_{+})\to\mathcal{T}_{\delta_{kl} \partial_{e}G}\)_._ 2. _Suppose that_ \(i,i+1,j,j+1\) _is contained in a single connected component of_ \(G\) _which has a base which does not contain any of the four numbers. For any pair_ \(\epsilon,\epsilon^{\prime}=\pm\)_, the straight homotopy_ \(h\) _from_ \(F^{ie}\) _to_ \(F^{j\epsilon^{\prime}}\) _is proper and its image is contained in_ \(U_{\delta_{kl}\partial_{e}G}\cap U^{1}_{\delta_{kl}\partial_{e}G}\) _. So,_ \(h\) _induces a map_ \(h:X^{*}\wedge[0,\infty]^{\wedge 2}\wedge(I_{+})\to\mathcal{T}_{\delta_{kl} \partial_{e}G}\)_._ _A similar claim holds for \((e,e^{\prime})\)-contraction._ Proof.: The proof is similar to that of Lemma 5.9 so we omit details. The choice of \(\epsilon^{\prime}\) in part 1 ensures \(h\) is proper. ## 6. Computation of differential in characteristic \(2\) In this section, we set \(n=4\) and \(d=2\) and assume that the base field \(\mathsf{k}\) is of characteristic \(2\). As is well-known, cohomology \(H^{*}(\mathit{Conf}_{n}(\mathbb{R}^{2}))\) of the ordered configuration space of \(n\) points is generated by elements \(g_{ij}\) (\(1\leq i<j\leq n\)) which satisfy \((g_{ij})^{2}=0\) and the \(3\)-term relation. This cohomology appears in the \(E_{1}\)-page of Sinha's spectral sequence. The element \(g_{14}g_{23}+g_{13}g_{24}+g_{12}g_{34}\) is a cycle for \(d_{1}\) in characteristic \(2\). We consider \(d_{2}\) of the corresponding element in \(\bar{\mathbb{E}}\). More precisely, we consider its projection to the truncated spectral sequence \(tr_{1}\bar{\mathbb{E}}\). Our computation is based on the well-known description of the differential of spectral sequence associated to a double complex in terms of zigzag of horizontal and vertical differentials. We define three graphs in \(\mathsf{G}([5])\) as follows: \[G_{1}=(1,4)(2,3),\qquad G_{2}=(1,3)(2,4),\qquad G_{3}=(1,2)(3,4).\] See Figure 1 and Example 4.2 for explanation of the figure. Throughout this section, \(G_{i}\) denotes one of these graphs (not those in section 7). **Definition 6.1**.: For \(G=G_{1},G_{2}\), and \(G_{3}\), put \(f=f_{G}\) and \(E(G)=\{e_{1}<e_{2}\}\) (see Definition 5.1). For \(j=1,2\), let \(f_{j}\) be the \(e_{j}\)-contraction of \(f\) for \(G\). Set \[c(G)=f(w_{0})+f_{1}(w_{1})+f_{2}(w_{1})\quad\in\bar{C}_{4}(\mathcal{T}_{G})\oplus \bar{C}_{5}(\mathcal{T}_{\partial_{1}G})\oplus\bar{C}_{5}(\mathcal{T}_{\partial_ {2}G})\] Figure 1. graphs in sections 3, 5 and 6 and corresponding maps \(f_{G}\) Here, by our convention, \(f_{j}(w_{1})\) denotes the pushforward of \(w_{1}\) by \(f_{j}:S^{4}\wedge[0,\infty]\to\mathcal{T}_{\partial_{i}G}\) and \(f(w_{0})\) is a similar abbreviation. See Figure 2 where the dotted chords denote the removed edges and the signs denote those of the term \(sv\) added to the corresponding component. **Example 6.2**.: Let \(G=G_{1}\). See Example.5.2 for the concrete formulas of \(f\) and \(f_{1}\). \(f_{2}\) is given by \((x,y,s)\mapsto(x,y+sv,y-sv,x)\). Set \(D=d+\partial\). **Lemma 6.3**.: _For \(G=G_{1},G_{2},\) and \(G_{3}\), \(c(G)\) is a cycle in \((tr_{1}\mathbb{T},D)=(tr_{1}\check{\mathbb{E}}_{0},d_{0})\)._ Proof.: Under the notations of Definition 6.1, \(f_{j}\) is \(\partial_{j}G\)-condensed by Lemma 5.4. 
So by Lemma 5.3, each pushforward in the definition of \(c(G)\) is well-defined. Clearly, we have \(df(w_{0})=0\) and \(\partial_{j}f(w_{0})=df_{j}(w_{1})\). These equalities imply the claim. 
**Definition 6.4**.: For \((G,H,i)=(G_{1},G_{2},3),(G_{1},G_{2},1)\) and \((G_{2},G_{3},2)\), we shall define a bounding chain of \(\delta_{i}(c(G)+c(H))\). Set \(f=f_{G}\) and \(f^{\prime}=f_{H}\). If the \(i\)-th components of \(f\) and \(f^{\prime}\) are identical, \(\psi\) denotes the straight homotopy from \(f\) to \(f^{\prime}\). Otherwise, \(\psi\) is the straight homotopy from \(f\) to \(f^{\prime}\circ T\), where \(T:\mathbb{R}^{4}\to\mathbb{R}^{4}\) is the transposition \(T(x,y)=(y,x)\). Put \(E(G)=\{e_{1}<e_{2}\}\) and \(E(H)=\{e^{\prime}_{1}<e^{\prime}_{2}\}\). We regard \(E(\delta_{i}G)=E(G)\) and \(E(\delta_{i}H)=E(H)\) in the way compatible with the incidence maps. Let 1. \(\lambda_{j}\) be the straight homotopy from the \(e_{j}\)-contraction of \(f\) for \(G\) to the \(e_{j}\)-contraction of \(f\) for \(\delta_{i}G\), 2. \(\lambda^{\prime}_{j}\) the straight homotopy from the \(e^{\prime}_{j}\)-contraction of \(f^{\prime}\) for \(H\) to the \(e^{\prime}_{j}\)-contraction of \(f^{\prime}\) for \(\delta_{i}H\), 3. \(\psi\) the homotopy defined above, and 4. \(\psi_{j}\) the \(e_{j}\)-contraction of \(\psi\) for \(\delta_{i}G\). Set \[c(G,H,i)=\psi(w_{01})+\sum_{j=1,2}(\lambda_{j}+\lambda^{\prime}_{j}+\psi_{j})(w_{11})\quad\in\ \bar{C}_{*}(\mathcal{T}_{\delta_{i}G})\oplus\bigoplus_{j=1,2}\bar{C}_{*}(\mathcal{T}_{\delta_{i}\partial_{j}G}).\] 
**Lemma 6.6**.: _Under the notations of Definition 6.4, we have \(Dc(G,H,i)=\delta_{i}c(G)+\delta_{i}c(H)\). Moreover, if we set \(C=c(G_{1},G_{2},3)+c(G_{1},G_{2},1)+c(G_{2},G_{3},2)\), we have \(D(C)=\delta(c(G_{1})+c(G_{2})+c(G_{3}))\)._ 
Proof.: Clearly, we have \(\delta_{i}G=\delta_{i}H\). This equation and Lemmas 5.4 and 5.5 imply that \(\psi\) is \(\delta_{i}G\)-condensed. \(\psi_{j}\) and \(\lambda_{j}\) are \(\delta_{i}\partial_{j}G\)-condensed and \(\lambda^{\prime}_{j}\) is \(\delta_{i}\partial_{j}H\)-condensed similarly. By Lemma 5.3, each pushforward in the definition of \(c(G,H,i)\) is well-defined. Concatenation of \(\psi_{j}\), \(\lambda_{j}\), and \(\lambda^{\prime}_{j^{\prime}}\) gives a homotopy between the \(e_{j}\)-contraction of \(f\) for \(G\) and the \(e^{\prime}_{j^{\prime}}\)-contraction of \(f^{\prime}\) or \(f^{\prime}\circ T\) for \(H\), where \(j^{\prime}=j\) if \(\psi\) does not alternate edges, and \(j^{\prime}=3-j\) otherwise. For example, if \((G,H,i,j)=(G_{1},G_{2},1,1)\), the concatenation gives a homotopy from the \(e_{1}\)-contraction of \(f\) to the \(e^{\prime}_{2}\)-contraction of \(f^{\prime}\).
The concatenation induces a homotopy between maps to the Thom space \(\mathcal{T}_{\delta_{i}\partial_{j}G}\) since the maps are \(\delta_{i}\partial_{j}G\)-condensed. Since \(\lambda_{j}|_{s=0}\) and \(\lambda^{\prime}_{j}|_{s=0}\) are constant for the variable \(t\in I\), the pushforwards by these maps are zero in the normalized chain complex. The rest of boundaries cancel with each other. Thus, we have the first equation. The chains \(\delta_{j}(c(G_{k}))\) which are not covered by \(c(G,H,i)\) are zero by Lemma 5.7 (1). For example, \(\delta_{2}c(G_{1})=0\) by the part (a) of it. The equation of \(D(C)\) follows from the first equation and this observation. While we give a rigorous proof below, we shall explain the reason why \(\delta^{\prime}_{3}\circ\psi_{j}\) for \((G_{1},G_{2},1)\) collapses to the basepoint and \(\delta^{\prime}_{1}\circ\psi_{j}\) for \((G_{1},G_{2},3)\) does not intuitively (\(\delta^{\prime}_{i}\) is the collapsing map in Definition 4.3). Let \(\psi^{1}_{j}\) (resp.\(\psi^{2}_{j}\)) be \(\psi_{j}\) for \((G_{1},G_{2},1)\) (resp. \((G_{1},G_{2},3)\)). The homotopy \(\psi^{1}_{j}\) (resp. \(\psi^{2}_{j}\)) alternates \(x\) and \(y\) in the third and fourth components (resp. the first and second components). See Figure 6. If the image of a point \((x,y,s,t)\) by these maps is not the basepoint, the image must be contained in \(\nu_{P}\) for \(P=\{\{0\},\{12\},\{34\},\{5\}\}\). So the difference between the first coordinates of the first and second components is approximately \(\rho(c_{1}+c_{2})/2\pm\epsilon_{P}\) and that between the third and fourth components is approximately \(\rho(c_{3}+c_{4})/2\pm\epsilon_{P}\). In the figure, \(x\) and \(y\) must be in the interior of the solid circles on the right and left, respectively. (The radius of these circles represents the width of tubular neighborhood \(\nu_{P}\) and we ignore addition of \(\pm sv\) since the second coordinate is not important.) The components moved by the homotopy must be in the interior of the dotted circles. It is seen that this is not the case for \(\delta^{\prime}_{3}\circ\psi^{1}_{j}\) while the moving points pass through the dotted circle for \(\delta^{\prime}_{1}\circ\psi^{2}_{j}\) as \(c_{1}+c_{2}<<c_{3}+c_{4}\). **Lemma 6.7**.: _Let \(C\) be the chain defined in Lemma 6.6, and \(\psi\), \(\psi_{j}\), \(\lambda_{j}\), and \(\lambda^{\prime}_{j}\) the maps given in Definition 6.4 for \((G,H,i)=(G_{1},G_{2},3)\). Put \(G_{0}=\delta_{13}\partial_{1}G_{1}\). \(G_{0}\) is the graph with only one edge \((\{12\},\{34\})\)._ 1. _Each pushforward which appears as the terms of_ \(c(G,H,i)\) _are annihilated by_ \(\delta_{k}\) _unless_ \((G,H,i,k)=(G_{1},G_{2},3,1)\)_. The terms of_ \(c(G_{1},G_{2},3)\) _except for_ \(\psi_{j}(w_{11})\) _is annihilated by_ \(\delta_{1}\)_._ 2. \(\delta C=\delta_{1}(\psi_{1}(w_{11})+\psi_{2}(w_{21}))\) _is a fundamental cycle of_ \(\mathcal{T}_{G_{0}}\)_._ Proof.: For part 1, we consider the case of \((G,H,i)=(G_{1},G_{2},1)\). We have \(\delta_{1}\psi(w_{01})=0\) as \(\delta_{1}\partial_{1}G\) has a loop. The graph of \(\delta^{\prime}_{1}\circ\lambda_{2}\) is \(\delta_{1}\partial_{2}G\). While this does not have a loop nor double edges, by Lemma 5.7 (1), we have \(\delta^{\prime}_{1}\circ\lambda_{2}=*\). We also have \(\delta^{\prime}_{2}\circ\psi_{j}=*\) by (2) of the same lemma. The other terms vanish similarly. We shall show part 2. 
By definition, \(\mathcal{T}_{G_{0}}\) is a Thom space associated to the disk bundle \(\nu_{P}|_{N}\), where \(N\) is a open tubular neighborhood of the diagonal \(\Delta\) in the product of disks \((D^{2})^{2}\) (with different radius). Precisely speaking, in the notation of Definition 3.1, \((D^{2})^{2}=\mathbb{R}^{4}-E_{P}\), and \(N=D_{\alpha\beta}\cap(\mathbb{R}^{4}-E_{P})\), where \(\alpha=\{12\},\beta=\{34\}\) and \(P=\{\{0\},\alpha,\beta,\{5\}\}\). So, \(\mathcal{T}_{G_{0}}\) is homeomorphic to \(S^{4}\wedge S^{2}\wedge(IntD_{+}^{2})\), where \(S^{4}\) and \(S^{2}\times\{0\}\subset S^{2}\wedge(IntD_{+}^{2})\) correspond to a fiber and \(\Delta\), respectively. We consider a retract to \(S^{4}\wedge(S^{2}\times\{0\})\cong Th(\nu_{P}|_{\Delta})\) and show that the composition of \(\psi_{j}\) with this retract is of degree one. Write \(\nu=\nu_{P}|_{N}\). The orthogonal projection \(r:N\to\Delta\) induces the bundle map \(\tilde{r}:\nu\to\nu|_{\Delta}\) which is given by the parallel transport taking the center to the center on each fiber. This map induces a map \(\tilde{r}:\mathcal{T}_{G_{0}}\to Th(\nu|_{\Delta})\). We consider \(\nu\) and \(\nu|_{\Delta}\) as subspaces of \(\mathbb{R}^{8}\). Put \[F_{j}:=\tilde{r}\circ\psi_{j}|_{(\psi_{j})^{-1}(\nu)}:(\psi_{i}^{1})^{-1}(\nu )\to\nu|_{\Delta}\qquad\text{for}\qquad j=1,2.\] We shall show \(F_{j}\) is a monomorphism and \(Im(F_{1})\cup Im(F_{2})=\nu|_{\Delta}\) and \(Im(F_{1})\cap Im(F_{2})=Im(F_{1})|_{s=0}=Im(F_{2})|_{s=0}\), where \(s\in[0,\infty)\) is the variable in the \(e_{j}\)-contraction. These claims imply the lemma. By definition, \(\nu\) is the tubular neighborhood of the map \[e_{P}:(a,b)\mapsto\left(a-\frac{\rho}{2}c_{2}u,\ a+\frac{\rho}{2}c_{1}u,\ b- \frac{\rho}{2}c_{4}u,\ b+\frac{\rho}{2}c_{3}u\right).\] The projection \(\pi_{P}\) of \(\nu\) sends \((c,d,e,f)\in\mathbb{R}^{8}\) to the point \((a,b)\) which minimize the distance \(|(c,d,e,f)-e_{P}(a,b)|\). By elementary calculation, the point is given by \[(a,b)=\left(\frac{c+d}{2}+\frac{\rho}{4}(c_{2}-c_{1})u,\ \frac{e+f}{2}+\frac{ \rho}{4}(c_{4}-c_{3})u\right).\] Similarly, we see that \(r:N\to\Delta\) is given by \(r(a,b)=(a+b)/2\). We shall verify \(F_{1}\) is a monomorphism. Since \(\psi_{1}(x,y,s,t)=((1-t)x+ty+sv,tx+(1-t)y-sv,x-sv,y-sv)\), we have \[r\circ\pi\circ\psi_{1}(x,y,s,t)=\frac{x+y}{2}-\frac{sv}{2}+\frac{\rho}{8}(c_{2 }+c_{4}-c_{1}-c_{3})u.\] We denote the right hand side by \(w\). For simplicity, we move the fiber of \(\nu\) over \(\pi_{P}(\psi_{1}(x,y,s,t))\) by the parallel transport which sends its center to \(0\). By this move, \(\psi_{1}(x,y,s,t)\) is sent to \[\psi_{1}(x,y,s,t)-e_{P}(\pi_{P}(\psi(x,y,s,t))=(p_{1},-p_{1},q,-q)\] where \[p_{1}=(t-1/2)(y-x)+sv+\frac{\rho}{4}(c_{1}+c_{2})u,\ q=\frac{-1}{2}(y-x)+\frac {\rho}{4}(c_{3}+c_{4})u.\] Similarly, If \(F_{2}(x,y,s,t)\) is in the fiber over a point sent to \(w\) by \(r\), the point is given by \((p_{2},-p_{2},q,-q)\) where \[p_{2}=(t-1/2)(y-x)-sv+\frac{\rho}{4}(c_{1}+c_{2})u\] The fiber over \(\pi_{P}\) is a disk of radius \(\epsilon_{P}\). It is enough to show there exists a unique \((x,y,s,t)\) such that \(q^{\prime}=q\) and \((p^{\prime}=p_{1}\) or \(p^{\prime}=p_{2})\) for given point \((p^{\prime},q^{\prime},w)\) with \(|(p^{\prime},q^{\prime})|\leq\epsilon_{P}\), and for the point \((x,y,s,t)\), \(p^{\prime}=p_{1}=p_{2}\) holds if and only if \(s=0\). Suppose that \((p^{\prime},q^{\prime})=(p_{j},q)\). 
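Note that \(p_{1}\) and \(p_{2}\) differ only by \(2sv\), i.e. only in their second coordinates, so their first coordinates coincide; this is why the formulas below are the same for \(j=1,2\).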
By the above formulas for \(p_{j}\) and \(q\), for both of \(j=1,2\) we have \[t=\frac{{p^{\prime}}^{1}-{q^{\prime}}^{1}+\rho(c_{3}+c_{4}-c_{1}-c_{2})/4}{-2{q ^{\prime}}^{1}+\rho(c_{3}+c_{4})/2},\] where \({p^{\prime}}^{1}\) and \({q^{\prime}}^{1}\) denote the first components o \(p^{\prime}\) and \(q^{\prime}\), respectively. The assumptions on \(c_{j}\) and \(|(p^{\prime},q^{\prime})|\) ensure \(0<t<1\). Similar computation shows that \(s>0\) holds for exactly one of \(p_{1}\), \(p_{2}\) or otherwise, \(s=0\) holds for both of \(p_{1}\) and \(p_{2}\). Thus we have proved the claim. **Theorem 6.8**.: _In dimension \(d=2\) and over a field of characteristic \(2\), there exists an element \(g\in\mathbb{E}_{2}^{-2,2}\) satisfying \(d_{2}(g)\neq 0\)._ Proof.: By degree reason, we may consider \(\bar{\mathbb{E}}_{r}\) instead of \(\mathbb{E}_{r}\). By Lemma 4.6, we see the class \(g^{\prime}\) represented by \(\sum_{1\leq i\leq 3}c(G_{i})\) is lifted to a class \(g\in\bar{\mathbb{E}}_{1}^{-2,2}\). By Lemma 6.6, \(d_{1}g^{\prime}=0\). Again by Lemma 4.6, \(d_{1}g=0\). By Lemma 6.6 and 6.7, we have \(d_{2}g^{\prime}\neq 0\), and we have \(d_{2}g\neq 0\) ## 7. Computation of differential in characteristic \(3\) Throughout this section we set \(n=5\) and \(d=2\) and assume that \(\mathsf{k}\) is a field of characteristic \(3\). We compute \(d_{3}\) of the element in \(tr_{1}\tilde{\mathbb{E}}\) corresponding to \(-g_{13}g_{23}g_{45}+g_{14}g_{24}g_{35}+g_{14}g_{25}g_{34}+g_{15}g_{24}g_{34} \in H^{*}(Conf_{5}(\mathbb{R}^{2}))\). (In the integral coefficient, this element does not represents a cycle for \(d_{1}\).) The computation is similar to the one in section \(6\). We define four graphs in \(\mathsf{G}([6])\) as follows: \[G_{1}=(1,3)(2,3)(4,5),\quad G_{2}=(1,4)(2,4)(3,5),\quad G_{3}=(1,4)(2,5)(3,4), \quad G_{4}=(1,5)(2,4)(3,4),\] see Figure 5. Throughout this section, \(G_{i}\) (\(1\leq i\leq 4\)) denotes one of these graphs. **Definition 7.1**.: Let \(G\) be one of \(G_{i}\). Put \(E(G)=\{e_{1}<e_{2}<e_{3}\}\) and \(f=f_{G}:\mathbb{R}^{4}\to\mathbb{R}^{10}\), see Definition 5.1. Let \(f_{j}\) be the \(e_{j}\)-contraction of \(f\), \(f_{jk}\) the \((e_{j},e_{k})\)-contraction of \(f\). We define a chain \(c(G)\) by \[c(G)=f(w_{0})+\sum_{j=1}^{3}(-1)^{j}f_{j}(w_{1})+\sum_{1\leq j< k\leq 3}(-1)^{j+k+1}f_{jk}(w_{2}).\] \[\in\quad\bar{C}_{*}(\mathcal{T}_{G})\oplus\bigoplus_{j}\bar{C}_{* }(\mathcal{T}_{\partial_{j}G})\oplus\bigoplus_{j<k}\bar{C}_{*}(\mathcal{T}_{ \partial_{jk}G})\] Set \(D=d+(-1)^{*}\partial\), where \(*\) is the singular degree. The following lemma is obvious. **Lemma 7.2**.: \(c(G)\) _is a cycle in \((tr_{1}\tilde{\mathbb{E}}_{0},d_{0})\). _ ### First bounding chain **Definition 7.3**.: Let \((G,H,i)\) be one of \((G_{1},G_{2},3),(G_{2},G_{3},2),(G_{2},G_{3},4),(G_{3},G_{4},1)\), and \((G_{4},G_{3},4)\). Set \(f=f_{G}\), \(f^{\prime}=f_{H}\), and \(E(G)=\{e_{1}<e_{2}<e_{3}\}\). We identify edge sets through the bijection \(E(G)\cong E(\delta_{i}G)\cong E(\delta_{i}H)\cong E(H)\) commuting with the incidence maps. (By this convention, \(e_{j}\) does not necessarily represent the \(j\)-th edge of \(E(H)\) or \(E(\delta_{i}H)\).) If the \(i\)-th components of \(f\) and \(f^{\prime}\) are identical, \(\psi\) denotes the straight homotopy from \(f\) to \(f^{\prime}\). Otherwise, \(\psi\) is the straight homotopy from \(f\) to \(f^{\prime}\circ T\), where \(T:\mathbb{R}^{4}\to\mathbb{R}^{4}\) is the transposition \(T(x,y)=(y,x)\). Let 1. \(\psi_{j}\) (resp. 
\(\psi_{jk}\)) be the \(e_{j}\)-contraction (resp. \((e_{j},e_{k})\)-contraction) of \(\psi\) for \(\delta_{i}G\), 2. \(\lambda_{j}\) (resp. \(\lambda_{jk}\)) the straight homotopy from the \(e_{j}\)-contraction (resp. \((e_{j},e_{k})\)-contraction) of \(f\) for \(G\) to the \(e_{j}\)-contraction (resp. \((e_{j},e_{k})\)-contraction) of \(f\) for \(\delta_{i}G\), and 3. \(\lambda^{\prime}_{j}\) (resp. \(\lambda^{\prime}_{jk}\)) the straight homotopy from the \(e_{j}\)-contraction (resp. \((e_{j},e_{k})\)-contraction) of \(f^{\prime}\) for \(H\) to \(e_{j}\)-contraction (resp. \((e_{j},e_{k})\)-contraction) of \(f^{\prime}\) for \(\delta_{i}H\). Then, we set \[c(G,H,i):=\psi(w_{01})+\sum_{1\leq j\leq 3}(-1)^{j+1}(\psi_{j}+\lambda_{j}- \lambda^{\prime}_{j})(w_{11})+\sum_{1\leq j<k\leq 3}(-1)^{j+k+1}(\psi_{jk}+ \lambda_{jk}-\lambda^{\prime}_{jk})(w_{21}),\] Here, we compose \(\psi_{j}\) with the transposition of \([0,\infty]\wedge(I_{+})\) implicitly since definition of \(\psi_{j}\) put \([0,\infty]\) at the rightmost component. \(\psi_{jk}\) is also composed with the transposition \((I_{+})\wedge[0,\infty]^{\wedge 2}\cong[0,\infty]^{\wedge 2}\wedge(I_{+}),\ (t,s_{1},s_{2}) \mapsto(s_{1},s_{2},t)\). Throughout this section, definitions of chains are understood to include similar implicit transposition. **Lemma 7.4**.: _We have_ \[Dc(G_{1},G_{2},3) =-\delta_{3}c(G_{1})+\delta_{3}c(G_{2}),\] \[Dc(G_{2},G_{3},2) =-\delta_{2}c(G_{2})-\delta_{2}c(G_{3}),\] \[Dc(G_{2},G_{3},4) =-\delta_{4}c(G_{2})+\delta_{4}c(G_{3}),\] \[Dc(G_{3},G_{4},1) =-\delta_{1}c(G_{3})-\delta_{1}c(G_{4}),\] \[Dc(G_{4},G_{3},4) =-\delta_{4}c(G_{4})+\delta_{4}c(G_{3}).\] Figure 5. graphs in section \(7\) Proof.: The proof is similar to that of Lemma 6.6 with some care about signs. Let \((G,H,i),f,\) and \(f^{\prime}\) be as in Definition 7.3. \(\delta_{i}:E(G)\to E(\delta_{i}(G))\) preserves the order of edges so the restriction of each map to \(0\in[0,\infty)\) cancels with Cech differential of another map. Let \(f_{j}\) and \(f^{\prime}_{j}\)(resp. \(f_{jk}\), and \(f^{\prime}_{jk}\)) denotes the \(e_{j}\)-(resp. \((e_{j},e_{k})\)-) contractions of \(f\) and \(f^{\prime}\) for \(G\) and \(H\), respectively. The concatenation of \(\psi_{j}\), \(\lambda_{j}\), and \(\lambda^{\prime}_{j}\) defines a homotopy from \(f_{j}\) to \(f^{\prime}_{j}\) possibly composed with a transposition. We have a similar homotopy for \(\psi_{jk},\lambda_{jk}\), and \(\lambda^{\prime}_{jk}\). For \((G,H,i)=(G_{1},G_{2},3)\), the bijection \(E(G)\cong E(H)\) in Definition 7.3 preserves the order. Straightforward computation shows the first equation. The proof of the third and fifth equations is similar. For the second equation, consider the case of \((G,H,i)=(G_{2},G_{3},2)\). The bijection alternate second and third edges, so the homotopy gives the terms of \(f^{\prime}_{j}\) and \(f^{\prime}_{jk}\) the signs opposite to \(c(H)\) except for the terms of \(f^{\prime}\) and \(f^{\prime}_{1}\). (For \(f^{\prime}_{23}\), the homotopy alternates the components of \([0,\infty)^{2}\), which produces a sign.) This exception is covered by the extra sign on \(\delta_{2}\) in permutating edges since we have \(sgn(\sigma_{H,2})=sgn(\sigma_{\partial_{1}H,2})=-1\) and \(sgn(\sigma_{H^{\prime},2})=1\) for the other subgraphs \(H^{\prime}\) of \(H\) (see Definition 4.3). The proof of the fourth equation is similar. **Definition 7.5**.: Let \((G,i)\) be one of \((G_{1},1),(G_{2},1)\) and \((G_{4},2)\). 
Set \(f=f_{G}\) and \(E(G)=\{e_{1}<e_{2}<e_{3}\}\). Let \(f_{j}\) be the \(e_{j}\)-contraction of \(f\), and \(f_{jk}\) the \((e_{j},e_{k})\)-contraction of \(f\). Let \(f^{\epsilon}_{j}\) (resp. \(f^{\epsilon}_{jk}\)) be the \((i,\epsilon)\)-contraction of \(f_{j}\) (resp. \(f_{jk}\)) for \(\epsilon=+\) or \(-\), see Definition 5.8. We set \[c(G,i)=\sum_{1\leq j\leq 3}(-1)^{j+1}f^{\pm}_{j}(w_{2})+\sum_{1\leq j<k\leq 3}(-1)^{j+k+1}f^{\pm}_{jk}(w_{3})\quad\in\ \bigoplus_{j}\bar{C}_{*}(\mathcal{T}_{\delta_{i}\partial_{j}G})\oplus\bigoplus_{j<k}\bar{C}_{*}(\mathcal{T}_{\delta_{i}\partial_{jk}G}).\] 
**Definition 7.8**.: Let \((G,i,j)\) be one of those which appear in Lemmas 7.9, 7.10, 7.11, and 7.12 below. Put \(f=f_{G}\) and \(E(G)=\{e_{1}<e_{2}<e_{3}\}\). Let \(f_{k}\) (resp. \(f_{kl}\)) be the \(e_{k}\)-contraction (resp. the \((e_{k},e_{l})\)-contraction) of \(f\) for \(G\). Let \(\mu_{k}^{\pm}\) (resp. \(\mu_{kl}^{\pm}\)) be the straight homotopy \(h\) defined in Lemma 5.10 (1) for \(F=f_{k}\) (resp. \(f_{kl}\)) and numbers \(i,j\). We set \[c(G,i,j)=\sum_{1\leq k\leq 3}(-1)^{k}\mu_{k}^{\pm}(w_{21})+\sum_{1\leq l<k\leq 3}(-1)^{k+l+1}\mu_{lk}^{\pm}(w_{31})\quad\in\ \bigoplus_{k}\bar{C}_{*}(\mathcal{T}_{\delta_{ij}\partial_{k}G})\oplus\bigoplus_{k<l}\bar{C}_{*}(\mathcal{T}_{\delta_{ij}\partial_{kl}G}).\] Here we use notations similar to those of Definition 7.5. The proofs of the following three lemmas are similar to that of Lemma 7.4, and we omit them. 
**Lemma 7.9**.: _We have_ \[Dc(G_{2},G_{3},2,4)=\delta_{3}c(G_{2},G_{3},2)+c^{\prime}(G_{2},4)+c^{\prime}(G_{3},4),\] \[Dc(G_{2},G_{3},4,2)=\delta_{2}c(G_{2},G_{3},4)+c^{\prime}(G_{2},2)-c^{\prime}(G_{3},2),\] \[Dc(G_{4},G_{3},4,2)=\delta_{2}c(G_{4},G_{3},4)+\delta_{3}c(G_{4},2)-c^{\prime}(G_{3},2),\] \[Dc(G_{2},2,4)=c^{\prime}(G_{2},2)-c^{\prime}(G_{2},4),\] \[Dc(G_{3},2,4)=c^{\prime}(G_{3},2)-c^{\prime}(G_{3},4).\] _Here, \(c^{\prime}(G_{2},4)\) is the chain defined by the same formula as \(c(G,i)\), where \(j\) in the first sum runs through the labels of double edges of \(\delta_{24}G_{2}\). (While this chain is not well-defined in the Thom space of the subgraphs of \(\delta_{2}G_{2}\), it is well-defined in that of \(\delta_{24}G_{2}\).) The other \(c^{\prime}(G_{k},i)\)'s are defined completely similarly. If we set_ \[C_{21}=c(G_{2},G_{3},2,4)-c(G_{2},G_{3},4,2)-c(G_{4},G_{3},4,2)+c(G_{2},2,4)+c(G_{3},2,4),\] _we have_ \[DC_{21}=\delta_{3}(c(G_{2},G_{3},2)-c(G_{4},2))-\delta_{2}(c(G_{2},G_{3},4)+c(G_{4},G_{3},4)).\] See Figure 6, where \(\leftrightarrow\) specifies the vertexes whose corresponding components of the map \(\psi\) have \(\pm su\) added in defining the \((i,\pm)\)-contraction.
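To make the double edges entering the definition of \(c^{\prime}(G_{2},4)\) explicit: for \(G_{2}=(1,4)(2,4)(3,5)\), the collapsing \(\delta_{24}\) sends \(e_{2}=(2,4)\) and \(e_{3}=(3,5)\) to the same pair \((\{23\},\{45\})\), while \(e_{1}=(1,4)\) goes to \((\{1\},\{45\})\); hence the double edges of \(\delta_{24}G_{2}\) are the images of \(e_{2}\) and \(e_{3}\), and the first sum in \(c^{\prime}(G_{2},4)\) runs over \(j=2,3\).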
**Lemma 7.10**.: _We have_ \[Dc(G_{2},G_{3},4,1) =\delta_{1}c(G_{2},G_{3},4)+\delta_{3}c(G_{2},1)-c^{\prime}(G_{3},1),\] \[Dc(G_{4},G_{3},4,1) =\delta_{1}c(G_{4},G_{3},4)+c^{\prime}(G_{4},1)-c^{\prime}(G_{3},1),\] \[Dc(G_{3},G_{4},1,4) =\delta_{3}c(G_{3},G_{4},1)+c^{\prime}(G_{3},4)+c^{\prime}(G_{4},4),\] \[Dc(G_{4},4,1) =c^{\prime}(G_{4},4)-c^{\prime}(G_{4},1),\] \[Dc(G_{3},4,1) =c^{\prime}(G_{3},4)-c^{\prime}(G_{3},1).\] Figure 6. the differential of \(c(G_{4},G_{3},4,2)\): We acturally consider subgraphs of these graphs. _Here, \(c^{\prime}(G_{3},i)\)'s are defined similarly as in Lemma 7.9 except for the extra sign \(sgn(\sigma_{H,1})=-1\) on the term of \(f_{1}^{\pm}\), where \(H=\delta_{1}(\partial_{1}G_{3})\). \(c^{\prime}(G_{3},1,2)\) is also similar to \(c(G,i,j)\) but has the same extra sign on the term \(\mu_{1}^{\pm}\). If we set_ \[C_{23}=c(G_{2},G_{3},2,1)-c(G_{3},G_{4},1,2)-c^{\prime}(G_{3},1,2),\] _we have_ \[DC_{23}=\delta_{1}(c(G_{2},G_{3},2)-c(G_{3},G_{4},1)+c(G_{2},1)-c(G_{4},2)).\] Proof.: In the first equation, the only point which we need to care is the sign of the third term of the right hand side. The straight homotopy between the maps corresponding to \(G_{2}\) and \(G_{3}\) alternates the second and third edges. This homotopy negates the terms except for the one corresponding to the removal of the first edge. This exception is covered by the extra sign in \(c^{\prime}(G_{3},1)\). In the second equation, the extra sign of \(\delta_{1}\) of \(c(G_{3},G_{4},1,2)\) is negative only for the term corresponding to \(\delta_{1}\partial_{1}(G_{3})\). This is covered by the extra sign in the definition of \(c(G_{3},G_{4},1)\). The straight homotopy from \(G_{3}\) to \(G_{4}\) alternates the first and second edges. This negates the term except for the removal of the third edge. This is covered by the extra sign of \(\delta_{1}\) on \(c(G_{4},2)\). **Lemma 7.13**.: _Let \(C_{1},C_{21},C_{22}\), \(C_{23}\) be as in Lemmas 7.6, 7.9, 7.10, and 7.12. If we set_ \[C_{2}=C_{21}+C_{22}+C_{23}+c(G_{1},G_{2},3,1),\] _we have_ \[D(C_{2})=-\delta(C_{1}).\] Proof.: Let \(s\) be the variable for \((i,\epsilon)\)-contraction in \(\mu_{k}^{\epsilon}\) or \(\mu_{kl}^{\epsilon}\) in Definition 7.8. We see that \(\mu_{k}^{\epsilon}|_{s=0}\) and \(\mu_{kl}^{\epsilon}|_{s=0}\) are constant for \(t\in I\), and the pushforwards by these maps are zero. This observation and arguments similar to the proof of Lemmas 6.6, 6.7, together with Lemmas 7.9, 7.12, 7.11, and 7.10 imply the claim. ### Computation of the differential as an element of the first page **Lemma 7.14**.: _We have_ \[-\delta(C_{2})=\delta_{1}(-c(G_{4},G_{3},4,2)+c(G_{2},2,4)+c(G_{3},2,4)+c(G_{2 },G_{3},4,1)).\] _Any pushforward which appears as a term of the left hand side, other than the following four terms, is zero:_ \[\delta_{1}\psi_{kl}^{-}(w_{31})\text{ of }\delta_{1}c(G_{4},G_{3},4,2 ),\ \delta_{1}\mu_{kl}^{+}(w_{31})\text{ of }\delta_{1}c(G_{2},2,4),\] \[\delta_{1}\mu_{kl}^{-}(w_{31})\text{ of }\delta_{1}c(G_{3},2,4), \text{ and }\ \delta_{1}\psi_{kl}^{+}(w_{31})\text{ of }\delta_{1}c(G_{2},G_{3},4,1).\] Proof.: We will see that all the maps which appears as terms of \(\delta(C_{2})\), other than the four maps in the claim are \(*\). If \(\delta_{abc}G_{i}\) has a loop, the maps corresponding to its subgraphs are \(*\). For example, we consider the term \(\delta_{2}\psi_{23}^{\epsilon}(w_{31})\) of \(\delta_{2}c(G_{2},G_{3},2,4)\). \(\delta_{134}G_{2}\) has loops but \(\delta_{134}\partial_{23}G_{2}\) does not. 
The image of \(\psi_{2}^{\epsilon}\) is contained in \(U_{\delta_{2}\partial_{2}G_{2}}^{\epsilon}\) by Lemma 5.9 and \(\psi_{2,l}^{\epsilon}=_{1}\psi_{23,l}^{\epsilon}\) for the \(l\)-th components by definition (\(1\leq l\leq 5\)) so the image of \(\psi_{23}^{\epsilon}\) is also contained in \(U_{\delta_{2}\partial_{2}G_{2}}^{\epsilon}\). By Lemma 5.7, we have \(\delta_{2}^{\prime}\circ\psi_{23}^{\epsilon}=*\). Some of the other terms are zero by the latter part of Lemma 5.6. For example, we consider the term \(\delta_{1}\psi_{kl}^{\epsilon}(w_{31})\) of \(\delta_{1}c(G_{2},G_{3},2,4)\). \(\psi_{kl}^{+}\) alternates \(x\) and \(y\) by the homotopy in the first, fourth, and fifth components and adds \(-su\) and \(su\) to the fourth and fifth components respectively, so it may be expressed as follows. \[\{\underline{x}\ x\ y\}\{\overrightarrow{\underline{x}}\ \overleftarrow{\underline{y}}\}\] Here, the arrows \(\rightarrow,\leftarrow\) denote addition of \(su,-su\), respectively, \(\underline{x}\) (resp. \(\underline{y}\)) represents the homotopy from \(x\) to \(y\) (resp. \(y\) to \(x\)), and \(\{\ \}\) specifies a vertex (the vertex set of the graph corresponding to the map \(\delta_{1}\circ\psi_{kl}^{+}\) is \(\{\{0\},\{123\},\{45\},\{6\}\}\)). We ignore \(\pm sv\) as it does not affect the first coordinate. Write \(\psi_{kl}^{+}=(\psi_{1},\ldots,\psi_{5})\) for simplicity. If \(x<_{1}y\), \(\psi_{1}=\underline{x}>_{1}x=\psi_{2}\), and if \(x>_{1}y\), \(\psi_{2}=x>_{1}y=\psi_{3}\) since \(\underline{x}\) and \(y\) always lie between \(x\) and \(y\). So we have \(\delta_{1}^{\prime}\circ\psi_{kl}^{+}=*\) by Lemma 5.6. \(\delta_{1}\circ\psi_{kl}^{-}\) is expressed as \(\{\underline{x}\ x\ y\}\{\overleftarrow{\underline{x}}\ \overrightarrow{ \underline{y}}\}\) and we have \(\delta_{1}^{\prime}\circ\psi_{kl}^{-}=*\) similarly. As another example, we consider the term \(\delta_{1}\mu_{kl}^{+}(w_{31})\) of \(\delta_{1}c(G_{4},4,1)\). \(\delta_{1}^{\prime}\circ\mu_{kl}^{+}\) is expressed as \[\{\overrightarrow{x}\ \overleftarrow{y}\ y\}\{\overrightarrow{y}\ \overleftarrow{x}\}\] Write \(\mu_{kl}^{+}=(\mu_{1},\ldots,\mu_{5})\). We see that \(\mu_{4}=\overrightarrow{y}>_{1}\overleftarrow{x}=\mu_{5}\) if \(x<_{1}y\), and \(\mu_{1}>\mu_{2}\) otherwise, which implies \(\delta_{1}^{\prime}\circ\mu_{kl}^{+}=*\). The terms which cannot be seen to be zero by the above two kinds of observations are the following four terms, where we omit subscripts. \[\delta_{1}^{\prime}\circ\psi^{\epsilon}\ \text{of}\ \delta_{1}c(G_{2},G_{3},4,2), \ \delta_{1}^{\prime}\circ\mu^{-}\ \text{of}\ \delta_{1}c(G_{2},2,4),\] \[\delta_{1}^{\prime}\circ\psi^{-}\ \text{of}\ \delta_{1}c(G_{2},G_{3},4,1),\ \text{and}\ \delta_{2}^{\prime}\circ\psi^{-}\ \text{of}\ \delta_{2}c(G_{3},G_{4},1,2).\] We consider the first term. The map \(\delta_{1}^{\prime}\circ\psi^{+}\) is expressed as \[\{x\ \overrightarrow{x}\ \overleftarrow{\underline{y}}\}\{x\ y\}\] Put \(P=\{\{0\},\{123\},\{45\},\{6\}\}\). Suppose \(\psi^{+}(\tilde{x})\in\nu_{P}\) for \(\tilde{x}=(x,y,\ldots)\in\mathbb{R}^{4}\times[0,\infty)^{3}\times I\). By fourth and fifth components, we have \(x<_{1}y\). 
Put \(\psi^{+}=(\psi_{1},\ldots,\psi_{5})\) Since the average of \(\psi_{2}(\tilde{x})=\overrightarrow{\underline{x}}\) and \(\psi_{3}(\tilde{x})=\overleftarrow{\underline{y}}\) is equal to \((x+y)/2\), we see \[\psi_{3}(\tilde{x})-\psi_{1}(\tilde{x})>_{1}\frac{1}{2}(\psi_{5}(\tilde{x})- \psi_{4}(\tilde{x})).\] This implies \[c_{1}+2c_{2}+c_{3}>(c_{4}+c_{5})/2-4\epsilon_{P}\] by the assumption of \(\tilde{x}\) and an argument similar to the proof of Lemma 3.3. This is impossible when \(c_{i}/c_{i-1}\) is sufficiently large as assumed in Definition 2.3, which implies \(\delta_{1}^{\prime}\circ\psi^{+}=*\). The other three terms are shown to be zero similarly. **Definition 7.15**.: Put \(G_{0}=\delta_{124}\partial_{12}G_{2}\) (\(=\delta_{124}\partial_{12}G_{3}=\delta_{124}\partial_{12}G_{4}\)). \(G_{0}\) is the graph having only one edge (\(\{123\},\{45\}\)). 1. For \((G,H,i,j)=(G_{2},G_{3},4,1)\) (resp. \((G_{2},G_{3},4,2)\), \((G_{4},G_{3},4,2)\), \((G_{4},G_{3},4,1)\)), let \(\tilde{\psi}_{kl}^{1+}\) (resp. \(\tilde{\psi}_{kl}^{2-}\), \(\tilde{\psi}_{kl}^{2-}\), \(\tilde{\psi}_{kl}^{1+}\)) denote \(\psi_{kl}^{+}\) (resp. \(\psi_{kl}^{-}\), \(\psi_{kl}^{-}\), \(\psi_{kl}^{+}\), \(\psi_{kl}^{+}\)) in Definition 7.7. Let \(\tilde{\eta}_{kl}\) (resp. \(\hat{\eta}_{kl}\)) be the straight homotopy from \(\tilde{\psi}_{kl}^{1+}\) to \(\tilde{\psi}_{kl}^{2-}\) (resp. from \(\tilde{\psi}_{kl}^{2-}\) to \(\tilde{\psi}_{kl}^{1+}\)). Set \[B_{1}=\sum_{k<l}(-1)^{k+l+1}\tilde{\eta}_{kl}(w_{32}),\qquad B_{2}=\sum_{k<l}(-1 )^{k+l+1}\hat{\eta}_{kl}(w_{32})\ \in\bar{C}_{*}(\mathcal{T}_{G_{0}}).\] 2. Put \(G=G_{3}\), \(E(G)=\{e_{1}<e_{2}<e_{3}\}\), and \(f=f_{G}\). Let \(\lambda_{kl}\) be the straight homotopy from \((e_{k},e_{l})\)-contraction of \(f\) for \(G\) to \((e_{k},e_{l})\)-contraction of \(f\) for \(\delta_{4}G\). Let \(\tilde{\lambda}_{kl}\) be the straight homotopy from \(\lambda_{kl}^{2+}\) and \(\lambda_{kl}^{4+}\). Set \[B_{3}=\frac{1}{2}\sum_{k<l}(-1)^{k+l+1}\tilde{\lambda}_{kl}(w_{32})\ \in \bar{C}_{*}(\mathcal{T}_{G_{0}})\] Put \(\tilde{\mu}_{3kl}^{-}=\tilde{\lambda}_{kl}|_{t_{1}=1}\), where \(t_{1}\in I\) is the variable in \(\lambda\). Let \(\theta_{kl}\) be the straight homotopy from \(\tilde{\mu}_{3kl}^{-}\) to \(\hat{\eta}|_{t_{1}=1}\) (see part 1). Set \[B_{4}=\frac{1}{2}\sum_{k<l}(-1)^{k+l+1}\theta_{kl}(w_{32})\ \in\bar{C}_{*}( \mathcal{T}_{G_{0}})\] \(B_{1},B_{2}\) are well-defined by Lemma 5.10, and well-definedness of \(B_{3},B_{4}\) is verified similarly to the proof of the lemma. **Lemma 7.16**.: _The cycle \(-\delta C_{2}\) is homologous to \(\delta_{1}\left(\frac{1}{2}\sum_{k<l}(-1)^{k+l+1}\mu_{2kl}^{+}(w_{31})\right)\) in \((tr_{1}\hat{\mathbb{E}}_{0},d_{0})\). 
Here, \(\mu_{2kl}^{+}\) is the map \(\mu_{kl}^{+}\) in Definition 7.8 for \((G,i,j)=(G_{2},2,4)\)._ Proof.: By arguments similar to the proof of Lemma 7.14, we have \[D(B_{1}) =\sum_{k<l}(-1)^{k+l}(\tilde{\psi}_{kl}^{1+}(w_{31})+\tilde{\eta} _{kl}|_{t_{1}=1}(w_{31}))\] \[=-\delta_{1}c(G_{2},G_{3},4,1)+\tilde{K},\] \[D(B_{2}) =\sum_{k<l}(-1)^{k+l}(\tilde{\psi}_{kl}^{2-}(w_{31})+\hat{\eta}_{ kl}|_{t_{1}=1}(w_{31}))\] \[=-\delta_{1}c(G_{4},G_{3},4,2)+\hat{K},\] \[D(B_{3}) =\frac{1}{2}\sum_{k<l}(-1)^{k+l}(\tilde{\mu}_{3kl}^{-}-\mu_{3kl} ^{-})(w_{31}),\] \[D(B_{4}) =\frac{1}{2}\sum_{k<l}(-1)^{k+l+1}(\hat{\eta}|_{t_{1}=1}-\tilde{ \mu}_{3kl}^{-})(w_{31}).\] Here, we set \[\tilde{K}=\sum_{k<l}(-1)^{k+l}\tilde{\eta}_{kl}|_{t_{1}=1}(w_{31}),\qquad\hat {K}=\sum_{k<l}(-1)^{k+l}\hat{\eta}_{kl}|_{t_{1}=1}(w_{31}).\] \(\mu_{3kl}^{-}\) is the map \(\mu_{kl}^{-}\) in Definition 7.8 for \((G,i,j)=(G_{3},2,4)\). Putting these equations into together, we have \[-\delta C_{2}+D(B_{1}-B_{2}-B_{3}+B_{4})=\tilde{K}-2\hat{K}+\frac{1}{2}\sum_ {k<l}(-1)^{k+l+1}\mu_{2kl}^{+}(w_{31}).\] \(\hat{K}\) is homologous to \(-\tilde{K}\) since the parameter for the homotopy between \((i,\pm)\)-contractions are reversed. We have \([\tilde{K}-2\hat{K}]=[-3\hat{K}]=0\). Thus, we have obtained the claim. We shall compute the remaining term. **Lemma 7.17**.: _The chain \(\delta_{1}\left(\frac{1}{2}\sum_{k<l}(-1)^{k+l+1}\mu_{2kl}^{+}(w_{31})\right)\) in Lemma 7.16 is a fundamental cycle of \(\mathcal{T}_{G_{0}}\)._ Proof.: By definition, \(\mathcal{T}_{G_{0}}\) is a Thom space associated to the disk bundle \(\nu_{P}|_{N}\), where \(N\) is a open tubular neighborhood of the diagonal \(\Delta\) in the product of disks \((D^{2})^{2}\) (with different radius). Precisely speaking, in the notation of Definition 3.1, \((D^{2})^{2}=\mathbb{R}^{4}-E_{P}\), and \(N=D_{\alpha\beta}\cap(\mathbb{R}^{4}-E_{P})\), where \(\alpha=\{1,2,3\},\beta=\{4,5\}\) and \(P=\{\{0\},\alpha,\beta,\{6\}\}\). Write \(\nu=\nu_{P}\). Fiber dimension of \(\nu\) is \(6\). The orthogonal projection \(r:N\to\Delta\) induces the bundle map \(\tilde{r}:\nu\to\nu|_{\Delta}\) which is given by the parallel transport taking center to center on each fiber. This map induces a map \(\tilde{r}:\mathcal{T}_{G_{0}}\to Th(\nu|_{\Delta})\). \(Th(\nu|_{\Delta})\) is homeomorphic to \(S^{8}\). We consider \(\nu\) and \((\nu|_{\Delta})\) as subspaces of \(\mathbb{R}^{10}\). Put \[F_{kl}:=\tilde{r}\circ\mu_{2kl}^{+}|_{(\mu_{2kl}^{+})^{-1}(\nu)}:(\mu_{2kl}^{+ })^{-1}(\nu)\to\nu|_{\Delta}\qquad\text{for}\qquad 1\leq k<l\leq 3\] We shall show \(F_{kl}\) is a monomorphism and \(\cup_{k<l}Im(F_{kl})=\nu|_{\Delta}\) and \(Im(F_{kl})\cap Im(F_{k^{\prime},l^{\prime}})\) is contained in the union of the image by \(F_{kl}\) of subspace where at least one of the contracting parameters \(s_{1},s_{2}\) in the direction of \(v\) is zero if \((k,l)\neq(k^{\prime},l^{\prime})\). These claims imply the lemma. By definition, \(\nu\) is the tubular neighborhood of the map \[e_{P}:(a,b)\mapsto\left(a-\frac{\rho}{2}(c_{2}+c_{3})u,\ a+\frac{\rho}{2}(c_{1} -c_{3})u,\ a+\frac{\rho}{2}(c_{1}+c_{2})u,\ b-\frac{\rho}{2}c_{5}u,\ b+\frac{ \rho}{2}c_{4}u\right).\] The projection \(\pi_{P}\) of \(\nu_{P}\) sends \((c,d,e,f,g)\in\mathbb{R}^{10}\) to the point \((a,b)\) which minimize the distance \(|(c,d,e,f,g)-e_{P}(a,b)|\). 
By elementary calculation, the point is given by \[(a,b)=\left(\frac{1}{3}(c+d+e+\rho(c_{3}-c_{1})u),\ \frac{1}{2}(f+g+\frac{\rho}{ 2}(c_{4}-c_{3})u)\right).\] Similarly, we see that \(r:N\to\Delta\) is given by \(r(a,b)=(a+b)/2\). We shall verify that \(F_{kl}\) is a monomorphism. For a while, we omit the subscript \(kl\) and \(A_{\epsilon_{k},\epsilon_{l}}(s_{1},s_{2})\) in Definition 5.1 is also abbreviated as \(A=(A^{1},\ldots,A^{5})v\), and \(\mu^{+}_{2kl}\) is abbreviated as \(\mu\). For \(\tilde{x}=(x,y,s_{1},s_{2},s_{3},t)\), we have \[\mu(\tilde{x})=(x+A^{1}v,x+(1-t)s_{3}u+A^{2}v,y-(1-t)s_{3}u+A^{3}v,x-ts_{3}u+A ^{4}v,y+ts_{3}u+A^{5}v).\] Straightforward computation shows \[r\circ\pi_{P}\circ\mu(\tilde{x})=\frac{1}{12}\left(7x+5y+\rho\big{(}2c_{3}-2c_ {1}+\frac{3}{2}c_{5}-\frac{3}{2}c_{4}\big{)}u+2(A^{1}+A^{2}+A^{3})v+3(A^{4}+A^ {5})v\right).\] We denote right hand side of this equation by \(w\). For simplicity, we move the fiber of \(\nu\) over \(\pi_{P}(\mu(\tilde{x}))\) by the parallel transport which sends its center to \(0\). By this move, \(\mu(\tilde{x})\) is sent to to \[\mu(\tilde{x})-e_{P}(\mu(\tilde{x}))=(p,q,-p-q,r,-r)\] where \[p =\frac{1}{3}(x-y)+\frac{\rho}{6}(2c_{1}+3c_{2}+c_{3})u+\frac{1}{3 }(2A^{1}-A^{2}-A^{3})v\] \[q =\frac{1}{3}(x-y)+\frac{\rho}{6}(c_{3}-c_{1})u+(1-t)s_{3}u+\frac{ 1}{3}(-A^{1}+2A^{2}-A^{3})v\] \[r =\frac{1}{2}(x-y)+\frac{\rho}{4}(c_{4}+c_{5})u-ts_{3}u+\frac{1}{ 2}(A^{4}-A^{5})v\] The fiber over \(\pi_{P}\) is a disk of radius \(\epsilon_{P}\). It is enough to show that there exists a point \(\tilde{x}\) such that \(p^{\prime}=p\), \(q^{\prime}=q\), \(r^{\prime}=r\) for a given point \((p^{\prime},q^{\prime},r^{\prime})\) with \(|(p^{\prime},q^{\prime},r^{\prime})|\leq\epsilon_{P}\), and the combination of such a point and numbers \(k,l\) is unique unless \(s_{1}\) or \(s_{2}=0\). We fix \(w,p^{\prime},q^{\prime},r^{\prime}\) and suppose \((p^{\prime},q^{\prime},r^{\prime})=(p,q,r)\). By using the formula for \(r\), we eliminate \(x-y\) in the formulas of \(p,q\). We have \[p^{\prime} =\frac{1}{3}(2r^{\prime}+(2ts_{3}+\frac{\rho}{2}(2c_{1}+3c_{2}+c_ {3}-c_{4}-c_{5}))u+(2A^{1}-A^{2}-A^{3}-A^{4}+A^{5})v),\] \[q^{\prime} =\frac{1}{3}(2r^{\prime}+((3-t)s_{3}+\frac{\rho}{2}(c_{3}-c_{1}- c_{4}-c_{5}))u+(-A^{1}+2A^{2}-A^{3}-A^{4}+A^{5})v).\] By these formulas, we have \[s_{3} =\frac{1}{4}(2p^{\prime}_{1}+2q^{\prime}_{1}-4r^{\prime}_{1}+ \rho(c_{4}+c_{5}-c_{2}-c)),\] \[t =\frac{6p^{\prime}_{1}-4r^{\prime}_{1}+\rho(c_{4}+c_{5}-2c_{1}-3c _{2}-3c_{3})}{2p^{\prime}_{1}+4q^{\prime}_{1}-4r^{\prime}_{1}+\rho(c_{4}+c_{5} -c_{2}-c_{3}),\] where the subscript \(1\) means the first coordinate. The conditions on \(c_{i}\) and \((p^{\prime},q^{\prime},r^{\prime})\) ensure \(s_{3}>0\) and \(0<t<1\). We consider the last terms of the above formulas for \(p^{\prime},q^{\prime}\). For \((k,l)=(1,2),(1,3)\), and \((2,3)\), we have \[(2A^{1}-A^{2} -A^{3}-A^{4}+A^{5},-A^{1}+2A^{2}-A^{3}-A^{4}+A^{5})\] \[=(4s_{1}-2s_{2},-2s_{1}+4s_{2}),\quad(4s_{1}-2s_{2},-2s_{1}-2s_{2 }),\ \ \text{and}\ \ (-2s_{1}-2s_{2},4s_{1}-2s_{2}),\] respectively. When \(s_{1},s_{2}\geq 0\) vary, this point runs through a domain \(D_{kl}\). It is easy to see that \(\cup_{kl}D_{kl}=\mathbb{R}^{2}\), \(\partial D_{kl}\) corresponds to \(s_{1}\) or \(s_{2}=0\), and \(D_{kl}\cap D_{k^{\prime},l^{\prime}}\subset\partial D_{kl}\cap\partial D_{k^{ \prime},l^{\prime}}\) if \((k,l)\neq(k^{\prime},l^{\prime})\). So, the coefficients of \(v\) can take any value. 
When we fix \((p^{\prime},q^{\prime},r^{\prime})\), \(x-y\) is fixed but \(7x+5y\) can take any value, so we can set \(w\) freely. Thus, we have proved the claim. ### Non-triviality of the differential To prove non-triviality of the differential \(d_{3}\), we need to prove that a generator of \(\mathbb{E}_{2}^{-2,1}\) persists to \(E_{3}\)-page. For this, we compute \(d_{2}:\mathbb{E}_{2}^{-4,2}\to\check{\mathbb{E}}_{2}^{-2,1}\). The module \(\mathbb{E}_{2}^{-4,2}\) is generated by \(g_{13}g_{24}\). We actually consider the spectral sequence \(tr_{1}\check{\mathbb{E}}\). A cycle which represents \(g_{13}g_{24}\) is a linear combination of chains in the Thom spaces of the following three graphs (and their subgraphs). \[G_{5}=(1,3)(2,4),\qquad G_{6}=(1,3)(1,4),\qquad G_{7}=(1,4)(2,4)\] The computation of \(d_{2}\) is similar to those done in section 6 and previous subsections. One difference is that we need to deal with the \(3\)-term relation. To make the computation easier, we modify the definition of chains. **Definition 7.18**.: Let \(f:X\to\mathbb{R}^{8}\) be \(G\)-condensed map and \(e=(\alpha,\beta)\) be a bridge of \(G\). In this subsection, we call the \(e\)-contraction given in Definition 5.1 the \((e,+)\)_-contraction_. The \((e,-)\)_-contraction_ of \(f\) for \(G\) is a version of \(e\)-contraction whose contracting direction is reversed. We add \(-sv\) (resp. \(sv\)) to the \(i\)-th component if \(i\sim_{\partial_{e}G}\alpha\) (resp. \(i\sim_{\partial_{e}G}\beta\)). **Example 7.19**.: Put \(f=f_{G_{5}},e=(1,3)\). the \((e,-)\)-contraction of \(f\) (for \(G_{5}\)) is given by \((x,y,s)\mapsto(x-sv,y,x+sv,y)\). **Definition 7.20**.: For \(G=G_{5},G_{6}\), and \(G_{7}\), set \(f=f_{G}\) and \(\{e_{1}<e_{2}\}=E(G)\). Let \(f_{j}^{\epsilon}\) be the \((e_{j},\epsilon)\)-contraction of \(f\). Set \[c^{\prime}(G)=f(w_{0})-f_{1}^{\pm}(w_{1})+f_{2}^{\pm}(w_{1}).\] Here, \(f_{j}^{\pm}\) denotes the average as before. We consider the chain \[C^{\prime}=-c^{\prime}(G_{5})+c^{\prime}(G_{6})+c^{\prime}(G_{7}).\] Clearly, we have \(D(C^{\prime})=0\). We shall construct a bounding chain of \(\delta C^{\prime}\). We define two chain \(c^{\prime}(G_{5},G_{6},1)\) and \(c^{\prime}(G_{5},G_{7},3)\) satisfying \[Dc^{\prime}(G_{5},G_{6},1)=\delta_{1}c^{\prime}(G_{6})-\delta_{1}c^{\prime}( G_{5}),\qquad Dc^{\prime}(G_{5},G_{7},1)=\delta_{1}c^{\prime}(G_{7})-\delta_{1}c^{ \prime}(G_{5}),\] respectively. The definition is similar to that of \(c(G,H,i)\) with taking \((e,\epsilon)\)-contractions into consideration and we omit details. We shall construct a bounding chain of \(\delta_{2}C^{\prime}\). Put \(f_{1}=f_{G_{5}},f_{2}=f_{G_{6}}\), and \(f_{3}=f_{G_{7}}\). Below, we denote edges of all involved graphs with two edges by the same notation \(e_{1}<e_{2}\). Since \(\delta_{k}\) which appears here does not permute edges, this does not cause confusion. Let \(\lambda_{ij}^{\epsilon}\) be the straight homotopy from \((e_{j},\epsilon)\)-contraction of \(f_{i}\) for \(G_{i}\) to \((e_{j},\epsilon)\)-contraction of \(f_{i}\) for \(\delta_{2}G_{i}\), \(\psi\) the straight homotopy from \(f_{1}\) to \(f_{2}\), \(\phi\) the straight homotopy from \(f_{1}\) to \(f_{3}\), \(\psi_{j}^{\epsilon}\)\((e_{j},\epsilon)\)-contraction of \(\psi\) for \(\delta_{2}G_{6}\), \(\phi_{j}^{\epsilon}\)\((e_{j},\epsilon)\)-contraction of \(\phi\) for \(\delta_{2}G_{7}\). Let \(G_{8}\in\mathsf{G}(\delta_{2}[5])\) be the graph with \(E(G_{8})=\{(1,\{23\}),(1,4),(\{23\},4)\}\). 
We set \[c(G_{5},G_{6},G_{7},2)=f_{1}(w_{0})+(\psi+\phi)(w_{01})+\sum_{j=1,2}(\psi_{j}^{ \pm}+\phi_{j}^{\pm}+\lambda_{1j}^{\pm}-\lambda_{2j}^{\pm}-\lambda_{3j}^{\pm})(w _{11})\] where the pushforward by \(f_{1}\), (resp. \(\psi\), \(\phi\), \(\psi_{j}^{\epsilon}\), \(\phi_{j}^{\pm}\)) belongs to \(\bar{C}_{*}(G_{8})\), (resp. \(\bar{C}_{*}(\delta_{2}G_{6})\), \(\bar{C}_{*}(\delta_{2}G_{7})\), \(\bar{C}_{*}(\delta_{2}\partial_{j}G_{6})\), \(\bar{C}_{*}(\delta_{2}\partial_{j}G_{7})\)). This is well-defined as \(f_{1}\) is also \(\delta_{2}G_{6}\)- and \(\delta_{2}G_{7}\)-condensed. **Lemma 7.21**.: _We have \(Dc(G_{5},G_{6},G_{7},2)=\delta_{2}C^{\prime}\). If we set \(\Gamma^{\prime}=c(G_{5},G_{6},1)+c(G_{5},G_{7},3-)-c(G_{5},G_{6},G_{7},2)\), we have \(D\Gamma^{\prime}=-\delta C^{\prime}\)._ Proof.: Since \(\delta_{2}\partial_{1}G_{6}=\delta_{2}\partial_{2}G_{7}\), the pushforwards by \(\psi_{1}^{\epsilon}\) and \(\phi_{2}^{\epsilon}\) belongs to the same summand, and we see \(\psi_{1}^{\pm}|_{t=0}=\phi_{2}^{\mp}|_{t=0}\), where \(t\) is the variable for straight homotopy. (This twist of sign is the reason why we introduce \((e,-)\)-contraction.) These observations and elementary computation imply first equation. Second one follows from first one. **Proposition 7.22**.: _Let \(\Gamma^{\prime}\) be the chain defined in Lemma 7.21. \(\delta\Gamma^{\prime}\) is null-homologous in the chain complex \((tr_{1}\mathbb{T},D)=(tr_{1}\check{\mathbb{E}}_{0},d_{0})\). In particular, \(d_{2}(g_{13}g_{24})=0\) in \(\mathbb{E}_{2}\)._ Proof.: By degree reason, we may consider \(\check{\mathbb{E}}_{r}\) instead of \(\mathbb{E}_{r}\). Among the terms of \(\delta\Gamma^{\prime}\), \(\delta_{2}(c^{\prime}(G_{5},G_{6},1))\) only contains non-zero terms. Each of \((e,+)\)- and \((e,-)\)-contractions of this term forms a cycle and we can easily see their classes cancel with each other similarly to the proof of Lemmas 6.7 and 7.17. Latter part easily follows from former one and Lemma 4.6. In view of Lemma 7.17 and Proposition 7.22, the proof of the following theorem is completely similar to Theorem 6.8. **Theorem 7.23**.: _In dimension \(d=2\) and over a field of characteristic \(3\), the element \([-c(G_{1})+c(G_{2})+c(G_{3})+c(G_{4})]\in tr_{1}\tilde{\mathbb{E}}_{1}^{-3,5}\) lifts to an element \(g\in\mathbb{E}_{1}^{-3,5}\) which persists up to \(\mathbb{E}_{3}^{-3,5}\) and satisfies \(d_{3}(g)\neq 0\). _ ## 8. Absolute non-formality in characteristic \(2\) In this section, we prove Corollary 1.3. We assume that \(\mathsf{k}\) is a field of characteristic \(2\) and \(d=2\). Let \(\mathcal{A}_{\infty}\) denote the cellular chain operad of Stasheff's associahedral operad. Precisely speaking, \(\mathcal{A}_{\infty}\) is generated by a set \(\{\,\mu_{k}\in\mathcal{A}_{\infty}(k)\,\}_{k\geq 2}\) ( \(|\mu_{k}|=k-2\) ) with partial compositions. The differential is given by the following formula: \[d\mu_{k}=\sum_{l,p,q}\,\mu_{l}\circ_{p+1}\mu_{q}.\] where \(l,p,q\) runs through the range \(l,q\geq 2,0\leq p\leq l-1\), and \(l+q=k+1\). **Definition 8.1**.: For a \(\mathsf{k}\)-vector space \(U\), \(U^{\vee}\) denotes its linear dual. Let \(f:\mathcal{A}_{\infty}\to\mathcal{O}\) be a map of chain operads. Let \(\mu^{\prime}_{l}\in\mathcal{O}(l)\) be the image of \(\mu_{l}\in\mathcal{A}_{\infty}(l)\) by \(f\). 
we define a linear map \((-\circ_{i}\mu_{l}):\mathcal{O}(m)^{\vee}\to\mathcal{O}(m-l+1)^{\vee}\) for integers \(m\geq l\) and \(1\leq i\leq l\) as the following composition where the right arrow is the evaluation of the first factor on \(\mu^{\prime}_{l}\). We also define \((\mu_{l}\circ_{i}-):\mathcal{O}(m)^{\vee}\to\mathcal{O}(m-l+1)^{\vee}\) for integers \(m\geq l\) and \(1\leq i\leq m-l+1\) similarly. We define a chain complex \((\mathrm{CH}\mathcal{O},\tilde{d})\) called _Hochschild complex of \(\mathcal{O}\)_, as follows. Set \(\mathrm{CH}^{-p,q}\mathcal{O}=(\mathcal{O}(p)_{q})^{\vee}\). The differential \(\tilde{d}\) is given as a map \[\tilde{d}=d+\delta:\bigoplus_{q-p=k}\mathrm{CH}^{-p,q}\mathcal{O}\longrightarrow \bigoplus_{q-p=k+1}\mathrm{CH}^{-p,q}\mathcal{O}.\] Here \(d\) is the internal (original) differential on \(\mathcal{O}(p)^{\vee}\) and \(\delta\) is given by the formula \(\delta(x)=\sum_{l\geq 2}\mu_{l}*x\) where \[\mu_{l}*x=x\circ_{1}\mu_{l}+x\circ_{l}\mu_{l}+\sum_{i=1}^{p-l+1}\mu_{l}\circ_{ i}x\] for \(x\in\mathcal{O}(p)^{\vee}\). We define a spectral sequence \(E_{r}^{-p,q}(\mathcal{O})\) by filtering \((\mathrm{CH}\mathcal{O},\tilde{d})\) by the arity \(p\). We call a map \(f:\mathcal{O}\to\mathcal{P}\) of chain operads a _weak-equivalence_ if it induces a quasi-isomorphism \(\mathcal{O}(p)\to\mathcal{P}(p)\) for each \(p\). The following lemma is clear. **Lemma 8.2**.: 1. _Let_ \(\mathcal{A}_{\infty}\to\mathcal{O}\) _and_ \(\mathcal{A}_{\infty}\to\mathcal{O}^{\prime}\) _be two maps of operads and_ \(f:\mathcal{O}\to\mathcal{O}^{\prime}\) _a weak equivalence compatible with the maps from_ \(\mathcal{A}_{\infty}\)_. Then_ \(f\) _induces an isomorphism_ \(E_{r}(\mathcal{O})\cong E_{r}(\mathcal{O}^{\prime})\) _compatible with the differentials for_ \(r\geq 1\)_._ 2. _Let_ \(\mathcal{A}_{\infty}\to C_{*}(\mathcal{K}_{2})\) _be the composition_ \(\mathcal{A}_{\infty}\to\mathcal{A}\to C_{*}(\mathcal{K}_{2})\) _of the fixed maps (see Definition_ 2.4_). The spectral sequence_ \(\{E_{r}(C_{*}(\mathcal{K}_{2}))\}\) _is isomorphic to_ \(\{\mathbb{E}_{r}\}\) _(see Definition_ 4.5_)._ The following lemma is easily obtained by unwinding the definition of the spectral sequence \(E_{r}(\mathcal{O})\). **Lemma 8.3**.: _Let \(\mathcal{A}_{\infty}\to\mathcal{O}\) be a map of operads._ 1. \(d_{1}([x])=[\mu_{2}*x]\) _for an element_ \([x]\in E_{1}^{-p,q}(\mathcal{O})\) _represented by_ \(x\in\mathcal{O}(p)_{q}^{\vee}\)_._ 2. _For an element_ \([x]\in E_{2}^{-p,q}(\mathcal{O})\) _represented by_ \(x\in\mathcal{O}(p)_{q}^{\vee}\)_, we can take an element_ \(y\in\mathcal{O}(p-1)_{q-1}^{\vee}\) _with_ \(dy=\mu_{2}*x\)_. We have_ \(d_{2}[x]=[\mu_{2}*y+\mu_{3}*x]\)_._ Proof of Corollary 1.3.: Let \(\mathcal{A}_{\infty}\to C_{*}(\mathcal{K}_{2})\) be the map given in Lemma 8.2. We use the projective model structure on the category of planar chain operads (see e.g. [16]). We take a cofibrant replacement \(\mathcal{A}_{\infty}\to\mathcal{O}\stackrel{{\sim}}{{\to}}C_{*} (\mathcal{K})\). Suppose that \(C_{*}(\mathcal{K}_{2})\) is formal. By this assumption, we can take a weak equivalence of operads \(\mathcal{O}\to H_{*}(\mathcal{K})\). By considering the composition \(f:\mathcal{A}_{\infty}\to\mathcal{O}\to H_{*}(\mathcal{K})\), we obtain an isomorphism of spectral sequences \(E_{r}(\mathcal{O})\cong E_{r}(H_{*}(\mathcal{K}))\). Let \(\mu^{\prime}_{l}\in H_{l-2}(\mathcal{K}_{2}(l))\) be the image of \(\mu_{l}\) by \(f\). 
By definition of \(\mathcal{A}_{\infty}\), we see \[\mu^{\prime}_{2}\circ_{1}\mu^{\prime }_{3}+\mu^{\prime}_{2}\circ_{2}\mu^{\prime}_{3}+\mu^{\prime}_{3}\circ_{1}\mu^{\prime}_{2}+\mu^{\prime}_{3}\circ_{2}\mu^{ \prime}_{2}+\mu^{\prime}_{3}\circ_{3}\mu^{\prime}_{2}=d\mu^{\prime}_{4}=0.\] This means that \(\mu_{3}^{\prime}\) is a cycle for the differential of the unnormalized complex of the cosimplicial vector space associated to the fixed map \(\mathcal{A}\to H_{*}(\mathcal{K})\). By an easy (and well-known) computation, this differential is a monomorphism on \(H_{1}(\mathcal{K}(3))\), so we have \(\mu_{3}^{\prime}=0\). This observation and Lemma 8.3 imply \(d_{2}=0\) for \(E_{2}(H_{*}(\mathcal{K}))\). Since \(E_{r}(\mathcal{O})\) is isomorphic to \(E_{r}(C_{*}(\mathcal{K}))\cong\mathbb{E}_{r}\), this vanishing of the differential contradicts Theorem 6.8.
2301.09476
Topology of quadrupolar Berry phase of a Qutrit
We examine the Berry phase pertaining to purely quadrupolar states ($\langle \psi | \vec{S} | \psi \rangle = 0$) of a spin-$1$ system. Using the Majorana stellar representation of these states, we provide a visualization for the topological (zero or $\pi$) nature of such a quadrupolar Berry phase. We demonstrate that the $\pi$ Berry phase of a quadrupolar state is induced by the Majorana stars collectively tracing out a closed path (a great circle) by exchanging their respective positions on the Bloch sphere. We also analyse the problem from the perspective of dynamics where a state from the quadrupolar subspace is subjected to a static magnetic field. We show that time evolution generated by such a Hamiltonian restricts the states to the quadrupolar subspace itself, thereby producing a geometric phase (of the Aharonov-Anandan type) quantized to zero or $\pi$. A global unitary transformation which maps the quadrupolar subspace to the subspace of purely real states proves to be a natural way of understanding the topological character of this subspace and its connection to the anti-unitary symmetries.
Rajeev Singh, Navneet Kumar Karn, Rahul Bhowmick, Sourin Das
2023-01-23T15:20:53Z
http://arxiv.org/abs/2301.09476v1
# Topology of quadrupolar Berry phase of a Qutrit ###### Abstract We examine Berry phase pertaining to purely quadrupolar state (\(\langle\psi|\vec{S}|\psi\rangle=0\)) of a spin-1 system. Using the Majorana stellar representation of these states, we provide a visualization for the topological (zero or \(\pi\)) nature of such quadrupolar Berry phase. We demonstrates that the \(\pi\) Berry phase of quadrupolar state is induced by the Majorana stars collectively tracing out a closed path (a great circle) by exchanging their respective positions on the Bloch sphere. We also analyse the problem from the perspective of dynamics where a state from the quadrupolar subspace is subjected to a static magnetic field. We show that time evolution generated by such Hamiltonian restricts the states to the quadrupolar subspace itself thereby producing a geometric phase (of the Aharonov-Anandan type) quantized to zero or \(\pi\). A global unitary transformation which maps the quadrupolar subspace to the subspace of purely real states proves a natural way of understanding the topological character of this subspace and its connection to the anti-unitary symmetries. ## I Introduction In recent times, geometric phase has played a pivotal role in our understanding of the physics of topological insulators where the topological properties of these band insulators could be understood in terms of the Berry curvature associated with band structure of these materials [1; 2]. This encompasses avatars of the geometric phase which _can not_ be understood in terms of closed loop adiabatic evolution of the magnetic field applied to a spin-\(S\) particle (\(S=1/2,1,3/2...\)), which is the conventional example of Berry phase[3]. This has been a topic of much discussion in recent times triggered by the discovery of higher order topological insulators[4] which involves Hamiltonians having band structure supporting nonzero electric quadrupole and octupole (in general multi-poles) moments resulting in non-trivial Berry curvature. Effects of magnetic quadrupolar Berry phase in interacting many-body systems such as models of spin chain has also been studied[5]. More recent developments in fragile topological insulators[6] or Euler insulators[7] also has an interesting connection to qutrit system with quantized Berry phase[8]. Motivated by these recent developments, we revisit this problem in a minimal setting of a spin \(S=1\) system (or, equivalently a three level system) where such quadrupolar Berry phases can arise. In 1994, Robbins and Berry[9] showed that reversing the direction of externally applied magnetic field (say along the \(z\)-direction) acting on a spin-\(S\) (where \(S\in\mathbb{Z}\)) results in a geometric phase factor of \((-1)^{S}\) for the eigenstate of \(\hat{S_{z}}\) operator with zero eigenvalue. Such a geometric phase may arise as a result of adiabatic rotation of the applied magnetic field such that it completes half cycle (one way journey between two antipodal points assuming the tip of the magnetic field to lay on a sphere) and not a full cycle. Note that, even though the eigenstate of \(\hat{S_{z}}\) operator with zero eigenvalue does not have a direct coupling to the external magnetic field, it nevertheless responds to its adiabatic evolution via the appearance of this geometric phase. It can be understood as follows -rotating the magnetic field by half cycle reorganizes all states in the Hilbert space except this one. This reorganization of states leads to the geometric phase factor of \((-1)^{S}\). 
For \(S=1\), the phase factor is \(-1\), which corresponds to \(\pi\) Berry phase. For \(S=1\), the condition \(\langle\psi|\vec{S}|\psi\rangle=0\) defines a set of states with no magnetic moment and a finite quadrupole moment. We refer to this set as the set of quadrupolar states which include the state with zero eigenvalue for the \(S_{z}\) operator. In this sense, the observation of \(\pi\) Berry phase in Ref.[9] for \(S=1\) is an early example of quadrupolar Berry phase. We show that there exists a local gauge choice in which the quadrupolar states form a vector space defined over the field of real numbers. As geometric phase is independent of the choice of the local gauge, cyclic evolution in the quadrupolar subspace can lead to geometric phase which is quantized to zero or \(\pi\) (as the only nontrivial phase comes from multiplying a state by \(-1\) owing to reality constraints). In this article, we employ Majorana stellar representation (MSR) to arrive at a geometric visualization of the topological properties of the quadrupolar subspace. MSR is an old idea due to Ettore Majorana [10] which has gained prominence recently as a tool to understand geometric properties of finite-dimensional Hilbert spaces [11; 12; 13; 14; 15; 16; 17; 18; 19; 20] and their connection to entanglement properties of permutation symmetric states of spin half system. The original idea of Majorana has also been extended to permutation symmetric states of higher spin systems[21]. This article is organized as following. In section II, we discuss a possible way to construct subspace of a Hilbert space which supports only zero or \(\pi\) Berry phase and explore its connection to anti-unitary symmetries. In section III, we introduce the quadrupolar subspace of a qutrit (\(S=1\)) and identify a global unitary map between the quadrupolar and real subspaces of the qutrit hence justifying the topological nature of the former. We also arrive at a connection between the quadrupolar subspace and the time-reversal symmetry. In section IV, we show that the exchange of Majorana stars leads to quantized Berry phase of \(\pi\) for quadrupolar subspace. We discuss implications drawn from Majorana stars representations of eigenstates of quadrupolar Hamiltonians. We also discuss how the picture of exchange of Majorana stars evolves as we interpolate between the quadrupolar subspace and the real subspace via a family of global unitary transformations. In section V, we explore the influence of static Hamiltonian which keeps the states within quadrupolar subspace under time- evolution and discuss the geometric phase (of the Aharonov-Anandan type) accumulated by the quadrupolar state under cyclic time evolution. In section VI, we summarize our findings. ## II Topological subspace A subspace of a Hilbert space is considered to be topological if it allows only two values for the geometric phase (0 and \(\pi\)) for any closed loop in the corresponding ray space. It may not be easy to determine whether a given subspace is topological without exhaustive explicit calculations. However there is one well-known topological subspace for any quantum system - the set of all real states (in some chosen basis) [22]. It is obvious that the phase accumulated by a state in the real subspace of states, due to closed loop evolution in the corresponding ray space can only be 0 or \(\pi\) (the state getting multiplied by \(-1\)) as all the states along the loop are real. Hence the real subspace qualifies as a topological subspace. 
To explicitly verify this intuitive observation, let us consider the \(N^{th}\) order Bargmann invariant[23] defined as the cyclic inner product of states from this subspace given by \(\langle\psi_{0}|\psi_{1}\rangle\langle\psi_{1}|\psi_{2}\rangle\cdots\langle \psi_{N-1}|\psi_{0}\rangle\), where \(|\psi_{i}\rangle\) for \(i=0,1,..,N-1\) are \(N\) distinct states with no two successive states being orthogonal in the sequence of states. In general this is a complex number whose argument is the geometric phase accumulated by the state corresponding to this closed loop projections. But for this subspace, each of the terms of the product are real numbers hence resulting in the geometric phase being either 0 or \(\pi\) when the product is positive or negative respectively. It is straightforward to construct examples with 0 phase - we choose each state to have either all positive or all negative projections on a orthonormal basis. Because each state appears twice in the product (as ket and bra) the cyclic product will be positive. We now present one method to construct a negative cyclic product. For any two states let us consider the geodesic loop passing through them [24]. If the two states are chosen to be from the real subspace then all the states in the geodesic loop will also be real. In fact the geodesic is the one parameter subspace (over real field) spanned by the two chosen states. It is straightforward to show that the cyclic inner product of \(N\) equidistant (with respect to the parametrization angle) states on a geodesic parametrized by an angle in \([0,\pi)\) is \[\langle\psi_{0}|\psi_{1}\rangle\langle\psi_{1}|\psi_{2}\rangle \cdots\langle\psi_{N-1}|\psi_{0}\rangle=\left(\cos\frac{\pi}{N}\right)^{N-1} \cos\left(\pi-\frac{\pi}{N}\right), \tag{1}\] which is real and negative. The above construction is a general one and true for any loop comprising a shorter and longer geodesic connecting two distinct nonorthogonal state. Its relevance here is just because the geodesic loop passing through two chosen real states lies within the real subspace. This property has been used here to provide an explicit construction of loops within the real subspace with \(\pi\) geometric phase. From the sets of all real states and corresponding real Hamiltonians ( which maps the set of real states back to itself ), one can generate a continuous family of sets with exactly same geometric and topological properties by applying family of global unitary transformations on them. We will show that the quadrupolar subspace discussed here is one such example of a subspace which is connected to the real subspace by a global unitary transformation. ### Invariance of states of a topological subspace under an anti-unitary operator There is another way to characterize such topological subspaces without explicitly referring to the mapping to real subspace. States from the real subspace are invariant under the anti-unitary operation of complex conjugation. In fact this invariance can be taken to be the defining property of the real subspace[25]. We now study such invariances in other subspaces which are unitarily connected to the real subspace. 
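Before doing so, the geodesic construction above is easy to verify numerically. The sketch below is a minimal illustration, assuming only numpy (the helper name `bargmann_phase` is our own); it samples \(N\) equidistant real states on a geodesic loop and confirms Eq. (1): the cyclic Bargmann product is negative, so the loop carries a geometric phase of \(\pi\).

```python
import numpy as np

def bargmann_phase(states):
    """Cyclic Bargmann product of a closed sequence of states and its argument."""
    prod = 1.0 + 0.0j
    n = len(states)
    for k in range(n):
        prod *= np.vdot(states[k], states[(k + 1) % n])
    return prod, np.angle(prod)

# N equidistant real states on the geodesic spanned by two orthonormal real vectors
N = 7
e0, e1 = np.eye(3)[0], np.eye(3)[1]
states = [np.cos(k * np.pi / N) * e0 + np.sin(k * np.pi / N) * e1 for k in range(N)]

prod, phase = bargmann_phase(states)
predicted = np.cos(np.pi / N) ** (N - 1) * np.cos(np.pi - np.pi / N)
print(prod.real, predicted)   # the two numbers agree, and both are negative
print(phase)                  # pi: the quantized geometric phase of the loop
```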
Let us denote the complex conjugation by \(\mathcal{K}\) and consider a arbitrary state \(|\psi\{x_{1},x_{2},\cdots x_{n}\}\rangle_{R}\) from the largest real subspace of a finite dimensional Hilbert of dimension \(n\) where subscript \(R\) stands for the real subspace and the set \(\{x_{1},x_{2},\cdots x_{n}\}\) represents the \(n\) real parameters required to parametrize the state such that, \[\mathcal{K}|\psi\rangle_{R}=|\psi\rangle_{R}. \tag{2}\] Let us now apply an arbitrary unitary transformation \(U\) on both sides \[U|\psi\rangle_{R}=U\mathcal{K}|\psi\rangle_{R}=\mathcal{K}U^{*}| \psi\rangle_{R} =\mathcal{K}(U^{*}U^{\dagger})U|\psi\rangle_{R}\] \[=\Theta U|\psi\rangle_{R}, \tag{3}\] where \(\Theta=\mathcal{K}(U^{*}U^{\dagger})\) is another anti-unitary operator and \(U|\psi\rangle_{R}\) is invariant under it. Thus we see that all topological subspaces unitarily related to the real one are made up of states that are invariant under an anti-unitary operator. It is important to note that the state \(U|\psi\{x_{1},x_{2},\cdots x_{n}\}\rangle_{R}\) is in general defined up to a \(U(1)\) phase which can be different for different values of \(x_{1},x_{2},\cdots x_{n}\) and hence it will be difficult to check this invariance when the states from the subspace are expressed in a arbitrary \(U(1)\) gauge. One way to get around it is to check if it is possible to make a new \(U(1)\) gauge choice for the subspace under consideration, such that the subspace in this new gauge choice forms a vector space over the field of real numbers. Now let us note some properties of the anti-unitary operator \(\Theta\) - \[\Theta^{2}=\mathcal{K}(U^{*}U^{\dagger})\mathcal{K}(U^{*}U^{\dagger})=\mathcal{K }U^{*}U^{\dagger}UU^{T}\mathcal{K}=1. \tag{4}\] We also note that our construction of anti-unitary operator involves symmetric unitary operators \[(U^{*}U^{\dagger})^{T}=(U^{\dagger})^{T}(U^{*})^{T}=U^{*}U^{\dagger}.\] Since any symmetric unitary operator can be written as \(U^{*}U^{\dagger}\) we have in fact obtained a general prescription to define the topological subspaces unitarily related to the real one. Such a subspace consists of states that are invariant under the action of an anti-unitary operator made up of a symmetric unitary operator followed by complex conjugation. We note here that the invariance under an anti-unitary operator discussed above is completely equivalent to the real mapping and in fact the same unitary operator is used in the construction of the anti-unitary operator characterizing the subspace. As mentioned before, there is a continuous family of topological subspaces which are related to each other and the real subspace by global unitary transformations. The quadrupolar subspace is just one of them for the three-level system. We next discuss the quadrupolar subspace and its characterization using Majorana stellar representation. ## III Quadrupolar subspace We now specialize the general ideas of the previous section to the physically relevant case of the quadrupolar subspace of a three-level or spin-1 system (qutrit). This subspace is defined as the set of states with zero magnetization in all directions \[\langle\psi|\vec{S}|\psi\rangle=0. \tag{5}\] The topological nature of the quadrupolar subspace can be established by realizing that the entire subspace can be made real by a global unitary transformation. 
To show this let us start from the most general three-level state \[|\psi\rangle=\begin{bmatrix}\alpha\\ \beta\\ \gamma\end{bmatrix} \tag{6}\] Upon imposing Eq.5 and choosing \(\beta\) to be real and positive (which can be thought of a freedom of choosing an overall phase for the state, which do not change the physical state), we get the normalized quadrupolar states as \[|\psi_{q}\rangle=\begin{bmatrix}\alpha\\ \beta\\ -\alpha^{*}\end{bmatrix}\quad\text{where,}\ \ 2|\alpha|^{2}+\beta^{2}=1\] With this parametrization it is staightforward to see that the quadrupolar states form a vector space over the field of real numbers. We may also choose a Bloch sphere like parametrization as follows \[|\psi_{q}\rangle=\begin{bmatrix}\frac{1}{\sqrt{2}}e^{i\phi}\sin(\theta/2)\\ \cos(\theta/2)\\ \frac{-1}{\sqrt{2}}e^{-i\phi}\sin(\theta/2)\end{bmatrix}\] where \(\theta\in[0,\pi]\) and \(\phi\in[0,2\pi)\). With this parametrization each point on the surface of a sphere corresponds to a quadrupolar state. Now consider the effect of the following unitary transformation \[U=\frac{1}{\sqrt{2}}\begin{bmatrix}i&0&i\\ 0&\sqrt{2}&0\\ 1&0&-1\end{bmatrix} \tag{7}\] on the quadrupolar states, \[U|\psi_{q}\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}i(\alpha-\alpha^{*})\\ \sqrt{2}\beta\\ (\alpha+\alpha^{*})\end{bmatrix},\] which makes the quadrupolar states real. Same can be verified with the Bloch sphere like parametrization. Since all the states have become real by a single unitary transformation, the quadrupolar subspace has the same geometric and topological properties as the real subspace which is topological. Let us also construct the anti-unitary operator that leaves states from quadrupolar subspace invariant \[\Theta=\mathcal{K}U_{q}^{*}U_{q}^{\dagger}=\mathcal{K}\begin{bmatrix}0&0&-1\\ 0&1&0\\ -1&0&0\end{bmatrix}\] where \(U_{q}\) is the unitary operator that maps real subspace to the quadrupolar one, i.e. inverse of Eq.7. We can explicitly check the invariance of the quadrupolar states under this anti-unitary operator as \[\Theta\ket{\psi_{q}}=\mathcal{K}\begin{bmatrix}0&0&-1\\ 0&1&0\\ -1&0&0\end{bmatrix}\begin{bmatrix}\alpha\\ \beta\\ -\alpha^{*}\end{bmatrix}=\mathcal{K}\begin{bmatrix}\alpha^{*}\\ \beta\\ -\alpha\end{bmatrix}=|\psi_{q}\rangle\] ### Time reversal symmetry In quantum theory, complex conjugation is intimately related to the time-reversal operation and indeed there is another way to obtain the above anti-unitary operator for the quadrupolar subspace. A spin-1 system can be considered as the triplet sector of two spin-1/2 systems. The time-reversal operator for two spin-1/2 systems, \(\mathcal{T}_{2\times 2}=\mathcal{K}(i\sigma_{y}\otimes i\sigma_{y})\), upon projecting to the triplet sector gives \[\mathcal{T}_{2\times 2}^{\text{tr}}=\mathcal{PT}_{2\times 2}\mathcal{P}^{ \dagger}=\mathcal{K}\begin{bmatrix}0&0&1\\ 0&-1&0\\ 1&0&0\end{bmatrix}\] where \(\mathcal{P}\) is the projection operator from the \(2\otimes 2\) Hilbert space to the triplet sector. Thus the anti-unitary operator of our construction is the exact negative of the time-reversal operator in the triplet sector. Hence the anti-unitary operator that leaves states from the quadrupolar subspace invariant is just time-reversal. We now explicitly show that this is in fact the defining property of quadrupolar subspace. Acting the time-reversal operator on a general state Eq. 
6 from the triplet sector we get \[\mathcal{T}^{\mathrm{tr}}_{2\times 2}|\psi\rangle=\mathcal{K}\begin{bmatrix} \gamma\\ -\beta\\ \alpha\end{bmatrix}=\begin{bmatrix}-\gamma^{*}\\ \beta^{*}\\ -\alpha^{*}\end{bmatrix}.\] Demanding \(\mathcal{T}^{\mathrm{tr}}_{2\times 2}|\psi\rangle=-|\psi\rangle\) results in the exact conditions defining the quadrupolar subspace i.e. \(\alpha=-\gamma^{*}\) and \(\beta=\beta^{*}\). With this we also note that in general, the real space does not have time-reversal symmetry as by defining property they are invariant under complex-conjugation. ## IV Majorana stellar representation We present a geometric way to study the topological nature of the quadrupolar subspace via the MSR. In MSR the state of a \(d\)-dimensional system is represented by \(d-1\) points or stars on a Bloch sphere. These stars are identical in the sense that interchanging them does not change the state they correspond to. The indistinguishability of stars gives another physical meaning to this representation - these \(d\)-level states are completely symmetrized states of \((d-1)\) spin-\(1/2\) systems[26]. These symmetrised states are nothing but a spin-\((d-1)/2\) state. Using this symmetrization, \(d\)-dimensional Hilbert space is mapped to a polynomial vector space of degree \((d-1)\) over complex field. A generic state of spin-\(S=(d-1)/2\) is \(|\psi_{S}\rangle=\sum_{m=-S}^{S}c_{m}|S,m\rangle\) mapped to the Majorana polynomial(MP) \(\sum_{r=0}^{2S}a_{r}x^{2S-r}\) with[10] \[a_{r}=(-1)^{r}\frac{c_{s-r}}{\sqrt{(2S-r)!r!}} \tag{8}\] The MP has \((d-1)\) roots, which can be written as \(\tan\frac{\theta_{k}}{2}e^{i\phi_{k}}\) with \(k\in\{1,2,...,d-1\}\). Then the spherical coordinates \((\theta_{k},\phi_{k})\) gives the unit vector \(\hat{u}_{k}\), the location of MSs on the Bloch sphere. The MSR of quadrupolar states turns out to be particularly simple--a pair of stars that are antipodal(see appendix A). It is easy to understand the MSR for the quadrupolar states--as the quadrupolar states have zero magnetization they must correspond to two spin-\(1/2\) particles with opposite magnetization and hence must have antipodal stars. Many properties of the quadrupolar subspace can be immediately demonstrated in the MSR, such as time-reversal symmetry of the states. Because these states have zero magnetization they possess time-reversal symmetry which is manifest in the MSR, since the time-reversal operation in MSR amounts to taking every star to its antipodal location. Another non-trivial property of this subspace is that the entanglement between two constituent spin-\(1/2\) particles is maximum. Again this is simple to see in the MSR as all states are formed by the symmetrized linear combination of states which are antipodal (and hence orthogonal) and as a result are of the form \((|01\rangle+|10\rangle)/\sqrt{2}\) in suitable basis, one of the Bell states having maximum entanglement. ### Berry phase in MSR Understanding geometric phase in terms of MS is a well studied topic [12; 14]. 
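As a quick check of the preceding section, the defining properties of the quadrupolar subspace are easy to verify numerically. The sketch below (assuming only numpy; the variable names are ours) draws random quadrupolar states \((\alpha,\beta,-\alpha^{*})\) and checks that they have zero spin expectation (Eq. 5), that the unitary of Eq. 7 maps them to real vectors, that they are invariant under the anti-unitary operator \(\Theta\), and that the triplet-sector time-reversal operator sends them to minus themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
s2 = np.sqrt(2.0)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / s2
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / s2
Sz = np.diag([1.0, 0.0, -1.0])
U = np.array([[1j, 0, 1j], [0, s2, 0], [1, 0, -1]]) / s2          # Eq. 7
M_theta = np.array([[0, 0, -1], [0, 1, 0], [-1, 0, 0]])           # Theta = K followed by this matrix
M_T = np.array([[0, 0, 1], [0, -1, 0], [1, 0, 0]])                # triplet-sector time reversal = K * M_T

for _ in range(5):
    alpha = rng.normal() + 1j * rng.normal()
    beta = rng.normal()
    psi = np.array([alpha, beta, -np.conj(alpha)])
    psi /= np.linalg.norm(psi)
    spin = [np.vdot(psi, S @ psi) for S in (Sx, Sy, Sz)]
    assert np.allclose(spin, 0)                      # zero magnetization, Eq. 5
    assert np.allclose(np.imag(U @ psi), 0)          # U maps the state to a real vector
    assert np.allclose(M_theta @ np.conj(psi), psi)  # invariance under Theta
    assert np.allclose(M_T @ np.conj(psi), -psi)     # time reversal gives -|psi>
print("all quadrupolar-subspace checks passed")
```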
When a cyclic change of states results in individual closed-loop trajectories for each of the MSs representing the state, the corresponding Berry phase for the state is given by[14] \[\gamma^{(d)}=\gamma_{0}^{(d)}+\gamma_{C}^{(d)}\, \tag{9}\] which is a sum of two contributions - one part is \(\gamma_{0}^{(d)}=-\sum_{i=1}^{d-1}\Omega_{i}/2\), the sum of the solid angles subtended by each star (modified by a \(\pm\) sign depending on whether the loop points towards or away from the origin in the right-hand screw rule sense) and the other part is \[\gamma_{C}^{(d)}=\frac{1}{2}\oint\sum_{i=1}^{d-1}\sum_{j(>i)}^{d-1}\beta_{ij} \Omega\left(\mathrm{d}\hat{u}_{ij}\right) \tag{10}\] Here, \(\beta_{ij}\) is the correlation factor given by \[\beta_{ij}(\mathbf{D})\equiv-\frac{d_{ij}}{N_{d-1}^{2}(\mathbf{D})}\frac{\partial N_{ d-1}^{2}(\mathbf{D})}{\partial d_{ij}} \tag{11}\] with \(\mathbf{D}=\{d_{ij}\},\ i<j\); \(d_{ij}=1-\hat{u}_{i}.\hat{u}_{j}\) and \(N_{d-1}^{2}(\mathbf{D})\) is the normalization coefficient of the state \(|\psi_{S}\rangle\) written in terms of the \(\hat{u}_{i}\)'s. The term \(\Omega\left(\mathrm{d}\hat{u}_{ij}\right)\equiv\hat{u}_{i}\times\hat{u}_{j} \cdot d\left(\hat{u}_{j}-\hat{u}_{i}\right)/d_{ij}\) is the sum of solid angles of the infinitesimally thin triangles \((\hat{u}_{i},-\hat{u}_{j},-\hat{u}_{j}-d\hat{u}_{j})\) and \((\hat{u}_{j},-\hat{u}_{i},-\hat{u}_{i}-d\hat{u}_{i})\). It can be interpreted as the solid angle due to the relative motion between each pair of stars and their absolute evolution [14]. This prescription works when the cyclic evolution of the state gives rise to a cyclic evolution of each of the individual stars.

Figure 1: MSR of states from the quadrupolar subspace.

As mentioned earlier, since the stars are identical it is possible that the cyclic evolution of the state may result in a permutation of a few of the stars[18], in which case these stars will not complete a closed loop individually. Hence, the case when the cyclic evolution of the state results in a permutation alone (i.e. no individual cyclic loops for any star) corresponds to a possibility which is beyond the discussion presented in Ref. [14]. If the permutation is such that there is effectively just one loop, the Berry phase is the solid angle of this loop. We will see below that the quadrupolar case of interest to us has this property.

### MSR of quadrupolar subspace

The fact that quadrupolar states are represented by antipodal stars implies that we need to keep track of only one of the stars, as it immediately tells us the location of the other. This is a very convenient situation geometrically, as the complexity of visualizing this subspace is the same as that of a spin-1/2 system, namely everything happens on the surface of a Bloch sphere and a state is represented by a point. But the topology of the two problems is quite distinct, as discussed in detail in Ref. [9]. In fact this geometric representation using the MSR clearly shows that the space of quadrupolar states is the Bloch sphere with antipodal points identified, hence providing yet another route to the conclusion that the topology of the parameter space for the quadrupolar subspace is the real projective plane \((RP^{2})\)[5]. Now we will show that the MSR provides an elegant way to visualize the quantization of the Berry phase to zero or \(\pi\) for quadrupolar states.
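Before doing so, the antipodal-star structure itself can be checked directly from Eq. 8. The short sketch below (assuming numpy; the helper functions are ours) builds the Majorana polynomial of random quadrupolar states, solves for its two roots \(\tan\frac{\theta_{k}}{2}e^{i\phi_{k}}\), and verifies that the corresponding unit vectors on the Bloch sphere are antipodal.

```python
import numpy as np

def majorana_stars_spin1(c):
    """Roots of the Majorana polynomial of a spin-1 state c = (c_1, c_0, c_-1), Eq. 8."""
    return np.roots([c[0] / np.sqrt(2.0), -c[1], c[2] / np.sqrt(2.0)])

def root_to_unit_vector(x):
    """Map a root x = tan(theta/2) e^{i phi} to the corresponding point on the Bloch sphere."""
    theta, phi = 2.0 * np.arctan(np.abs(x)), np.angle(x)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

rng = np.random.default_rng(1)
for _ in range(5):
    alpha = rng.normal() + 1j * rng.normal()
    beta = rng.normal()
    psi = np.array([alpha, beta, -np.conj(alpha)])
    psi /= np.linalg.norm(psi)
    u1, u2 = (root_to_unit_vector(x) for x in majorana_stars_spin1(psi))
    assert np.allclose(u1, -u2)    # the two Majorana stars are antipodal
print("antipodal stars confirmed for random quadrupolar states")
```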
We start by noting that the closed loops in the ray space of the quadrupolar subspace can be of two types - ones where the two stars individually make loops and others where they get exchanged. Because the two stars are always antipodal there is no relative motion between them and hence there is no correlation contribution (\(\gamma_{C}^{(d)}=0\)) to the Berry phase, i.e., \(\gamma^{(d)}=\gamma_{0}^{(d)}\). In fact the correlation contribution for the real subspace is also zero (see appendix D), but unlike the quadrupolar case it is not easy to visualize geometrically. In the first case the two stars make individual loops and their contributions exactly cancel each other [a simple example of such a situation is shown in Fig.1(a)], i.e., \(\Omega_{1}=-\Omega_{2}\), resulting in zero Berry phase. In the cases when the stars exchange their locations [see e.g. Fig.1(b)] the combined trajectory subtends a solid angle of exactly \(2\pi\) at the center, giving rise to a Berry phase of \(\pi\), i.e., \(\gamma_{0}^{(d)}=\pi\). Note that the contributions of the individual trajectories of the Majorana stars to \(\gamma_{0}^{(d)}\) do not by themselves correspond to any gauge-invariant geometric phase, and hence \(\gamma_{0}^{(d)}\) cannot be expressed as a sum of solid angles (see appendix C). It is not difficult to see that one of these two scenarios will hold for all possible loops, and since we have exhausted all the possibilities for the quadrupolar subspace, we can conclude that this subspace is indeed topological. With this understanding of quadrupolar states in terms of MSs, we can make interesting predictions about the topological properties of the eigenstates of a quadrupolar Hamiltonian given by \[H=\sum_{ij}\alpha_{ij}Q_{ij}, \tag{12}\] where \(Q_{ij}\) are components of the quadrupole moment tensor operator[27] expressed as \[Q_{ij}=\frac{1}{2}(S_{i}S_{j}+S_{j}S_{i})-\frac{1}{3}S^{2}\delta_{ij}, \tag{13}\] and \(S_{i}\) represents the component of the spin operator along the \(i^{th}\) direction, where \(i=x,y,z\). As \(Q\) is a traceless symmetric tensor, \(Q_{ij}=Q_{ji}\) and the diagonal elements sum to zero (\(Q_{xx}+Q_{yy}+Q_{zz}=0\)). Owing to this fact, only five components of the quadrupole moment tensor operator are linearly independent, and conventionally they are taken to be \(Q_{xy},Q_{yz},Q_{xz}\), \(Q_{zz}\) and \(Q_{xx}-Q_{yy}\). It is straightforward to check that the eigenstates of a quadrupolar Hamiltonian given in Eq. 12 belong to the quadrupolar subspace of states defined via Eq. 5. Hence the set of orthonormal quadrupolar eigenstates of such a Hamiltonian will occupy the six end points of a Cartesian coordinate system on the Bloch sphere in MSR, as shown in Fig.2.

Figure 2: Geometry of MSs for quadrupolar eigenstates - the three orthogonal states are represented by three pairs of MSs along three mutually perpendicular lines in three different colors.

This result provides a simple geometric way to understand some topological properties of the system, such as the fact that it is not possible to have a single eigenstate which, under adiabatic and cyclic time evolution of the Hamiltonian, leads to a geometric phase of \(\pi\). We will call such eigenstates topological eigenstates. In general a cyclic evolution of the quadrupolar Hamiltonian will correspond to a continuous rigid rotation of the six stars on the Bloch sphere. All eigenstates being topological requires every pair of antipodal MSs to get exchanged, which is equivalent to the inversion operation in three dimensions, which cannot be obtained by any rotation. This implies that all three eigenstates cannot be simultaneously topological. Also, a continuous rigid rotation can lead to the exchange of a pair of MSs, and hence topological eigenstates can only appear in pairs. Thus for a cyclic evolution of quadrupolar Hamiltonians we will either have no topological eigenstates (trivial) or a pair of topological states (non-trivial).

### Mapping from quadrupolar to real subspace

It is clear from the discussion above that the Berry phase for the quadrupolar subspace and the real subspace, evaluated using the MSR expression in Eq. 9, shares the common feature that in both cases the correlation term vanishes, i.e., \(\gamma_{C}^{(d)}=0\). Another commonality between the two is that, for the case of zero Berry phase, the trajectories of the two MSs individually form closed loops such that \(\gamma_{0}^{(d)}=-\Omega_{1}/2-\Omega_{2}/2=0\), which implies that \(\Omega_{1}=-\Omega_{2}\). The crucial differences lie in the case of the Berry phase being \(\pi\). In the case of the quadrupolar subspace, this Berry phase is induced by MS trajectories which lead to their exchange, the two MSs collectively forming a closed loop, while in the case of the real subspace, in general, each MS trajectory individually forms a closed loop such that the sum \(\gamma_{0}^{(d)}=-\Omega_{1}/2-\Omega_{2}/2\) is constrained to \(\pm\pi\). Hence we are left with the possibility that a pair of closed-loop trajectories traced out by the MSs, subject to \(\gamma_{0}^{(d)}=\pm\pi\) in the real subspace, will evolve into a single exchange trajectory in the quadrupolar subspace under the action of a continuous unitary transformation. We construct unitary transformations parameterized by a parameter \(\alpha\) which can be continuously varied such that we start from the quadrupolar space and reach the real space. The unitary matrix can be parameterized as follows, \[U(\alpha)=e^{i\theta_{1}\alpha}P_{1}+e^{i\theta_{2}\alpha}P_{2}+e^{i\theta_{3} \alpha}P_{3} \tag{14}\] where the \(e^{i\theta_{i}}\)'s are the eigenvalues of the unitary in Eq. 7 and the \(P_{i}\)'s are the projectors onto the corresponding eigenspaces, \[P_{1}=\begin{pmatrix}\frac{3+\sqrt{3}}{6}&0&\frac{1-i}{2\sqrt{3}}\\ 0&0&0\\ \frac{1+i}{2\sqrt{3}}&0&\frac{3-\sqrt{3}}{6}\end{pmatrix},\quad P_{2}=\begin{pmatrix} \frac{3-\sqrt{3}}{6}&0&-\frac{1-i}{2\sqrt{3}}\\ 0&0&0\\ -\frac{1+i}{2\sqrt{3}}&0&\frac{3+\sqrt{3}}{6}\end{pmatrix},\quad P_{3}=\begin{pmatrix}0&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix} \tag{15}\] The unitary varies from the identity matrix to the unitary given in Eq.7 as \(\alpha\) changes from 0 to 1. Applying this parameterized unitary to the MS trajectories of quadrupolar states, we show the gradual evolution of the loops, as seen in Fig. 3, and observe that the final transition into two loops occurs at \(\alpha=1\).

Figure 3: Trajectories of MSs corresponding to an eigenstate of the Hamiltonian \(H=U^{\dagger}(\alpha)(\cos\theta Q_{x^{2}-y^{2}}+\sin\theta Q_{xy})U(\alpha)\) (associated with the eigenvalue \(\sqrt{5+3\cos(2\theta)}/2\sqrt{2}\)) as the parameter \(\alpha\) of the unitary operator is changed from 0 to 1 (quadrupolar space to real space), shown in alphabetical order. The values of the parameter shown are \(\alpha\) = 0, 0.2, 0.5, 0.75, 0.9, 0.99, 0.999, 1, in the same order as in the diagram.
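The two ingredients used in this subsection, the quadrupole operators of Eq. 13 and the interpolating unitary \(U(\alpha)\) of Eq. 14, can be constructed explicitly. The sketch below (assuming numpy; the Hamiltonian and the value of \(\theta\) are only illustrative) checks that the eigenstates of the quadrupolar Hamiltonian of Fig. 3 satisfy Eq. 5, that its largest eigenvalue is \(\sqrt{5+3\cos(2\theta)}/2\sqrt{2}\), and that \(U(\alpha)\) built from the spectral decomposition of the unitary in Eq. 7 interpolates between the identity and that unitary (the branch choice for the angles \(\theta_{i}\) only matters away from the endpoints).

```python
import numpy as np

s2 = np.sqrt(2.0)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / s2
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / s2
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
S = [Sx, Sy, Sz]

def Q(i, j):
    """Quadrupole operator Q_ij of Eq. 13 (S^2 = 2 for spin 1)."""
    return 0.5 * (S[i] @ S[j] + S[j] @ S[i]) - (2.0 / 3.0) * np.eye(3) * (i == j)

theta = 0.7
H = np.cos(theta) * (Q(0, 0) - Q(1, 1)) + np.sin(theta) * Q(0, 1)   # Hamiltonian of Fig. 3
evals, evecs = np.linalg.eigh(H)
assert np.isclose(evals[-1], np.sqrt(5 + 3 * np.cos(2 * theta)) / (2 * s2))
for v in evecs.T:                       # every eigenstate is quadrupolar (Eq. 5)
    assert np.allclose([np.vdot(v, Si @ v) for Si in S], 0)

U = np.array([[1j, 0, 1j], [0, s2, 0], [1, 0, -1]]) / s2            # Eq. 7
lam, P = np.linalg.eig(U)               # distinct eigenvalues of a unitary: orthonormal eigenvectors

def U_alpha(a):
    """U(alpha) = sum_i lambda_i^alpha P_i, Eq. 14."""
    return sum(lam[i] ** a * np.outer(P[:, i], P[:, i].conj()) for i in range(3))

assert np.allclose(U_alpha(0.0), np.eye(3)) and np.allclose(U_alpha(1.0), U)
```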
## V Quadrupolar Subspace Dynamics We now address the question of dynamics within the quadrupolar subspace beginning with the question that if we start with a quadrupolar state what Hamiltonians will keep the system in such a state throughout its dynamics. The answer is completely straightforward with the help of the mapping we have identified between quadrupolar and real subspaces. The analogous question in the real subspace becomes - "if we start with a real state what Hamiltonians will keep it real?" From the time-dependent Schrodinger equation it is easy to see that all purely imaginary Hamiltonians have this property. For such Hamiltonians the time-dependent Schrodinger equation becomes a set of coupled linear "real" differential equations and if the initial state is real, then the subsequent state of the system remains real always. With the help of the mapping given in Eq. 7, the above answer gets translated to all spin Hamiltonians, i.e., Hamiltonians with no quadrupolar term. Physically this result is slightly nontrivial: while the eigenstates of a quadrupolar Hamiltonians can be chosen to be quadrupolar and would not evolve non-trivially in time, a general quadrupolar state which is not an eigenstate will become non-quadrupolar upon dynamics with a general quadrupolar Hamiltonian. In other words, the quadrupolar Hamiltonians do not preserve the quadrupolar nature of states during time-evolution. On the other hand with any pure spin Hamiltonian, all quadrupolar states will remain quadrupolar throughout their dynamics. This result is consistent with the recent findings that pure spin Hamiltonians generate rigid rotation of the Majorana sphere [17]. As a result a pair of antipodal stars will remain so throughout the dynamics with such Hamiltonians. We next consider the issue of geometric phase accumulated during cyclic evolution of quadrupolar states due to application of static pure spin Hamiltonians discussed above in the spirit of Aharonov-Anandan type scenario [28]. For a purely spin Hamiltonian, a quadrupolar state remains quadrupolar during dynamics. If the Hamiltonian generates a periodic orbit, the total phase accumulated can only be \(0\) or \(\pi\). Also, by definition the mean energy of pure spin Hamiltonians for any state in quadrupolar subspace is zero (Eq.5). Hence there is no dynamical phase contribution and the total phase is infact the geometric phase. Again as before it is easier to answer this question for the real subspace and then translate the result to the quadrupolar one using the global change of basis. The three spin operators (see appendix B) upon transformation into real space using Eq.7 become \[S_{x}^{\prime}=\begin{bmatrix}0&i&0\\ -i&0&0\\ 0&0&0\end{bmatrix},S_{y}^{\prime}=\begin{bmatrix}0&0&0\\ 0&0&-i\\ 0&i&0\end{bmatrix},S_{z}^{\prime}=\begin{bmatrix}0&0&-i\\ 0&0&0\\ i&0&0\end{bmatrix}\, \tag{16}\] and we write the most general purely imaginary Hamiltonian as \[H=-aS_{x}^{\prime}+bS_{y}^{\prime}+cS_{z}^{\prime}=\begin{bmatrix}0&-ia&-ic \\ ia&0&-ib\\ ic&ib&0\end{bmatrix}. \tag{17}\] The eigenvalues of this Hamiltonian are \(0,\pm\omega\), where \(\omega=\sqrt{a^{2}+b^{2}+c^{2}}\). 
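Before writing out the explicit solution, these statements can be checked numerically. The sketch below (assuming numpy; the numerical values of \(a,b,c\) and of the initial state are only illustrative) evolves a real initial state with the Hamiltonian of Eq. 17 and verifies that it stays real at all times, returns to itself after one period \(T=2\pi/\omega\), and reaches \(-|\psi(0)\rangle\) at \(T/2\) (a geometric phase of \(\pi\)) when the initial condition satisfies \(at_{0}=-br_{0}+cs_{0}\), the condition derived below.

```python
import numpy as np

a, b, c = 0.8, -0.5, 1.1
omega = np.sqrt(a * a + b * b + c * c)
H = np.array([[0, -1j * a, -1j * c],
              [1j * a, 0, -1j * b],
              [1j * c, 1j * b, 0]])        # Eq. 17, a purely imaginary Hermitian matrix
E, V = np.linalg.eigh(H)

def evolve(psi0, t):
    """|psi(t)> = exp(-i H t) |psi(0)> via the spectral decomposition of H."""
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

r0, s0 = 0.3, 0.7
t0 = (-b * r0 + c * s0) / a                # the special initial condition
psi0 = np.array([r0, s0, t0]) / np.linalg.norm([r0, s0, t0])

T = 2 * np.pi / omega
assert all(np.allclose(evolve(psi0, t).imag, 0) for t in np.linspace(0, T, 50))
assert np.allclose(evolve(psi0, T), psi0)          # the orbit is periodic with period T
assert np.allclose(evolve(psi0, T / 2), -psi0)     # pi geometric phase halfway through
print("real evolution, period T, and a quantized pi phase at T/2 confirmed")
```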
The initial real state \(\left|\psi(0)\right\rangle=\begin{bmatrix}r_{0},s_{0},t_{0}\end{bmatrix}^{ \mathrm{T}}\) at time \(\tau\) becomes \[\left|\psi(\tau)\right\rangle=\frac{1}{\omega^{2}}\begin{bmatrix}abt_{0}+b^{ 2}r_{0}-bcs_{0}-\omega\left(as_{0}+ct_{0}\right)\sin\left(\omega\tau\right)+ \left(a^{2}r_{0}-abt_{0}+bcs_{0}+c^{2}r_{0}\right)\cos\left(\omega\tau\right) \\ -act_{0}-bcr_{0}+c^{2}s_{0}+\omega\left(ar_{0}-bt_{0}\right)\sin\left(\omega \tau\right)+\left(a^{2}s_{0}+act_{0}+b^{2}s_{0}+bcr_{0}\right)\cos\left( \omega\tau\right)\\ a^{2}t_{0}+abr_{0}-acs_{0}+\omega\left(bs_{0}+cr_{0}\right)\sin\left(\omega \tau\right)+\left(-abr_{0}+acs_{0}+b^{2}t_{0}+c^{2}t_{0}\right)\cos\left( \omega\tau\right)\end{bmatrix}\, \tag{18}\] which remains real and oscillates with frequency \(\omega\). Any initial state comes back to itself at \(T=2\pi/\omega\), and the question of a quantized geometric phase of \(\pi\) boils down to whether it becomes the exact negative of the initial state at some intermediate time. To answer this question we measure the distance of \(\left|\psi(\tau)\right\rangle\) from \(-|\psi(0)\rangle\), i.e. \(|||\psi(\tau)\rangle+|\psi(0)\rangle||^{2}\) with \(\tau\), for a given initial condition and Hamiltonian to obtain an expression of the form \(A+B\cos(\omega\tau)\), where \(A,B\) depend on the Hamiltonian parameters and the initial condition. Since the distance we are interested in is maximum at \(\tau=0\), it initially decreases and acquires its minimum value at \(\tau=T/2=\pi/\omega\). The minimum value of this distance becomes zero when the initial condition obeys \[at_{0}=-br_{0}+cs_{0}.\] When this condition is satisfied Eq.18 in fact becomes an equation for a geodesic (see Eq. 4.16 in Ref. [24]) passing through the initial state \[\left|\psi(\tau)\right\rangle=\cos(\omega\tau)\begin{bmatrix}r_{0}\\ s_{0}\\ t_{0}\end{bmatrix}+\frac{\sin(\omega\tau)}{a\omega}\begin{bmatrix}-a^{2}s_{0}+ bcr_{0}-c^{2}s_{0}\\ a^{2}r_{0}+b^{2}r_{0}-bcs_{0}\\ a\left(bs_{0}+cr_{0}\right)\end{bmatrix}. \tag{19}\] The normalization of wavefunction restricts the set of initial conditions to a two-dimensional surface that can be conveniently considered to be the surface of a unit sphere centered at origin. The above condition of obtaining loops with quantized geometric phase of \(\pi\) is in fact an equation of a plane passing through origin hence identifying a great circle on this unit sphere. For all initial conditions the Hamiltonian generates a closed trajectory on this sphere but only those initial conditions which lead to a great circle as the closed trajectory are of interest and are given by Eq. 19. We can easily translate this result to the quadrupolar subspace by using the unitary transform in Eq. 7. ## VI Conclusion To summarize we have shown that the quadrupolar subspace is topological and there is a convenient geometrical way of visualizing this using MSR. We find a global unitary transformation that relates the quadrupolar subspace to the real subspace, which is yet another way to reach the same conclusion. Using this global unitary transformation we identify the anti-unitary symmetry which ensures the topological nature of the quadrupolar subspace. We show that in MSR, some well-known topological properties becomes very easy to visualize geometrically, such as all close looped trajectories of eigenstates in the parameter space of a three-level system can not be simultaneously topological. It can have either 0 (trivial) or 2 (non-trivial) topological eigenstates. 
We have shown that, though the quadrupolar and the real subspace are related by global unitary transformation but the trajectories of MSs on the Bloch sphere connected by this transformation can be quite distinct as depicted in Fig. 3. Finally we demonstrate that any static pure spin Hamiltonian acting on a quadrupolar state results in time evolution which stays restricted to the quadrupolar subspace and closed loop time evolution results in either zero or \(\pi\) geometric phase. ###### Acknowledgements. R.S. thanks Shitadhi Roy, Krishanu Roychowdhury and Subhro Bhattacharjee for many helpful discussions. S.D. would like to thank Michael V. Berry for stimulating discussions and for pointing out Ref. [9]. R.S. thanks the Science and Engineering Research Board (SERB), India for funding via the Ramanujan Fellowship (SB/S2/RJN-034/2016). S.D. would like to acknowledge the MATRICS grant (MTR/ 2019/001 043) from the Science and Engineering Research Board (SERB) for funding. ## Appendix A Antipodal MSs for quadrupolar states Here, we discuss the implications of the global unitary map (Eq. 7) between the real subspace and the quadrupolar subspace in terms of the MSR of the corresponding states. Consider a most general real state \(|\psi\rangle_{R}=(r,s,t)^{T}\) of a spin-1 system, where \(r,s\) and \(t\) are real. This real states maps on to a quadrupolar state as \[|\psi\rangle_{Q}=U^{\dagger}|\psi\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}-t- ir\\ \sqrt{2}s\\ t-ir\end{pmatrix}. \tag{11}\] So the Majorana polynomial equation for \(|\psi\rangle_{Q}\) is \[\frac{(-t-ir)x^{2}}{2}-sx+\frac{(t-ir)}{2}=0\, \tag{12}\] whose roots are given by \[X_{1,2}=\frac{-s\pm\sqrt{s^{2}+r^{2}+t^{2}}}{r^{2}+t^{2}}(t-ir). \tag{13}\] Now we rewrite the roots in the form \(\tan\frac{\theta}{2}e^{i\phi}\), where \(\theta\) and \(\phi\) corresponds to the polar and azimuthal coordinates of MSs on the Bloch sphere. Hence the spherical polar coordinates for the two MSs are given by \[\theta_{1}=2\tan^{-1}(\frac{s+\sqrt{s^{2}+r^{2}+t^{2}}}{\sqrt{r^{2}+t^{2}}}), \ \phi_{1}=\arg(-t+ir)\,\] \[\theta_{2}=2\tan^{-1}(\frac{-s+\sqrt{s^{2}+r^{2}+t^{2}}}{\sqrt{r^{2}+t^{2}}}), \ \phi_{2}=\arg(t-ir)\.\] From above polar coordinates of MSs, we get \(\theta_{1}+\theta_{2}=\pi\) and \(\phi_{2}-\phi_{1}=\pi\) i.e. the two MSs are antipodal. Now let us consider the real state \((1/2,1/\sqrt{2},1/2)^{T}\) whose Majorana polynomial equation is \[\frac{x^{2}}{2\sqrt{2}}-\frac{x}{\sqrt{2}}+\frac{1}{2\sqrt{2}}=0 \tag{14}\] Its both roots are \(X=1\). So the corresponding Majorana stars are coincident. Hence the global unitary transform takes two coincident MSs to antipodal position, when the real state is mapped to quadrupolar state. ## Appendix B Spin operators The spin operators in the \(S_{z}\) basis are give by, \[S_{x}=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1&0\\ 1&0&1\\ 0&1&0\end{bmatrix},S_{y}=\frac{1}{\sqrt{2}}\begin{bmatrix}0&-i&0\\ i&0&-i\\ 0&i&0\end{bmatrix},S_{z}=\begin{bmatrix}1&0&0\\ 0&0&0\\ 0&0&-1\end{bmatrix}. \tag{15}\] Real space representation of these operators are given in Eq. 16. ## Appendix C MS Exchange induced Berry phase Here we show that \(\gamma_{c}^{(0)}\) evaluated for closed loop path in quadrupolar ray space, which corresponds to a Berry phase of \(\pi\), can not be expressed as sum of geometric phase arising from evolution of the individual MSs. It is rather a collective Berry phase. 
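This collective character is easy to see numerically. In the sketch below (assuming numpy; the discretization and the choice of path are ours), the quadrupolar states \(|\psi(\phi)\rangle=(e^{-i\phi},0,-e^{i\phi})^{T}/\sqrt{2}\) have their two Majorana stars at \(\pm(\cos\phi,\sin\phi,0)\); sweeping \(\phi\) from \(0\) to \(\pi\) exchanges the two stars and closes a loop in ray space, and the argument of the cyclic Bargmann product over the loop comes out to be \(\pi\).

```python
import numpy as np

def psi(phi):
    # quadrupolar state whose Majorana stars sit at +/-(cos phi, sin phi, 0)
    return np.array([np.exp(-1j * phi), 0.0, -np.exp(1j * phi)]) / np.sqrt(2.0)

N = 200
states = [psi(k * np.pi / N) for k in range(N)]   # half a turn of each star: an exchange loop
prod = 1.0 + 0.0j
for k in range(N):
    prod *= np.vdot(states[k], states[(k + 1) % N])
print(abs(np.angle(prod)))   # pi: the exchange loop carries a collective geometric phase
```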
We start by noting that the Berry connection, written in terms of the MSs, is given by \[\gamma_{c}^{(0)}=\text{Im}\langle\psi|\nabla\psi\rangle=\text{Im}\sum_{i}\langle\hat{u}_{i}|d\hat{u}_{i}\rangle \tag{10}\] where \(\hat{u}_{i}\) is the unit vector from the origin to the position of the \(i\)-th MS on the Bloch sphere, which represents the state \[|\hat{u}\rangle=\cos\frac{\theta}{2}\ket{\uparrow}+e^{i\phi}\sin\frac{\theta}{2}\ket{\downarrow} \tag{11}\] and its differential in the parameter space of the MSs is \[|d\hat{u}\rangle=-\frac{1}{2}\sin\frac{\theta}{2}d\theta\ket{\uparrow}+e^{i\phi}\left(\frac{1}{2}\cos\frac{\theta}{2}d\theta+i\sin\frac{\theta}{2}d\phi\right)\ket{\downarrow}\, \tag{12}\] where \(\theta\) and \(\phi\) are the polar coordinates of the unit vector \(\hat{u}\). Then, from Eq. (10), the Berry connection is \[\text{Im}\sum_{i}\langle\hat{u}_{i}|d\hat{u}_{i}\rangle=\sum_{i}\frac{1-\cos\theta_{i}}{2}d\phi_{i}. \tag{13}\] The Berry phase is the line integral of the Berry connection over a closed path; it can be expressed as a sum of half the solid angles subtended by the individual stars if and only if each MS individually completes a loop on the Bloch sphere. Now, for the case in which the MSs are exchanged under a closed-loop evolution of quadrupolar states in the ray space, the Berry phase is given by the sum of the integrals of the Berry connection for each star over open (not closed) paths, \[\gamma_{c}^{(0)}=-\text{Im}\int_{1}^{2}\langle\hat{u}_{1}|d\hat{u}_{1}\rangle-\text{Im}\int_{2}^{1}\langle\hat{u}_{2}|d\hat{u}_{2}\rangle. \tag{14}\] Since the two stars are constrained to remain antipodal while the state evolves to exchange the MSs, these two integrals together form a single closed line integral, resulting in a Berry phase of \(\pi\).

## Appendix D Correlation term in the real subspace

In this section we show that the vanishing of the term \(\gamma_{C}^{(d)}\), which appears in the expression for the Berry phase given in Eq. 9, holds not only for the quadrupolar subspace but also for the real subspace. The decomposition is \(\gamma^{(d)}=\gamma_{0}^{(d)}+\gamma_{C}^{(d)}\), with \[\gamma_{C}^{(d)}=\frac{1}{2}\oint\sum_{i=1}^{d-1}\sum_{j(>i)}^{d-1}\beta_{ij}\Omega\left(\text{d}\hat{u}_{ij}\right).\] Here \(\beta_{ij}\) is the correlation factor (see Eq. 11) and \(\Omega\left(\text{d}\hat{u}_{ij}\right)\equiv\hat{u}_{i}\times\hat{u}_{j}\cdot d\left(\hat{u}_{j}-\hat{u}_{i}\right)/d_{ij}\). It can be interpreted as the solid angle due to the relative motion between each pair of stars and their absolute evolution [14] (as described in subsection IV.1). In the quadrupolar subspace the MSs are antipodal, and hence the correlation term is trivially zero since \(\hat{u}_{1}\times\hat{u}_{2}=0\). We now show that this is also true for the real subspace. This follows from the interpretation of the correlation term as the solid angle due to the relative motion of the two stars, which we prove to be zero when restricted to the real subspace. Hence the problem reduces to showing that this solid angle vanishes. Consider an arbitrary real state \[\ket{\psi}=\begin{pmatrix}r\\ s\\ t\end{pmatrix}\.\] The corresponding Majorana polynomial equation is \[\frac{r}{\sqrt{2}}x^{2}-sx+\frac{t}{\sqrt{2}}=0\,\] which in this case is a quadratic equation with real coefficients. Hence the roots are either a complex-conjugate pair or both real.
Next we note that the \((\theta,\phi)\) coordinates of the MSs are identified through \[\tan\frac{\theta_{1}}{2}e^{i\phi_{1}} =\frac{s+\sqrt{s^{2}-2rt}}{\sqrt{2}r}\, \tag{15}\] \[\tan\frac{\theta_{2}}{2}e^{i\phi_{2}} =\frac{s-\sqrt{s^{2}-2rt}}{\sqrt{2}r}\.\] The action of complex conjugation on the roots implies \[\theta\rightarrow\theta,\quad\phi\rightarrow-\phi. \tag{16}\] This action is exactly a reflection about the \(x\)-\(z\) plane on the Bloch sphere. Since real states are invariant under this action, we conclude that the two MSs corresponding to states in the real subspace are always mirror reflections of each other about the \(x\)-\(z\) plane. Now the possibility of the relative motion of the MSs subtending a finite solid angle is severely restricted. One possible case is when both stars remain entirely in the \(x\)-\(z\) plane throughout and the relative vector sweeps out a \(2\pi\) solid angle during the path. Even so, the correlation term vanishes here, since \(\Omega\left(\text{d}\hat{u}_{ij}\right)=\hat{u}_{i}\times\hat{u}_{j}\cdot d\left(\hat{u}_{j}-\hat{u}_{i}\right)/d_{ij}\) is zero: \(\hat{u}_{i}\times\hat{u}_{j}\) points perpendicular to the \(x\)-\(z\) plane, while \(d\left(\hat{u}_{j}-\hat{u}_{i}\right)\) always lies in that plane. On the other hand, if the MSs stay out of the \(x\)-\(z\) plane, then the relative vector between the MSs is constrained by the reflection symmetry and is hence left with a single degree of freedom passing through the origin, leading to zero solid angle. This completes our proof that \(\gamma_{C}^{(d)}\) is zero for real states.
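As a quick numerical check of the great-circle condition stated after Eq. (18): the closed-form evolution (18) can be evaluated directly, and for an initial state satisfying \(at_{0}=-br_{0}+cs_{0}\) the distance \(|||\psi(\tau)\rangle+|\psi(0)\rangle||^{2}\) should vanish at \(\tau=T/2\), while every initial state returns to itself at \(\tau=T\). The Python sketch below implements Eq. (18) verbatim; the Hamiltonian parameters and the initial state are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

def psi_tau(a, b, c, r0, s0, t0, tau):
    """Real-state evolution of Eq. (18)."""
    w = np.sqrt(a*a + b*b + c*c)
    sn, cs = np.sin(w*tau), np.cos(w*tau)
    x = a*b*t0 + b*b*r0 - b*c*s0 - w*(a*s0 + c*t0)*sn + (a*a*r0 - a*b*t0 + b*c*s0 + c*c*r0)*cs
    y = -a*c*t0 - b*c*r0 + c*c*s0 + w*(a*r0 - b*t0)*sn + (a*a*s0 + a*c*t0 + b*b*s0 + b*c*r0)*cs
    z = a*a*t0 + a*b*r0 - a*c*s0 + w*(b*s0 + c*r0)*sn + (-a*b*r0 + a*c*s0 + b*b*t0 + c*c*t0)*cs
    return np.array([x, y, z]) / w**2

a, b, c = 0.7, -0.4, 1.1                 # illustrative Hamiltonian parameters
r0, s0 = 0.3, 0.5
t0 = (-b*r0 + c*s0) / a                  # place the initial state on the great circle a*t0 = -b*r0 + c*s0
psi0 = np.array([r0, s0, t0])
psi0 /= np.linalg.norm(psi0)             # normalize; the condition is homogeneous, so it still holds
r0, s0, t0 = psi0

w = np.sqrt(a*a + b*b + c*c)
half, full = np.pi / w, 2*np.pi / w      # tau = T/2 and tau = T
print(np.sum((psi_tau(a, b, c, r0, s0, t0, half) + psi0)**2))   # ~0: the state reaches -psi(0)
print(np.sum((psi_tau(a, b, c, r0, s0, t0, full) - psi0)**2))   # ~0: the loop closes at T
```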
2306.02584
Synthetic Regressing Control Method
Estimating weights in the synthetic control method, typically resulting in sparse weights where only a few control units have non-zero weights, involves an optimization procedure that simultaneously selects and aligns control units to closely match the treated unit. However, this simultaneous selection and alignment of control units may lead to a loss of efficiency. Another concern arising from the aforementioned procedure is its susceptibility to under-fitting due to imperfect pre-treatment fit. It is not uncommon for the linear combination, using nonnegative weights, of pre-treatment period outcomes for the control units to inadequately approximate the pre-treatment outcomes for the treated unit. To address both of these issues, this paper proposes a simple and effective method called Synthetic Regressing Control (SRC). The SRC method begins by performing the univariate linear regression to appropriately align the pre-treatment periods of the control units with the treated unit. Subsequently, a SRC estimator is obtained by synthesizing (taking a weighted average) the fitted controls. To determine the weights in the synthesis procedure, we propose an approach that utilizes a criterion of unbiased risk estimator. Theoretically, we show that the synthesis way is asymptotically optimal in the sense of achieving the lowest possible squared error. Extensive numerical experiments highlight the advantages of the SRC method.
Rong J. B. Zhu
2023-06-05T04:23:54Z
http://arxiv.org/abs/2306.02584v2
# Synthetic Matching Control Method+

###### Abstract

Estimating weights in the synthetic control method involves an optimization procedure that simultaneously selects and aligns control units in order to closely match the treated unit. However, the simultaneous selection and alignment of control units may lead to a loss of efficiency in the synthetic control method. Another concern arising from the aforementioned procedure is its susceptibility to under-fitting due to imperfect pre-treatment fit. It is not uncommon for the linear combination, using nonnegative weights, of pre-treatment period outcomes for the control units to inadequately approximate the pre-treatment outcomes for the treated unit. To address both of these issues, this paper proposes a simple and effective method called _Synthetic Matching Control_ (SMC). The SMC method begins by performing a univariate linear regression to establish a proper match between the pre-treatment periods of the control units and the treated unit. Subsequently, an SMC estimator is obtained by synthesizing (taking a weighted average of) the matched controls. To determine the weights in the synthesis procedure, we propose an approach based on an unbiased risk estimator criterion. Theoretically, we show that the synthesis step is asymptotically optimal in the sense of achieving the lowest possible squared error. Extensive numerical experiments highlight the advantages of the SMC method.

_Keywords:_ Synthetic Control, Treatment Effects, Panel Data, Ensemble

## 1 Introduction

The synthetic control (SC) method is a popular approach for evaluating the effects of policy changes. It allows estimation of the impact of a treatment on a single unit in panel data settings with a modest number of control units and with many pre-treatment periods (Abadie et al., 2010; Abadie and Gardeazabal, 2003). The key idea underlying the SC method is to construct a weighted average of control units, known as a synthetic control, that matches the treated unit's pre-treatment outcomes. The estimated impact is then calculated as the difference in post-treatment outcomes between the treated unit and the synthetic control. See Abadie (2021) for recent reviews. The SC method utilizes a constrained optimization to solve for weights, typically resulting in sparse weights where only a few control units have non-zero weights (Abadie and L'Hour, 2021). This estimation process can be seen as an automatic procedure of simultaneously selecting and aligning control units in order to closely match the treated unit. However, the simultaneous selection and alignment of control units can lead to a loss of efficiency of the SC method. This procedure also raises another concern: it is susceptible to under-fitting due to imperfect pre-treatment fit. The method requires that the synthetic control's pre-treatment outcomes closely match the pre-treatment outcomes for the treated unit (Abadie et al., 2015). This requirement is often too stringent for using synthetic control alone due to interpolation bias, as discussed in Kellogg et al. (2021). It is not uncommon to encounter situations where a linear combination, using nonnegative weights, of the pre-treatment period outcomes for the control units fails to accurately approximate the pre-treatment outcomes of the treated unit. In this article, we present a straightforward yet effective method called _Synthetic Matching Control_ (SMC) to address these issues.
SMC begins with a pre-treatment fit of each control unit to the treated unit. It employs univariate linear regression to better fit the pre-treatment periods, thereby obtaining a matched characteristic for each control unit. The pre-treatment fit gives SMC flexibility, allowing negative coefficients on certain units and thereby reducing interpolation bias. Subsequently, SMC synthesizes the pre-treatment fits of all control units to generate an SMC estimator. Our proposed synthesis method builds on the literature on model averaging (Hansen, 2007), which we study here for synthetic controls. Like the SC method, SMC can control extrapolation bias by applying the synthesis method. When the pre-treatment fit on certain control units is good, their corresponding weights are far from zero, indicating a heavy reliance on these units. Conversely, if the weights are zero, it signifies that SMC does not depend on those units. We conduct both detailed simulation studies and an empirical study of the economic costs of conflict in the Basque Country, Spain, to shed light on when the SMC method performs well. We find evidence that SMC has lower mean-squared prediction error than the alternatives in these studies. The choice of weights in SMC is more flexible than in the alternatives, which allows it to reduce the extrapolation bias incurred by restricted synthetic estimators. The article is organized as follows. Section 1.1 briefly reviews related work. Section 2 introduces the set-up and the SC method. Section 3 presents the SMC method, which includes Section 3.1 on the pre-treatment fit for each control unit, Section 3.2 on the synthesis method, and Section 3.3 on an asymptotically optimal weighting method for the SMC method. Two extensions are considered: the case when there are more units than time periods in Section 4.1, and the incorporation of auxiliary covariates in Section 4.2. Section 5 reports on extensive simulation studies as well as an application to the Basque dataset. Finally, Section 6 discusses the limitations of the method and some possible directions for further research.

### Related Work

This article is closely related to the studies that investigate the SC estimator when the pre-treatment fit is imperfect. One approach to addressing the issue is to relax the restriction that the weights are nonnegative. Doudchenko and Imbens (2016) argues that negative weights would be beneficial in many settings and proposes adding an intercept into the SC problem. Similarly, Amjad et al. (2018) proposes a denoising algorithm that combines negative weights with a preprocessing step, and Ferman and Pinto (2021) also argues that a demeaned version of the SC method is already efficient. Another approach is to use an outcome model for reducing the imperfect fit. Powell (2018) allows for extrapolation by constructing the SC unit based on the fitted values on unit-specific time periods. Ben-Michael et al. (2021) proposes the augmented synthetic control method, which uses an outcome model to estimate bias resulting from imperfect pre-treatment fit and de-biases the original SC estimator. The SMC method is related to these estimators in the sense of addressing the issue of imperfect pre-treatment fit. However, it differs from them in intent. SMC aims to mitigate interpolation bias through unit matching while concurrently reducing extrapolation bias by ensembling all matched units. We delve into these aspects further in Section 3.2.
Several related articles have addressed the challenge of dealing with datasets that include too many control units, in which case the solution of the SC estimator is not unique (Abadie et al., 2015). Robbins et al. (2017) and Abadie and L'Hour (2021) adapt the original SC proposal to incorporate a penalty on the weights into the SC optimization problem. Gobillon and Magnac (2016) makes use of dimension reduction strategies to improve the estimator's performance. Doudchenko and Imbens (2016) suggests selecting the set of best controls by restricting the number of controls allowed to be different from zero using an \(l_{0}\)-penalty on the weights. While the SMC method does not employ a penalty to tackle the problem of an excessive number of units, its penalty-style criterion stems from constructing an unbiased estimator of the risk associated with the synthetic estimator. In situations where the number of units is large with respect to the number of time periods, a preprocessing step of screening units can extend the SMC method. Our article is related to Kellogg et al. (2021), which proposes the matching and synthetic control (MASC) estimator by using a weighted average of the SC and matching estimators to balance interpolation and extrapolation bias. Our SMC method differs from MASC in several ways. First, SMC does not use the matching estimator and instead considers the pre-treatment fit of each control unit to the treated unit. Second, while the MASC estimator is a weighted average between the SC estimator and the matching estimator, the SMC estimator is a synthetic estimator that incorporates all matched controls. Finally, the methods also differ in terms of how the weights are chosen. For the MASC estimator, the single weight is chosen by cross-validation. In contrast, the SMC method involves solving for multiple weights, and we employ the unbiased risk estimator criterion, as developed in model averaging by Hansen (2007), to determine these weights. Our article is also related to Athey et al. (2019) and Viviano and Bradic (2022), which consider the benefits of model averaging in the context of synthetic control. Athey et al. (2019) combines several regularized SC and matrix completion estimators developed in Doudchenko and Imbens (2016) and Athey et al. (2021), while Viviano and Bradic (2022) combines a large number of estimators from the machine learning literature. In contrast, the purpose of the weighting scheme in our SMC method is to ensemble all matched units and to mitigate the risk associated with the resulting synthetic estimator. We utilize the pre-treatment fitted estimators from each control unit as the estimators to be averaged. This averaging process helps alleviate the risk inherent in the synthetic estimator. This approach distinguishes our method from previous methods, which involve averaging various types of estimators or combining multiple estimators from the machine learning literature. Finally, in addition to SC-style weighting strategies, there have been articles that directly use outcome modeling approaches. These include the panel data approach in Hsiao et al. (2012), the generalized synthetic control method in Xu (2017), the matrix completion method in Athey et al. (2021), and the synthetic difference-in-differences method in Arkhangelsky et al. (2021). In this article we focus on studying the synthetic control framework.
Synthetic Control Method We consider the canonical SC panel data setting with \(j=1,\cdots,J+1\) units observed for \(t=1,\cdots,T\) time periods. We restrict attention to the case where a single unit receives treatment, and follow the convention that the first one \(j=1\) is treated and that the remaining \(J\) ones are control units. Let \(T_{0}\) be the number of pre-intervention periods, with \(1\leq T_{0}<T\). We adopt the potential outcomes framework (Neyman 1923); the potential outcomes for unit \(j\) in period \(t\) under control and treatment are \(Y_{jt}(0)\) and \(Y_{jt}(1)\), respectively. Thus, the observed outcomes \[Y_{1t}=\begin{cases}Y_{1t}(0)\text{ if }t\leq T_{0},\\ Y_{1t}(1)\text{ if }t>T_{0};\end{cases}\] \[Y_{jt}=Y_{jt}(0)\text{ for }j=2,\cdots,J+1.\] We define the treated potential outcome as \[Y_{jt}(1)=Y_{jt}(0)+\tau_{jt},\] where \(\tau_{jt}\) is the effect of the intervention for unit \(j\) at time \(t\). Since the first unit is treated, the key estimand of interest is \(\tau_{1T}=Y_{1t}(1)-Y_{1t}(0)\). We separate the counterfactual outcome \(Y_{1t}(0)\) into a model component \(\mu_{1t}=\mathbb{E}\left[Y_{1t}(0)\right]\) plus an error term \(\epsilon_{1t}\): \[Y_{1t}(0)=\mu_{1t}+\epsilon_{1t},\ \ t=1,\cdots,T, \tag{1}\] where \(\epsilon_{1t}\) is a zero mean error term with variance \(\sigma_{t}^{2}=\mathbb{E}\left[\epsilon_{1t}^{2}\right]\). Here we allow for heteroskedasticity. Let \(\mathbf{y}_{1}\) be a \((T_{0}\times 1)\) vector of pre-intervention characteristics of the treated unit that we aim to match as closely as possible, and \(\mathbf{Y}_{0}\) be \((T_{0}\times J)\) matrix that contains the same variables for the control units. A synthetic control is defined as a weighted average of the control units. Let \(\mathbf{w}=(w_{2},\cdots,w_{J+1})^{\top}\) be the weight vector in the unit simplex in \(\mathbb{R}^{J}\): \[\mathcal{H}_{sc}=\left\{w_{j}\in[0,1]:\sum_{j=2}^{J+1}w_{j}=1\right\}.\] In the SC method, the weight vector \(\mathbf{w}\) is chosen to solve the following optimization problem: \[\tilde{\mathbf{w}}^{\rm sc}=\arg\min_{\mathbf{w}\in\mathcal{H}_{sc}}\|\mathbf{y}_{1}- \mathbf{Y}_{0}\mathbf{w}\|_{\mathbf{V}}, \tag{2}\] where \(\|\mathbf{y}_{1}-\mathbf{Y}_{0}\mathbf{w}\|_{\mathbf{V}}=\sqrt{(\mathbf{y}_{1}- \mathbf{Y}_{0}\mathbf{w})^{\top}\mathbf{V}(\mathbf{y}_{1}-\mathbf{Y}_{0}\mathbf{w})}\) with some symmetric and positive semidefinite matrix \(\mathbf{V}\). The introduction of \(\mathbf{V}\) is a crucial step in the SC estimator as it helps to reduce the mean square error of the estimator. The \(\mathbf{V}\) matrix is used to apply a linear transformation on the variables in \(\mathbf{Y}_{0}\) and \(\mathbf{y}_{1}\) based on their predictive power on the outcome. The optimal choice of \(\mathbf{V}\) assigns weights that minimize the mean square error of the synthetic control estimator. A common way is to choose positive definite and diagonal matrices for \(\mathbf{V}\), which results in the minimum mean squared prediction error of the outcome variable for the pre-intervention periods (Abadie et al., 2010; Abadie and Gardeazabal, 2003). 
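For concreteness, the constrained problem (2) can be handed to an off-the-shelf optimizer. The following is a minimal Python sketch with \(\mathbf{V}\) taken to be the identity; the data, dimensions, and variable names are synthetic placeholders for illustration, not a reference implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T0, J = 40, 20
Y0 = rng.normal(size=(T0, J))                      # pre-treatment outcomes of the J controls
y1 = Y0[:, :3] @ np.array([0.5, 0.3, 0.2]) \
     + 0.1 * rng.normal(size=T0)                   # treated unit built mostly from three controls

def sc_objective(w):
    r = y1 - Y0 @ w
    return r @ r                                   # ||y1 - Y0 w||^2, i.e. V = identity

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # weights sum to one
bounds = [(0.0, 1.0)] * J                                  # nonnegative weights
res = minimize(sc_objective, np.full(J, 1.0 / J),
               bounds=bounds, constraints=cons, method="SLSQP")
w_sc = res.x                                       # typically sparse: only a few controls get weight
print(np.round(w_sc, 3))
```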
Then a synthetic control estimator is constructed by \[\hat{Y}_{1t}(0)=\sum_{j=2}^{J+1}\tilde{w}_{j}^{\rm sc}Y_{jt},\ \ t\in\{1, \cdots,T\}.\] The treatment effect \(\tau_{1t}\) is given by the comparison between the outcome for the treated unit and the outcome for the synthetic control estimator at time \(t\in\{1,\cdots,T\}\): \[\hat{\tau}_{1t}=Y_{1t}-\hat{Y}_{1t}(0)=Y_{1t}-\sum_{j=2}^{J+1}\tilde{w}_{j}^{ \rm sc}Y_{jt}.\] The weights \(\tilde{\mathbf{w}}^{\rm sc}\) in the SC estimator are typically sparse, meaning that they are only non-zero for a few control units (Abadie and L'Hour, 2021). This feature is considered as an attractive property since it provides a way for experts to use their knowledge to evaluate the plausibility of the resulting estimates (Abadie, 2021). In the SC method, the optimization problem (2) involves simultaneously pursuing two objectives: matching and synthesis. The unit simplex serves as the constraint for the parameter \(\mathbf{w}\) in (2), ensuring that the estimate \(\tilde{\mathbf{w}}^{\rm sc}\) often exhibits sparsity. This sparsity indicates a procedure of selecting control units that are close to the treated unit. However, the SC method does not account for the potential fit relation between the treated unit and the control units, which can result in a loss of accuracy in matching the control units. Furthermore, the unit simplex ensures that the weights in \(\tilde{\mathbf{w}}^{\rm sc}\) sum up to 1, representing a weighted average of the selected control units. As discussed in Kellogg et al. (2021), the synthesis strategy in the SC method aims to minimize extrapolation bias but may be susceptible to interpolation bias. ## 3 Synthetic Matching Control Method ### Unit Matching The goal of unit matching is to establish a correspondence between each control unit and the treated unit, aiming to minimize the distance between them. This process entails conducting a univariate regression analysis where the treated unit is regressed on the control unit. By doing so, we can estimate the counterfactual outcome for each control unit based on the pre-intervention regression fit. We approximate \(\mu_{1t}\) in (1) by control unit \(j\) in the following working model \[\mu_{1t}=\theta_{j}Y_{jt}+b_{jt}. \tag{3}\] where \(b_{jt}\) represents the approximation error of \(\mu_{1t}\) by \(\theta_{j}Y_{jt}\). We estimate \(\theta_{j}\in\mathbb{R}\) using the unconstrained univariate ordinary least squares method. Without loss of generality (see Appendix A), we assume that \(\mathbf{y}_{1}\) and \(\mathbf{Y}_{0}\) are centered, i.e., \(\mathbf{1}^{\top}\mathbf{y}_{1}=0\) and \(\mathbf{1}^{\top}\mathbf{Y}_{0}=\mathbf{0}\). We construct the matched control \(\theta_{j}\mathbf{y}_{j}\), where \(\theta_{j}\) is obtained by minimizing the simple Euclidean distance: \(\min_{\theta_{j}\in\mathbb{R}}\|\mathbf{y}_{1}-\theta_{j}\mathbf{y}_{j}\|_{ \mathbf{V}}^{2}\). In practice, we use the procedure employed in the SC method to choose \(\mathbf{V}\)(Abadie et al., 2010, 2015; Abadie and Gardeazabal, 2003). Because \(\mathbf{V}\) can be absorbed into \(\mathbf{y}_{1}\) and \(\mathbf{y}_{j}\), without loss of generality we simply rewrite the minimization as \[\min_{\theta_{j}\in\mathbb{R}}\|\mathbf{y}_{1}-\theta_{j}\mathbf{y}_{j}\|^{2}.\] Using the regression method is to mimic the behavior of the treated unit before the intervention as closely as possible. 
This yields the least squares estimator \[\hat{\theta}_{j}=\left(\mathbf{y}_{j}^{\top}\mathbf{y}_{j}\right)^{-1}\mathbf{y}_{j}^{\top}\mathbf{y}_{1}.\] Consequently, the matched characteristic of control unit \(j\) is given by \[\tilde{\mathbf{y}}_{j}=\hat{\theta}_{j}\mathbf{y}_{j}.\]

### Synthetic Matching Control

We apply the synthetic method by assigning a weight to the matching estimator of each control unit. Let \(\mathbf{w}=(w_{2},\cdots,w_{J+1})^{\top}\) be the weight vector in the unit hypercube in \(\mathbb{R}^{J}\): \[\mathcal{H}_{J}=\left\{w_{j}\in[0,1]:j=2,\cdots,J+1\right\}.\] Note we do not require that \(\sum_{j=2}^{J+1}w_{j}=1\). Then the synthetic matched control estimator is \[\hat{\mathbf{y}}_{1}(\mathbf{w})=\sum_{j=2}^{J+1}w_{j}\hat{\theta}_{j}\mathbf{y}_{j}. \tag{4}\] We call this the _Synthetic Matching Control_ (SMC) estimator. In contrast to SC, the weight in SMC is separate from the regression coefficient. The weight represents the degree to which the matched characteristic is considered in the synthesis process, while the regression coefficient reflects the relationship between the treated and control units. Similar to SC, SMC leverages the synthetic method to control extrapolation error by assigning more weight to the matched characteristics that demonstrate higher similarity. Denote \(\mathbf{\mu}_{1}=(\mu_{11},\cdots,\mu_{1T_{0}})^{\top}\) and \(\mathbf{\epsilon}_{1}=(\epsilon_{11},\cdots,\epsilon_{1T_{0}})^{\top}\). We decompose the error as follows: \[\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{\mu}_{1} =\sum_{j=2}^{J+1}w_{j}\hat{\theta}_{j}\mathbf{y}_{j}-\sum_{j=2}^{J+1}w_{j}\mathbf{y}_{1}+\sum_{j=2}^{J+1}w_{j}\mathbf{y}_{1}-\mathbf{\mu}_{1}\] \[=\sum_{j=2}^{J+1}w_{j}(\hat{\theta}_{j}\mathbf{y}_{j}-\mathbf{y}_{1})+(\sum_{j=2}^{J+1}w_{j}-1)\mathbf{\mu}_{1}+\sum_{j=2}^{J+1}w_{j}\mathbf{\epsilon}_{1}. \tag{5}\] Eqn. (5) illustrates two components within the error: the first term represents the interpolation error, resulting from the pre-treatment fitting errors \(\hat{\theta}_{j}\mathbf{y}_{j}-\mathbf{y}_{1}\) with weights \(w_{j}\); the remaining part is the extrapolation error \((\sum_{j=2}^{J+1}w_{j}-1)\mathbf{\mu}_{1}+\sum_{j=2}^{J+1}w_{j}\mathbf{\epsilon}_{1}\). For the SC estimator \(\tilde{\mathbf{y}}_{1}^{\text{sc}}(\mathbf{w})=\sum_{j=2}^{J+1}w_{j}\mathbf{y}_{j}\), the corresponding error is decomposed as \[\tilde{\mathbf{y}}_{1}^{\text{sc}}(\mathbf{w})-\mathbf{\mu}_{1}=\sum_{j=2}^{J+1}w_{j}(\mathbf{y}_{j}-\mathbf{y}_{1})+(\sum_{j=2}^{J+1}w_{j}-1)\mathbf{\mu}_{1}+\sum_{j=2}^{J+1}w_{j}\mathbf{\epsilon}_{1}. \tag{6}\] Comparing (5) with (6), we observe two advantages of SMC over SC. First, the process of unit matching reduces the interpolation error, since \(\hat{\theta}_{j}\mathbf{y}_{j}\) is the best linear prediction based on control unit \(j\). This helps to improve the accuracy of the estimated counterfactual outcome. Second, the constraint \(\sum_{j=2}^{J+1}w_{j}=1\) in the SC method is not necessarily aimed at minimizing the error. Although the term \((\sum_{j=2}^{J+1}w_{j}-1)\mathbf{\mu}_{1}\) vanishes when \(\sum_{j=2}^{J+1}w_{j}=1\), this does not guarantee that the other terms become small. In the next section, we adopt the mean-squared error as a measure to minimize in order to determine the weights \(\mathbf{w}\). This objective provides a quantitative criterion to optimize the SMC estimator and improve its overall performance.
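As a small illustration of the matching step and of the matched controls \(\hat{\theta}_{j}\mathbf{y}_{j}\) that enter the synthesis (4), here is a brief Python sketch; the centered synthetic data below are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
T0, J = 40, 20
Y0 = rng.normal(size=(T0, J))
y1 = Y0[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.normal(size=T0)

# center the outcomes, as assumed in Section 3.1
y1 = y1 - y1.mean()
Y0 = Y0 - Y0.mean(axis=0)

# univariate OLS of the treated unit on each control: theta_j = (y_j'y_j)^{-1} y_j'y_1
col_norms = np.einsum("tj,tj->j", Y0, Y0)          # y_j'y_j for every control j
theta = (Y0.T @ y1) / col_norms
matched = Y0 * theta                               # column j is the matched control theta_j * y_j

# per-unit pre-treatment fitting error, the building block of the
# interpolation term in the decomposition (5)
fit_err = ((matched - y1[:, None]) ** 2).sum(axis=0)
print(np.round(fit_err[:5], 2))
```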
### Calculation of \(\mathbf{w}\) From Section 3.1, the prediction of \(\mathbf{\mu}_{1}\) on the control unit \(j\) is \(\tilde{\mathbf{y}}_{j}=\hat{\theta}_{j}\mathbf{y}_{j}\). We rewrite it as \(\tilde{\mathbf{y}}_{j}=\mathbf{H}_{j}\mathbf{y}_{1}\), where \(\mathbf{H}_{j}=\mathbf{y}_{j}\left(\mathbf{y}_{j}^{\top}\mathbf{y}_{j}\right)^ {-1}\mathbf{y}_{j}^{\top}\), implying that (4) is rewritten as \[\hat{\mathbf{y}}_{1}(\mathbf{w})=\sum\nolimits_{j=2}^{J+1}w_{j}\mathbf{H}_{j} \mathbf{y}_{1}.\] Define \(\mathbf{H}(\mathbf{w})=\sum_{j=2}^{J+1}w_{j}\mathbf{H}_{j}\) with \(\ell_{jt}\) as the \(t\)-th diagonal element of \(\mathbf{H}_{j}\), and let \(R(\mathbf{w})=\mathbb{E}\left[\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{\mu}_{1}\|^{2}\right]\) denote the risk. We have that \[\mathbb{E}\left[\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{y}_{1}\|^ {2}\right]-R(\mathbf{w}) =\mathbb{E}\left[\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{y}_{1}\| ^{2}-\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{\mu}_{1}\|^{2}\right]\] \[=\mathbb{E}\left[\epsilon^{\top}\epsilon-2\epsilon^{\top}( \mathbf{H}(\mathbf{w})\mathbf{\mu}_{1}-\mathbf{\mu}_{1}+\mathbf{H}(\mathbf{w})\epsilon)\right]\] \[=\sum\nolimits_{t=1}^{T_{0}}\sigma_{t}^{2}-2\sum\nolimits_{j=2}^ {J+1}w_{j}\sum\nolimits_{t=1}^{T_{0}}\sigma_{t}^{2}\ell_{jt}. \tag{7}\] Eqn. (7) demonstrates that the expression \[\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{y}_{1}\|^{2}+2\sum\nolimits_{j=2}^{J+1 }w_{j}\sum\nolimits_{t=1}^{T_{0}}\sigma_{t}^{2}\ell_{jt}-\sum\nolimits_{t=1}^ {T_{0}}\sigma_{t}^{2}\] serves as an unbiased estimator of \(R(\mathbf{w})\). This motivates the utilization of the following criterion to obtain \(\mathbf{w}\): \[\mathcal{C}_{0}(\mathbf{w})=\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{y}_{1}\|^{2}+2 \sum\nolimits_{j=2}^{J+1}\sigma_{j}^{2}w_{j}.\] Here \(\sigma_{j}^{2}=\sum_{t=1}^{T_{0}}\sigma_{t}^{2}\ell_{jt}\). It is worth noting that \(\sum_{t=1}^{T_{0}}\ell_{jt}=1\), in this context, \(\sigma_{j}^{2}\) is defined as a weighted average of \(\sigma_{t}^{2}\) over \(t=1,\cdots,T_{0}\), where the weights are represented by \(\ell_{jt}\). Replacing \(\sigma_{j}^{2}\) by \(\hat{\sigma}^{2}=\|\mathbf{y}_{1}-\mathbf{Y}_{0}[\text{diag}(\mathbf{Y}_{0}^{ \top}\mathbf{Y}_{0})]^{-1}\mathbf{Y}_{0}^{\top}\mathbf{y}_{1}\|^{2}\), where \(\text{diag}(\mathbf{Y}_{0}^{\top}\mathbf{Y}_{0})\) denotes the diagonal matrix formed by the diagonal elements of \(\mathbf{Y}_{0}^{\top}\mathbf{Y}_{0}\), we propose an approximate Mallows' \(C_{p}\) criterion \[\mathcal{C}(\mathbf{w})=\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{y}_{1}\|^{2}+2\hat {\sigma}^{2}\sum\nolimits_{j=2}^{J+1}w_{j}. \tag{8}\] From (8), the weight vector is obtained as \[\hat{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{w}\in\mathcal{H}_{J}}\mathcal{C}(\mathbf{ w}).\] With \(\hat{\mathbf{w}}\) into (4), we obtain the SMC estimator \[\hat{Y}_{1t}(0)=\sum_{j=2}^{J+1}\hat{w}_{j}\hat{\theta}_{j}Y_{jt}\ \ t\in\{1, \cdots,T\}. \tag{9}\] We summarize the procedure of obtaining the SMC estimator as Algorithm 1. In (9), the SMC estimator is represented as a linear weighting estimator of the outcomes of control units \(Y_{jt}\), similar to the SC estimator. The weights \(\hat{w}_{j}\hat{\theta}_{j}\) can be understood as the solution to a penalized synthetic control problem. It is important to note that these weights consist of two components: \(\hat{w}_{j}\) and \(\hat{\theta}_{j}\). This formulation allows for negative weights and enables extrapolation beyond the convex hull of the control units. 
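A possible implementation of the weight calculation is sketched below, continuing the matching sketch above (the setup is repeated so the snippet is self-contained): \(\hat{\sigma}^{2}\) is the plug-in estimate defined before (8), the box constraint \(\mathcal{H}_{J}\) is handled by a bound-constrained optimizer, and (8) is minimized directly. This is an illustrative sketch rather than the authors' code; Algorithm 1 below summarizes the same steps in the paper's notation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T0, J = 40, 20
Y0 = rng.normal(size=(T0, J))
y1 = Y0[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.normal(size=T0)
y1, Y0 = y1 - y1.mean(), Y0 - Y0.mean(axis=0)

theta = (Y0.T @ y1) / np.einsum("tj,tj->j", Y0, Y0)
matched = Y0 * theta                               # matched controls theta_j * y_j

# plug-in estimate preceding (8); note Y0 [diag(Y0'Y0)]^{-1} Y0' y1 = sum_j theta_j y_j
sigma2_hat = np.sum((y1 - matched.sum(axis=1)) ** 2)

def cp_criterion(w):                               # approximate Mallows' C_p criterion (8)
    r = y1 - matched @ w
    return r @ r + 2.0 * sigma2_hat * w.sum()

res = minimize(cp_criterion, np.full(J, 1.0 / J),
               bounds=[(0.0, 1.0)] * J, method="L-BFGS-B")
w_hat = res.x
y1_fit = matched @ w_hat                           # pre-period SMC fit; per (9), post-period
                                                   # predictions use the same w_hat * theta weights
```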
In unit matching alone, the estimator \(\hat{\theta}_{j}\) allows for arbitrary weights even when there is no correlation between the treated unit and control unit \(j\). In contrast, by imposing the convex-hull-type constraint on the matched control units, the sum of the weights is penalized directly. This constraint effectively controls the amount of extrapolation error.
```
Obtain \(\hat{\theta}_{j}=\left(\mathbf{y}_{j}^{\top}\mathbf{y}_{j}\right)^{-1}\mathbf{y}_{j}^{\top}\mathbf{y}_{1}\) for \(j\in\{2,\cdots,J+1\}\).
Solve \(\hat{\mathbf{w}}\) by \[\hat{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{w}\in\mathcal{H}_{J}}\left\{\|\mathbf{y}_{1}-\sum_{j=2}^{J+1}w_{j}\hat{\theta}_{j}\mathbf{y}_{j}\|^{2}+2\hat{\sigma}^{2}\mathbf{w}^{\top}\mathbf{1}\right\}.\]
Obtain \(\hat{Y}_{1t}(0)=\sum_{j=2}^{J+1}\hat{w}_{j}\hat{\theta}_{j}Y_{jt}\), \(t\in\{1,\cdots,T\}\).
```
**Algorithm 1** The SMC estimator

We now provide a justification: the synthetic estimator with the weights \(\hat{\mathbf{w}}\) asymptotically achieves the minimum loss of the infeasible best possible synthetic estimator.

**Theorem 1**.: _Denote \(L(\mathbf{w})=\|\hat{\mathbf{y}}_{1}(\mathbf{w})-\mathbf{\mu}_{1}\|^{2}\). Assume that (1) \(\max_{t}\mathbb{E}\left[\epsilon_{t}^{4}\right]\leq c_{1}<\infty\) for some constant \(c_{1}\), (2) \(\|\mathbf{\mu}_{1}\|^{2}/T_{0}\leq c_{2}<\infty\) for some constant \(c_{2}\), and (3) \(J^{-1}\|\boldsymbol{\mu}_{1}-\mathbf{Y}_{0}[\text{diag}(\mathbf{Y}_{0}^{\top}\mathbf{Y}_{0})]^{-1}\mathbf{Y}_{0}^{\top}\boldsymbol{\mu}_{1}\|^{2}\to\infty\) as \(T_{0}\to\infty\). Then_ \[\frac{L(\hat{\boldsymbol{w}})}{\inf_{\boldsymbol{w}\in\mathcal{H}_{J}}L(\boldsymbol{w})}\to 1,\] _in probability, as \(T_{0}\to\infty\)._

Proof.: The technical proofs are given in Appendix B.

Theorem 1 demonstrates the asymptotic optimality of the proposed method, showing that the squared error obtained with \(\hat{\boldsymbol{w}}\) is asymptotically equivalent to that of the infeasible optimal weight vector. Asymptotic optimality of this kind is commonly observed in statistical problems, such as model selection (Li, 1987) and model averaging (Hansen, 2007). This result justifies that the SMC estimator is asymptotically optimal in the class of synthetic estimators where the weight vector \(\boldsymbol{w}\) is restricted to the set \(\mathcal{H}_{J}\). The conditions \(\max_{t}\mathbb{E}\left[\epsilon_{t}^{4}\right]\leq c_{1}\) and \(\|\boldsymbol{\mu}_{1}\|^{2}/T_{0}\leq c_{2}\) are quite mild, since they only require bounded fourth moments of the errors and that \(\|\boldsymbol{\mu}_{1}\|^{2}=O(T_{0})\), respectively. The key condition, \(J^{-1}\|\boldsymbol{\mu}_{1}-\mathbf{Y}_{0}[\text{diag}(\mathbf{Y}_{0}^{\top}\mathbf{Y}_{0})]^{-1}\mathbf{Y}_{0}^{\top}\boldsymbol{\mu}_{1}\|^{2}\to\infty\) as \(T_{0}\to\infty\), means that the squared model approximation error is large relative to the number of control units. This condition is typically considered mild in the context of the synthetic control problem, because achieving a perfect approximation through univariate regression on a single control unit is rarely attainable.

## 4 Extensions

In this section, we consider two elaborations of the basic setup. First, we extend it to cases where there are more units than time periods. Second, we extend it by incorporating auxiliary covariates.

### Screening Units When They are Too Many

We extend the application of the SMC method to cases where \(J\geq T_{0}\) or \(J\approx T_{0}\).
To accomplish this, we propose a practical procedure that involves screening the units using the sure independent ranking and screening (SIRS) method (Zhu et al., 2011) to reduce the number of units. In high-dimensional statistics, Theorems 2 and 3 in Zhu et al. (2011) indicate that SIRS can reduce the dimensionality without losing any active variables with probability approaching one. We prefer SIRS over the original sure independence screening proposed by Fan and Lv (2008) because it allows us to assume that no linear candidate model is correct. In model (3), any working model is an approximation of the expected counterfactual value \(\mu_{1t}\). To apply SIRS to the control units, we assume that \(\mu_{1t}\) depends only on some of the control units, called active units in this study. SIRS screens the units based on the magnitude of the following statistic instead of the marginal correlation: \[\tilde{\eta}_{j}=\frac{1}{T_{0}}\sum_{t=1}^{T_{0}}\left\{\frac{1}{T_{0}}\sum_{l=1}^{T_{0}}Y_{jl}I_{(-\infty,\ Y_{1t})}(Y_{1l})\right\}^{2},\quad\text{for}\quad j=2,\cdots,J+1.\] The derivation and interpretation of this statistic can be found in Zhu et al. (2011). We use the statistic \(\tilde{\eta}_{j}\) to screen the units and obtain a subset that retains the active units. Once the set of units has been reduced by screening, we perform the SMC method on the retained units to obtain the estimator. We summarize it as Algorithm 2 below.
```
Step 1: Screen units by the SIRS method to get the subset \(\mathbf{Y}_{s}\) from \(\mathbf{Y}_{0}\);
Step 2: Perform Algorithm 1 on \(\mathbf{y}_{1}\) and \(\mathbf{Y}_{s}\).
```
**Algorithm 2** The SMC estimator when control units are too many

### Incorporating auxiliary covariates

We have focused on matching pre-treatment values of the outcome variable. In practice, we typically observe a set of auxiliary covariates as well. For example, in the study of Proposition 99, Abadie et al. (2010) considers the following covariates: average retail price of cigarettes, per capita state personal income, per capita beer consumption, and the percentage of the population aged 15-24. It is natural to incorporate auxiliary covariates in applying the SMC method. For unit \(j\), denote by \(\mathbf{x}_{j}\) a \((p\times 1)\) vector of observed covariates that are not affected by the intervention. Let \(\mathbf{X}=(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{J+1})\). Analogous to the SC method (Abadie et al., 2010), we define the augmented \((T_{d}\times 1)\) vector of pre-intervention characteristics for the treated unit, where \(T_{d}=T_{0}+p\): \(\mathbf{z}_{1}=(\mathbf{y}_{1}^{\top},\mathbf{x}_{1}^{\top})^{\top}\in\mathbb{R}^{T_{d}}\). Similarly, \(\mathbf{Z}_{0}\) is a \((T_{d}\times J)\) matrix that contains the same variables for the control units. We apply Algorithm 1 on \(\mathbf{z}_{1}\) and \(\mathbf{Z}_{0}\) to obtain \(\hat{w}_{j}^{(\mathbf{z})}\) and \(\hat{\theta}_{j}^{(\mathbf{z})}\), and then obtain the SMC estimator \[\hat{Y}_{1t}(0)=\sum_{j=2}^{J+1}\hat{w}_{j}^{(\mathbf{z})}\hat{\theta}_{j}^{(\mathbf{z})}Y_{jt},\ \ t\in\{1,\cdots,T\}. \tag{10}\] We summarize this procedure as Algorithm 3 below.
``` Step 1: Combine \(\mathbf{y}_{1}\) with \(\mathbf{x}_{1}\) to obtain \(\mathbf{z}_{1}\); Similarly combine \(\mathbf{Y}_{0}\) with \(\mathbf{X}_{0}\) to obtain \(\mathbf{Z}_{0}\); Step 2: Perform Algorithm 1 on \(\mathbf{z}_{1}\) and \(\mathbf{Z}_{0}\), and then obtain \(\hat{w}_{j}^{(\mathbf{z})}\) and \(\hat{\theta}_{j}^{(\mathbf{z})}\); Step 3: Obtain the SMC estimator according to (10). ``` **Algorithm 3** The SMC estimator when auxiliary covariates are incorporated ## 5 Empirical Studies In this section, we conduct extensive Monte Carlo simulation studies to assess the performance of various methods, finding where and how the SMC estimator performs compared to existing estimators, and subsequently we perform an empirical analyse on a real dataset to examine the behavior of the SMC method. ### Simulation Studies Now we investigate the finite sample performance of alternative estimators in two simulation experiments, one using a generative model and the other using a working model. We compare several representative synthetic estimators, including: (a) the original SC method (SC) in Abadie et al. (2010), (b) the de-meaned SC (dSC) in Ferman and Pinto (2021), (c) the augmented SC Method (ASC) in Ben-Michael et al. (2021), (d) the matching and SC method (MASC) in Kellogg et al. (2021), (e) OLS in Hsiao et al. (2012), and (f) the constrained lasso (lasso) in Chernozhukov et al. (2021). **Generative Model.** In this experiment, all units are generated according to the factor model as follows \[Y_{jt}(0)=\alpha_{t}+\lambda_{j}F_{t}+\epsilon_{jt},\] where time specific terms \(\alpha_{t}\sim\mathcal{N}(0,1)\), and unobserved factors \(F_{t}\sim\mathcal{N}(0,1)\). Similarly as Li (2020), we consider three sets of factor loadings \(\mathbf{\lambda}\). (1) \(\mathbf{\lambda}_{1}\): \(\lambda_{j}=1\) for \(j=1,\cdots,7\), and \(\lambda_{j}=0\) for \(j=8,\cdots,J+1\); (2) \(\mathbf{\lambda}_{2}\): \(\lambda_{1}=3\), \(\lambda_{j}=1\) for \(j=2,\cdots,7\), and \(\lambda_{j}=0\) for \(j=8,\cdots,J+1\); (3) \(\mathbf{\lambda}_{3}\): \(\lambda_{1}=3\), \(\lambda_{j}=1\) for \(j=2,\cdots,J+1\); For (1), all nonzero factor loadings are set to be ones so that both treated and control units with nonzero loadings are drawn from a common distribution. In contrast, for (2), treated and control units are drawn from two heterogeneous distributions, since loadings for the treated unit all equal to 3, and the control units' nonzero loadings all equal to 1. The setting of \(\mathbf{\lambda}_{3}\) is similar to \(\mathbf{\lambda}_{2}\), except that control units' loadings all equal to 1. We set \(T=50\) with \(T_{0}=40\) and \(J=20\). The errors \(\epsilon_{jt}\sim\mathcal{N}(0,\sigma^{2})\), where we use three values of \(\sigma\) values, 1, 0.5, and 0.1, to investigate the impact of \(\sigma\). To evaluate each estimator, we compute the mean squared prediction error (MSPE), which is defined as \(\text{MSPE}=(T-T_{0})^{-1}\sum\limits_{t=T_{0}+1}^{T}\|\hat{Y}_{1,t}(0)-Y_{1,t} (0)\|^{2}\), by calculating the average loss across 500 simulations. The results are reported in Table 1. For the homoscedastic setting of \(\mathbf{\lambda}_{1}\), the SC, dSC, ASC, and MASC methods perform similarly as lasso, and are better than SMC. For the heteroscedastic settings of \(\mathbf{\lambda}_{2}\) and \(\mathbf{\lambda}_{3}\), SMC is better than other methods except for the case of \(\mathbf{\lambda}_{2}\) and \(\sigma=0.1\), where it is slightly worse than ASC. 
Especially for the dense setting \(\mathbf{\lambda}_{3}\), the SC, dSC, ASC, and MASC methods work as badly as lasso, while SMC works pretty well. Meanwhile, OLS works well in this setting, but is still worse than SMC. Comparing the results across various \(\sigma\) values, we find that the above observations hold true. Notably, we observe that the MSPE values of both the SMC and OLS estimators approach zero as \(\sigma\) decreases from \(1\) to \(0.1\) across all three scenarios of \(\mathbf{\lambda}\). However, we do not observe this convergence in the SC, dSC, MASC, and lasso methods for the heteroscedastic settings, nor in ASC for the setting of \(\mathbf{\lambda}_{3}\). In order to assess the impact of \((T,J)\), we consider three scenarios: \((T,J)\) takes on values of \((100,20)\), \((50,10)\), and \((50,50)\). We set \(T_{0}=T-10\). The results are reported in Table 2. When comparing the cases of \((T,J)=(100,20)\), \((50,10)\) or \((50,50)\) with the case of \((T,J)=(50,20)\) in Table 1, we find that these methods consistently demonstrate robust performance across different values of \((T,J)\). In the \(\mathbf{\lambda}_{1}\) setting, SMC performs similarly to the SC, dSC, ASC, MASC, and lasso methods. However, it outperforms the other methods in the \(\mathbf{\lambda}_{2}\) and \(\mathbf{\lambda}_{3}\) settings. It is noteworthy that in the case of \((50,50)\), we have \(T_{0}<J\). For this case, we employ the SMC estimator by applying Algorithm 2 with SIRS preprocessing on screening units as described in Section 4.1. These results obtained in the \((50,50)\) case highlight the effectiveness of the SIRS preprocessing step on screening units within the extended SMC method. **Working Model.** Our second simulation is based on the following working model: \[\mathbf{y}_{1}=\mathbf{Y}_{0}\mathbf{\theta}+\mathbf{\epsilon}.\] \begin{table} \begin{tabular}{c|c c c c c c c} \hline \((\mathbf{\lambda},\sigma)\) & SC & dSC & ASC & MASC & OLS & lasso & SMC \\ \hline \((\mathbf{\lambda}_{1},1)\) & 1.521 & 1.493 & **1.474** & 1.562 & 2.566 & 1.564 & 1.578 \\ \((\mathbf{\lambda}_{1},0.5)\) & 0.285 & 0.290 & **0.279** & 0.412 & 0.477 & 0.295 & 0.336 \\ \((\mathbf{\lambda}_{1},0.1)\) & **0.014** & 0.015 & 0.014 & 0.021 & 0.031 & 0.015 & 0.017 \\ \hline \((\mathbf{\lambda}_{2},1)\) & 6.572 & 6.805 & 3.561 & 6.656 & 3.827 & 6.839 & **2.528** \\ \((\mathbf{\lambda}_{2},0.5)\) & 5.758 & 5.871 & 0.900 & 5.828 & 1.061 & 5.809 & **0.742** \\ \((\mathbf{\lambda}_{2},0.1)\) & 5.246 & 5.364 & 0.024 & 5.253 & 0.031 & 5.347 & **0.021** \\ \hline \((\mathbf{\lambda}_{3},1)\) & 19.80 & 19.97 & 21.87 & 19.94 & 2.752 & 19.48 & **2.010** \\ \((\mathbf{\lambda}_{3},0.5)\) & 18.10 & 18.29 & 18.81 & 18.24 & 0.721 & 18.22 & **0.489** \\ \((\mathbf{\lambda}_{3},0.1)\) & 17.89 & 17.98 & 19.12 & 17.96 & 0.034 & 18.01 & **0.021** \\ \hline \end{tabular} \end{table} Table 1: The post-period MSPE of alternative estimators. Here rows of the \(T\) by \(J\) predictor matrix \(\mathbf{Y}_{0}\) is generated from a multivariate normal distribution with mean \(\mathbf{0}\) and covariance matrix \(\mathbf{\Sigma}\) with the \((i,j)\)th element \(\sigma_{ij}=\rho^{|i-j|}\), where the value of \(\rho\) is set to \(0.8\). As the first simulation does, \((T,J)\) is set to \((50,20)\), where \(T_{0}=40\). The true coefficients \(\boldsymbol{\theta}=(\theta_{1},\cdots,\theta_{J})^{\top}\) are set to \((c/7,\cdots,c/7,0,\cdots,0)^{\top}\), where the first \(7\) coefficients are non-zero with \(c>0\). 
We control the summation of these coefficients by setting various \(c\): \(0.5\), \(1\), and \(2\). When \(c=1\), the SC method is expected to be perfect. For \(c=0.5\) or \(2\), there is gap between the restriction that the weights sum up to one and the real approximating model. The errors \(\boldsymbol{\epsilon}\) is considered to be the heteroscedastic \begin{table} \begin{tabular}{c|c|c c c c c c c} \hline \((T,J)\) & \((\boldsymbol{\lambda},\sigma)\) & SC & dSC & ASC & mASC & OLS & lasso & SMC \\ \hline \multirow{8}{*}{(100,20)} & \((\boldsymbol{\lambda}_{1},1)\) & 1.221 & 1.238 & **1.214** & 1.291 & 1.444 & 1.260 & 1.291 \\ & \((\boldsymbol{\lambda}_{1},0.5)\) & 0.314 & 0.321 & **0.309** & 0.338 & 0.393 & 0.321 & 0.331 \\ & \((\boldsymbol{\lambda}_{1},0.1)\) & 0.013 & 0.013 & **0.013** & 0.014 & 0.015 & 0.013 & 0.014 \\ \cline{2-10} & \((\boldsymbol{\lambda}_{2},1)\) & 20.316 & 20.252 & 3.223 & 20.133 & 3.528 & 20.259 & **2.952** \\ & \((\boldsymbol{\lambda}_{2},0.5)\) & 19.524 & 19.349 & 0.823 & 19.447 & 0.896 & 19.298 & **0.716** \\ & \((\boldsymbol{\lambda}_{2},0.1)\) & 18.947 & 18.879 & 0.037 & 18.932 & 0.029 & 18.850 & **0.027** \\ \cline{2-10} & \((\boldsymbol{\lambda}_{3},1)\) & 14.557 & 21.023 & 15.304 & 14.098 & 1.813 & 20.462 & **1.730** \\ & \((\boldsymbol{\lambda}_{3},0.5)\) & 12.967 & 19.572 & 14.023 & 12.926 & 0.455 & 19.478 & **0.423** \\ & \((\boldsymbol{\lambda}_{3},0.1)\) & 12.734 & 19.179 & 13.593 & 12.732 & 0.018 & 19.165 & **0.017** \\ \hline \multirow{8}{*}{(50,10)} & \((\boldsymbol{\lambda}_{1},1)\) & 1.166 & 1.202 & **1.152** & 1.251 & 1.527 & 1.213 & 1.204 \\ & \((\boldsymbol{\lambda}_{1},0.5)\) & 0.360 & 0.378 & **0.351** & 0.403 & 0.416 & 0.371 & 0.375 \\ & \((\boldsymbol{\lambda}_{1},0.1)\) & 0.013 & 0.013 & **0.012** & 0.014 & 0.015 & 0.012 & 0.013 \\ \cline{2-10} & \((\boldsymbol{\lambda}_{2},1)\) & 26.252 & 26.847 & 4.592 & 26.931 & 3.716 & 27.192 & **3.246** \\ & \((\boldsymbol{\lambda}_{2},0.5)\) & 25.219 & 26.184 & 1.035 & 25.263 & 1.033 & 26.292 & **0.952** \\ & \((\boldsymbol{\lambda}_{2},0.1)\) & 25.755 & 26.596 & 0.038 & 25.674 & 0.032 & 26.588 & **0.028** \\ \cline{2-10} & \((\boldsymbol{\lambda}_{3},1)\) & 13.669 & 15.136 & 14.827 & 13.458 & 2.683 & 15.000 & **2.531** \\ & \((\boldsymbol{\lambda}_{3},0.5)\) & 11.900 & 13.765 & 13.155 & 11.820 & 0.718 & 13.839 & **0.689** \\ & \((\boldsymbol{\lambda}_{3},0.1)\) & 11.543 & 13.792 & 12.797 & 11.557 & 0.022 & 13.760 & **0.021** \\ \hline \multirow{8}{*}{(50,50)} & \((\boldsymbol{\lambda}_{1},1)\) & 1.375 & 1.428 & **1.368** & 1.477 & 5.867 & 1.383 & 1.482 \\ & \((\boldsymbol{\lambda}_{1},0.5)\) & **0.334** & 0.335 & 0.335 & 0.358 & 1.541 & 0.337 & 0.349 \\ \cline{1-1} & \((\boldsymbol{\lambda}_{1},0.1)\) & 0.012 & 0.013 & 0.012 & 0.012 & 0.055 & **0.012** & 0.013 \\ \cline{1-1} \cline{2-10} & \((\boldsymbol{\lambda}_{2},1)\) & 11.753 & 10.580 & **2.366** & 11.825 & 12.150 & 11.114 & 2.906 \\ \cline{1-1} & \((\boldsymbol{\lambda}_{2},0.5)\) & 11.591 & 10.436 & 0.917 & 11.728 & 2.562 & 10.153 & **0.851** \\ \cline{1-1} & \((\boldsymbol{\lambda}_{2},0.1)\) & 11.472 & 10.417 & 0.040 & 11.428 & 0.121 & 10.286 & **0.030** \\ \cline{1-1} \cline{2-10} & \((\boldsymbol{\lambda}_{3},1)\) & 42.628 & 50.944 & 44.088 & 42.713 & 6.288 & 51.190 & **2.645** \\ \cline{1-1} & \((\boldsymbol{\lambda}_{3},0.5)\) & 41.936 & 50.807 & 43.524 & 41.814 & 1.644 & 50.473 & **0.610** \\ \cline{1-1} & \((\boldsymbol{\lambda}_{3},0.1)\) & 41.520 & 50.186 & 42.391 & 41.572 & 0.059 & 50.225 & **0.024** \\ \hline \end{tabular} 
\end{table} Table 2: The post-period MSPE of alternative estimators under various \((T,J)\). setting of \(\epsilon_{t}\sim N(0,\sigma_{t}^{2})\) with \(\sigma_{t}^{2}=\sigma_{0}^{2}\frac{\|\mathbf{y}^{t}\|^{2}}{J+\|\mathbf{y}^{t}\| ^{2}}\), where \(\mathbf{y}^{t}\) means the \(t\)-th row of \(\mathbf{Y}_{0}\). Here the value of \(\sigma_{0}^{2}\) is selected to control the coefficient of determination \(R^{2}\) to vary among 0.4, 0.6, and 0.8. The results in the working model setting are reported in Table 3, which shows the post-period MSPE values of alternative estimators. For the case where \(c=1\), the SC, dSC, ASC, MASC and lasso methods perform similarly, but achieve a lower MSPE than SMC, which aligns with our expectations. However, in cases where \(c=0.5\), SMC achieves a lower risk than the SC, dSC, ASC, and MASC methods, and performs similarly as lasso. Additionally, for cases where \(c=2\), SMC achieves the best among all methods. These observations remain consistent across various \(R^{2}\) values. ### The Basque dataset We study the effect of terrorism on per capita GDP in Basque, Spain. The Basque dataset is from Abadie and Gardeazabal (2003). It consists of per capita GDP of 17 regions in Spain from 1955 to 1997, and 12 other covariates of each region over the same time interval, representing education, investment, sectional shares, and population density in each region. We incorporate auxiliary covariates which include averages for the 13 characteristics from 1960 to 1969, and scale each covariate so that it has equal variance of outcomes. In this study, the treated unit is the Basque Country, and the treatment is the onset of separatist \begin{table} \begin{tabular}{c|c|c c c c c c c} \hline \(c\) & \(R^{2}\) & sc & dsc & asc & masc & ols & lasso & SMC \\ \hline & 0.4 & 1.1059 & 1.1279 & 1.1227 & 1.1280 & 1.8637 & **1.0634** & 1.1615 \\ 1 & 0.6 & **0.4941** & 0.5091 & 0.4971 & 0.5744 & 1.0746 & 0.5089 & 0.5349 \\ & 0.8 & 0.1947 & 0.2039 & **0.1918** & 0.2115 & 0.3898 & 0.2045 & 0.2270 \\ \hline & 0.4 & 0.2237 & 0.2327 & 0.2268 & 0.2349 & 0.2922 & 0.1664 & **0.1566** \\ 0.5 & 0.6 & 0.1440 & 0.1489 & 0.1451 & 0.1510 & 0.1026 & 0.0768 & **0.0702** \\ & 0.8 & 0.1110 & 0.1141 & 0.1118 & 0.1265 & 0.0492 & 0.0431 & **0.0272** \\ \hline & 0.4 & 4.9207 & 5.1163 & 4.5520 & 5.0719 & 7.5369 & 5.1716 & **4.3806** \\ 2 & 0.6 & 2.7837 & 2.8202 & 2.7010 & 2.7860 & 3.6843 & 2.8342 & **2.4110** \\ & 0.8 & 1.3589 & 1.3843 & 1.1473 & 1.3118 & 1.3252 & 1.3425 & **0.8332** \\ \hline \end{tabular} \end{table} Table 3: The post-period MSPE of alternative estimators in the working model setting. terrorism, which begins in 1970. **Placebo Analyses.** Similar to Abadie and Gardeazabal (2003), we conduct a placebo study to compare alternative estimators in the real data. We perform placebo analyses on each region, excluding Basque, as the placebo region. We calculate the mean squared prediction error (MSPE) for each region by taking the differences between its actual and fitted outcome paths in each of the post-period years (1970-1997), squaring these differences and then averaging them among these years. The results of our analysis are presented in Table 4, which shows that, on average, SMC tends to have the lowest MSPE. In addition, we include the pre-period fit of these estimator in Table 6 of Appendix to further demonstrate their performance. 
Interestingly, we observe that SMC does not exhibit the best pre-period fit on average (it is the second best), while ASC demonstrates the best pre-period fit on average. This observation suggests that SMC is less prone to over-fitting compared to ASC. **Synthetic Basque.** We estimate the effect of exposure to terrorism on GDP per capita in Basque, Spain. We present the GDP per capita for both Basque and its synthetic control, \begin{table} \begin{tabular}{c|c c c c c c c} \hline region & sc & dsc & ASC & masc & OLS & lasso & SMC \\ \hline Andalucia & 0.41 & 0.15 & 0.17 & 0.32 & 5.18 & **0.13** & 0.22 \\ Aragon & 0.03 & 0.12 & 0.06 & 0.04 & 0.03 & 0.02 & **0.01** \\ Asturias & 0.71 & 0.56 & 3.40 & 0.73 & **0.30** & 0.44 & 0.67 \\ Baleares & 2.12 & 3.68 & 1.24 & 2.12 & 2.51 & 4.73 & **0.38** \\ Canarias & 0.07 & 0.10 & 0.35 & 0.07 & 0.96 & **0.02** & 0.25 \\ Cantabria & 0.37 & 0.65 & 0.90 & 0.34 & 1.87 & 0.56 & **0.12** \\ Leon & **0.01** & 0.12 & 0.08 & 0.01 & 0.15 & 0.13 & 0.13 \\ Mancha & 0.07 & **0.02** & 0.04 & 0.04 & 0.49 & 0.39 & 0.04 \\ Cataluna & 0.44 & **0.03** & 0.14 & 0.44 & 0.73 & 1.33 & 0.04 \\ Valenciana & 0.15 & 0.14 & 0.09 & 0.04 & 0.08 & **0.03** & 0.31 \\ Extremadura & 0.74 & **0.06** & 0.17 & 0.74 & 0.99 & 0.63 & 0.32 \\ Galicia & **0.01** & 0.02 & 0.04 & **0.01** & 0.07 & 0.01 & 0.11 \\ Madrid & **0.11** & 0.38 & 3.75 & **0.11** & 0.48 & 0.16 & 0.48 \\ Murcia & 0.21 & 0.27 & 0.07 & 0.20 & 1.18 & **0.03** & 0.51 \\ Navarra & 0.04 & 0.04 & 0.03 & 0.05 & 0.04 & 0.12 & **0.03** \\ Rioja & 0.04 & **0.03** & 0.08 & 0.19 & 0.09 & 0.30 & 0.04 \\ \hline average & 0.35 & 0.40 & 0.66 & 0.34 & 0.95 & 0.56 & **0.23** \\ \hline \end{tabular} \end{table} Table 4: Performance (MSPE) of alternative estimators in the placebo study. generated using the SC and SMC methods, in Figure 1. We also show the difference between the actual values and the synthetic values. Due to saving space and for ease of comparison, we just report the comparison between SMC and the original SC method, ignoring other estimators. We observe that SMC performs better in tracking the post-intervention trend. To gain a better understanding of the SMC method, we inspect the comprehensive weight \(\theta_{j}w_{j}\) assigned to each control unit \(j\) and compare it with the weights obtained from other methods. The results are reported in Table 5. The SC, dSC, and MASC methods assign positive weights to just two control units. The lasso method assigns positive weights to four control units. Comparing to them, SMC assigns non-zero weight to four units. It is important to note that OLS has no zero weight as it is not constrained. This suggests that, compared with other methods with constraints, SMC's greater flexibility may bring greater predictive power. Figure 1: Study of the Basque Country. Left: Actual and counterfactual per capita GDP of the Basque Country. Right: The difference of actual and counterfactual values (ATT). ## 6 Discussion This paper makes three contributions: (1) We propose a simple and effective method, synthetic matching control, by synthesizing the matched controls. (2) We determine the weights by minimizing the unbiased risk estimate criterion. We demonstrate that this method is asymptotically optimal, achieving the lowest possible squared error. (3) We expand the method in several domains, including conducting inference, extending it to cases where control units is more than time periods, and incorporating auxiliary covariates. There are several potential directions for future work. 
First, we focus on the simple linear regression to reduce the issue of imperfect fit. Consequently, the synthesis method relies on the simple linear regression. However, the SMC method may be applicable for more complex data structures, such as discrete, count, or hierarchical outcomes. Therefore, extending the method to broader regression models is an interesting direction. Second, for settings with multiple treated units, we can fit SMC separately for each treated unit, as \begin{table} \begin{tabular}{c|c c c c c c} \hline region & sc & dsc & masc & OLS & lasso & SMC \\ \hline Andalucia & 0 & 0 & 0 & 0.217 & -0.112 & 0 \\ Aragon & 0 & 0 & 0 & -3.059 & 0 & 0 \\ Asturias & 0 & 0 & 0 & 1.460 & 0 & 0 \\ Baleares & 0 & 0.582 & 0 & -0.365 & 0.365 & 0 \\ Canarias & 0 & 0 & 0 & -0.245 & 0 & 0 \\ Cantabria & 0 & 0 & 0 & 0.048 & 0.048 & 0.270 \\ Leon & 0 & 0 & 0 & 0.080 & 0 & 0 \\ Mancha & 0 & 0 & 0 & 0.861 & 0 & 0 \\ Cataluna & 0.851 & 0 & 0.851 & -0.174 & 0.174 & 0 \\ Valenciana & 0 & 0 & 0 & 1.766 & 0.065 & 0.058 \\ Extremadura & 0 & 0 & 0 & -0.401 & 0 & 0 \\ Galicia & 0 & 0 & 0 & -0.396 & 0 & 0 \\ Madrid & 0.149 & 0.418 & 0.149 & 0.533 & 0 & 0.342 \\ Murcia & 0 & 0 & 0 & -0.547 & 0 & 0 \\ Navarra & 0 & 0 & 0 & 1.302 & 0.234 & 0 \\ Rioja & 0 & 0 & 0 & 0.673 & 0 & 0.567 \\ \hline Intercept & - & -0.335 & - & -2.580 & 1.097 & -0.192 \\ \hline \end{tabular} \end{table} Table 5: Estimates of weights for alternative estimators in the Basque study. The comprehensive weights \(\theta_{j}w_{j}\) and the coefficients are reported for SMC and OLS, respectively. in Abadie (2021). However, this approach brings a loss of efficiency due to the correlation of treated units. Therefore, efficiently extending the method to multiple treated units is a worthy direction. Third, a set of auxiliary covariates is available, we pool the auxiliary covariates and the outcomes together to conduct the SMC method. However, this approach may bring extra risk when the linear approximation relation in the covariates is different from that in the outcomes. Therefore, exploring ways to incorporate auxiliary covariates into the SMC method while minimizing such risks is another worthy problem for future research. Finally, extending it to more complicated situations, such as staggered adoption where units take up the treatment at different times (Ben-Michael et al., 2022), is another challenging direction.
2302.10477
TMoE-P: Towards the Pareto Optimum for Multivariate Soft Sensors
Multi-variate soft sensors seek accurate estimation of multiple quality variables from measurable process variables and have emerged as a key factor in improving the quality of industrial manufacturing. Current progress is largely limited to direct applications of multi-task network architectures; however, two fundamental issues remain to be investigated with these approaches: (1) negative transfer, where sharing representations across objectives with dissimilar discriminative representations degrades performance; (2) the seesaw phenomenon, where the optimizer focuses on one dominant yet simple objective at the expense of others. In this study, we reformulate the multi-variate soft sensor as a multi-objective problem to address both issues and advance state-of-the-art performance. To handle the negative transfer issue, we first propose an Objective-aware Mixture-of-Experts (OMoE) module that uses objective-specific and objective-shared experts for parameter sharing while maintaining the distinction between objectives. To address the seesaw phenomenon, we then propose a Pareto Objective Routing (POR) module that dynamically adjusts the weights of the learning objectives to approach the Pareto optimum, with solid theoretical support. We further present a Task-aware Mixture-of-Experts framework for achieving the Pareto optimum (TMoE-P) in multi-variate soft sensing, which consists of a stacked OMoE module and a POR module. We illustrate the efficacy of TMoE-P on an open soft sensor benchmark, where it effectively alleviates the negative transfer and seesaw issues and outperforms the baseline models.
Licheng Pan, Hao Wang, Zhichao Chen, Yuxing Huang, Xinggao Liu
2023-02-21T06:49:09Z
http://arxiv.org/abs/2302.10477v1
# TMoE-P: Towards the Pareto Optimum ###### Abstract Multi-variate soft sensor seeks accurate estimation of multiple quality variables using measurable process variables, which have emerged as a key factor in improving the quality of industrial manufacturing. The current progress stays in some direct applications of multitask network architectures; however, there are two fundamental issues remain yet to be investigated with these approaches: (1) negative transfer, where sharing representations despite the difference of discriminate representations for different objectives degrades performance; (2) seesaw phenomenon, where the optimizer focuses on one dominant yet simple objective at the expense of others. In this study, we reformulate the multi-variate soft sensor to a multi-objective problem, to address both issues and advance state-of-the-art performance. To handle the negative transfer issue, we first propose an Objective-aware Mixture-of-Experts (OMoE) module, utilizing objective-specific and objective-shared experts for parameter sharing while maintaining the distinction between objectives. To address the seesaw phenomenon, we then propose a Pareto Objective Routing (POR) module, adjusting the weights of learning objectives dynamically to achieve the Pareto optimum, with solid theoretical supports. We further present a Task-aware Mixture-of-Experts framework for achieving the Pareto optimum (TMoE-P) in multi-variate soft sensor, which consists of a stacked OMoE module and a POR module. We illustrate the efficacy of TMoE-P with an open soft sensor benchmark, where TMoE-P effectively alleviates the negative transfer and seesaw issues and outperforms the baseline models. Soft Sensor, Multi-objective Optimization, Negative Transfer, Seesaw. ## I Introduction Process monitoring plays a significant role in contemporary industry and is directly associated to the manufacture of critical industrial products, _e.g._, oil, gas, rare metals, iron, and steel, which are integral to modern human life and national economies. Monitoring the dynamics of critical quality variables in manufacturing has become one of the main concerns in order to meet urgent and demanding requirements, including increasing yields, reducing material consumption, protecting the environment and ensuring the safety of manufacturing processes. The main challenge with monitoring is that some quality variables are extremely difficult to measure. For instance, in deep water gas-lift oil well process, down-hole pressure is an extremely useful indicator for assessing the manufacture quality [1], but it is difficult to measure with hardware sensors (_e.g._, permanent down-hole pressure gauges) due to the inability of hardware sensors in high pressure and salinity environments [2]. Soft sensors aims to estimate immeasurable quality variables with measurable process variables, which can be categorized as model-driven and data-driven approaches. With the success of machine learning and database technology, data-driven approaches have been predominant for building effective soft sensors, which involves building statistical estimates of quality variables with process variables. At the very beginning, the data-driven soft sensors were implemented with linear statistical approaches, such as principal component regression [5, 6], partial least squares [7] and gaussian process [8]. 
To depict nonlinear relationships between variables, nonlinear models in the machine learning community were further involved in soft sensors, represented by the support vector regression [9, 10], decision tree [11] and kernel regression [12]. More recently, with the great advancement of deep learning techniques [3, 4], soft sensors have been dominated by deep neural methods. Representative methods can be roughly categorized into several types: autoencoder networks [13, 14, 15], recurrent neural networks [16, 17, 18], convolution neural networks [19, 20], graph neural networks [21, 22], and self-attentive networks [23, 24], with each type of methods its own strengths and weaknesses. For example, Autoencoder-based sensors can be refined with semi-supervised setting, but they struggle to capture long-term sequence patterns; RNN-based sensors can be incrementally updated for real-time monitoring, but they suffer from sub-optimal accuracy and single-threaded computational paradigm; self-attention based sensors can process the whole sequence in a parallel manner, but they suffer from huge computational cost and overfitting risk. Overall, the aforementioned line of research focuses on developing more efficient and effective architectures with the aim of improving the estimation accuracy of _single_ quality variable. Despite their success, in real-world practice there is more than one quality variable to be estimated, which requires the construction of an effective multi-variate soft sensor (MVSS). For example, the product concentration and reactant concentration in the reactive distillation process must be measured concurrently to track the separation energy consumption and product purity. Different from Multi-task Learning (MTL), Multi-objective Optimization (MOO) is a technology that can coordinate multiple objectives with internal conflicts and connections to obtain Pareto optimal solutions, mainly through the sharing mechanism. Generally, parameter sharing mechanism can be separated into two categories: hard parameter sharing and soft parameter sharing. Hard parameter sharing refers to the uniform sharing of the network's underlying parameters while maintaining the network's top-level parameters' independence for objectives, such as UberNet [25], multilinear relationship networks [26], and stochastic filter groups [27]. However, in circumstances when the connection between objectives is poor, the direct sharing of underlying parameters will result in performance decrease, which is known as NT phenomenon. Soft parameter sharing introduces distinctive underlying shared expert parameters to each objective, such as cross-stitch networks [28], slave networks [29], and MMoE [30, 33] The involvement of experts helps mitigate the NT issue, but because the variations and interactions among experts are neglected, it is challenging for multiple objectives to reach the same performance of a single objective at once, leading to the seesaw phenomenon. This paper proposes a Task-aware Mixture-of-Experts framework for achieving the Pareto optimum (TMoE-P) to solve issues of NT and seesaw, which were brought on by disregarding the commonality and variability across experts in parameter sharing. The model is made up of two modules: Objective-aware Mixture-of-Experts (OMoE) and Pareto Objective Routing (POR). To start, the OMoE module explicitly distinguishes between objective-specific and objective-shared experts, reducing potentially damaging interactions among representations. 
Secondly, the POR module solves an optimization problem concerning the gradient of shared parameters using the gradient of each objective, and then obtains the Pareto optimal model parameters, which alleviates some objectives' performance sacrifice throughout the model training process. The main contributions of this paper are summarized as follows: 1. The soft sensor problem is stated from the standpoint of MOO, and it is highlighted that the regression problem of multiple quality variables is transformed into the joint optimization of multiple regression objectives. A MOO model will be used to model the relationships between and within sequences of quality variables. 2. A TMoE-P model based on the OMoE module and POR module is proposed to solve the NT and seesaw issues of multi-objective soft sensor problem. The solutions for the two modules are explicit objective-specific with objective-shared representations learning, and Pareto optimal MOO. 3. Numerous off-line experiments were conducted in the Sulfur Recovery Unit process to assess the efficiency of TMoE-P in resolving the issues of NT and seesaw. The results and mathematical proofs reveal that TMoE-P outperforms the baseline models in soft sensor applications. The rest of this paper is organized as follows: Section II covers the preliminaries of multi-objective soft sensor problem and Pareto theory. Section III introduced the novel TMoE-P approach. Section IV is an experimental study on the well-known Sulfur Recovery Unit. The final section is the conclusion. ## II Preliminaries ### _Multi-Objective Soft Sensor Problem_ Given a dataset with i.i.d. data points \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i\in\{1,2,...,N\}}\) where \(\mathbf{x}_{i}=[x_{i,1},x_{i,2},\ldots,x_{i,D}]^{\top}\) is the \(D\)-dimensional process variables, \(\mathbf{y}_{i}=[y_{i,1},y_{i,2},\ldots,y_{i,K}]^{\top}\) is the \(K\)-dimensional quality variables, \(y_{i,k}\) is the label of the \(k\)th objective, \(K\) is the number of objectives, and \(N\) is the total number of data points. Multi-objective soft sensor (MOSS) seeks to establish an inference mathematical model \(f\left(\mathbf{x}|\mathbf{\theta}\right)\) from process variables to quality variables through sharing mechanism. When a new process variables sequence \(\mathbf{x}_{N+m}|_{m\geq 1}\) arrives, the quality variables sequence \(\hat{\mathbf{y}}_{N+m}\) can be predicted as follows: \[\hat{\mathbf{y}}_{N+m}=f(\mathbf{x}_{N+m}|\mathbf{\theta})\cong\left[y_{N+m,1},\ldots,y_{ N+m,K}\right]^{\top}. 
\tag{1}\] Once we have set the associated loss function for each objective as \(\mathcal{L}_{k}\), the mathematical form of MOSS is given as: \[\begin{split}&\min_{\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{1}, \ldots,\mathbf{\theta}_{K}}\mathcal{L}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{1 },\ldots,\mathbf{\theta}_{K}\right)\\ &=\min_{\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{1},\ldots,\mathbf{\theta }_{K}}\Big{[}\hat{\mathcal{L}}_{1}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{1 }\right),\ldots,\hat{\mathcal{L}}_{L}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta }_{K}\right)\Big{]}^{\top}.\end{split} \tag{2}\] where \(\mathbf{\theta}_{\text{sh}}\) are shared parameters between objectives, \(\mathbf{\theta}_{k}\) are objective-specific, \(\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k}\right)\) is the empirical loss of \(k\)th objective on the dataset, defined as \(\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k}\right)= \frac{1}{N}\sum_{i=1}^{N}\hat{\mathcal{L}}_{k}\left(f_{k}\left(\mathbf{x}_{i}|\mathbf{ \theta}_{\text{sh}},\mathbf{\theta}_{k}\right),y_{i,k}\right)\) and \(f_{k}\) is the inference sub-model in \(f\) corresponding to \(k\)th objective. By leveraging both common features shared between objectives and objective-specific features, MOSS is capable of improving efficiency and generalization performance. There are two types of parameter sharing mechanisms that are frequently utilized nowadays. One is hard parameter sharing shown in Fig. 1(a), which involves a shared underlying model structure with common hidden layers across objectives. Although this structure minimizes the likelihood of overfitting, optimization conflicts resulting from objective heterogeneity still exists. The other type is soft parameter sharing, which shares experts capable of cross-talk at the bottom layer or fuses experts information via gating networks, as illustrated in Fig. 1: Network routing for hard and soft parameter sharing. Fig. 1(b) and 1(c). Compared with hard parameter sharing, soft parameter sharing has more objective-specific parameters and can nevertheless achieve higher performance when objective dependencies are complicated and parameter conflicts arise. ### _Negative Transfer & Seesaw Phenomenon In MOSS_ NT and seesaw phenomenons are two major issues that must be addressed throughout the MOO process of multiple quality variables in MOSS. Specially, NT phenomenon means that the multi-objective performance attained through training via (2) has deteriorated in comparison to the performance of individual modeling training for each objective. The typical explanation is that there is a weak or loose connection between the multiple objectives of joint modeling or that the MOO model is difficult to learn the similarities and differences between the various objectives. The seesaw phenomenon usually occurs when the association between multiple objectives is complex. It implies that multiple objectives cannot be performed at a high level at the same time, and that secondary objectives are frequently sacrificed to achieve excellent performance on the main objective. ### _Pareto Optimality For MOSS_ The MOSS problem specified in (1) tries to obtain estimates \(\{\hat{y}_{k}\}_{k\in\{1,2,\ldots,K\}}\) for each objective by minimizing the prediction loss of the inference model \(f\) on each objective by solving the MOP defined in (2). However, MOP is tough to solve and acquire the optimal solution. 
The main reason is that MOP has multiple objective functions, and the superiority and inferiority of MOP solutions cannot be compared and ranked by traditional size relation comparison. To evaluate the correlation between solutions, the Pareto optimality theory is proposed and defined as follows [31] in the parameter sharing mode: 1. A parameter solution \(\mathbf{\theta}\) is said to dominate another parameter solution \(\widetilde{\mathbf{\theta}}\) if \(\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k}\right) \leq\hat{\mathcal{L}}_{k}\left(\widetilde{\mathbf{\theta}}_{\text{sh}},\widetilde {\mathbf{\theta}}_{k}\right)\) for all objectives \(k\), and there is a objective \(k^{\prime}\) that satisfies \(\hat{\mathcal{L}}_{k^{\prime}}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k^{ \prime}}\right)<\hat{\mathcal{L}}_{k^{\prime}}\left(\widetilde{\mathbf{\theta}}_{ \text{sh}},\widetilde{\mathbf{\theta}}_{k^{\prime}}\right)\). 2. A parameter solution \(\mathbf{\theta}^{*}\) is called Pareto optimal if and only if there is no parameter solution \(\mathbf{\theta}\) dominating \(\mathbf{\theta}^{*}\). In fact, the solution of MOP is not unique, but rather a set of Pareto optimal solutions, often known as Pareto set. Besides, the KKT conditions listed below need to be met while solving MOP: 1. There exists \(c_{1},c_{2},\ldots,c_{K}\) that satisfies \(\sum_{k=1}^{K}c_{k}=1\) and \(\sum_{k=1}^{K}c_{k}\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{k}\left( \mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k}\right)=0\). 2. For all objectives \(k\), \(\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text {sh}},\mathbf{\theta}_{k}\right)=0\) is satisfied. A solution that meets the above conditions is called a Pareto stationary point, and all Pareto optimal points are Pareto stationary, although the opposite is not always true. Therefore, while training the MOO model, we focus on leveraging Pareto stationary point to approximate the optimal MOO model parameters. ## III Proposed Method ### _Objective-aware Mixture-of-Experts Module_ The OMoE module is a multi-gate hybrid expert module inspired by [32] that was designed under the proposed framework. In contrast to the multi-gate hybrid expert models used in [30] and [33], the OMoE module explicitly distinguishes between objective-shared experts and objective-specific experts. Based on the MMoE structure of using objective-specific gate to fuse shared expert information to mitigate the NT phenomenon, the OMoE module increases objective-specific experts to achieve single-objective modeling performance, thereby solving the seesaw phenomenon caused by MOO imbalance. Specifically, the OMoE module is composed of several objective-specific and objective-shared feature extraction blocks, each with its own set of objective-specific expert module, objective-shared expert module, and corresponding gating networks. Multi-layer networks associated with each objective make up the OMoE module's top layer. The number of objective-shared and objective-specific experts in the feature extraction blocks, as well as the width and depth of the experts networks and tower networks, are all hyperparameters. Fig. 2 Fig. 2: The structure of Objective-aware Mixture-of-Experts module. shows an example of a single feature extraction block to illustrate data flow in the OMoE module. 
The inputs of the current feature extraction block are respectively derived from the selected outputs of objective-specific and objective-shared expert module in the previous feature extraction block. For the \(j^{\text{th}}\) feature extraction block, the outputs of \(k^{\text{th}}\) objective-specific expert module and the objective-shared expert module will be selected and concatenated as follows to obtain features \(O_{k}\) and \(O_{s}\): \[O_{k}\left(\mathbf{x}^{(k)}\right)=\left[\text{cat}\left\{E_{(k,p)}^{\top}\right\} _{p=1}^{n_{k}},\text{cat}\left\{E_{(s,q)}^{\top}\right\}_{q=1}^{n_{s}}\right]^ {\top},\] \[O_{s}\left(\mathbf{x}^{(s)}\right)=\left[\text{cat}\left\{\left\{E_{(k,p)}^{\top} \right\}_{p=1}^{n_{k}}\right\}_{k=1}^{K},\text{cat}\left\{E_{(s,q)}^{\top} \right\}_{q=1}^{n_{s}}\right]^{\top}. \tag{3}\] where \(\text{cat}\left\{\mathbf{\alpha}^{\top},\mathbf{\beta}^{\top}\right\}=\left[\mathbf{ \alpha}^{\top},\mathbf{\beta}^{\top}\right]\) is the concatenation function, \(\mathbf{x}^{(k)}\) is the input of the \(k^{\text{th}}\) objective-specific expert module, \(E_{(k,p)}\) is the output features of the \(p^{\text{th}}\) expert in the \(k^{\text{th}}\) objective-specific expert module, \(\mathbf{x}^{(s)}\), \(E_{(s,q)}\) correspond to the objective-shared expert module, and \(n_{k}\), \(n_{s}\) are the number of experts in the \(k^{\text{th}}\) objective-specific expert module and objective-shared expert module respectively. It should be noted that the corresponding inputs for all modules in the first feature extraction block is the same, that is \(\mathbf{x}^{(k)}=\mathbf{x}^{(s)}\). Following selective concatenation, the output features are selectively fused via gating networks. The gating network is a single-layer fully connected network with softmax as the activation function, and the input is used as a filter to determine the weighted sum of the chosen splicing vectors, as shown in Fig. 3. Taking the \(k^{\text{th}}\) objective-specific gating network as an example, its output after feature extraction block (FEB) is also used as the input of the \(k^{\text{th}}\) objective-specific expert module in block \(j+1\) as follows: \[\text{FEB}\left(\mathbf{x}^{(k)}\right)=g_{k}\left(\mathbf{x}^{(k)}\right)O_{k}\left( \mathbf{x}^{(k)}\right). \tag{4}\] where \(g_{k}\) is the weight calculation network and also the input linear transformation with a softmax layer: \[g_{k}\left(\mathbf{x}^{(k)}\right)=\text{softmax}\left(\mathbf{W}_{k}\mathbf{x}^{(k)} \right). \tag{5}\] where \(\mathbf{W}_{k}\in\mathbb{R}^{n\times d}\) is a trainable matrix, \(n\) is the number of experts in \(O_{k}\) and \(d\) is the dimension of input \(\mathbf{x}^{(k)}\). Finally, the output of the gating network is fed into the matching tower network to obtain the predicted value: \[\hat{y}_{k}=h_{k}\left(g_{k}^{(n_{k})}\left(\mathbf{x}^{(k,n_{k})}\right)\right). \tag{6}\] where \(h_{k}\) is the \(k^{\text{th}}\) tower network and \(n_{b}\) is the number of feature extraction block. The OMoE module is a MOO model of soft parameter sharing framework, but unlike MMoE and cross-stitch networks, OMoE divides feature extraction into objective-specific and objective-shared parts. This design adopts separate gating networks to integrate expert information for multiple objectives to mitigate the NT phenomenon, while using objective-specific experts to achieve performance comparable to single-objective modeling to mitigate the seesaw phenomenon. 
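To make the data flow of (3)-(5) concrete, the following PyTorch sketch implements a single OMoE feature extraction block; layer sizes, expert depth, and class names are illustrative assumptions rather than the original implementation.

```python
import torch
import torch.nn as nn

class OMoEBlock(nn.Module):
    """One OMoE feature extraction block: K objective-specific expert groups plus one
    objective-shared group, fused by per-objective gates and a shared gate (eqs. 3-5)."""
    def __init__(self, d_in, d_expert, K=2, n_k=1, n_s=1):
        super().__init__()
        make = lambda: nn.Sequential(nn.Linear(d_in, d_expert), nn.ReLU())
        self.K = K
        self.spec = nn.ModuleList(
            [nn.ModuleList([make() for _ in range(n_k)]) for _ in range(K)])
        self.shared = nn.ModuleList([make() for _ in range(n_s)])
        self.gate_k = nn.ModuleList([nn.Linear(d_in, n_k + n_s) for _ in range(K)])  # W_k in (5)
        self.gate_s = nn.Linear(d_in, K * n_k + n_s)

    def forward(self, xs, x_s):
        # xs[k] is the input x^(k) of the k-th objective-specific module, x_s is x^(s).
        E_spec = [[e(xs[k]) for e in self.spec[k]] for k in range(self.K)]  # E_(k,p)
        E_sh = [e(x_s) for e in self.shared]                                # E_(s,q)
        outs = []
        for k in range(self.K):
            O_k = E_spec[k] + E_sh                                          # selection in (3)
            g = torch.softmax(self.gate_k[k](xs[k]), dim=-1)                # gate weights (5)
            outs.append(sum(g[..., i:i + 1] * O_k[i] for i in range(len(O_k))))  # fusion (4)
        O_s = [E for group in E_spec for E in group] + E_sh
        g_s = torch.softmax(self.gate_s(x_s), dim=-1)
        out_s = sum(g_s[..., i:i + 1] * O_s[i] for i in range(len(O_s)))
        # outs and out_s become x^(k) and x^(s) of the next block; the objective-specific
        # outputs of the last block feed the tower networks h_k as in (6).
        return outs, out_s
```

Stacking \(n_{b}\) such blocks and attaching a tower network to each objective-specific output of the last block reproduces the module structure described above.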
### _Pareto Objective Routing Module_ The genesis of the seesaw phenomenon is more than just a problem with the architecture of MOO models. Another cause of the uneven distribution of training resources among objectives is the imbalance of loss weights. The optimization direction of the MOO model is determined by the weighted sum of the loss weight and the objective loss, and the incorrect setting of the loss weight may sacrifice the performance of the secondary objectives to improve the performance of the primary objectives. Therefore, the POR module is designed in such a way that the loss weight is dynamically modified as the model parameters are optimized. #### Iii-B1 Problem Statement As previously stated, the MOP defined by (2) is difficult to have a unique solution to minimize the loss of all objectives. The typical solution is to convert the loss vector optimization in (2) into scalar optimization, which is realized by the weighted sum of the following formula: \[\min_{\mathbf{\theta}}\mathcal{L}_{\mathbf{w}}\left(\mathbf{\theta}\right)=\min_{\mathbf{ \theta}}\mathbf{w}^{\top}\mathcal{L}\left(\mathbf{\theta}\right)=\min_{\mathbf{\theta}} \sum_{k=1}^{K}w_{k}\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}},\mathbf{ \theta}_{k}\right). \tag{7}\] where \(\mathbf{w}=\left[w_{1},\dots,w_{K}\right]^{\top}\) is the loss weight, satisfying \(\sum_{k=1}^{K}w_{k}=1\) and \(w_{k}\geq 0\) for all objectives \(k\). The Pareto stationary point of the multi-objective loss minimization problem in scalar form can be approximated using the following sub-optimization problem: \[\min_{\mathbf{w}}L\left(\mathbf{w}\right)=\min_{\mathbf{w}}\left\|\nabla_{\mathbf{\theta}_{ \text{a}}}\mathcal{L}_{\mathbf{w}}\right\|_{2}^{2}. \tag{8}\] where \(\|\cdot\|_{2}\) is the \(L_{2}\)-norm. The sub-optimization problem (8) is a linearly constrained optimization problem about the loss weight \(\mathbf{w}\), and its optimal solution \(\mathbf{w}^{*}\) can be obtained through linear search algorithm Frank-Wolfe [34]. #### Iii-B2 Frank-Wolfe Algorithm The Frank-Wolfe algorithm is a method for approximately minimizing the objective function in the feasible zone after first-order linearization. The loss function \(L\left(\mathbf{w}\right)\) in (8) can be linearized as follows: \[\min_{\mathbf{w}}L\left(\mathbf{w}\right)\rightarrow\min_{\mathbf{w}}\nabla_{\mathbf{w}}L \left(\mathbf{w}^{(r)}\right)^{\top}\mathbf{w}. \tag{9}\] where \(\mathbf{w}^{(r)}\) is the approximation point in current iteration. In the case of two objectives, (8) can be generalized as \(\min_{w}\|\mathbf{w}l_{1}+(1-w)\mathbf{l}_{2}\|_{2}^{2}\), where \(\mathbf{l}_{1},\mathbf{l}_{2}\) are the loss gradient vectors of the two objectives relative to the objective-shared parameters. The Frank-Wolfe algorithm points out that the Fig. 3: The workflow of gating network. weight \(w^{*}\) that makes the norm in the preceding formula the smallest is: \[w^{*}=\left\{\begin{array}{ll}1&\text{if }\mathbf{l}_{1}^{\top}\mathbf{l}_{2}\geq\mathbf{l}_{1 }^{\top}\mathbf{l}_{1}\\ 0&\text{if }\mathbf{l}_{1}^{\top}\mathbf{l}_{2}\geq\mathbf{l}_{2}^{\top}\mathbf{l}_{2}\\ \frac{(\mathbf{l}_{2}-\mathbf{l}_{1})^{\top}\mathbf{l}_{2}}{\|\mathbf{l}_{1}-\mathbf{l}_{2}\|_{2}^ {2}}&\text{otherwise}\end{array}\right.. 
\tag{10}\] For the common situation where the number of objectives is higher than two, the objective gradient interconnection matrix \(\mathbf{M}\) should be firstly defined as follows: \[\mathbf{M}_{i,j}=\left(\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{i}\left( \mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{i}\right)\right)^{\top}\left(\nabla_{\bm {\theta}_{\text{sh}}}\hat{\mathcal{L}}_{j}\left(\mathbf{\theta}_{\text{sh}},\mathbf{ \theta}_{j}\right)\right). \tag{11}\] Further calculate the weighted row sum of \(\mathbf{M}\) to find the objective \(\hat{t}\) that minimizes the weighted sum of its own gradient multiplied by the gradients of other objectives. The weight \(v^{*}\) that makes \(\left[v\mathbf{e}_{\hat{k}}+(1-v)\mathbf{w}^{(r)}\right]^{\top}\mathbf{M}\left[v\mathbf{e}_{ \hat{k}}+(1-v)\mathbf{w}^{(r)}\right]\) the least can be derived by (10). Finally, the loss weight \(\mathbf{w}\) is updated as follows: \[\mathbf{w}^{(r+1)}\gets v^{*}\mathbf{e}_{\hat{k}}+(1-v^{*})\mathbf{w}^{(r)}. \tag{12}\] where \(r\) is the number of iterations and \(\mathbf{e}_{\hat{k}}\in\mathbb{R}^{K}\) is a unit vector whose \(\hat{k}^{\text{th}}\) element is 1. Iteratively update the loss weight \(\mathbf{w}\) until \(v^{*}\) approaches zero, and then find the loss weight \(\mathbf{w}^{*}\) that eventually meets the normalization and non-negative conditions. Ultimately, the objective-specific parameters are updated in the same way as the regular network parameters, with the objective-shared parameters updated using the loss weights: \[\mathbf{\theta}_{k} \leftarrow\mathbf{\theta}_{k}-\eta\nabla_{\mathbf{\theta}_{k}}\hat{ \mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k}\right)\quad k \in\{1,\ldots,K\}, \tag{13}\] \[\mathbf{\theta}_{\text{sh}} \leftarrow\mathbf{\theta}_{\text{sh}}-\eta\sum_{k=1}^{K}w_{k}\nabla_{ \mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}}, \mathbf{\theta}_{k}\right).\] where \(\eta\) is the learning rate. Algorithm 1 displays the POR module for parameter optimization of the MOO model. ``` 0:\(\mathbf{\theta}\): parameters of the MOO model; \(\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{k}\): gradient of each objective loss to the objective-shared parameters; \(\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{k}\): gradient of each objective loss to the objective-specific parameters. Parameter: \(R\): the maximum number of iterations; \(\mathbf{e}_{\hat{k}}\): a \(K\)-dimensional unit vector whose \(\hat{k}^{\text{th}}\) element is 1. Output: \(\mathbf{w}^{*}\): loss weight which minimizes \(L\left(\mathbf{w}\right)\); \(\mathbf{\theta}\): optimized parameters. 
1: Initialize \(\mathbf{w}^{(0)}=\left[w_{1},\ldots,w_{K}\right]^{\top}=\left[\frac{1}{r},\ldots, \frac{1}{r}\right]^{\top}\) 2: Compute \(\mathbf{M}\) with element \(\mathbf{M}_{i,j}=\left(\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{i} \right)^{\top}\left(\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{j}\right)\) 3: Set \(r=0\) 4:repeat 5:\(\hat{k}\leftarrow\text{argmin}_{i}\sum_{j=1}^{K}w_{j}\mathbf{M}_{i,j}\) 6:\(v^{*}\leftarrow\text{argmin}_{v}\left[v\mathbf{e}_{\hat{k}}+(1-v)\mathbf{w}\right]^ {\top}\mathbf{M}\left[v\mathbf{e}_{\hat{k}}+(1-v)\mathbf{w}\right]\) 7:\(\mathbf{w}\gets v^{*}\mathbf{e}_{\hat{k}}+(1-v^{*})\mathbf{w}\) 8:\(r=r+1\) 9:until\(v^{*}\to 0\) or \(r=R\) 10: Update \(\mathbf{\theta}_{k}\leftarrow\mathbf{\theta}_{k}-\eta\nabla_{\mathbf{\theta}_{\text{sh}}} \hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{\text{sh}},\mathbf{\theta}_{k}\right)\) for all objectives 11: Update \(\mathbf{\theta}_{\text{sh}}\leftarrow\mathbf{\theta}_{\text{sh}}-\eta\sum_{k=1}^{K}w_{ k}\nabla_{\mathbf{\theta}_{\text{sh}}}\hat{\mathcal{L}}_{k}\left(\mathbf{\theta}_{ \text{sh}},\mathbf{\theta}_{k}\right)\) ``` **Algorithm 1** POR for Parameter Optimization Algorithm 1 displays the POR module for parameter optimization of the MOO model. #### Iii-B3 Analysis of Primal-Dual Convergence We shall demonstrate the primal-dual convergence of the Frank-Wolfe algorithm, which is stronger than the convergence of the primal problem, using a simple dual gap proof framework. The convergence of the primal error is given first in Theorem III.1. **Theorem III.1**.: _For each \(r\geq 1\), the iterates \(\mathbf{w}^{(r)}\) of the Frank-Wolfe algorithm satisfy_ \[L(\mathbf{w}^{(r)})-L(\mathbf{w}^{*})\leq\frac{2C_{f}}{r+2}(1+\delta). \tag{14}\] _where \(\delta\geq 0\) is the accuracy to which the linear subproblems (9) are solved, and \(C_{f}\) is the curvature constant of loss function \(L\) which is defined as_ \[C_{f}=\sup_{\gamma\in[0,1]}\frac{2}{\gamma^{2}}(L(\mathbf{w}^{\prime})-L(\mathbf{w})- \langle\mathbf{w}^{\prime}-\mathbf{w},\nabla L(\mathbf{w})\rangle). \tag{15}\] _where \(\mathbf{w}^{\prime}=\mathbf{w}+\gamma(\mathbf{s}-\mathbf{w})\) and \(\mathbf{s}\) is a feasible solution._ The proof of the above convergence theorem is given in Appendix for completeness. Theorem III.1 guarantees a small raw error, while the optimal value \(L(\mathbf{w}^{*})\) and the curvature constant \(C_{f}\) are usually unknown. The surrogate duality gap is defined below for more convenient estimation of the approximation quality \[\phi(\mathbf{w})=\max\ \langle\mathbf{w}-\mathbf{s},\nabla L(\mathbf{w})\rangle. \tag{16}\] Convexity of \(L\) implies that the linearization \(L(\mathbf{w})+\langle\mathbf{w}-\mathbf{s},\nabla L(\mathbf{w})\rangle\) always lies below the function \(L\), which provides the property of the duality gap, \(\phi(\mathbf{w})\geq L(\mathbf{w})-L(\mathbf{w}^{*})\). The Frank-Wolfe algorithm obtain guaranteed small duality gap \(\phi(\mathbf{w})\leq\epsilon\) after \(O(\frac{1}{\epsilon})\) iterations even if the linear subproblems 9 are only solved approximately. **Theorem III.2**.: _If the Frank-Wolfe algorithm is run for \(R\geq 2\) iterations, then the algorithm has a bounded duality gap with iterate \(\mathbf{w}^{(r)}\), \(1\leq r\leq R\)_ \[\phi(\mathbf{w}^{(r)})\leq\frac{2\beta C_{f}}{R+2}(1+\delta). \tag{17}\] _where \(\beta=\frac{27}{8}\)._ By proving that the duality gap cannot be kept large in multiple iterations, Theorem III.2 is proved by contradiction in Appendix. 
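The weight search of Algorithm 1 amounts to repeated applications of the closed-form two-vector solution (10) on the Gram matrix (11). A minimal NumPy sketch is given below; the function names, the small ridge term guarding the division, and the stopping tolerance standing in for \(v^{*}\to 0\) are illustrative assumptions.

```python
import numpy as np

def min_norm_2(aa, ab, bb):
    # Closed-form minimizer of ||v*l1 + (1-v)*l2||^2 over v in [0,1], eq. (10),
    # given the inner products aa = l1.l1, ab = l1.l2, bb = l2.l2.
    if ab >= aa:
        return 1.0
    if ab >= bb:
        return 0.0
    return (bb - ab) / (aa - 2.0 * ab + bb + 1e-12)

def por_weights(grads, max_iter=50, tol=1e-6):
    # grads: list of K flattened gradients of each objective loss w.r.t. the shared parameters.
    G = np.stack(grads)                      # K x P
    M = G @ G.T                              # Gram matrix, eq. (11)
    K = len(grads)
    w = np.full(K, 1.0 / K)                  # uniform initialization
    for _ in range(max_iter):
        k_hat = int(np.argmin(M @ w))        # objective with the smallest weighted row sum
        v = min_norm_2(M[k_hat, k_hat], float(M[k_hat] @ w), float(w @ M @ w))
        step = np.zeros(K)
        step[k_hat] = v
        w = step + (1.0 - v) * w             # update (12)
        if v < tol:                          # v* -> 0 stopping rule
            break
    return w
```

The returned weights are then used to combine the per-objective gradients of the shared parameters as in (13), while the objective-specific parameters follow their own unweighted gradients.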
### _Architecture & Procedure_ Fig. 4 depicts the general architecture of the proposed TMoE-P model. The model is divided into two components: the OMoE module and the POR module. The input is first routed through the OMoE module, where objective-specific and objective-shared features are progressively separated using stacked feature extraction blocks. After that, the gating networks screen and fuse the separated features through (3) and (4). Additionally, the output of the last feature extraction block will disgard the part of the objective-shared gating network and estimate the quality variables via the tower networks. The predicted quality variables sequence is utilized in conjunction with the real values to calculate the loss of each objective, while backpropagation is employed to derive the gradient of each objective loss for the objective-shared parameters. The gradient will be submitted to the POR module, which will thereafter iteratively search for the loss weight \(\mathbf{w}^{*}\) that minimizes (8). The multi-objective loss \(\mathcal{L}_{\mathbf{w}}\) is calculated after \(\mathbf{w}^{*}\) and the loss of each objective have been weighted. Finally, the network parameters are optimized using (13) to approach the Pareto optimal network parameters. ## IV Experiments In this section, we carry out extensive experiments using real material measurement data to assess the effectiveness of the proposed model. ### _Experiment Setup_ #### Iv-A1 Dataset The dataset comes from Sulfur Recovery unit (SRU), which is an essential part of managing sulfur emissions [35]. Fig. 5 depicts an SRU made up of four parallel sulfur production lines, the major inputs of which are H\({}_{2}\)S-enriched MEA gas and H\({}_{2}\)S- and NH\({}_{3}\)-enriched SWS gas. The key reaction in SRU that can purify acid gas and generate sulfur as a byproduct is as follows: \[\left\{\begin{array}{ll}3\text{H}_{2}\text{S}+\frac{1}{2}\text{O}_{2}& \rightarrow\text{SO}_{2}+2\text{H}_{2}\text{S}+\text{H}_{2}\text{O}\\ \text{SO}_{2}+2\text{H}_{2}\text{S}&\rightarrow\text{S}_{x}+2\text{H}_{2} \text{O}\end{array}\right.. \tag{18}\] In practical industrial applications, plant operators must manually adjust the air-to-feed ratio such that the concentration ratio of H\({}_{2}\)S and SO\({}_{2}\) is roughly 2:1, where online analyzers are inseparable. However, it's simple for acid gas to harm the instrument itself, hence soft sensors should be considered. Table. I displays the process variables and quality variables collected from the SRU industrial history data. Taking into account the process dynamics and time-delay characteristics, the actual input of the soft sensor model at time \(t\) is designed as follows: \[\mathbf{x}_{t}=\left[x_{1}(t),\ldots,x_{1}(t-9),\ldots,x_{D}(t),\ldots,x_{D}(t-9) \right]^{\top}. \tag{19}\] where \(D=5\), \(x_{d}(t),d=1,2,\ldots,D\) are the input variables at time \(t\), \(x_{d}(t-z),z=1,2,\ldots,9\) are corresponding lagged values. After removing null values, the dataset comprises a total of 10,000 data samples. #### Iv-A2 Baseline Models In experiments, we compare the proposed model to conventional regression prediction models such as partial least squares regression (PLSR), AE, variant of AE like stacked autoencoder (SAE) and gated stacked target-related autoencoder (GSTAE) [36], long short-term memory network (LSTM), variable attention LSTM (VA-LSTM) network [37] and supervised LSTM (SLSTM) network [38]. In addition, we compared SOTA MTL models such as MMoE [30], PLE [32], and BMoE [33]. 
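All compared models consume the lagged input vector of (19). A minimal NumPy sketch of that construction is shown below; the helper name and the row layout are illustrative assumptions.

```python
import numpy as np

def build_lagged_inputs(X, n_lags=10):
    # X: (T, D) array of process variables; returns an array of shape (T - n_lags + 1, D * n_lags)
    # whose rows are [x_1(t), ..., x_1(t-9), ..., x_D(t), ..., x_D(t-9)] as in (19).
    T, D = X.shape
    rows = []
    for t in range(n_lags - 1, T):
        rows.append([X[t - z, d] for d in range(D) for z in range(n_lags)])
    return np.asarray(rows)
```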
The appendix provides descriptions of baseline models we used. Since certain regression prediction models can only predict univariate, we undertake a fair assessment of quality variables using univariate modeling. And for regression prediction models capable of multivariate prediction, we substitute their multivariate prediction heads with objective-specific tower networks and transform them into MOO models for comparison. #### Iv-A3 Training & Evaluation All quality variable prediction models in experiments are trained and evaluated through MSE loss. The SRU dataset is split into training, validation and test set at a ratio of 6:2:2. In all MOO models and single-objective models, we use RELU-activated MLP networks for each expert or feature extraction layer. The proposed model necessitates the adjustment of several hyperparameters: the number of feature extraction blocks, objective-specific experts, and objective-shared experts. The evaluation metrics employed RMSE, MAE, and R\({}^{2}\), and all experiments were carried out on Ubuntu 18.04.2 LTS based on Python 3.7. Fig. 4: The general architecture of the proposed Task-aware Mixture-of-Experts for achieving the Pareto optimum. Fig. 5: Process flow of the SRU. ### _Performance Evaluation_ The baseline approaches and the proposed TMoE-P model's prediction performance evaluation metrics for the SRU dataset are provided in Table. II, together with the mean and variation. And Fig. 6 depicts the prediction curve, the accompanying residual histogram, and the kernel density estimation curve. The outcomes demonstrate that the proposed TMoE-P model greatly outperforms the baseline models in the concentration prediction of both H\({}_{2}\)S and SO\({}_{2}\). As seen from SRU's reaction process (18), there is a inverse and complex relationship between between the concentration of H\({}_{2}\)S and SO\({}_{2}\), leading to the occurrence of NT phenomenon. Because the dependencies between objectives are challenging to describe when multi-objective joint modeling is required for a single-objective regression prediction model, this is notably evident in the SLSTM. In single-objective modeling of SO\({}_{2}\), the SLSTM model scored the greatest metric among most of the baseline approaches; however, when employed in multi-objective modeling, the R\({}^{2}\) of SO\({}_{2}\) concentration prediction declined by 0.054. Furthermore, our experimental results reveal that in the multi-objective scenario of H\({}_{2}\)S and SO\({}_{2}\) concentration prediction, the majority of the multi-objective models suffer from seesaw phenomenon. Specifically, several models decrease the accuracy of SO\({}_{2}\) while improving the accuracy of H\({}_{2}\)S. Take the VA-LSTM model for example, the R\({}^{2}\) of H\({}_{2}\)S concentration prediction increased by 0.02 while the R\({}^{2}\) of SO\({}_{2}\) concentration prediction decreased by about 0.01. On this premise, the MMoE and BMoE models utilize the mixture-of-experts structure to balance multiple objectives, and significantly improve H\({}_{2}\)S with a slight decrease in SO\({}_{2}\) performance. The PLE model incorporates objective-specific experts, which improves the accuracy of H\({}_{2}\)S concentration prediction while exacerbating the seesaw phenomenon. 
The proposed model TMoE-P employs objective-aware mixture-of-experts, uses the POR module to dynamically modify target weights, and can effectively extract correlation information between objectives, outperforming or even outperforming single modeling of each objective. The kernel density estimation curve in Fig. 6 ensures that the residuals are Gaussian white noise with zero mean and variance concentration, which verifies that TMoE-P is an unbiased estimation model in statistical significance ### _Ablation Studies_ #### Iv-C1 On the POR Module The introduction of the POR module to dynamically modify the target weights in the updating of the OMoE network parameters is one of the proposed TMoE-P model's primary advances. This subsection will verify the effectiveness of dynamic weights in the POR module by contrasting the OMoE model trained with constant target weights on the same parameters. The trend chart of metrics in the H\({}_{2}\)S and SO\({}_{2}\) concentration prediction objectives of the TMoE-P network trained with constant target weights and dynamic weights is shown in Fig. 7. It is obvious that the model trained with the POR module's dynamic weights outperforms the model trained with Fig. 6: Prediction curve, accompanying residual histogram, and the kernel density estimation curve of TMoE-P. (a) and (c) for H\({}_{2}\)S, (b) and (d) for SO\({}_{2}\). constant weights in terms of steady-state metrics values after convergence. In particular, the R\({}^{2}\) of H\({}_{2}\)S has risen from 0.8764 to 0.8845 while the R\({}^{2}\) of SO\({}_{2}\) has risen from 0.8371 to 0.8626. Furthermore, dynamic weights outperform single-objective modeling by 8.1% and 0.67% respectively, while eliminating the seesaw phenomenon and NT phenomenon. To better understand the evolution trend of dynamic weights, we extracted target weights during the TMoE-P training process, as illustrated in Fig. 8. The figure shows that as the objective-specific features and objective-shared features are progressively separated, the target weight evolves from equilibrium to focus on H\({}_{2}\)S concentration prediction, eventually settling at roughly 0.7:0.3. The dynamic adjustment of target weights allows the TMoE-P to overcome the imbalance of training resources produced by excessively complicated objective correlations, bringing it closer to the Pareto optimal model, demonstrating the effectiveness of the POR module in MOO. We further verified the improvement of POR module to OMoE module under different number of experts \(n_{e}\), as shown in Fig. 9. It can be seen from the step value in the histogram that, with the same number of experts, the proposed TMoE-P has a higher R\({}^{2}\) than OMoE alone, and the variance is generally lower, ensuring more stable and accurate predictions. In addition, we applied the POR module to the traditional soft sensor models VA-LSTM, SLSTM and the multi-task learning model MMoE, and the results are shown in Fig. 10. It should be noted that the shared parameter of the first two models is the subject feature extraction network, and the objective-specific parameter is the prediction head. And it can be clearly seen that POR module can produce effective accuracy improvement no matter it is in the traditional soft sensor model or in the multi-task learning model. 
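Plugging POR into any of the above backbones only requires the per-objective gradients of whatever parameters are designated as shared. A minimal PyTorch sketch of a single training step following (13) is given below; it reuses the `por_weights` helper from the earlier sketch and assumes that each objective-specific (tower) parameter is touched by exactly one objective loss.

```python
import torch

def por_training_step(losses, shared_params, optimizer):
    """losses: list of K per-objective losses for one batch; shared_params: list of theta_sh."""
    # Per-objective gradients of the shared parameters, flattened for por_weights().
    per_obj = [torch.autograd.grad(L_k, shared_params, retain_graph=True) for L_k in losses]
    flat = [torch.cat([g.reshape(-1) for g in gs]).detach().cpu().numpy() for gs in per_obj]
    w = por_weights(flat)                          # min-norm loss weights, eq. (8)

    # Each tower parameter depends on a single loss, so back-propagating the plain sum
    # gives it the unweighted gradient required by (13).
    sum(losses).backward()
    with torch.no_grad():
        for i, p in enumerate(shared_params):      # overwrite theta_sh gradients with the
            p.grad = sum(float(w_k) * gs[i] for w_k, gs in zip(w, per_obj))  # weighted sum
    optimizer.step()
    optimizer.zero_grad()
```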
#### Iv-B2 On the Experts Module In Table III, we remove objective-shared and objective-specific components from the OMoE module to study the differences between target-alone and hard parameter sharing modeling versus hybrid modeling. We find that hybrid modeling, which combines objective-shared and objective-specific experts, is the best MOO framework across all parameter groups. OMoE without objective-shared experts fully ignores the correlation information between quality variables, making it challenging to improve the multi-objective metrics by increasing the number of experts and executing information fusion through the gating network. And regrettably, the R\({}^{2}\) metric fell by 2% \(\sim\) 6% for predictions of two quality variables. The OMoE structure without objective-specific experts, on the other hand, is analogous to the MMoE structure, which cannot escape the seesaw phenomenon in MOO. Fig. 8: Dynamic target weights and corresponding losses during training process of TMoE-P. Fig. 7: Performance of dynamic weights and constant equal weights during model training. (a),(c) and (e) are the metrics of H\({}_{2}\)S concentration; (b),(d) and (f) are the metrics of SO\({}_{2}\) concentration. Fig. 9: Ablation study of POR module under different \(n_{e}\). (a) for H\({}_{2}\)S, and (b) for SO\({}_{2}\). ### _Parameter Sensitivity Studies_ The main hyperparameters of the proposed TMoE-P model are the number of objective-specific experts \(n_{e}\), the number of feature extraction blocks \(n_{b}\), and the number of expert network layers \(n_{l}\). The sensitivity analysis of the TMoE-P prediction performance will be performed on the three variable hyper-parameters in this subsection, with the remaining parameters fixed and one of the above hyperparameters changed through the control variable approach. Fig. 11 depicts the fluctuation diagram of the evaluation metrics RMSE and R\({}^{2}\) when \(n_{e}\), \(n_{b}\) and \(n_{l}\) vary between \([1,5]\), \([1,4]\) and \([1,5]\). According to the variations in the two metrics, it can be seen that when \(n_{e}=1\), the sum of TMoE-P's metrics in the prediction of H\({}_{2}\)S and SO\({}_{2}\) concentrations is the highest, reaching 0.8845 and 0.8626, respectively. However, when \(n_{e}\) grows, the performance of TMoE-P falls dramatically. One plausible explanation is that the gating network shown in Fig. 3 is difficult to integrate large-capacity expert knowledge, and the TMoE-P with single expert is adequate to realize joint prediction of H\({}_{2}\)S and SO\({}_{2}\) concentrations. When it comes to the effect of \(n_{b}\), TMoE-P performs best for \(n_{b}=2\), and as \(n_{b}\) decreases or grows, TMoE-P performs worse and worse. Indeed, when \(n_{b}=1\), TMoE-P can only separate and fuse objective-specific and objective-shared features via a objective-specific gating network, and when \(n_{b}\) is too large, the network structure becomes too complex for both to avoid the seesaw phenomenon. Additionally, we discovered that when \(n_{l}=3\), the objective-specific expert structure of TMoE-P attained highest performance, outperforming the \(n_{l}=2\) and \(n_{l}=4\) models in terms of R\({}^{2}\), which improved by 0.7% and 4.0% for H\({}_{2}\)S and 1.0% and 2.8% for SO\({}_{2}\), respectively. 
To summarize the sensitivity experiment results for the aforementioned hyperparameters, we recommend that during the TMoE-P training process, first set \(n_{e}\) to 1 and then change the hyperparameters \(n_{b}\) and \(n_{l}\) within \([1,3]\) and \([2,4]\). ## V Related Work Solving industrial soft sensor problems based on the concept of MOO is a novel and demanding task, as there are frequently contradicting but closely connected linkages between quality variables. Broadly speaking, current work in this field can be categorised in two groups. The first is to utilize MOO as the model hyperparameter optimizer to find the best model structure. The second instead generalize and solve the soft sensor problem using the mathematical form of MOO. He et al.[39] optimized the hyperparameters of the four data-driven models including Random Forest, Gradient Boosting Regression, Ridge Regression, and K-Nearest Neighbor using the intelligent evolutionary algorithm NSGA II, but did not further integrate the models. Instead, Jin et al.[40] simply performed an optimization search of similarity measures for Just-in-time learning models and developed a stacked Just-in-time learning model with a two-layer structure for online estimation Fig. 11: Parameter sensitivity analysis of the hyperparameters on metrics RMSE and R\({}^{2}\). (a),(c) and (e) for RMSE metric; (b),(d) and (f) for R\({}^{2}\) metric. Fig. 10: Ablation study of POR module under different model. (a) for H\({}_{2}\)S, and (b) for SO\({}_{2}\). of quality variables. While considering the dimensionality of features and the weight of regularization, Ribeiro et al.[41] proposed a two-stage MOO to improve the resilience of Partial Least Squares Regression. Yan et al.[42] suggested an end-to-end multi-quality variable correlation model for joint optimization, with the prediction loss of quality variables and the correlation loss of features serving as the optimization objectives in MOO. However, the nonlinear feature transformation considered in this model is merely Multilayer Perceptron, which is a standard hard parameter sharing model. Huang et al.[33] characterized the soft sensor problem as a multi-task regression problem, which is a special case of MOO, and used the MMoE structure and GradNorm module to balance the task gradient to improve multi-task performance. Futhermore, Lei et al.[43] extended the regression task with generation and classification tasks, and employed the upgraded Capsule Network and weighted objective loss for stochastic gradient multi-task optimization. Our approach is most similar to Huang, in that we both use gradients to create target weights to optimize network parameters. The distinction is that we concentrate primarily on three aspects: we begin by solving the MOP. Compared with multi-task learning, MOO focuses on approximating the Pareto optimal solution, and the network update mechanism is more challenging. Second, the OMoE and MMoE structures use different soft parameter sharing approaches, adding objective-specific experts, which can mitigate the seesaw phenomenon produced by full sharing of the underlying parameters when objective correlation is weak. Finally, our POR optimization algorithm approaches the Pareto optimal model parameters, whereas GradNorm lacks theoretical backing and relies on intuition to discover target weights that balance multi-task learning speed. 
## VI Conclusion In this paper, we propose a Task-aware Mixture-of-Experts model combining the OMoE and the POR module capable of approaching Pareto optimality (TMoE-P) for the field of industrial soft sensor. The proposed TMoE-P uses the OMoE network to explicitly separate objective-specific parameters from objective-shared parameters in order to achieve soft sensor accuracy that exceeds individual modeling under shared mode. Simultaneously, we employ the POR module, which can approach the Pareto optimality, to dynamically modify the proportion of soft sensor objectives during the training of the OMoE network, avoiding the phenomenon of negative transfer and seesaw. The experimental results on the SRU chemical industry data set reveal that TMoE-P outperforms the SOTA MTL model and the convensional regression prediction model. Our future research will mainly focus on investigating the fusion mechanism of the gating network with a large number of experts, MTL modeling for time-series data in the soft sensor field, and the Pareto stationary point solving algorithm under the MOP optimization framework. ## Appendix Baseline Description The baseline model we used in the verification experiments of TMoE are as follows: * **PLSR** finds a linear regression model by projecting quality variables and process variables into a new space; * **AE** performs representation learning on the input data to obtain dimensionality reduction features, and predicts quality variables after combining with a fully connected neural network; * **SAE** increases the depth of the hidden layer based on AE and improves feature extraction ability through layer-wise unsupervised pre-training; * **GSTAE**[36] adds the prediction error of the quality variables to the loss function during the layer-wise unsupervised pre-training, guiding the feature learning process with target-related information, and uses gated neurons to learn the information flow of the final output neurons; * **LSTM** is a recurrent neural network capable of processing quality variable data arranged in sequences and capturing temporal dependencies between sequences; * **VA-LSTM**[37] consists of two LSTM networks, one considers the correlation of process variables with quality variables and assigns attention weights to process variables and assigns attention weights to process variables based on the correlations, and the other captures the long-term dependencies of weighted input to predict quality variables; * **SLSTM**[38] consists of two LSTM networks, one considers the correlation of process variables with quality variables and assigns attention weights to process variables based on the correlations, and the other captures the long-term dependencies of weighted input to predict quality variables; * **MMoE**[30] is multi-gate mixture-of-experts, which characterizes task correlation and learns specific tasks based on shared representations; * **BMoE**[33] leverages GradNorm on the basis of MMoE to balance the learning gradient between tasks; * **PLE**[32] explicitly distinguishes between task-specific and task-shared components in the model, avoiding the seesaw phenomenon caused by loose task correlation; ## Proof of Convergence Theorem The proof of the convergence of the primal error depends on the following Lemma A.1 on the improvement in each iteration. 
**Lemma A.1**.: _For a step \(\mathbf{w}^{(r+1)}=\mathbf{w}^{(r)}+\gamma(\mathbf{s}-\mathbf{w}^{(r)})\) with arbitrary step-size \(\gamma\in[0,1]\), if \(\mathbf{s}\) is a appropriate descent direction on the linear approximation to \(L\), it holds that_ \[L\left(\mathbf{w}^{(r+1)}\right)\leq L\left(\mathbf{w}^{(r)}\right)-\gamma\phi\left( \mathbf{w}^{(r)}\right)+\frac{\gamma^{2}}{2}C_{f}(1+\delta). \tag{20}\] Proof.: To simplify the notation, we write \(\mathbf{w}=\mathbf{w}^{(r)}\), \(\mathbf{w}^{\prime}=\mathbf{w}^{(r+1)}\), and \(d_{w}=\nabla L\left(\mathbf{w}^{(r)}\right)\). From the definition of the curvature constant \(C_{f}\), we have \[\begin{split} L(\mathbf{w}^{\prime})&=L(\mathbf{w}+\gamma(\mathbf{s }-\mathbf{w}))\\ &\leq L(\mathbf{w})+\gamma\left<\mathbf{s}-\mathbf{w},d_{w}\right>+\frac{\gamma ^{2}}{2}C_{f}.\end{split} \tag{21}\] Then we pick \(\mathbf{s}\) as an linear minimizer for \(L\), and it satisfies \(\left<\mathbf{s},\nabla L\left(\mathbf{w}^{(r)}\right)\right>\leq\min\left<\hat{\mathbf{s }},\nabla L\left(\mathbf{w}^{(r)}\right)\right>+\frac{1}{2}\delta\gamma C_{f}\), which can be rewritten as \[\begin{split}\left<\mathbf{s}-\mathbf{w},d_{w}\right>&\leq \min\left<\mathbf{w}^{\prime},d_{w}\right>-\left<\mathbf{w},d_{w}\right>+\frac{1}{2} \delta\gamma C_{f}\\ &=-\phi(\mathbf{w})+\frac{1}{2}\delta\gamma C_{f}.\end{split} \tag{22}\] Therefore, we obtain \(L(\mathbf{w}^{\prime})\leq L(\mathbf{w})-\gamma\phi(\mathbf{w})+\frac{\gamma^{2}}{2}C_{f} (1+\delta)\), which proves the lemma. **Theorem A.1**.: _For each \(r\geq 1\), the iterates \(\mathbf{w}^{(r)}\) of the Frank-Wolfe algorithm satisfy_ \[L(\mathbf{w}^{(r)})-L(\mathbf{w}^{*})\leq\frac{2C_{f}}{r+2}(1+\delta). \tag{23}\] _where \(\delta\geq 0\) is the accuracy to which the linear subproblems (9) are solved, and \(C_{f}\) is the curvature constant of loss function \(L\) which is defined as_ \[C_{f}=\sup_{\gamma\in[0,1]}\frac{2}{\gamma^{2}}(L(\mathbf{w}^{\prime})-L(\mathbf{w})- \left<\mathbf{w}^{\prime}-\mathbf{w},\nabla L(\mathbf{w})\right>). \tag{24}\] _where \(\mathbf{w}^{\prime}=\mathbf{w}+\gamma(\mathbf{s}-\mathbf{w})\) and \(\mathbf{s}\) is a feasible solution._ Proof.: From Lemma A.1 we know that \(L\left(\mathbf{w}^{(r+1)}\right)\leq L\left(\mathbf{w}^{(r)}\right)-\gamma\phi\left( \mathbf{w}^{(r)}\right)+\gamma^{2}C\) holds for every step of Frank-Wolfe algorithm where we define \(C=\frac{C_{f}}{2}(1+\delta)\) and take fixed step \(\gamma=\frac{2}{r+2}\). Writing the primal error as \(\zeta(\mathbf{w})=L(\mathbf{w})-L(\mathbf{w}^{*})\) at any point \(\mathbf{w}\), this implies that \[\begin{split}\zeta\left(\mathbf{w}^{(r+1)}\right)&\leq \zeta\left(\mathbf{w}^{(r)}\right)-\gamma\phi\left(\mathbf{w}^{(r)}\right)+\gamma^{2}C \\ &\leq\zeta\left(\mathbf{w}^{(r)}\right)-\gamma\zeta\left(\mathbf{w}^{(r)} \right)+\gamma^{2}C\\ &=(1-\gamma)\zeta\left(\mathbf{w}^{(r)}\right)+\gamma^{2}C.\end{split} \tag{25}\] where we have used weak duality \(\zeta(\mathbf{w})\leq\phi(\mathbf{w})\). We will now use induction over \(r\) to prove bound claimed as follows \[\zeta\left(\mathbf{w}^{(r+1)}\right)\leq\frac{4C}{r+3}\quad r=0,1,\ldots \tag{26}\] The base-case \(r=0\) follows from (25) applied for the first step through \(\gamma=\gamma^{(0)}=\frac{2}{r+2}=1\). 
Now considering \(r\geq 1\), \[\begin{split}\zeta\left(\mathbf{w}^{(r+1)}\right)&\leq (1-\gamma^{(r)})\zeta\left(\mathbf{w}^{(r)}\right)+\gamma^{(r)}{}^{2}C\\ &=(1-\frac{2}{r+2})\zeta\left(\mathbf{w}^{(r)}\right)+\frac{2}{r+2} ^{2}C\\ &\leq(1-\frac{2}{r+2})\frac{4C}{r+2}+\frac{2}{r+2}^{2}C.\end{split} \tag{27}\] Simply rearranging the terms gives the bound claimed above \[\begin{split}\zeta\left(\mathbf{w}^{(r+1)}\right)&\leq \frac{4C}{r+2}(1-\frac{1}{r+2})=\frac{4C}{r+2}\frac{r+1}{r+2}\\ &\leq\frac{4C}{r+2}\frac{r+2}{r+3}=\frac{4C}{r+3}.\end{split} \tag{28}\] **Theorem A.2**.: _If the Frank-Wolfe algorithm is run for \(R\geq 2\) iterations, then the algorithm has a bounded duality gap with iterate \(\mathbf{w}^{(r)}\), \(1\leq r\leq R\)_ \[\phi(\mathbf{w}^{(r)})\leq\frac{2\beta C_{f}}{R+2}(1+\delta). \tag{29}\] _where \(\beta=\frac{27}{8}\)._ Proof.: We do not prove that the above duality gap holds for the entire iteration, but for the last third of the \(R\) iteration. To simplify notation, we denote the primal and dual errors as \(\zeta^{(r)}=\zeta\left(\mathbf{w}^{(r)}\right)\) and \(\phi^{(r)}=\phi\left(\mathbf{w}^{(r)}\right)\) for \(r\geq 0\). By primal convergence Theorem A.1, we already know that the primal error satisfies \(\zeta^{(r)}\leq\frac{C}{r+2}\), where we reset \(C=2C_{f}(1+\delta)\). We firstly make following contradict assumption: \(\phi^{(r)}\) always stays larger than \(\frac{\beta C}{U}\) in the last third of the \(R\) iterations, which can be formally defined as \[\phi^{(r)}>\frac{\beta C}{U}\quad\text{ for }\quad r\in\{\lceil\mu U\rceil-2, \ldots,R\} \tag{30}\] where \(U=R+2\) for simple notation, and \(0<\mu<1\) is an arbitrary fixed parameter. We will find later that \(\mu=\frac{2}{3}\) is a good choice. Lemma A.1 can be further read as follows if we choose \(\gamma=\frac{2}{r+2}\) \[\begin{split}\zeta^{(r+1)}&\leq\zeta^{(r)}-\frac{2} {r+2}\phi^{(r)}+\frac{2}{(r+2)^{2}}C_{f}(1+\delta)\\ &=\zeta^{(r)}-\frac{2}{r+2}\phi^{(r)}+\frac{C}{(r+2)^{2}}.\end{split} \tag{31}\] Considering the assumptions we made in (30), we obtain \[\zeta^{(r+1)}<\zeta^{(r)}-\frac{2}{r+2}\frac{\beta C}{U}+\frac{C}{(r+2)^{2}}. \tag{32}\] If we define \(r_{\text{min}}=\lceil\mu U\rceil-2\), then \(r_{\text{min}}\geq 0\) for \(R\geq\frac{2(1-\mu)}{\mu}\). While the steps \(r\) satisfy \(r_{\text{min}}\leq r\leq R\), then \(\mu U\leq r+2\leq U\), the inequality in 32 now reads as \[\begin{split}\zeta^{(r+1)}&<\zeta^{(r)}-\frac{2}{U} \frac{\beta C}{U}+\frac{C}{(\mu U)^{2}}\\ &=\zeta^{(r)}-\frac{2\beta C-C/\mu^{2}}{U^{2}}.\end{split} \tag{33}\] We now sum up this inequality over the last third of the steps from \(r_{\text{min}}\) up to \(R\), then we get \[\begin{split}\zeta^{(R+1)}&<\zeta^{(r_{\text{min}})}-(R- r_{\text{min}}+1)\frac{2\beta C-C/\mu^{2}}{U^{2}}\\ &\leq\frac{C}{\mu U}-\tau\frac{2\mu\beta-1/\mu}{U}\frac{C}{\mu U }\\ &=\frac{C}{\mu U}\left(1-\tau\frac{2\mu\beta-1/\mu}{U}\right).\end{split} \tag{34}\] where \(\tau=(1-\mu)U\leq R+2-(\lceil\mu U\rceil-1)=R-r_{\text{min}}+1\), and in the last inequality we have used Theorem A.1 giving \(\zeta^{(r_{\text{min}})}\leq\frac{C}{r_{\text{min}}+2}\leq\frac{C}{\mu U}\). For \(\mu=\frac{2}{3}\) and \(\beta=\frac{27}{8}\), the following term become zero: \(1-\tau\frac{2\mu\beta-1/\mu}{U}=1-(1-\mu)(2\mu\beta-1/\mu)=0\). Then we arrive at the contradiction that \(\zeta^{(R+1)}<0\) and our assumption on the gap is refuted, so the claimed bound has been proven.
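As a small numerical illustration of Theorems A.1 and A.2, the following self-contained sketch (toy data, illustrative constants) runs the fixed-step Frank-Wolfe iteration on an instance of the subproblem (8) over the simplex; the surrogate duality gap (16), which upper-bounds the primal error, decays at the predicted \(O(1/r)\) rate.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
G = rng.normal(size=(K, 30))                  # toy per-objective gradients
M = G @ G.T                                   # L(w) = w^T M w on the simplex, as in (8)

w = np.full(K, 1.0 / K)
for r in range(200):
    g = 2.0 * M @ w                           # gradient of L at w
    s = np.eye(K)[int(np.argmin(g))]          # exact linear minimizer over the simplex
    gap = float((w - s) @ g)                  # surrogate duality gap phi(w), eq. (16)
    if r in (0, 1, 3, 7, 15, 63, 199):
        print(f"r={r:3d}  L(w)={float(w @ M @ w):.5f}  gap={gap:.5f}")
    w = w + (2.0 / (r + 2)) * (s - w)         # fixed step gamma = 2/(r+2)
```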
2308.05747
Least-Squares Design of Chromatic Dispersion Compensation FIR Filters Realized with Overlap-Save Processing
A design method for chromatic dispersion compensation filters realized using overlap-save processing in the frequency domain is proposed. Based on the idea to use the values that are normally zero-padded, better results than using optimal time-domain design are obtained without any modification of the overlap-save processing complexity.
Oscar Gustafsson, Cheolyong Bae, Hakan Johansson
2023-06-20T13:12:18Z
http://arxiv.org/abs/2308.05747v1
Least-Squares Design of Chromatic Dispersion Compensation FIR Filters Realized with Overlap-Save Processing ###### Abstract A design method for chromatic dispersion compensation filters realized using overlap-save processing in the frequency domain is proposed. Based on the idea to use the values that are normally zero-padded, better results than using optimal time-domain design are obtained without any modification of the overlap-save processing complexity. ## I Introduction Chromatic dispersion (CD) in optical fibers causes pulse widening and is one of the more prominent error sources in coherent transmission [1, 2, 3, 4]. CD compensation (CDC) also contributes a notable part of the power consumption in a coherent receiver. CD is modeled as a non-linear phase allpass frequency response in the fiber as \[C\left(e^{j\omega T}\right)=e^{-jK\left(\omega T\right)^{2}},\ K=\frac{D \lambda^{2}z}{4\pi cT^{2}}, \tag{1}\] where \(D\) is the fiber dispersion parameter, \(\lambda\) is the wavelength, \(z\) is the propagation distance, and \(c\) is the speed of light. In this work, we use \(\omega T=2\pi fT\) as "digital frequency" with a sampling period of \(T\), corresponding to a sampling frequency \(f_{\text{s}}=\frac{1}{T}\). Hence, a CDC filter that approximates the desired frequency response \[H_{\text{des}}(\omega T)=\frac{1}{C\left(e^{j\omega T}\right)}=e^{jK\left( \omega T\right)^{2}} \tag{2}\] should be designed. Only FIR filters are considered here as IIR filters are not suitable because of the inherent limited speed due to their recursive structure. The frequency response of an \(L\)-tap FIR filter (filter order \(L-1\)) is \[H\left(e^{j\omega T}\right)=\sum_{l=0}^{L-1}h_{l}e^{-jl\omega T} \tag{3}\] where \(h_{l}\) is the \(l\)th impulse response coefficient. Different design approaches for FIR CDC filters, i.e., determining the values of \(h_{l}\), have been proposed [5, 6, 7]. The estimated filter length is given as [5] \[L=2\left\lfloor 2K\pi\right\rfloor+1=2\left\lfloor\frac{D\lambda^{2}z}{2cT^{ 2}}\right\rfloor+1. \tag{4}\] It is common that CDC filters are implemented in the frequency domain [3, 4, 8, 9, 10, 11, 12], although time-domain implementation has also been proposed [13, 14, 15]. For correct operation, a scheme such as overlap-add or overlap-save must be used [16, 17], where in each iteration, \(M\) samples are processed using an \(N\)-point discrete Fourier transform (DFT), typically realized using a fast Fourier transform (FFT) algorithm. Overlap-save filtering is illustrated in Fig. 1 when processing \(M=4\) samples using an \(N=8\)-point DFT/FFT. Normally, the filter length, \(L\), is constrained to \[L\leq N-M+1, \tag{5}\] due to the zero-padding required to implement the convolution in the frequency domain. Consider a impulse response with \(h_{n}=N-n\) applied to the realization in Fig. 1. It can be shown that the four outputs are processed by different, circularly shifted, impulse responses [18, 19]. These are illustrated in Fig. 2. Now, for all outputs to have the same associated impulse response, the dashed impulse response values must be zero. This is the motivation of (5), as making the dashed values zero will give the same associated impulse response for all the outputs. However, as will be shown in this work for CDC, and earlier for general filter design [20, 21], it is possible to design the impulse response such that the dashed impulse response values are non-zero and benefit from an improved overall performance, Fig. 
1: Overlap-save filtering with \(M=4\) samples per block using an \(N=8\)-point DFT/FFT. despite the filter having a time-varying impulse response and violating (5). In this work, we propose a method for designing CDC filters to be realized using overlap-save filtering in the frequency domain. By utilizing the zero-padding values, we obtain a better CDC filter with the same computational complexity as with zero-padding, supporting longer fibers at the same implementation complexity. ## II Proposed Design Method Introduce a vector \(\mathbf{h}=\begin{bmatrix}h_{0}&h_{1}&\ldots&h_{N-1}\end{bmatrix}^{T}\) corresponding to the effective length-\(N\) impulse response, i.e., including the values that are traditionally zero. Denote a matrix that, when multiplied from the left, circularly shifts a column vector \(k\) positions as \(\mathbf{S}_{k}\). Then, the impulse response for output \(y(Mi-m)\), denoted \(\mathbf{h}_{m}\), can be written as \(\mathbf{h}_{m}=\mathbf{S}_{m}\mathbf{h}\). Introduce a length \(N\) column vector \[\mathbf{d}_{m}=\begin{bmatrix}D\left(-\frac{L-1}{2}-m\right)\\ D\left(-\frac{L-1}{2}+1-m\right)\\ \vdots\\ D\left(N-1-m\right)\end{bmatrix}, \tag{6}\] where \[D(d) =\frac{e^{-j\left(\frac{d^{2}}{K}+\frac{3\pi}{4}\right)}}{4\sqrt {\pi K}}\left(\operatorname{erf}\left(\frac{e^{j\frac{2\pi}{4}}\left(2K\pi-d \right)}{2\sqrt{K}}\right)+\right.\] \[\left.\operatorname{erf}\left(\frac{e^{j\frac{3\pi}{4}}\left(2K \pi+d\right)}{2\sqrt{K}}\right)\right), \tag{7}\] with \(K\) from (1) and \(\operatorname{erf}\) denoting the error function. Finally, introduce an \(N\times N\) symmetric Toeplitz matrix \(\mathbf{Q}\) with the element values \[Q(n,m)=\begin{cases}\frac{\Omega}{n},&m=n\\ \frac{\sin(\Omega(m-n)\pi)}{(m-n)\pi},&m\neq n,\end{cases} \tag{8}\] where \(\Omega\) denotes the bandwidth. Here, only filters with a symmetric bandwidth is considered. An expression for the non-symmetric case is found in [6]. The optimal impulse response for output \(m\) is [6, 22] \[\hat{\mathbf{h}}_{m}=\mathbf{Q}^{-1}\mathbf{d}_{m}. \tag{9}\] Considering all \(M\) impulse responses simultaneously, the total least-squares error is minimized by solving \[\hat{\mathbf{h}}=\mathbf{R}^{-1}\mathbf{e}, \tag{10}\] where \[\mathbf{R}=\sum_{m=0}^{M-1}\mathbf{S}_{m}^{T}\mathbf{Q}\mathbf{S}_{m} \tag{11}\] and \[\mathbf{e}=\sum_{m=0}^{M-1}\mathbf{S}_{m}^{T}\mathbf{d}_{m}. \tag{12}\] For the full-band case, i.e., \(\Omega=\pi\), it is possible obtain a simpler expression as \(\mathbf{Q}=\mathbf{I}\), the identity matrix, and \(\mathbf{S}_{m}^{T}\mathbf{I}\mathbf{S}_{m}=\mathbf{I}\) leading to \(\mathbf{R}=M\mathbf{I}\) and \(\mathbf{R}^{-1}=\frac{1}{M}\mathbf{I}\). Factoring \(M\) out from \(\mathbf{e}\) and separating it into two vectors give \[\mathbf{e}=M\begin{bmatrix}\mathbf{f}\\ \mathbf{g}\end{bmatrix}, \tag{13}\] where \(\mathbf{f}\) is of length \(L\): \[\mathbf{f}=\begin{bmatrix}D\left(-\frac{L-1}{2}\right)\\ D\left(-\frac{L-1}{2}+1\right)\\ \vdots\\ D\left(\frac{L-1}{2}\right)\end{bmatrix}, \tag{14}\] and \(\mathbf{g}\) is of length \(N-L=M-1\): \[\mathbf{g}=\frac{1}{M}\begin{bmatrix}\left(M-1\right)D\left(\frac{L-1}{2}+1 \right)+D\left(-\frac{L-1}{2}-M+1\right)\\ \left(M-2\right)D\left(\frac{L-1}{2}+2\right)+2D\left(-\frac{L-1}{2}-M+2\right) \\ \vdots\\ D\left(\frac{L-1}{2}+M-1\right)+\left(M-1\right)D\left(-\frac{L-1}{2}-1 \right)\end{bmatrix}. 
\tag{15}\] Hence, the optimal value for the full-band case is \[\hat{\mathbf{h}}=\frac{1}{M}\mathbf{I}M\begin{bmatrix}\mathbf{f}\\ \mathbf{g}\end{bmatrix}=\begin{bmatrix}\mathbf{f}\\ \mathbf{g}\end{bmatrix}. \tag{16}\] It is stressed that although the effective filter length in the proposed approach is \(N\) rather than \(L=N-M+1\), the DFTs of both cases are of length \(N\), so the computational complexities of the realizations are identical. It should be noted that the terms in both \(\mathbf{f}\) and \(\mathbf{g}\) are symmetric as \(D(d)\) is an even function. Hence, it is possible to rewrite (16) so that fewer variables are used and that the matrix to be inverted is about half the size of the original. Also, it should be noted that although the theoretically optimal solution is given by (10), numerical errors may lead to that it is better to use an algorithm to solve the least-square problem using the provided expressions. This is not required for the full-band case, but can improve the numerical accuracy for bandlimited cases. Fig. 2: Associated impulse responses for outputs in Fig. 1: (a) \(y(4n)\), (b) \(y(4n-1)\), (c) \(y(4n-2)\), and (d) \(y(4n-3)\). By zero-extending the five-tap impulse response, the dashed values are zero. In this work, we assign these taps a non-zero value to increase the CDC filtering performance. ## III Results For the results, a 60 GBd system with 16-QAM modulation and fractional oversampling in the receiver illustrated in Fig. 3 is considered. This is similar to our earlier works [11, 23]. As in Bae et al [11], no adaptive equalizer is considered. The filter is designed for full bandwidth, i.e., \(\Omega=\pi\). Initially, it is assumed that \(M=128\), i.e., processing 128 samples per clock cycle in a fully parallel implementation of the FFT. Five different values of \(N\) are selected, all resulting in efficient FFT implementations [11] and the fiber length is changed. The bit-error rate (BER) results are shown in Fig. 4 for an SNR of 8 dB, aiming at an uncoded BER of about \(10^{-2}\). It is clear that the proposed filter design technique allows a longer fiber to be used using the same DFT size and processing complexity compared to designing the CDC filter for the time-domain. A traditional BER plot is shown in Fig. 5 for three different filter lengths assuming a 250 km long fiber. This corresponds to \(L=161\) using (4). As seen, for all cases, there is a significant advantage using the proposed design, although for increasing \(N\), the benefit decreases. To see the impact of only changing the DFT size when realizing filters with the same \(L\), the case of \(L=129\) is considered, with an estimated maximum fiber length from (4) of 200 km. The DFT size \(N\) and samples per DFT \(M\) are selected as if a filter with \(L=129\) is implemented. A common selection in this case is \(N=256\). In addition, \(N=512\) and \(N=1024\) are considered, which provide readily realizable FFT architectures, but require a non-power-of-two number of input samples per clock cycle1. The BER for increasing fiber lengths with an SNR of 8 dB is shown in Fig. 6. It can be seen that longer fibers can be supported at a similar BER level, where \(N=512\) and \(N=1024\) provide additional improvements over \(N=256\). Footnote 1: For \(N=512\) and \(N=1024\), \(96\) and \(112\) input samples are processed per clock cycle, respectively, using a 128-parallel FFT implementation [10]. BER results for fibers of 150, 200, and 250 km are shown in Figs. 7a to 7c, respectively. 
It can be seen in Fig. 7a that the proposed approach does not provide any benefits for 150 km. On the other hand, for 200 km, shown in Fig. 7b, there is a significant performance gain using the proposed approach. Finally, for a 250 km fiber, shown in Fig. 7c, it is possible to obtain a much lower BER than for the time-domain design, although for high SNR there is a clear deviation from the theoretical bound. Again, \(N=512\) and \(N=1024\) provide additional improvements over \(N=256\). Fig. 4: BER at different fiber lengths, SNR \(=8\) dB. Dashed: time-domain design [6] with filter length \(L\), solid: proposed with FFT length \(N=M+L-1,M=128\). \(\square\): \(L=33\), \(\triangle\): \(L=65\), \(\diamond\): \(L=129\), \(\times\): \(L=161\), and \(\circ\): \(L=193\). Vertical dotted lines: fiber lengths from (4). Fig. 5: BER at 250 km. Dashed: time-domain design [6] with filter length \(L\), solid: proposed with FFT length \(N=M+L-1,M=128\). \(\diamond\): \(L=129\), \(\times\): \(L=161\), and \(\circ\): \(L=193\). Fig. 3: System setup for simulations. Only one polarization shown. ## IV Conclusions In this work, a chromatic dispersion compensation filter design method for filters realized using overlap-save processing was proposed. Instead of zero-extending the impulse response, the otherwise zero-padded taps are assigned non-zero values that optimize the total approximation error in the least-squares sense over all impulse responses simultaneously. The provided simulation results show a significant BER performance improvement obtained without increasing the computational complexity of the overlap-save processing. Hence, the improved CD compensation capabilities come for free if an overlap-save implementation is already used.
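
As a complement to the design equations above, the following is a minimal sketch of the overlap-save block processing of Fig. 1 into which a length-\(N\) effective impulse response \(\hat{\mathbf{h}}=[\mathbf{f};\,\mathbf{g}]\) from (16) can be inserted. It is not the authors' implementation; the toy input and coefficient values are placeholders, and the input length is assumed to be a multiple of \(M\). Whether the last \(M-1\) taps are zero (conventional design) or carry the optimized values of \(\mathbf{g}\), the per-block processing, and hence the computational complexity, is identical, as stated in Section II.

```python
import numpy as np

def overlap_save(x, h_eff, M):
    """Overlap-save filtering as in Fig. 1.

    x     : input samples (length assumed to be a multiple of M)
    h_eff : length-N effective impulse response; the last M-1 taps are zero in
            the conventional design and non-zero in the proposed design
    M     : number of new input samples processed per N-point DFT/FFT block
    """
    N = len(h_eff)
    H = np.fft.fft(h_eff)                    # frequency-domain coefficients
    x = np.asarray(x, dtype=complex)
    y = np.zeros_like(x)
    buf = np.zeros(N, dtype=complex)         # N-M old samples followed by M new ones
    for start in range(0, len(x) - M + 1, M):
        buf = np.concatenate((buf[M:], x[start:start + M]))   # slide in M new samples
        block = np.fft.ifft(np.fft.fft(buf) * H)              # circular convolution
        y[start:start + M] = block[-M:]      # keep the last M outputs of each block
    return y

# Toy usage with N = 8 and M = 4, as in Fig. 1; the coefficient values are arbitrary.
h_eff = np.array([0.5, 0.3, 0.1, 0.05, 0.02, 0.0, 0.0, 0.0], dtype=complex)
y = overlap_save(np.ones(16), h_eff, M=4)
```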
2310.17109
LP-OVOD: Open-Vocabulary Object Detection by Linear Probing
This paper addresses the challenging problem of open-vocabulary object detection (OVOD) where an object detector must identify both seen and unseen classes in test images without labeled examples of the unseen classes in training. A typical approach for OVOD is to use joint text-image embeddings of CLIP to assign box proposals to their closest text label. However, this method has a critical issue: many low-quality boxes, such as over- and under-covered-object boxes, have the same similarity score as high-quality boxes since CLIP is not trained on exact object location information. To address this issue, we propose a novel method, LP-OVOD, that discards low-quality boxes by training a sigmoid linear classifier on pseudo labels retrieved from the top relevant region proposals to the novel text. Experimental results on COCO affirm the superior performance of our approach over the state of the art, achieving $\textbf{40.5}$ in $\text{AP}_{novel}$ using ResNet50 as the backbone and without external datasets or knowing novel classes during training. Our code will be available at https://github.com/VinAIResearch/LP-OVOD.
Chau Pham, Truong Vu, Khoi Nguyen
2023-10-26T02:37:08Z
http://arxiv.org/abs/2310.17109v2
# LP-OVOD: Open-Vocabulary Object Detection by Linear Probing ###### Abstract This paper addresses the challenging problem of open-vocabulary object detection (OVOD) where an object detector must identify both seen and unseen classes in test images without labeled examples of the unseen classes in training. A typical approach for OVOD is to use joint text-image embeddings of CLIP to assign box proposals to their closest text label. However, this method has a critical issue: many low-quality boxes, such as over- and under-covered-object boxes, have the same similarity score as high-quality boxes since CLIP is not trained on exact object location information. To address this issue, we propose a novel method, LP-OVOD, that discards low-quality boxes by training a sigmoid linear classifier on pseudo labels retrieved from the top relevant region proposals to the novel text. Experimental results on COCO affirm the superior performance of our approach over the state of the art, achieving **40.5** in AP\({}_{\text{novel}}\) using **ResNet50** as the backbone and without external datasets or knowing novel classes during training. Our code will be available at [https://github.com/VinAIResearch/LP-OVOD](https://github.com/VinAIResearch/LP-OVOD). ## 1 Introduction Open-Vocabulary Object Detection (OVOD) is an important and emerging computer vision problem. The task is to detect both seen and unseen classes in test images, given only bounding box annotations of seen classes in the training set. Seen classes are called base classes, while unseen classes are called novel classes and explicitly specified by their names. Novel classes are determined based on the availability of annotations for those classes in the training set. Classes present in training images without annotations are still considered novel classes. OVOD has various applications where a detector should be capable of extending its detected categories to novel classes without human annotation such as in autonomous driving or augmented reality where new classes can appear in deployment without annotation. OVOD is also useful as an automatic labeling system in scenarios where it is impractical for annotators to exhaustively label all objects of all classes in a large dataset. The main challenge in OVOD is to detect novel classes without labels while maintaining good performance for base classes. To address this challenge, a pretrained visual-text embedding model, such as CLIP [28] or ALIGN [15], is provided as a joint text-image embedding where base and novel classes co-exist. This embedding can be used to align box proposals with their closest classes. However, the box proposals are not perfect as they are not trained on the labels of novel classes. Consequently, low-quality proposals, such as over- and under-covered-object boxes, can co-exist with high-quality ones, with the same similarity scores to their text embeddings. This is because CLIP is trained on images without object location information, leading to high false positive and false negative rates in the OVOD approaches as exemplified in Fig. 1. To address this limitation, we propose a novel linear probing method called LP-OVOD that learns a linear classifier for novel classes on top of the features extracted from the penultimate layer of a Faster R-CNN model pre Figure 1: Comparison of box predictions for novel classes ‘bus’ and ‘cake’ between ViLD [10] (top) and our approach (bottom). 
In the ViLD results, low-quality boxes have similar scores to high-quality ones, leading to high false positive (left) and false negative rates (right). Our approach significantly improves the detection performance in both cases by using classification scores instead of similarity scores as in ViID. trained on base classes. These features are highly discriminative among novel classes, as shown in Fig. 2, despite being trained only on base classes. To obtain pseudo labels for training the linear classifier, we retrieve box candidates from the top relevant proposal boxes to the novel text. In this way, our approach leverages the presence of novel classes or similar in the training images, even in the absence of annotations. Furthermore, to facilitate quick combining with the linear classifier learned from base classes without hand-crafted calibration of the predicted scores, we propose to learn a sigmoid classifier instead of a softmax classifier for both base and novel classes since each class is predicted independently in the sigmoid classifier. Accordingly, we only need to concatenate the weights of the linear classifier of novel classes to that of base classes to enable object detection on both base and novel classes. We demonstrate the effectiveness of our approach on two standard OVOD datasets: COCO [22] and LVIS [11]. LPCOD significant improvement over state-of-the-art methods, without relying on external datasets or retraining the whole network whenever novel classes arrive. In summary, the contributions of our work are as follows: * A linear probing approach that leverages the highly discriminative features extracted from the penultimate layer of a pretrained Faster R-CNN on base classes to train a linear classifier for novel classes on the pseudo labels from retrieving the top relevant box proposals. * Sigmoid classifiers for both pretraining on base classes and linear probing of novel classes to predict class scores independently, forming a unified classifier for both base and novel classes in testing. In the following, Sec. 2 reviews prior work; Sec. 3 specifies our approach; and Sec. 4 presents our experimental results. Sec. 5 concludes with some remarks. ## 2 Related Work **Object detection** approaches aiming to localize and classify objects in images can be classified into three groups: anchor-based, anchor-free, and DETR-based detectors. Anchor-based detectors, such as Faster RCNN [32], RetinaNet [21], and YOLO [31], first classify and then regress the predefined anchor boxes. In contrast, anchor-free detectors like CenterNet [45] and FCOS [34] regress the bounding box extent directly without using predefined anchor boxes. DETR-based detectors [3, 19, 23, 47, 41, 23] leverage encoder-decoder transformer architecture along with one-to-one matching loss to predict object bounding boxes in an end-to-end manner without using NMS. However, these methods are designed to work in a closed-vocabulary setting, where detectors are trained and evaluated on predefined categories and cannot detect unseen categories in testing, unlike our OVOD setting. **Few-shot object detection (FSOD)** approaches [7, 27, 35, 38] aim to detect novel objects with a few labeled examples. On the other hand, OVOD only requires the names of the novel classes instead. These two inputs are complementary since some fine-grained classes may be easier to identify through exemplars, while others may be more common and easier to identify through their names. 
**Zero-shot or open-vocabulary object detection (ZSOD/OVOD)** aims to detect unseen categories given the class name. To enable open-vocabulary learning, during training, we are provided with labeled examples of the base classes and a pretrained word embedding (such as Word2vec [24], GloVe [26]), or vision-language models (such as CLIP [28], ALIGN [15]). OVOD methods can be groupped as follows: _External-dataset-based methods_[2, 8, 9, 14, 20, 25, 30, 43, 44] utilize huge external datasets, including image-caption pairs or image-level labeled annotations, to improve the pretrained vision-language model or detectors to recognize more classes, including the novel ones. Thus, these methods have an advantage over those that do not. _Novel-class-aware methods_ including OV-DETR [39], VL-PLM [42] assume that novel categories are known during training. These methods retrieve large-scale region proposals of novel classes based on the joint text-image embedding of CLIP [28] as pseudo-GT labels, which are jointly trained with GT-labeled examples of base classes. As a result, these methods need to regenerate the pseudo labels and retrain the detectors whenever new classes arrive. _Novel-class-unaware methods_[10, 5, 18] follow the same setting as ours. ViLD [10] uses knowledge distillation from CLIP visual features to learn the embedding for unseen categories. DetPro [5] proposes a learnable-text prompt instead of a fixed-text prompt. F-VLM [18] utilizes a pre-trained CLIP's image encoder as a backbone to retain the locality-sensitive features necessary for detection. Figure 2: The feature embeddings of COCO novel classes are extracted from the penultimate layer of a Faster R-CNN pretrained on base classes. These embeddings are highly discriminative, which motivates us to learn a robust classifier from them. However, these methods attempt to align the text embedding with the feature embedding of each proposal to predict its class. In contrast, our method approaches a different way that learns a linear classifier for novel classes using features extracted from a Faster R-CNN pretrained on base classes. ## 3 Our Approach **Problem statement:** During training, we are provided with a large set of annotated examples of base classes \(C_{B}\), i.e., bounding boxes \(b_{i}\) and their categories \(c_{i}\in C_{B}\). In testing, given the names of novel classes \(C_{N}\), our goal is to detect objects of both base and novel classes, i.e., \(\hat{c}_{i},\hat{b}_{i}\), where \(\hat{c}_{i}\in C_{B}\cup C_{N}\) for test images. To facilitate learning, a pretrained CLIP [28] is provided as the joint image-text embedding of both base and novel classes. **Our scope:** Our approach strictly assumes that we do not know novel classes during training, as we cannot anticipate the classes that an open-vocabulary detector (OVD) will encounter in practical use. Additionally, to ensure a fair comparison, we utilize only the images and annotations provided by each benchmark without any external datasets, such as image-caption or image-level label datasets. Fig. 3 illustrates our approach, which is based on Faster R-CNN [32]. We adopt the same backbone, region proposal network (RPN), and box regression modules, and refer readers to [32] for details. However, we make two modifications: replacing the softmax classifier with a sigmoid classifier and adding a distillation head as in ViLD [10]. 
For novel classes, we extract features from the top relevant proposals to the novel text embedding as pseudo labels for training a sigmoid classifier of the novel classes. In testing, we concatenate the weights of the two sigmoid classifiers to form a unified sigmoid classifier for object detection. ### Pretraining on Base Classes As motivated in the introduction, to facilitate the fast learning of novel classes when they arrive in testing, we propose to replace the softmax classifier of Faster R-CNN [32] with a sigmoid classifier to pretrain on base classes. In this way, instead of classifying among different categories and a background class, we predict the presence or absence of a category in an image. In other words, the embeddings of new classes are distributed diversely far from those of the base classes rather than grouping together into a 'background' class as shown in Fig. 2. Also, such a classifier pre Figure 3: **Overview of our approach.** LP-OVOD starts from the given ROI features extracted from Faster R-CNN [32] with the same prior steps. In the pretraining step **(left)**, a distillation head is added to mimic the prediction of CLIP’s image encoder as in ViLD [10]. Furthermore, the softmax classifier is replaced with a sigmoid classifier and trained with the GT labels for the base classes. In the linear probing step **(middle)**, a new sigmoid classifier with a learnable linear layer is trained on the pseudo labels of the novel classes. The pseudo labels are obtained by retrieving the top box proposals from the given novel text embedding. In the inference step **(right)**, we simply concatenate the weights of the two sigmoid classifiers together to form a unified sigmoid classifier for both base and novel classes where the score of each class is predicted independently. Finally, the classification scores are combined with the distillation score to form the final score for detection. dicts each category independently so that when new classes arrive, we can incrementally concatenate the weights of the newly trained classifier to that of the base classes to form a new unified classifier that can readily work on both base and new classes without any retraining or temperature tuning. Concretely, the ROI features for proposals \(\tilde{b}_{i}\) are extracted from the backbone and forwarded to the classification head and distillation head to obtain classification feature \(f_{i}^{\text{cls}}\) and distillation feature \(f_{i}^{\text{dis}}\), respectively. We then jointly train the new sigmoid classifier and the distillation head. The sigmoid classifier is supervised by the ground-truth labels \(c_{i}\) of the base classes using sigmoid focal loss [21]. Meanwhile, the distillation head is supervised by the CLIP image embedding \(e_{i}^{\text{image}}\), which is obtained from CLIP's image encoder using cropped images from the proposal \(\tilde{b}_{i}\). The distillation head is trained using the L1 loss. In particular, \[\mathcal{L}_{\text{cls}}^{\text{Base}} =\sum_{i}\textbf{Focal loss}(\text{Sigmoid}(f_{i}^{\text{cls}};W_{B}),c_{i}), \tag{1}\] \[\mathcal{L}_{\text{dis}} =\sum_{i}\|f_{i}^{\text{dis}}-e_{i}^{\text{image}}\|_{1}, \tag{2}\] where \(W_{B}\) are the weights of the base classes. ### Linear Probing on Novel Classes As illustrated in Fig. 1, low-quality boxes usually have the same similarity score to the novel text embeddings as the high-quality ones do, resulting in high false positive and false negative rates. 
Therefore, we need to have better positive/negative proposals for training a sigmoid classifier to discard these low-quality proposals. To this end, first, the top relevant proposals of each novel class are retrieved and served as pseudo-GT labels \(\tilde{c}_{i}\). Specifically, we extract all image embeddings \(e_{i}^{\text{image}}\) of all proposals \(\tilde{b}_{i}\) having the objectness score \(o_{i}\) larger than \(\tau\) in the training set. For each novel category with text embedding \(e_{c}^{\text{text}}\) where \(c\in C_{N}\), we retrieve the top \(K\) closest proposals in order to form a set \(\mathcal{P}=\{(\tilde{b}_{i},\tilde{c}_{i})\}_{i=1..K\times C_{N}}\) using cosine similarity \(\text{cos}(e_{c}^{\text{text}},e_{i}^{\text{image}})\). We visualize the examples of top-4 retrieved proposal for four novel classes in Fig. 4. To speed up the retrieval process, we resort the Faiss [16] tool. Then, we leverage the sampling mechanism of Faster R-RCNN to sample positive/negative proposals where the positives \(\mathcal{P}^{+}=\{(\tilde{b}_{i},\tilde{c}_{i})\},\tilde{c}_{i}\in N_{c}\) are the ones having \(\text{IoU}>0.5\) with the pseudo-GT boxes \(\mathcal{P}\) while the rest are the negatives \(\mathcal{P}^{-}=\{(\tilde{b}_{i},0)\}\). When novel classes arrive, a new sigmoid classifier \(W_{N}\) is added on top of the pretrained classification feature \(f_{i}^{\text{cls}}\). The sigmoid classifier for novel classes is trained as follows: \[\mathcal{L}_{\text{cls}}^{\text{Novel}}=\sum_{i=1}^{|\mathcal{P}^{+}\cup \mathcal{P}^{-}|}\textbf{Focal loss}(\text{Sigmoid}(f_{i}^{\text{cls}};W_{N}),c_{ i}), \tag{3}\] where \(W_{N}\) are the weights of the novel classes. Notably, our approach is fast, i.e., 5 minutes on COCO, because we only retrieve the top proposals. This differs from OV-DETR [39] and VL-PLM [42], which extract pseudo labels for new classes from the entire training set and jointly train them with the ground truth labels for the base classes. ### Inference on Both Base and Novel Classes Given a proposal box \(\tilde{b}_{i}\) with classification feature \(f_{i}^{\text{cls}}\) and distillation feature \(f_{i}^{\text{dis}}\), the inference on both base and novel classes is visualized in the right of Fig. 3. For the classification head, we concatenate the weights of the sigmoid classifiers learned on the base and novel classes to form a unified classifier with weight \(W=[W_{B};W_{N}]\). The classification score \(s_{i}^{\text{cls}}\) is calculated as: \[s_{i}^{\text{cls}}=\text{Sigmoid}(f_{i}^{\text{cls}};W)\in[0,1]^{|C_{B}|+|C_ {N}|}. \tag{4}\] For the distillation head, we compute the distillation score \(s_{i}^{\text{dis}}\) as the softmax score of the cosine similarity between the distillation features \(f_{i}^{\text{dis}}\) and text embeddings \(e_{c}^{\text{text}}\) with temperature \(\kappa\) as: \[s_{i}^{\text{dis}}=\text{Softmax}_{c}\left(\frac{\text{cos}(f_{i}^{\text{dis}},e_{c}^{\text{text}})}{\kappa}\right)\in[0,1]^{|C_{B}|+|C_{N}|}. \tag{5}\] Finally, the final score for prediction of each proposal \(\tilde{b}_{i}\) with objectness score \(o_{i}\) is computed as: \[s_{i}=o_{i}\cdot\begin{cases}s_{i}^{\text{cls}}\text{ for base classes}\\ (s_{i}^{\text{cls}})^{\beta}(s_{i}^{\text{dis}})^{1-\beta}\text{ for novel classes}\end{cases} \tag{6}\] where \(\beta\) are coefficient hyper-parameter for novel classes. Figure 4: Top-4 box proposal retrievals from CLIP’s embeddings of four novel classes: ‘elephant’, ‘dog’, and ‘knife’. 
The quality is good enough to be used as pseudo labels for training a few-shot classifier on novel classes. ## 4 Experimental Results **Datasets:** We conduct our experiments using the OVOD versions called OV-COCO [1] and OV-LVIS [10] of two public datasets: COCO [22] and LVIS [11]. The OV-COCO dataset comprises 118,000 images with 48 base categories and 17 novel categories. OV-LVIS [11] shares the image set with OV-COCO. Its categories are divided into 'frequent', 'common', and 'rare' groups based on their occurrences, representing the long-tailed distributions of 1,203 categories. We treated the 'frequent' and 'common' groups of 866 categories as the base classes while considering the rare' group of 337 categories as the novel classes. **Evaluation metrics:** Consistent with the standard OVOD evaluation protocol [10, 43], we report the box Average Precision (AP) with an IoU threshold of 0.5 for object detection on the COCO dataset, i.e. \(\text{AP}_{novel}\) for novel classes, \(\text{AP}_{base}\) for base classes, and AP for all classes. For instance segmentation on the LVIS dataset, we report the mask AP, which is the average AP over IoU thresholds ranging from 0.5 to 0.95, i.e., \(\text{AP}_{r},\text{AP}_{f}\), \(\text{AP}_{c}\), and AP for 'rare', 'frequent', 'common', and all classes, respectively. **Implementation details:** In our implementation, we use the Faster R-CNN detector [32] for COCO and the Mask-RCNN detector [12] for LVIS, both with the ResNet50 [13] backbone. The ResNet50 backbone is initialized with the self-supervised pre-trained SoCo [37]. We use multi-scale training with different image sizes while maintaining the aspect ratio for data augmentation. We employ OLN [17] as the object proposal network. For training on base classes, we use the SGD optimizer with an initial learning rate of 0.02 and an image batch size of 16. We adopt the 20-epoch schedule from MMDetection [4], where the learning rate is decreased by a factor of 10 after the 16th and 19th epochs, and apply a linear warm-up learning rate for the first 500 iterations. For quick adapting to novel classes, we set the objectness score threshold to \(\tau=0.6\) to filter proposals before retrieval. We train the novel weights \(W_{N}\) for 12 epochs using the SGD optimizer with an initial learning rate of 0.01 and decreasing the learning rate by a factor of 10 after the 8th and 11th epochs. In testing, we use a temperature of \(\kappa=0.01\) for the distillation head. ### Comparison with State-of-the-art Approaches **Results on COCO** are shown in Tab. 1 and Fig. 5. In Tab.1, we compare our approach to various methods, including ZSOD, external-dataset-based, novel-class-aware, and novel-class-unaware methods. Our approach significantly outperforms the second-best method on COCO with a significant margin of +11.5 in \(\text{AP}_{novel}\), while maintaining good performance on base classes. In Fig. 
5, our approach achieves superior performance, while RegionCLIP incorrectly classifies foreground instances as background \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Venue**} & \multirow{2}{*}{**Training source**} & \multicolumn{3}{c}{**Box AP on COCO**} & \multicolumn{3}{c}{**Mask AP on LVIS**} \\ & & & \(\text{AP}_{novel}\) & \(\text{AP}_{base}\) & **AP** & \(\text{AP}_{r}\) & \(\text{AP}_{f}\) & \(\text{AP}_{c}\) & **AP** \\ \hline OVR-CNN [40] & CVPR 21 & & **22.8** & 46.0 & **39.9** & - & - & - \\ XPM [14] & CVPR 22 & & 27.0 & 46.3 & 41.3 & - & - & - & - \\ RegionCLIP [43] & CVPR 22 & & 31.4 & 57.1 & 50.4 & 17.1 & 27.4 & 34.0 & 28.2 \\ PromptDet [8] & ECCV 22 & & 26.6 & 59.1 & 50.6 & 19.0 & 18.5 & 25.8 & 21.4 \\ Detic [44] & ECCV 22 & & 27.8 & 47.1 & 42.1 & 17.8 & 26.3 & 31.6 & 26.8 \\ PB-OVD [9] & ECCV 22 & & 30.8 & 46.1 & 30.1 & - & - & - & - \\ OWL-ViT [25] & ECCV 22 & & 41.8 & 49.1 & 47.2 & 16.9 & - & - & 19.3 \\ VLDet [20] & ICLR 23 & & 32.0 & 50.6 & 45.8 & 21.7 & 29.8 & 34.3 & 30.1 \\ \hline OV-DETR [39] & ECCV 22 & instance-level labels in \(C_{B}\) & 29.4 & 61.0 & 52.7 & 17.4 & 25.0 & 32.5 & **26.6** \\ VL-PLM [42] & ECCV 22 & known novel classes during training & 34.4 & 60.2 & 53.5 & 17.2 & 23.7 & 35.1 & **27.0** \\ \hline ZSD [1] & ECCV 18 & & 0.31 & 29.2 & 24.9 & - & - & - & - \\ PL [29] & AAAI 20 & & 4.12 & 35.9 & 27.9 & - & - & - & - \\ DELO [46] & CVPR 20 & & 3.41 & 13.8 & 11.1 & - & - & - & - \\ \hline ViLD [10] & ICLR 22 & & 27.6 & 59.5 & 51.2 & 16.6 & 24.6 & 30.3 & 25.5 \\ RegionCLIP\({}^{\dagger}\)[43] & CVPR 22 & & 14.2 & 52.8 & 42.7 & - & - & - & - \\ DetPro\({}^{\ddagger}\)[5] & CVPR 22 & & instance-level labels in \(C_{B}\) & 19.8 & 60.2 & 49.6 & **19.8** & 25.6 & 28.9 & 25.9 \\ F-VLM [18] & ICLR 23 & & 28.0 & 43.7 & 39.6 & 18.6 & - & - & 24.2 \\ LP-OVOD (ours) & - & & **40.5** & 60.5 & **55.2** & 19.3 & 26.1 & 29.4 & **26.2** \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance on COCO and LVIS. -’-’ denotes numbers that are not reported. \({}^{\dagger}\) denotes another version of RegionCLIP using only the COCO object detection dataset for training. \({}^{\ddagger}\) denotes our re-run of the provided DetPro source code on COCO without the transferring from LVIS. Methods in faded rows are for reference only, not a direct comparison to ours. Best results are in bold and the second best are in underlined.** and ViLD generates redundant predictions. The last column shows a failure case, where all methods struggle to generate accurate boxes for the airplane due to its aspect ratio being significantly different from the base classes. Therefore, these results demonstrate the effectiveness of our approach without relying on any external datasets or known novel classes during training. **Results on LVIS** are shown in Tab. 1 and Fig. 6. As can be seen, we obtain comparable results with DetPro [5] and the improvement is less significant than that in COCO. That is due to two main reasons. _First_, the semantic difference between base and novel classes in COCO is relatively high owing to the smaller number of classes while the difference in LVIS is lower since classes are more fine-grained, giving rise to easier transfer of the learned embedding in base classes to novel classes in LVIS. Hence, novel text embeddings are readily matched with the predicted feature in testing. 
_Second_, our method is mainly based on the assumption that novel classes exist even though they are not annotated in training images. As a result, the performance of the few-shot learner mostly depends on the quality of the retrieved proposal given novel classes' names. In LVIS, the distribution of classes is long-tail, especially for rare classes which are tested as novel classes. All of them appear less than 10 times in the training set. It is very challenging for our approach to retrieve relevant proposals. Fortunately, even though our method cannot retrieve the exact proposals for each novel class, it can retrieve the close-meaning proposals such as 'neckercheif' vs. 'tie', 'puppet' vs. 'doll', and 'elephant' vs.'mammoth'. Thus, our method performs comparably with prior work in LVIS. ### Ablation Study In this section, we conduct ablation studies on the COCO dataset on various aspects to analyze our approach. **Impact of different proposal networks.** Tab. 2 presents the results of our approach using RPN [32] and OLN [17] proposals. OLN is a SOTA object proposal network in the open-world setting. On COCO, the quality of the OLN proposals is higher than that of RPN with the same supervision in training, as evidenced by an improvement of +3.3 in AP\({}_{novel}\). This is because OLN is more robust to object sizes and aspect ratios by replacing foreground/background classification with centerness and IoU score predictions. However, when the number of base classes increases, as in the \begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{3}{c}{**AP\({}_{novel}\) on COCO**} & \multicolumn{1}{c}{**AP\({}_{r}\) on LVIS**} \\ \hline Ours + RPN [32] & 37.2 & 19.3 \\ Ours + OLN [17] & **40.5** & 19.3 \\ \hline \hline \end{tabular} \end{table} Table 2: The effectiveness of RPN [32] and OLN [17]. Figure 5: **Qualitative comparison of different approaches on COCO’s novel classes.** The first four columns show our superior performance while the last one shows a failure case where all of them cannot generate boxes for the airplane due to its rare aspect ratio. case of LVIS, these predictions become less effective since the base classes can cover a wider range of object sizes and aspect ratios of the novel classes. **Ablation study on each component's contribution** is summarized in Tab. 3. Our baseline is ViLD with OLN proposals. By using retrieval of top boxes as the pseudo labels for novel classes, the performance improves significantly by +5.6 in AP\({}_{novel}\) compared to the baseline, while keeping the performance of base classes intact. Moreover, combining the sigmoid classifier and the pseudo-labeling strategy results in the best performance of 40.5 in AP\({}_{novel}\). **Study on features to learn the sigmoid classifier.** To quantitatively show that the classification features of Faster R-CNN pre-trained on base classes are superior, we train a sigmoid classifier on top of the classification feature \(f_{i}^{\text{cls}}\) and the distillation feature \(f_{i}^{\text{dis}}\), which is trained to distill the CLIP's image embedding. The results are presented in Tab. 4. The feature of the classification head yields 35.9 in AP\({}_{novel}\), greatly outperforming that of the distillation head. **Number of retrieved proposals per novel class.** Tab. 5 presents the performance of our approach for different numbers of proposals \(K\) per novel class in Sec. 3.2. The performance improves as the value of \(K\) increases and saturates at K=100. 
We speculate that a higher number of proposals provides more diverse examples for training whereas too many proposals increase the likelihood of including noisy boxes, resulting in suboptimal performance. Moreover, too many proposals can slow down the retrieval and few-step training of the linear classifier for novel classes. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Features** & **AP\({}_{novel}\)** & **AP\({}_{base}\)** & **AP** \\ \hline Classification & **35.9** & 60.5 & **54.1** \\ Distillation & 19.7 & 60.5 & 49.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Types of features to learn the sigmoid linear classifier. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Retrieval** & **Sigmoid** & **AP\({}_{novel}\)** & **AP\({}_{base}\)** & **AP** \\ \hline & & 27.6 & 61.2 & 52.4 \\ ✓ & & 33.2 & **61.2** & 53.9 \\ ✓ & ✓ & **40.5** & 60.5 & **55.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on the contribution of each component. **Retrieval**: retrieving top boxes as pseudo labels for novel classes. **Sigmoid**: replace softmax with sigmoid classifier. Figure 6: **Qualitative results of novel classes on the LVIS dataset [11]. Our approach can successfully detect some novel classes including “lab coat”, “mallet”, and “hand glass”. However, due to the rarity of some novel classes in training, our method retrieves the proposals of close-meaning classes instead, i.e., “tie” vs “neckerchetier”, leading to the wrong prediction in testing.** \begin{table} \begin{tabular}{c c c c c} \hline \hline **Retrieval** & **Sigmoid** & **AP\({}_{novel}\)** & **AP\({}_{base}\)** & **AP** \\ \hline & & 27.6 & 61.2 & 52.4 \\ ✓ & & 33.2 & **61.2** & 53.9 \\ ✓ & ✓ & **40.5** & 60.5 & **55.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on the contribution of each component. **Retrieval**: retrieving top boxes as pseudo labels for novel classes. **Sigmoid**: replace softmax with sigmoid classifier. **Study on the coefficient of novel classes \(\beta\)** is summarized in Tab. 6. ViLD uses \(\beta=1/3\), indicating that the distillation head's novel scores have more impact on the final prediction than the classification head's scores. However, in our case, we achieve the best performance when using \(\beta=0.8\), implying that the classification score has a greater contribution than the distillation score to the final score. **The importance of the objectness score in Eq. (6).** We compare the performance of our model with and without multiplication of the objectness score \(o_{i}\). The object detector's objectness score provides an indication of the presence of an object in an image. Hence, multiplying the final score by the objectness score can mitigate false positive and false negative detections. In Tab. 7, we observe a performance gain of +5.9 in \(\text{AP}_{novel}\) with the multiplication of the objectness score compared to the model without it. **Reason to choose top retrieved boxes as pseudo labels.** Unlike the CLIP features of random proposals, the top-retrieved boxes are distinct as visualized in Fig. 7. Therefore, these top-retrieved boxes are good candidates for training the sigmoid classifier for novel classes. ### Transfer from LVIS to Objects365 and VOC We evaluate the transfer learning performance of our approach on Objects365 [33] and PASCAL VOC [6] datasets, following the protocol in [5, 10]. 
We use a pretrained model on the LVIS dataset, which includes the 'frequent' and 'common' classes, and evaluate its performance on the validation sets of Objects365 and PASCAL VOC, consisting of 365 and 20 classes, respectively. For Objects365, we use part V1 of the newly released Objects365 V2 dataset, consisting of 30,310 images and over 1.2M bounding boxes. For PASCAL VOC, we retrieve the top \(K=10\) proposals per novel class for Objects365 and the top \(K=50\) proposals for PASCAL VOC and set \(\beta=0.6\). Results are reported in Tab. 8. Our approach outperforms ViLD [10] and DetPro [5] with a substantial margin of approximately +1.5 in APs, demonstrating the effectiveness of our approach in various transfer learning settings beyond COCO and LVIS. ## 5 Discussion and Conclusion **Limitations:** As shown in Tab. 1, the performance of novel classes is still lagging behind that of base classes, with a gap of 20 points in Box AP on the COCO dataset. One of the main reasons for this is that we did not fine-tune or improve the box regression for novel classes, as we only focused on the classification head. This is due to the lack of box annotations for novel classes, which is a common issue in OVOD. Additionally, CLIP's visual embeddings are not highly sensitive to the precise box location but only require that the box contains the object or important parts of the object. As a result, there is limited information available for improving the bounding boxes based solely on CLIP. Therefore, further research on improving box regression would be an interesting direction for OVOD. **Conclusion:** In this work, we have introduced a simple yet effective approach for OVOD with two contributions. Firstly, we propose a linear probing approach that utilizes a pretrained Faster R-CNN to learn a highly discriminative feature representation in the penultimate layer, which is then used to train a linear classifier for novel classes. Secondly, we propose to replace the standard softmax classifier with a sigmoid classifier that is able to predict scores for each class independently, which unifies the classifier heads for both base and novel classes. Our approach outperforms strong baselines of OVOD on the COCO dataset with an \(\text{AP}_{novel}\) of 40.5, setting a new state of the art. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\beta\) & 0.9 & 0.8 & 0.7 & 0.6 \\ \hline \(\text{AP}_{novel}\) & 40.2 & **40.5** & 39.7 & 38.5 \\ \hline \hline \end{tabular} \end{table} Table 6: Study on the coefficient of novel classes \(\beta\). \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{3}{c}{**Objects365**} & \multicolumn{2}{c}{**PASCAL VOC**} \\ **Method** & **AP** & **AP50** & **AP75** & **AP50** & **AP75** \\ \hline ViLD\({}^{\dagger}\)[10] & 10.2 & 16.2 & 10.9 & 72.2 & 56.7 \\ DetPro [5] & 10.9 & 17.3 & 11.5 & 74.6 & 57.9 \\ \hline Ours & **12.6** & **18.9** & **13.1** & **76.0** & **59.4** \\ \hline \hline \end{tabular} \end{table} Table 8: Transfer from LVIS to Objects365 and PASCAL VOC. \({}^{\dagger}\)denotes the re-implementation of ViLD in the DetPro repository. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\beta\) & 0.9 & 0.8 & 0.7 & 0.6 \\ \hline \(\text{AP}_{novel}\) & 40.2 & **40.5** & 39.7 & 38.5 \\ \hline \hline \end{tabular} \end{table} Table 6: Study on the coefficient of novel classes \(\beta\). Figure 7: The CLIP’s image embeddings of top retrieved boxes.
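
To make the inference-time score fusion of Eqs. (4)-(6) explicit, the following is a small sketch written against plain NumPy arrays. The array shapes and variable names, and the defaults \(\beta=0.8\) and \(\kappa=0.01\), follow the paper's notation, but the code is an illustrative reconstruction under those assumptions rather than the authors' released implementation.

```python
import numpy as np

def lp_ovod_scores(f_cls, f_dis, W_B, W_N, text_emb, objectness,
                   beta=0.8, kappa=0.01):
    """Score fusion for P proposals, following Eqs. (4)-(6).

    f_cls      : (P, D_cls) classification-head features
    f_dis      : (P, D_dis) distillation-head features
    W_B, W_N   : (C_B, D_cls) and (C_N, D_cls) sigmoid classifier weights
    text_emb   : (C_B + C_N, D_dis) CLIP text embeddings, base classes first
    objectness : (P,) objectness scores o_i
    """
    C_B = W_B.shape[0]
    W = np.concatenate([W_B, W_N], axis=0)              # unified classifier weights
    s_cls = 1.0 / (1.0 + np.exp(-(f_cls @ W.T)))        # Eq. (4): sigmoid scores

    # Eq. (5): softmax over cosine similarities with temperature kappa.
    f_n = f_dis / np.linalg.norm(f_dis, axis=1, keepdims=True)
    t_n = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (f_n @ t_n.T) / kappa
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    s_dis = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    # Eq. (6): classification score alone for base classes, geometric fusion of
    # classification and distillation scores for novel classes.
    s = np.empty_like(s_cls)
    s[:, :C_B] = s_cls[:, :C_B]
    s[:, C_B:] = (s_cls[:, C_B:] ** beta) * (s_dis[:, C_B:] ** (1.0 - beta))
    return objectness[:, None] * s
```

Because the sigmoid predicts each class independently, the base weights \(W_{B}\) and the newly probed novel weights \(W_{N}\) are simply concatenated before scoring, which is the property the paper relies on to avoid retraining or score calibration when novel classes arrive.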
2308.03336
Colliding of two high Mach-number quantum degenerate plasma jets
Colliding of two high Mach-number quantum degenerate plasmas is one of the most essential components in the double-cone ignition (DCI) inertial confinement fusion scheme, in which two highly compressed plasma jets from the cone-tips collide along with rapid conversion from the colliding kinetic energies to the internal energy of a stagnated isochoric plasma. Due to the effects of high densities and high Mach-numbers of the colliding plasma jets, quantum degeneracy and kinetic physics might play important roles and challenge the predictions of traditional hydrodynamic models. In this work, the colliding process of two high Mach number quantum degenerate Deuterium-plasma jets with sizable scale ($\sim 1000\ \si{\mu m}$, $\sim 300\ \si{ps}$, $\sim 100\ \si{g/cc}$, $\sim 300\ \si{km/s}$) were investigated with first-principle kinetic simulations and theoretical analyses. In order to achieve high-density compression, the colliding kinetic pressure should be significantly higher than the pressure raised by the quantum degeneracy. This means high colliding Mach numbers are required. However, when the Mach number is further increased, we surprisingly found a decreasing trend of density compression, due to kinetic effects. It is therefore suggested that there is theoretically optimal colliding velocity to achieve the highest density compression. Our results would provide valuable suggestions for the base-line design of the DCI experiments and also might be of relevance in some violent astrophysical processes, such as the merger of two white dwarfs.
W. B. Zhang, Y. H. Li, D. Wu, J. Zhang
2023-08-07T06:36:16Z
http://arxiv.org/abs/2308.03336v1
# Colliding of two high Mach-number quantum degenerate plasma jets ###### Abstract Colliding of two high Mach-number quantum degenerate plasmas is one of the most essential components in the double-cone ignition (DCI) inertial confinement fusion scheme, in which two highly compressed plasma jets from the cone-tips collide along with rapid conversion from the colliding kinetic energies to the internal energy of a stagnated isochoric plasma. Due to the effects of high densities and high Mach-numbers of the colliding plasma jets, quantum degeneracy and kinetic physics might play important roles and challenge the predictions of traditional hydrodynamic models. In this work, the colliding process of two high Mach number quantum degenerate Deuterium-plasma jets with sizable scale (\(\sim 1000\)\(\mu\)m, \(\sim 300\) ps, \(\sim 100\) g/cc, \(\sim 300\) km/s) were investigated with first-principle kinetic simulations and theoretical analyses. In order to achieve high-density compression, the colliding kinetic pressure should be significantly higher than the pressure raised by the quantum degeneracy. This means high colliding Mach numbers are required. However, when the Mach number is further increased, we surprisingly found a decreasing trend of density compression, due to kinetic effects. It is therefore suggested that there is theoretically optimal colliding velocity to achieve the highest density compression. Our results would provide valuable suggestions for the base-line design of the DCI experiments and also might be of relevance in some violent astrophysical processes, such as the merger of two white dwarfs. Double-cone ignition (DCI) [1] is a new type of laser inertial confinement fusion (ICF) scheme proposed recently. In this scheme, the deuterium-tritium (DT) fuel shells assembled in two head-on gold cones are compressed and accelerated along the cone axis by carefully tailored nanosecond laser pulses, forming high-speed DT plasma jets (\(\sim 100\) g/cc, \(\sim 300\) km/s) from the cone-tips, and then collide with each other in the central open space (shown in Fig. 1(a)). Due to the momentum filtering and transverse confinement of the wall [2], the colliding DT fuels remain cryogenic (\(\sim 50\) eV) and fall in highly degenerated states. During the colliding process, densities of the fuels increase rapidly and reach the required \(\sim 300\) g/cc [3]. Finally, fast electrons generated by picosecond petawatt laser pulses are injected into the stagnated isochoric plasma perpendicular to the colliding direction, locally heating plasma to keVs. The colliding of two highly compressed DT plasma jets from the cone-tips is one of the key components in the DCI scheme. Distinguished from the conventional ICF schemes where the DT pellet undergoes a spherical stagnation [5], a strong colliding shock, as depicted in Fig. 1(a), is generated to convert the kinetic energies of the DT plasma jets to their internal energies, forming an isochoric preheated plasma in the colliding center for the following fast heatings. The colliding of two plasma jets is also an active research area in ICF and laboratory astrophysics communities. Previous studies mainly focused on either collisionless kinetic shocks [6; 7; 8; 9; 10] or collisional hydrodynamic shocks [11; 12; 13; 14]. In indirect-drive ICF schemes, collisionless shock appears when the high-Z plasma expands from the hohlraum wall and collides with filling gases or blow-off from the fuel capsule [15; 16; 17]. 
As for the collisional cases, a representative research topic is the shock ignition scheme [18; 19; 20], in which the shocks are produced and maintained in high density DT fuels. In recent laboratory astrophysics studies, the oblique merge of supersonic plasma jets [21], interpenetration and stagnation of colliding plasma [22; 23] and shock-generated electromagnetic fields [24] were also investigated experimentally. With in-depth research, it is found that conventional hydrodynamic theory is inadequate to describe strong shocks propagating in plasmas, especially for the case of high Mach numbers [25]. As a result, kinetic approaches [26; 27; 28; 29] of shock front and series of new PIC and hydro-PIC hybrid simulation methods [30; 31; 32; 33; 34] have been successively developed. Researcher's attention was shifted to kinetic effects in the ablation shock wave during the implosion and compression process in ICF, including the electron thermal conduction [35], species separation of deuterium and tritium [36; 37; 38], and non-local transport in the shock front [39; 40]. Recently, the quantum hydrodynamic method has also been introduced [41; 42; 43] to describe the quantum effects in colliding shocks. Although a great deal of work had been done to study colliding shock waves, it is still a challenge to study the colliding DT fuels in the DCI scheme. For the colliding process of the DCI scheme, significant non-equilibrium phenomena exist near the shock interface beyond hydro dynamics, including shock-induced mix and penetration of high energy ions (shown in Fig. 1(b)). On the other hand, to simulate large-scale dynamics of high-density plasmas, the numerical noise and cost of computational resources are unaffordable for most PIC codes. In the meanwhile, due to the momentum filtering and transverse confinement of the wall [2], the colliding of the two highly compressed DT plasma jets fall in highly degenerated states, which typically can not be treated as classical plasmas. In order to tackle the above challenges, we have developed a new simulation method [44; 45] with an ingenious kinetic-ion and kinetic/hydrodynamic-electron treatment. This method takes advantage of modern particle simulation techniques and binary Monte Carlo collisions, including both long-range collective electromagnetic fields and short-range particle-particle interactions, thereby collisional coupling and state-dependent coefficients, that are usually approximately used with different forms in fluid descriptions, are removed. Especially, in this method [45], the restrictions of simulation grid size and time step on electron scales, which usually appear in a fully kinetic description, are eliminated. In order to take quantum degeneracy into account, the Boltzmann-Uehling-Uhlenbeck equation is adopted for the transport of electrons, and the Fermi-Dirac distributions and the Pauli-exclusion principle among electrons are naturally fulfilled in the above first principle kinetic method. For simplicity, we conducted large-scale one-dimensional simulations for the colloidings of two pure Deuterium-plasmas (D-plasmas) to avoid the effects of ion species separation. In order to achieve high-density compression, colliding Mach number should be high enough to ensure that the colliding kinetic pressure is significantly higher than the pressure raised by the quantum degeneracy. Moreover, we surprisingly found a decreasing trend of density compression in the colliding center as the Mach number is larger than a particular value. 
This is rarely discussed in previous works. This work may be not only of significance to the base-line design of DCI scheme, but also instructive to astronomical studies in respect of the merger of super-dense objects such as white dwarfs [46]. The configuration of our simulations is listed as follows. In the 1000 \(\mu\)m simulation region, two plasma jets with sizes of 250 \(\mu\)m are symmetrically set and cling to each other. The plasmas have initially uniform temperature of 50 eV and central density of 100 g/cc. The colliding velocity is assigned from 100 km/s to 900 km/s at intervals of 100 km/s, corresponding to the Mach number \(M\) ranging from \(M=1.6\) to \(M=14.2\). The details of the simulation parameters (time step, grid size and particles per cell) are presented in Sec. I of the Supplemental Material [47]. To ensure robustness and correctness, the convergence benchmarks with varying simulation parameters are also performed, in Sec. I of the Supplemental Material [47]. Fig. 2 shows typical simulation results of density and "effective electron temperature" for the colliding plasmas, where the "effective electron temperature" is equivalently represented by the average electron kinetic energy (the drift kinetic energy of electrons is ignorable compared with its internal energy) for convenience of counting. In the simulations, it is found that there are two shock waves propagating outward from the colliding center, characterized by sharp density and temperature discontinuities. The density and temperature of the central plasmas behind the colliding fronts are several times of that in unperturbed cold plasmas. Meanwhile, rarefaction waves enter the colliding plasmas from outside, and rapidly decrease the density and temperature of plasmas after the collision. After intersecting with the rarefaction, the shock declines and finally vanishes, as displayed in Fig. 2(c) and (f) after \(t=0.2\) ns. Nevertheless, in the early stage of the colliding, the density and electron temperature of plasmas behind shock fronts almost remain spatially and temporally uniform. This ensures the col Figure 1: (color online). (a) Schematic of head-on collision in the DCI scheme. The fuels are initially compressed and accelerated by lasers. Then two plasma jets eject out from gold cone-tips and collide with each other with high Mach number, forming strong shocks in the colliding region. (b) Key features near the shock front: the blue, orange and red regions represent the upstream, shock interface and downstream respectively. Shock is maintained through the equilibrium between kinetic and thermal pressure. Degenerate and non-degenerate plasmas mix in the shock interface, resulting in non-equilibrium states at the shock front. The high energy ions in the downstream region penetrate upstream beyond the shock front, leading to enhancement of the shock width [4]. lections of average density and effective temperature of post-shock plasmas during the colliding as shown in Fig. 3. We have made a hydrodynamic model for the colliding. According to the Rankine-Hugoniot relation, the density compression ratio \(\rho_{2}/\rho_{1}\) depends on the pressure \(p\) in the pre-shock (index 1) and post-shock (index 2) as follows \[\frac{\rho_{2}}{\rho_{1}}=\frac{(\gamma+1)p_{2}+(\gamma-1)p_{1}}{(\gamma+1)p_{ 1}+(\gamma-1)p_{2}}. 
\tag{1}\] Taking the adiabatic coefficient \(\gamma\) as \(5/3\) for monatomic plasmas, the theoretical supremum of the density compression ratio for a single shock is 4, attained in the strong-shock limit \(p_{2}\gg p_{1}\), where \(\rho_{2}/\rho_{1}\to(\gamma+1)/(\gamma-1)\). For one-dimensional problems, we assume that the kinetic energies of ions are completely converted to their internal energies, and that thermal equilibrium is reached between electrons and ions (i.e. \(T_{i}=T_{e}=T\)). These assumptions are confirmed by our simulations (see Sec. II of [47]). For a fluid element at a fixed position, the conservation of energy across the shock front is written as \[\frac{1}{2}m_{\rm D}v^{2}=\frac{3}{2}k_{B}(T_{2}-T_{1})+[\varepsilon_{e}(T_{2},n_{2})-\varepsilon_{e}(T_{1},n_{1})], \tag{2}\] where \(m_{\rm D}\) is the mass of a deuterium ion, \(\varepsilon_{e}\) is the average energy of electrons and \(n_{i}=\rho_{i}/m_{D}\) is the number density of D-plasmas. In Eq. (2), the D-ions lie in states that are well described by an ideal-gas model, with constant heat capacity \(c_{v}=(3/2)k_{B}\) and \(\varepsilon_{i}=(3/2)k_{B}T\). Electrons, especially in the pre-shock regions, lie in quantum degenerate states and follow Fermi-Dirac distributions \[f_{e}(E;T_{e},n_{e})=\frac{(2m_{e})^{3/2}}{n_{e}\hbar^{3}\pi^{2}}\frac{\sqrt{E}}{\exp[(E/T_{e})-\eta]+1}, \tag{3}\] which is normalized by \(\int f_{e}(E)dE=1\) to determine the coefficient \(\eta\). The average electron energy \(\varepsilon_{e}(T_{e},n_{e})\) in Eq. (2) is \(\varepsilon_{e}(T_{e},n_{e})=\int Ef_{e}(E;T_{e},n_{e})dE\), which is determined by both the thermal temperature \(T_{e}\) and the number density \(n_{e}\). Note that as \(T_{e}\to 0\), the normalizing coefficient \(\eta\) in Eq. (3) approaches \(\infty\) and \(\varepsilon_{e}\) becomes almost independent of \(T_{e}\), being merely proportional to \(n_{e}^{2/3}\); this zero-temperature, density-dependent energy scale is the Fermi energy. For comparison, we also performed simulations treating the electrons classically; in that scenario \(\varepsilon_{e}^{\prime}=(3/2)k_{B}T\). Therefore \(\varepsilon_{e}\) is always higher than \(\varepsilon_{e}^{\prime}\) for fixed electron densities and temperatures. It can also be proved that for Fermi-Dirac distributions the equation of state (EOS) \(p_{e}=(2/3)n_{e}\varepsilon_{e}\) always holds, identical to the EOS of a classical ideal monatomic gas. Therefore, we have \[p=\frac{2}{3}n(\varepsilon_{i}+\varepsilon_{e}). \tag{4}\] Figure 2: (color online). The spatial-temporal evolutions of plasma density (in first row), effective electron temperature (in second row): (a) and (d) show the case when the colliding velocity is \(v_{0}=100\) km/s. (b) and (e) show the case when \(v_{0}=300\) km/s. (c) and (f) show the case when \(v_{0}=900\) km/s. Values in the color-bar represent plasma density \(\rho\) in unit of g/cc and effective electron temperature \(T_{eff,e}\) in unit of eV. By combining Eqs.
(9)-(12), the post-shock density and temperature of plasmas in the colliding center can be obtained. Fig. 3 shows the densities and temperatures of the central plasmas. For simulations, the density and temperature values are picked at the early stage of the colliding, where both values remain of spatially and temporally uniform. When comparing the results between simulations with the effects of quantum degeneracy and classical models, it is found that the density of post-shock plasmas in the former is less than that in the latter, especially for low colliding velocities. Additionally, a noteworthy aspect of Fig. 3 is the comparison between the PIC simulation results and the hydrodynamic calculations for high velocities. It is observed that the post-shock density in the simulations shows an opposite trend to the theoretical calculations when \(v\) is greater than 500 km/s, that is, \(M>8\). The red lines representing the hydrodynamics predictions of densities keeps rising approaching the 4 times limit of shock compression; in contrast, the blue line of simulation results reaches a maximum ratio of 3.3 and then decreases. The results of both degenerate and classical simulations converge when \(M\) is high, indicating that the effects of quantum degeneracy are no longer significant since the colliding has heated electrons up to classical states. The divergence of density trends in Fig. 3 has indicated that hydrodynamics is not applicable to strong shocks in the supreme high Mach number collidings. Extra simulations have been conducted with initial density of 1 g/cc, 10 g/cc and 50 g/cc under the same colliding velocity of \(v=900\) km/s, and the results are marked on Fig. 3 with open diamonds. It is noted that in the post-shock region, the compression ratio \(\rho/\rho_{0}\) is merely 2.3 at \(\rho_{0}=1\) g/cc, while rising to 2.9 as \(\rho_{0}\) increases to 100 g/cc. This result conflicts with Eq. (9), where the compression ratio of shock is independent of the density of unperturbed pre-shock plasmas. Fig. 4 shows the detailed simulation results for colliding velocity of 900 km/s and initial center density of 1 g/cc and 50 g/cc. It is evident that the shock front in Fig. 4(a) is weaker and blurred, which indicates that the thickness of shock front is compatible to the simulation scale. The thickness of shock is more discernible in the phase space. According to Fig. 4(b) and (e), the narrow transition region between clusters gathering around \(v_{z}=\pm v_{0}\) to \(v_{z}=0\), which is enlarged in Fig. 4(c) and (f), is clearly observed as the shock front of several \(\mu\)ms in length. Viewed along \(v_{z}\) axis, particles in the slope of shock front violate the Maxwellian distribution, and the presumptions of local thermodynamic equilibrium no longer hold. Hence, it is necessary to step further surpassing the hydrodynamic theory, and investigate the kinetic effects in collidings with supreme high \(M\). Semi-quantitative kinetic analysis is conducted based on Mott Smith's and Tidman's work [26; 28], in which the distribution function near the shock front is expressed by the superposition of two different equilibrium distributions \[\begin{split} f(v,z)=& n_{\alpha}(z)(\frac{m_{\rm D }}{2\pi k_{B}T_{\alpha}})^{3/2}\exp[-\frac{m_{\rm D}(v-u_{\alpha})^{2}}{2k_{B }T_{\alpha}}]+\\ & n_{\beta}(z)(\frac{m_{\rm D}}{2\pi k_{B}T_{\beta}})^{3/2}\exp [-\frac{m_{\rm D}(v-u_{\beta})^{2}}{2k_{B}T_{\beta}}],\end{split} \tag{5}\] where \(u_{\alpha,\beta}\) is the average velocity. By substituting Eq. 
(5) into the Fokker-Planck equation and performing integrals over velocity with the steady-state condition \(\partial f/\partial t=0\), the number density in the shock front is deduced as \[\frac{n(z)}{n_{0}}=\frac{M^{2}+a-2+M^{2}(a-1)e^{Bz/l}}{(M^{2}+a-2)(1+e^{Bz/l})}. \tag{6}\] In Eq. (6), \(n_{0}=n(-\infty)\) is the number density of particles far ahead of the shock, which are considered unperturbed by it. The parameter \(a\) is given by \(a=2\gamma/(\gamma-1)\), \(l\) is the mean free path, and \(B\) is a coefficient depending on the collision model selected in the Fokker-Planck equation. We further apply Eq. (6) to our colliding shock problem. For monatomic D-ions, \(\gamma=5/3\) and \(a=5\). In the high Mach number limit \(M^{2}\gg 1\), Eq. (6) turns into \[\frac{n(z)}{n_{0}}=\frac{1+4e^{Bz/l}}{1+e^{Bz/l}}=\frac{1+4e^{z/\delta}}{1+e^{z/\delta}}, \tag{7}\] where \(l/B\) is on the scale of the shock thickness \(\delta=[n(\infty)-n(-\infty)]/[dn/dz]_{max}\). Therefore, the structure of the shock depends on the ratio of the system spatial scale to the shock thickness, \(z/\delta\). As a check, in the hydrodynamic limit of an infinitesimally thin shock, \(z/\delta\rightarrow\infty\) and, according to Eq. (7), \(n(z)/n_{0}\to 4\), which is the maximum compression ratio in hydrodynamics. Figure 3: (color online). Density (in solid lines) and temperature (in dotted lines) of the post-shock plasma, in unit of their initial values, where \(\rho_{0}=100\) g/cc and \(T_{i,e}=50\) eV. The blue diamonds represent data obtained by PIC simulations with the effects of quantum degeneracy; the red dots represent data obtained by the hydrodynamic model, and the black squares represent data obtained by classical PIC simulations. The hollow diamonds show the cases where the initial densities are \(\rho_{0}=50\) g/cc, \(\rho_{0}=10\) g/cc and \(\rho_{0}=1\) g/cc respectively. In our one-dimensional collision cases, the spatial scale is \(\sim 100\)\(\mu\)m, and the estimation of the shock thickness refers to Keenan's two-component analytical calculation [4]. The shock thickness is expressed as \[\delta=\delta_{0}+\frac{m_{i}}{m_{e}}\lambda_{\text{D}} \tag{8}\] where \(\delta_{0}\) is proportional to \(M^{4}\) (the theoretical derivation of this result is shown in Sec. III of [47]), and the last term is a kinetic modification that depends merely on the mean free path \(\lambda_{\text{D}}=\frac{16\sqrt{6\pi}\epsilon_{0}^{2}T^{2}}{n_{0}e^{4}\ln\Lambda}\). According to Keenan's model, for fully ionized plasmas the first term \(\delta_{0}\) starts to dominate over the second when \(M>5\). For a colliding velocity of 900 km/s, a density of 10 g/cc and an initial temperature of 50 eV, we have \(M\approx 15\) and \(\delta\sim 80\)\(\mu\)m, so the shock thickness \(\delta\) is comparable to the length of the colliding region. According to Eq. (7), the one-dimensional shock tube model has a steady-state downstream at \(z\rightarrow\infty\), with a density four times that of the upstream. However, in the colliding case, the shock profile, which starts from the undisturbed upstream region, is cut off at the colliding center. As a result, the post-shock density at the center falls short of the 4-fold compression supremum. Furthermore, as the colliding velocity increases, the Mach number of the colliding shock increases accordingly, and so does the shock thickness given by Eq. (8). Since the compression ratio in Eq.
(15) increases monotonically with the coefficient \(z/\delta\), the rise of \(\delta\) decreases the final density in the colliding center, which sensitively accounts for the downtrend result of simulations shown in Fig. 3. In order to validate the above arguments, we have additionally performed another set of simulations for one-component ideal gases with interactions modelled by elastic sphere collisions. For fixed colliding velocity and gas density, the compression ratio is also decreasing when the shock thickness is increasing (Sec. IV of Supplemental Material [47]). In conclusion, we have investigated the effects of quantum degeneracy and kinetics in high Mach number collidings of two quantum degenerate plasmas. Via large-scale one-dimensional kinetic simulations and hydrodynamic calculations, we found both quantum degeneracy and kinetics play key roles in density compression. In order to achieve high-density compression, the colliding kinetic pressure should be significantly higher than the pressure by quantum degeneracy. However, when the Mach number is further increased, a decreasing trend of density compression is surprisingly observed, attributing to kinetic effects. This result is physically reasonable for plasmas. As the colliding velocity increases, the thickness of the shock is eventually comparable to the system scale. This means the shock is cut off in the middle by the colliding center and is apart from the 4 times compression Figure 4: (color online). The first row represents simulation results with initial density of 1 g/cc and the second row represents simulation results with initial density of 50 g/cc. (a) and (d) show the spatial-temporal evolutions of plasma density with color-bar in unit of initial values. (b) and (e) show the \(v_{z}-z\) phase space distributions at 0.067 ns (labeled as dotted line in (a) and (d) respectively) with color-bar in arbitrary unit. (e) and (f) show the \(v_{z}-z\) phase space distributions of shock front region (between the two dotted lines in (b) and (e) respectively) with color-bar in arbitrary unit. supreme. Consequently, the shock structure significantly affects the physical properties in the post-shock region. Our results provide a guide for the design of DCI experiments. It is suggested that a theoretically optimal colliding velocity can be found. At this velocity, the colliding kinetic pressure starts to surpass the degenerate pressure and the thickness of colliding shock is not significantly broadened by high-Mach number kinetics, resulting in theoretically highest density compression. Moreover, since the colliding of the degenerate plasmas is a common phenomenon in astrophysical systems, our results may be of relevance to the physics process such as the merger of neutron stars and white dwarfs. W.-B. Zhang and Y.-H. Li contributed equally to this work. W.-B. Zhang conducted the one-dimensional simulations by the LAPINS code and contributed to the deduction of kinetic equations; Y.-H. Li was responsible for constructing the hydrodynamic colliding model and undertook a portion of the kinetic theoretical analysis. This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant Nos. XDA25010100 and XDA2500500), National Natural Science Foundation of China (Grants No. 12075204), and Shanghai Municipal Science and Technology Key Project (No. 22JC1401500). Dong Wu thanks the sponsorship from Yangyang Development Fund. ## I I. 
Benchmark of the simulation setup To provide a benchmark of our colliding simulation configuration in the main text, we perform another set of large-scale kinetic simulations with the code LAPINS [44; 45]. To ensure an affordable time cost, the simulation region is set to 400 \(\mu\)m while it is 1000 \(\mu\)m in the main text. By keeping the total number of computational particles constant, we conduct simulations with different cell sizes. To illustrate the convergence of our simulations in detail, we first pay attention to the spatial-temporal evolutions of plasma density and electron pressure, which are shown in Fig. 5. The left column shows the case when the cell size is \(d_{z}=0.4\)\(\mu\)m and the number of particles per cell is 1000 and the right one shows the case when the cell size is \(d_{z}=0.2\)\(\mu\)m and the number of particles per cell is 500. Compared with the right column of Fig. 5, it is nearly identical in the left column in terms of the peak value and the profile of the physical quantities during the simulation time. It is therefore convergent in terms of the density and temperature evolutions during the simulation time. We then focus on the temporal evolutions of the total energy of the electrons and the ions, which are shown in Fig. 6 (the simulation parameters are the same as that in Fig. 5). Solid lines and dotted lines represent the cases when the cell size is \(d_{z}=0.4\)\(\mu\)m and the cell size is \(d_{z}=0.2\)\(\mu\)m respectively. Considering the simulations in the two cases, as displayed in Fig. 6, it is strictly convergent in the compression process which we are interested in and is nearly convergent in the diffusion process. Therefore, it is convergent in terms of the total energy evolutions during the simulation time. To simulate the colliding plasma in a larger region with affordable time cost and reasonable nodes allocation, we slightly increase the cell size up to \(d_{z}=0.5\)\(\mu\)m with the number of particles per cell 1000 in the main text simulations. This may generate mere errors of the physical values, but it is justified in terms of the trend of the physical quantities. ## II II. Efficiency of energy conversion In our hydrodynamic model, for the sake of simplicity, we assume that in the post-shock region, the drift kinetic energy of ions (the drift kinetic energy of electrons can be neglected in the simulation parameters) is entirely converted to the thermal energy of ions and electrons in our theoretical model. For the scenario of symmetric collision, we may directly conclude the zero drift velocity in the post-shock region, resulting in the above assumption. However, this assumption should be carefully confirmed. The above assumption is verified by a series of large-scale kinetic simulations which were carried out. We pay Figure 5: (color online). The spatial-temporal evolutions of plasma density (in first row) and electron pressure (in second row) when the colliding velocity is \(v_{0}=500\) km/s: (a) and (c) show the case when the cell size is \(d_{z}=0.4\)\(\mu\)m and the number of particles per cell is 1000. (b) and (d) show the case when the cell size is \(d_{z}=0.2\)\(\mu\)m and the number of particles per cell is 500. Values in the color-bar represent plasma density \(\rho\) in unit of its initial value where \(\rho_{0}=100\) g/cc and electron pressure \(P_{e}\) in arbitrary unit. attention to the temporal evolutions of the total energy of the electrons and the ions which are shown in Fig. 6. 
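For readers who want to reproduce such a resolution study, a minimal sketch of the kind of convergence metric implied by the comparisons above is given below (editor-added; the file names and array layout are hypothetical placeholders, not the LAPINS output format).

```python
# Editor-added sketch of a resolution-convergence check: compare 1-D density
# profiles from two runs with different cell sizes (hypothetical file names).
import numpy as np

def rel_l2_diff(rho_coarse, rho_fine, length_um=400.0):
    """Relative L2 difference after interpolating the fine run onto the coarse grid."""
    z_c = np.linspace(0.0, length_um, rho_coarse.size)
    z_f = np.linspace(0.0, length_um, rho_fine.size)
    return (np.linalg.norm(np.interp(z_c, z_f, rho_fine) - rho_coarse)
            / np.linalg.norm(rho_coarse))

# rho_04 = np.load("rho_dz0.4um.npy")   # run with dz = 0.4 um, 1000 particles/cell
# rho_02 = np.load("rho_dz0.2um.npy")   # run with dz = 0.2 um,  500 particles/cell
# print(rel_l2_diff(rho_04, rho_02))    # a small value indicates convergence
```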
The convergence of the two cases is discussed in Sec. I, and here we focus on the total energy trend of the ions and electrons. In the compression process (red region in Fig. 6), as the drift velocity of ions and electrons decreases and the temperature of them increases, the kinetic energy is converted to the thermal energy. The mass of ions is far larger than that of electrons, so that the kinetic energy of ions is far larger than that of electrons when they have identical velocity. Additionally, the thermal energy of ions is the same as that of electrons when they have identical temperature. Accordingly, in the compression process, the total energy of ions decreases and that of electrons has a opposite trend. For the same reason, the total energy of ions increases and that of electrons decreases in the diffusion region. At the junction of the red and green regions, since the total energy of ions and electrons are close, the majority of kinetic energy is converted to the thermal energy, which indicates the rationality of our theoretical assumption. ## III III. Kinetic theory for strong shock thickness in plasmas In the colliding process of high Mach number plasmas, the key reason for the decreasing trend of density compression is that the thickness of shock scales as \(K^{4}\) where \(K\) is the Mach number. This is the result derived by Mott-Smith and Tidman [26; 28] theoretically. In the following parts of this section, we briefly summarize the derivation. For simplify, we consider plasmas with protons and electrons moving in the \(x\) direction, which have masses \(M\) and \(m\) per particle respectively. Consistent with Mott-Smith treatment, the distribution of ion \(F\) is bi-Maxwell, namely \[\begin{split} F=& N_{\alpha}(x)\left(\frac{M}{2 \pi kT_{\alpha}}\right)^{\frac{3}{2}}\exp\left[-\frac{M}{2kT_{\alpha}}\left( \mathbf{c}-\mathbf{i}U_{\alpha}\right)^{2}\right]\\ &+N_{\beta}(x)\left(\frac{M}{2\pi kT_{\beta}}\right)^{\frac{3}{2 }}\exp\left[-\frac{M}{2kT_{\beta}}\left(\mathbf{c}-\mathbf{i}U_{\beta}\right) ^{2}\right]\\ =& F_{\alpha}+F_{\beta},\end{split} \tag{9}\] where the suffix \(\alpha\) and \(\beta\) represent the conditions ahead of and behind the shock respectively, \(\mathbf{c}\) is velocity, \(N_{\alpha}\) and \(N_{\beta}\) are densities, \(T_{\alpha}\) and \(T_{\beta}\) are temperatures and \(U_{\alpha}\) and \(U_{\beta}\) are stream velocities along the \(x\) direction \(\mathbf{i}\). Since the relaxation time for electrons to reach equilibrium is small, we take electron distribution \(f\) as self-equilibrium, namely \[f(x)=n(x)\left(\frac{m}{2\pi kT(x)}\right)^{\frac{3}{2}}\exp\left[-\frac{m}{2 kT}\left(\mathbf{c}-\mathbf{i}U_{e}\right)^{2}\right], \tag{10}\] where \(n\), \(T\) and \(U_{e}\) are all functions of \(x\). 
When simplified for two-body Coulomb interactions, the evolutions of the two distribution functions \(F\) and \(f\) are described by Fokker-Planck equations, namely \[\begin{split}&\frac{\partial F}{\partial t}+\mathbf{c}\cdot \frac{\partial F}{\partial\mathbf{r}}+\frac{e\mathbf{E}}{M}\cdot\frac{\partial F }{\partial\mathbf{c}}=\left(\frac{\partial F}{\partial t}\right)_{c},\\ &\frac{\partial f}{\partial t}+\mathbf{c}\cdot\frac{\partial f} {\partial\mathbf{r}}-\frac{e\mathbf{E}}{m}\cdot\frac{\partial f}{\partial \mathbf{c}}=\left(\frac{\partial f}{\partial t}\right)_{c},\end{split} \tag{11}\] where the collision terms are given by \[\begin{split}&\frac{1}{\Gamma}\left(\frac{\partial F}{\partial t }\right)_{c}=-\frac{\partial}{\partial c_{i}}\left(F\frac{\partial H}{ \partial c_{i}}\right)+\frac{1}{2}\frac{\partial}{\partial c_{i}}\frac{ \partial}{\partial c_{j}}\left(F\frac{\partial}{\partial c_{i}}\frac{\partial} {\partial c_{j}}G\right),\\ &\frac{1}{\gamma}\left(\frac{\partial f}{\partial t}\right)_{c}=- \frac{\partial}{\partial c_{i}}\left(f\frac{\partial h}{\partial c_{i}}\right) +\frac{1}{2}\frac{\partial}{\partial c_{i}}\frac{\partial}{\partial c_{j}} \left(f\frac{\partial}{\partial c_{i}}\frac{\partial}{\partial c_{j}}g \right),\end{split} \tag{12}\] and \[h =2\int d\mathbf{c}_{1}\frac{f\left(\mathbf{c}_{1}\right)}{\left| \mathbf{c}-\mathbf{c}_{1}\right|}+\left(\frac{m+M}{M}\right)\int d\mathbf{c}_{1 }\frac{F\left(\mathbf{c}_{1}\right)}{\left|\mathbf{c}-\mathbf{c}_{1}\right|},\] \[g =G=\int d\mathbf{c}_{1}(f+F)\left|\mathbf{c}-\mathbf{c}_{1}\right|, \tag{13}\] \[H =\left(\frac{M+m}{m}\right)\int d\mathbf{c}_{1}\frac{f\left( \mathbf{c}_{1}\right)}{\left|\mathbf{c}-\mathbf{c}_{1}\right|}+2\int d \mathbf{c}_{1}\frac{F\left(\mathbf{c}_{1}\right)}{\left|\mathbf{c}-\mathbf{c}_ {1}\right|}.\] Figure 6: (color online). The temporal evolutions of the total energy of the electrons (blue lines) and the ions (red lines) when the colliding velocity is \(v_{0}=500\) km/s. Solid lines represent the results when the cell size is \(d_{z}=0.4\)\(\mu\)m and the number of particles per cell is 1000. Dotted lines represent the results when the cell size is \(d_{z}=0.2\)\(\mu\)m and the number of particles per cell is 500. The slowly varying quantity \(\Gamma\) is \[\Gamma=\frac{4\pi e^{4}}{M^{2}}\ln\left[\frac{3}{4(\pi n)^{\frac{1}{2}}}\left(\frac {kT}{e^{2}}\right)^{\frac{3}{2}}\right], \tag{14}\] and \(\gamma\) is obtained by replacing \(M\) with \(m\) in Eq. (14). The form of collision terms used is the same as that used by Rosenbluth, MacDonald and Judd [27]. Multiplying Eq. (11) by \(v^{2}\) (\(\mathbf{c}=(u,v,w)\)) and integrating the collision terms Eq. 
(12)-(13) over \(\mathbf{c}\) by parts using \(F(c=\pm\infty)=f(c=\pm\infty)=0\), we find \[\begin{split} U_{\alpha}&\left(\frac{kT_{\alpha}}{M }\right)\frac{\partial N_{\alpha}}{\partial x}+U_{\beta}\left(\frac{kT_{\beta} }{M}\right)\frac{\partial N_{\beta}}{\partial x}+\frac{U_{\beta}N_{\beta}k}{M }\frac{\partial T_{\beta}}{\partial x}\\ &=2\left(\frac{M+m}{m}\right)\Gamma\int d\mathbf{c}vF(\mathbf{c} )\int d\mathbf{c}_{1}f\left(\mathbf{c}_{1}\right)\frac{\partial}{\partial v} \left|\mathbf{c}-\mathbf{c}_{1}\right|^{-1}\\ &+\Gamma\int d\mathbf{c}F(\mathbf{c})\int d\mathbf{c}_{1}F\left( \mathbf{c}_{1}\right)\frac{\partial^{2}}{\partial v^{2}}\left|\mathbf{c}- \mathbf{c}_{1}\right|\\ &+4\Gamma\int d\mathbf{c}vF(\mathbf{c})\int d\mathbf{c}_{1}F \left(\mathbf{c}_{1}\right)\frac{\partial}{\partial v}\left|\mathbf{c}- \mathbf{c}_{1}\right|^{-1}\\ &+\Gamma\int d\mathbf{c}f(\mathbf{c})\int d\mathbf{c}_{1}F\left( \mathbf{c}_{1}\right)\frac{\partial^{2}}{\partial v^{2}}\left|\mathbf{c}- \mathbf{c}_{1}\right|,\end{split} \tag{15}\] for proton \(v^{2}\) equation. In the shock region, since \(T_{\beta}\) has a slow variation compared with \(N_{\alpha}(x)\) and \(N_{\beta}(x)\) which we are interested in, it can be considered as constant. Thus Eq. (15) becomes \[\begin{split} U_{\alpha}\left(\frac{kT_{\alpha}}{M}\right)\frac{ \partial N_{\alpha}}{\partial x}+& U_{\beta}\left(\frac{kT_{ \beta}}{M}\right)\frac{\partial N_{\beta}}{\partial x}\\ &=\frac{2\Gamma N_{\alpha}N_{\beta}}{\pi^{3}}\left(\frac{M}{2kT_ {\alpha}}\right)^{\frac{1}{2}}\Psi,\end{split} \tag{16}\] where \[\begin{split}\Psi=&\left(\frac{2kT_{\alpha}}{M} \right)^{\frac{1}{2}}\int d\mathbf{c}d\mathbf{c}_{1}\exp\left[-\left(c^{2}+c_ {1}^{2}\right)\right]\\ &\times\left\{\left|\left(\frac{2kT_{\alpha}}{M}\right)^{\frac{ 1}{2}}\mathbf{c}-\left(\frac{2kT_{\beta}}{M}\right)^{\frac{1}{2}}\mathbf{c}_{1 }+\mathbf{i}\left(U_{\alpha}-U_{\beta}\right)\right|^{-1}\right.\\ &-\left.\left.\frac{3\left[\left(2kT_{\alpha}/M\right)^{\frac{1} {2}}v-\left(2kT_{\beta}/M\right)^{\frac{1}{2}}v_{1}\right]^{2}}{\left|\left(2 kT_{\alpha}/M\right)^{\frac{1}{2}}\mathbf{c}_{1}+\mathbf{i}\left(U_{\alpha}-U_{ \beta}\right)\right|^{3}}\right\}.\end{split} \tag{17}\] Notice that \(\Psi\) can be evaluated by using Fourier integrals. We next make use of the conservation of mass equation, namely \[\begin{split} N_{\alpha}U_{\alpha}+N_{\beta}U_{\beta}=\bar{N}_{ \alpha}U_{\alpha},\\ \frac{\partial N_{\beta}}{\partial x}=-\frac{U_{\alpha}}{U_{\beta }}\frac{\partial N_{\alpha}}{\partial x},\end{split} \tag{18}\] where \(\bar{N}_{\alpha}=N_{\alpha}(-\infty)\). Therefore, Eq. (16) becomes \[\begin{split}\frac{\partial N_{\alpha}}{\partial x}& \left[\frac{1}{N_{\alpha}}+\frac{1}{\bar{N}_{\alpha}-N_{\alpha}}\right]\\ &=\frac{2\Gamma\bar{N}_{\alpha}}{\pi^{3}U_{\beta}}\left(\frac{M} {2kT_{\alpha}}\right)^{\frac{1}{2}}\frac{M\Psi}{k\left(T_{\alpha}-T_{\beta} \right)}.\end{split} \tag{19}\] By choosing the origin \(x=0\) at the point where \(N_{\alpha}(x=0)=\frac{1}{2}\bar{N}_{\alpha}\), we will find the solutions of Eq. (19) which is formally similar to that in Mott-Smith work [26], namely \[\begin{split} N_{\alpha}&=\frac{\bar{N}_{\alpha}e^{ -x/l}}{\left(1+e^{-x/l}\right)},\\ N_{\beta}&=\frac{\bar{N}_{\beta}}{\left(1+e^{-x/l }\right)},\end{split} \tag{20}\] where the shock thickness \(l\) is \[l=\frac{\pi^{3}U_{\beta}k\left(T_{\beta}-T_{\alpha}\right)}{2\Gamma\bar{N}_{ \alpha}M\Psi}\left(\frac{2kT_{\alpha}}{M}\right)^{\frac{1}{2}}. 
\tag{21}\] Introducing Mach number \(K=(U_{\alpha}/V)\) for the stream ahead of the shock where \(V\) is the velocity of sound in the plasma, we can express the shock thickness as \[\begin{split} l\left(\frac{\bar{N}_{\alpha}\ln\Lambda}{V^{4}} \right)=&\frac{3\pi^{2}}{128}\left(\frac{3}{5}\right)^{\frac{1}{2}} \frac{M^{2}K\left(3+K^{2}\right)}{e^{4}\Psi}\\ &\times\left(\frac{1}{4}-\frac{3}{20K^{4}}-\frac{1}{10K^{2}} \right).\end{split} \tag{22}\] For \(K\) is large, \[\Psi\to 0.309\pi^{4}/a, \tag{23}\] so that \[l\rightarrow\frac{29.1K^{4}V^{4}}{512\pi\bar{N}_{\alpha}\Gamma}, \tag{24}\] which indeed reveals that the shock thickness scales as \(K^{4}\) as \(K\) is large. ## IV IV. The colliding of one-component ideal masses with elastic sphere collisions To verify the robustness of our kinetic interpretation in the main text, another set of large-scale simulations are carried out. In these simulations, the simulated particles are modelled as elastic spheres instead of charged particle, so that the thickness of the shock is proportional to the reciprocal differential cross section [26], namely \[l\sim 1/\sigma^{2} \tag{25}\] where \(l\) is the shock thickness and \(\sigma\) is the diameter of the elastic sphere. In the LAPINS code, the characteristic velocity \(v_{r}\), which is in unit of \(c\) the speed of light, is introduced to adjust the diameter \(\sigma\) with the relation \[\frac{1}{2}mv_{r}^{2}=\frac{1}{4\pi\epsilon_{0}}\frac{e^{2}}{\sigma} \tag{26}\] where \(m\) is the mass of a single deuterium particle. Accordingly, the shock thickness \(l\) scales as \(v_{r}^{4}\), so that it is a simpler scenario to verify our interpretation in the main text. The density compression results are shown in Fig. 7. It should be remarked that the colliding velocity is initialized as \(v_{0}=500\) km/s, which is large enough to assure the supreme density compression ratio 4. In addition, the initial density is \(\rho_{0}=100\) g/cc, which is the same as that in the main text simulations. As shown in Fig. 7, as the characteristic velocity \(v_{r}\) increases, the post-shock density decreases from 4 to nearly 2. Since the shock thickness \(l\) scales as \(v_{r}^{4}\), this result is consistent with the interpretation in the main text.
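As an editor-added illustration of the truncation argument used in the main text (and implicitly behind the elastic-sphere runs above), one can evaluate the high-Mach-number Mott-Smith profile at a colliding center located a finite number of shock thicknesses downstream of the shock-front midpoint; the geometry below is a deliberately crude stand-in for the simulated configuration.

```python
# Editor-added sketch: compression reached at the colliding center when the
# Mott-Smith profile n(z)/n0 = (1 + 4 e^{z/delta}) / (1 + e^{z/delta}) is cut
# off a distance L downstream of the shock-front midpoint (crude geometry).
import numpy as np

def center_compression(L_over_delta):
    x = np.exp(L_over_delta)
    return (1.0 + 4.0 * x) / (1.0 + x)

for r in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"L/delta = {r:5.1f}  ->  n/n0 = {center_compression(r):.2f}")
# For L/delta >> 1 the hydrodynamic value 4 is recovered; once the shock
# thickness grows to the system scale (L/delta ~ 1, as estimated for M ~ 15),
# the ratio drops to ~3, mirroring the simulated downturn at high Mach number.
```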
2303.11440
On first subharmonic bifurcations in a branch of Stokes waves
Steady surface waves in a two-dimensional channel are considered. We study bifurcations, which occur on a branch of Stokes water waves starting from a uniform stream solution. Two types of bifurcations are considered: bifurcations in the class of Stokes waves (Stokes bifurcation) and bifurcations in a class of periodic waves with the period M times the period of the Stokes wave (M-subharmonic bifurcation). If we consider the first Stokes bifurcation point, then there are no M-subharmonic bifurcations before this point and there exist M-subharmonic bifurcation points after the first Stokes bifurcation for sufficiently large M, which approach the Stokes bifurcation point as M tends to infinity. Moreover, the set of M-subharmonic bifurcating solutions is a closed connected continuum. We also give a more detailed description of this connected set in terms of the set of its limit points, which must contain extreme waves, overhanging waves, solitary waves, waves with stagnation on the bottom, or Stokes bifurcation points different from the initial one.
Vladimir Kozlov
2023-03-20T20:35:25Z
http://arxiv.org/abs/2303.11440v2
# On first subharmonic bifurcations in a branch of Stokes waves. ###### Abstract. Steady surface waves in a two-dimensional channel are considered. We study bifurcations, which occur on a branch of Stokes water waves starting from a uniform stream solution. Two types of bifurcations are considered: bifurcations in the class of Stokes waves (Stokes bifurcation) and bifurcations in a class of periodic waves with the period \(M\) times the period of the Stokes wave (\(M\)-subharmonic bifurcation). If we consider the first Stokes bifurcation point then there are no \(M\)-subharmonic bifurcations before this point and there exists \(M\)-subharmonic bifurcation points after the first Stokes bifurcation for sufficiently large \(M\), which approach the Stokes bifurcation point when \(M\to\infty\). Moreover the set of \(M\)-subharmonic bifurcating solutions is a closed connected continuum. We give also a more detailed description of this connected set in terms of the set of its limit points, which must contain extreme waves, or overhanging waves, or solitary waves or waves with stagnation on the bottom, or Stokes bifurcation points different from the initial one. ## 1. Introduction ### Background We consider steady surface waves in a two-dimensional channel bounded below by a flat, rigid bottom and above by a free surface. The main subject of this work is subharmonic bifurcations on the branches of Stokes waves. Stokes and solitary waves (regular waves) were the main subject of study up to 1980. In 1980 (see Chen [4] and Saffman [22]) it was discovered numerically and in 2000 (see [1, 2]) this was supported theoretically for the ir-rotational case for a flow of infinite depth that there exist new types of periodic waves with several crests on the period (the Stokes wave has only one crest). These waves occur as a result of bifurcation on a branch of Stokes waves when they approach the wave of greatest amplitude. Starting point of our study is a branch of Stokes waves starting from uniform stream solution and approaching an extreme wave. This branch is parameterized by a parameter \(t\). One can study bifurcations of branches of Stokes waves of period \(\Lambda(t)\) in one of the following settings: (i) in the class of \(\Lambda(t)\)-periodic solutions (Stokes bifurcation); (ii) in the class of \(M\Lambda(t)\)-periodic solutions (M-subharmonic bifurcation); (iii) in the class of bounded solutions. In this paper we will deal mainly with (i) and (ii). Let us denote by \(\{\tilde{t}_{j}\}\), \(j=1,\ldots,\infty\), the Stokes bifurcation points. The following theorem is proved in [9] **Theorem 1.1**.: (i) _There exists a sequence \((\widehat{t}_{j},M_{j})\), where \(\widehat{t}_{j}\neq\tilde{t}_{k}\) for all \(j\) and \(k\); \(M_{j}\) are integers, and_ \[\widehat{t}_{j}\to\infty,\,\,\,M_{j}\to\infty\,\,\,\text{as}\,\,j\to\infty.\] _Moreover, \(\widehat{t}_{j}\) is a \(M_{j}\)-subharmonic bifurcation point._ (ii) _There exists a sequence \((\widehat{t}_{j},M_{j})\), where \(\widehat{t}_{j}\neq\tilde{t}_{k}\) for all \(j\) and \(k\), the sequence \(\{\widehat{t}_{j}\}\) is bounded and_ \[M_{j}\to\infty\,\,\,\text{as}\,\,j\to\infty.\] _Furthermore, \(\widehat{t}_{j}\) is a a \(M_{j}\)-subharmonic bifurcation point. The numbers \(\widehat{t}_{j}\) are pairwise different in both cases._ The proof of this theorem is based on the fact that when the Stokes waves approach the extreme wave many new bifurcation points appear. 
The proof of this theorem involves a bifurcation theorem for potential operators with non-zero crossing number, which gives unfortunately no information on the structure of the set of bifurcating solutions (see [24] for a discussion of this structure). Here we study a mechanism of appearance of the first subharmonic bifurcations. The main result of this paper is the following **Theorem 1.2**.: _Let \(t_{0}>0\) be the first Stokes bifurcation point, i.e. the condition \((\ref{1.1})\) be valid. Then there exists \((t_{M},M)\), where \(M\) is a large integer and_ \[t_{M}\to t_{0}\ \ \text{as}\ \ M\to\infty,\] _such that \(t_{M}\) is \(M\)- subharmonic bifurcation point. There are no subharmonic bifurcations for \(t<t_{0}\)._ We prove this theorem by using a bifurcation theorem with odd crossing number, which leads to existence of a closed connected continuum of \(M\Lambda\)-periodic solutions containing the bifurcation point \(t_{M}\) (see Theorems 8.1, 9.1 and 9.2 for more details). In the study of bifurcations the Frechet derivative plays important role. In our case it is a self-adjoint operator bounded from below. Certainly the Frechet derivatives are defined on different spaces in the cases (i)-(iii). In the study of the Stokes bifurcations this is a usual Frechet derivative defined on \(\Lambda\)-periodic, even solutions. If we denote by \(\mu_{0}(t)\) and \(\mu_{1}(t)\) the first and the second eigenvalue of the Frechet derivative, then \(\mu_{0}(t)<0\) for all \(t\) and \(\mu_{1}(0)=0\). We consider the case when \[\begin{split}&\mu_{1}(t)>0\ \ \text{for small positive $t$ and assume that there exists $t_{0}>0$}\\ &\ \ \text{such that $\mu_{1}(t_{0})=0$ and $\mu_{1}(t)<0$ for small positive $t-t_{0}$}.\end{split} \tag{1.1}\] We will call \(t_{0}\) the first Stokes bifurcation. In the case of \(M\) -subharmonic bifurcations we have the same expression for the Frechet derivative but now it is defined on \(M\Lambda\)-periodic, even functions and finally in the case of bifurcations in a class of bounded solutions the Frechet derivative is defined on the even functions defined on the whole domain without periodicity condition. By introducing quasi-momentum \(\tau\) we can give another equivalent definition of the Frechet derivative in the cases (ii) and (iii) which is defined on the space of \(\Lambda\)-periodic functions but depending on the real parameter \(\tau\). We denote by \(\widehat{\mu}_{j}(t,\tau)\), \(j=0,\ldots\), the eigenvalues of this problem, which are numerated according to the increasing order: \[\widehat{\mu}_{0}(t,\tau)\leq\widehat{\mu}_{1}(t,\tau)\leq\cdots\] The essential part of the paper is devoted to a study of properties of these eigenvalues. The eigenvalue \(\widehat{\mu}_{0}(t,\tau)\) is always less than \(0\) for all \(t\) and \(\tau\in\mathbb{R}\). If the first Stokes bifurcation occurs at a certain \(t=t_{0}\), then it is proved that the kernel of the Frechet derivative at this point is one-dimensional. Another important property concerning the Frechet derivative is the inequalities \[\widehat{\mu}_{1}(t,\tau)<0\ \ \text{for}\ t\in(t_{0},t_{0}+\epsilon)\ \text{and}\ \tau\in\mathbb{R},\] where \(\epsilon\) is a small positive number, and \[\widehat{\mu}_{2}(t_{0},\tau)>0\ \ \text{for}\ \tau\in(0,\tau_{*}),\ \ \tau_{*}=2\pi/\Lambda.\] These inequalities together with asymptotic properties of \(\widehat{\mu}_{2}\) in a neighborhood of the point \((t,\tau)=(t_{0},0)\) allows to perform analysis of subharmonic bifurcations near the point \(t=t_{0}\). 
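For orientation (an editor-added remark), the link between the subharmonic period and the quasimomentum can be made explicit, and a toy expansion shows why roots of \(\widehat{\mu}_{2}(t,\tau)=0\) at the admissible quasimomenta accumulate at \(t_{0}\); the quadratic model below is purely an illustrative assumption, not the asymptotics established in the paper.

```latex
% M\Lambda-periodic perturbations decompose into Bloch modes with quasimomenta
\[
  \tau_j=\frac{2\pi j}{M\Lambda}=\frac{j\,\tau_*}{M},\qquad j=0,1,\dots,M-1,
\]
% so an $M$-subharmonic bifurcation at $t_M$ corresponds to a root
% $\widehat{\mu}_2(t_M,j\tau_*/M)=0$ with $0<j<M$.  Toy model (assumption only):
\[
  \widehat{\mu}_2(t,\tau)\approx -a\,(t-t_0)+b\,\tau^{2},\qquad a,b>0,
\]
% gives, for the smallest admissible quasimomentum $\tau=\tau_*/M$,
\[
  t_M-t_0\approx \frac{b}{a}\Bigl(\frac{\tau_*}{M}\Bigr)^{2}\;\longrightarrow\;0
  \qquad (M\to\infty),
\]
% which reproduces the convergence $t_M\to t_0$ stated in Theorem 1.2.
```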
\(M\)-subharmonic bifurcation points \(t_{M}\) are bigger then \(t_{0}\) and \(t_{M}\to t_{0}\) as \(M\to\infty\). Moreover, we show that the crossing number at \(t_{M}\) is odd and hence there is a closed connected continuum of \(M\)-subharmonic bifurcations started from the bifurcation point. As is known one of the indication of bifurcation is the change of the Morse index, or changing of the number of negative eigenvalues of the Frechet derivatives, which depend on the parameter \(t\). In the papers [16] and [9] it was proved that for the branch of Stokes waves approaching the extreme Stokes wave unlimited number of negative eigenvalues appears, which implies unlimited number of bifurcations, see [9]. The problem here is that all these bifurcations can be Stokes bifurcations and the above fact on negative eigenvalues does not directly imply existence of subharmonic bifurcations. More advance analysis is required. Our aim is to study the first Stokes bifurcation and show that there are always accompanying \(M\)-subharmonic bifurcations with arbitrary large \(M\). They generates by \(\tau\)-roots of the equation \(\widehat{\mu}_{2}(t,\tau)=0\). The main body of the paper is devoted to the study of this equation. This analysis is based on spectral analysis of different spectral problems and asymptotic analysis of eigenvalues for small \(\tau\). ### Formulation of the problem We consider steady surface waves in a two-dimensional channel bounded below by a flat, rigid bottom and above by a free surface that does not touch the bottom. The surface tension is neglected and the water motion can be rotational. In appropriate Cartesian coordinates \((X,Y)\), the bottom coincides with the \(x\)-axis and gravity acts in the negative \(y\) -direction. We choose the frame of reference so that the velocity field is time-independent as well as the free-surface profile which is supposed to be the graph of \(Y=\xi(X)\), \(X\in\mathbb{R}\), where \(\xi\) is a positive and continuous unknown function. Thus \[\mathcal{D}_{\xi}=\{X\in\mathbb{R},0<Y<\xi(X)\},\ \ \mathcal{S}_{\xi}=\{X\in \mathbb{R},\ Y=\xi(X)\}\] is the water domain and the free surface respectively. We will use the stream function \(\Psi\), which is connected with the velocity vector \((\mathbf{u},\mathbf{v})\) by \(\mathbf{u}=-\Psi_{Y}\) and \(\mathbf{v}=\Psi_{X}\). We assume that \(\xi\) is a positive, periodic function having period \(\Lambda>0\) and that \(\xi\) is even and strongly decreasing on the interval \((0,\Lambda/2)\). Since the surface tension is neglected, \(\Psi\) and \(\xi\) satisfy, after a certain scaling, the following free-boundary problem (see for example [11]): \[\Delta\Psi+\omega(\Psi)=0\ \ \text{in}\ \mathcal{D}_{\xi},\] \[\frac{1}{2}|\nabla\Psi|^{2}+\xi=R\ \ \text{on}\ \mathcal{S}_{\xi},\] \[\Psi=1\ \ \text{on}\ \mathcal{S}_{\xi},\] \[\Psi=0\ \ \text{for}\ Y=0, \tag{1.2}\] where \(\omega\in C^{1,\alpha}\), \(\alpha\in(0,1)\), is a vorticity function and \(R\) is the Bernoulli constant. We assume that \(\Psi\) is even, \(\Lambda\)-periodic in \(x\) and \[\Psi_{Y}>0\ \ \text{on}\ \overline{\mathcal{D}_{\xi}}, \tag{1.3}\] which means that the flow is unidirectional. 
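As a concrete instance of (1.2)-(1.3) (an editor-added example), the uniform stream solution from which the branch starts can be written down explicitly in the irrotational case \(\omega\equiv 0\):

```latex
% Irrotational uniform stream of depth d (editor-added illustration):
\[
  \xi(X)\equiv d,\qquad \Psi(X,Y)=\frac{Y}{d},
\]
% which satisfies $\Delta\Psi=0$, $\Psi=0$ on $Y=0$, $\Psi=1$ on $Y=d$, and
% $\Psi_Y=1/d>0$, so (1.3) holds; the Bernoulli condition on the free surface
% fixes the constant
\[
  R=\frac{1}{2d^{2}}+d .
\]
% For a general vorticity $\omega$, the uniform stream is obtained instead from
% the one-dimensional problem $\Psi''(Y)+\omega(\Psi)=0$, $\Psi(0)=0$, $\Psi(d)=1$,
% with the same Bernoulli relation $\tfrac12\Psi'(d)^2+d=R$.
```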
The Frechet derivative for the problem is evaluated for example in [16], [9], and the corresponding eigenvalue problem for the Frechet derivative has the form \[\Delta w+\omega^{\prime}(\Psi)w+\mu w=0\ \ \text{in}\ \mathcal{D}_{\xi},\] \[\partial_{\nu}w-\rho w=0\ \ \text{on}\ \mathcal{S}_{\xi},\] \[w=0\ \ \text{for}\ Y=0, \tag{1.4}\] where \(\nu\) is the unite outward normal to \(\mathcal{S}_{\xi}\) and \[\rho=\rho(X)=\frac{(1+\Psi_{X}\Psi_{XY}+\Psi_{Y}\Psi_{YY})}{\Psi_{Y}(\Psi_{X}^ {2}+\Psi_{Y}^{2})^{1/2}}\Big{|}_{Y=\xi(X)}. \tag{1.5}\] The function \(w\) in (1.4) is supposed also to be even and \(\Lambda\)-periodic. Let us introduce several function spaces. Let \(\alpha\in(0,1)\) and \(k=0,1,\ldots\). The space \(C^{k,\alpha}(\mathcal{D})\) consists of bounded functions in \(\mathcal{D}\) such that the norms \(C^{k,\alpha}(\overline{\mathcal{D}_{a,a+1}})\) are uniformly bounded with respect to \(a\in\mathbb{R}\). Here \[\mathcal{D}_{a,a+1}=\{(X,Y)\in\mathcal{D},\,:\,a<x<a+1\}.\] The space \(C^{k,\alpha}_{0,\Lambda}(\mathcal{D})\)\(\big{(}C^{k,\alpha}_{0,\lambda,e}(\mathcal{D})\big{)}\) consists of \(\Lambda\)-periodic (\(\Lambda\)-periodic and even) functions, which belong to \(C^{k,\alpha}(\mathcal{D})\) and vanish at \(y=0\). Similarly we define the space \(C^{k,\alpha}_{\Lambda}(\mathbb{R})\)\((C^{k,\alpha}_{\Lambda,e}(\mathbb{R}))\) consisting of functions in \(C^{k,\alpha}(\mathbb{R})\), which are \(\Lambda\)-periodic (\(\Lambda\)-periodic and even). We will consider a branch of Stokes water waves depending on a parameter \(t\in\mathbb{R}\), i.e. \[\xi=\xi(X,t),\ \ \Psi=\Psi(X,Y;t),\ \ \Lambda=\Lambda(t).\] For each \(t\) the function \(\xi\in C^{2,\alpha}(\mathbb{R})\) and \(\Psi\in C^{3,\alpha}(\mathcal{D})\). This branch starts from a uniform stream solution for \(t=0\) and approach an extreme wave when \(t\to\infty\). The dependence on \(t\) is analytic in the following sense: the functions \[\xi(X\Lambda(0)/\Lambda(t),t)\,:\,\mathbb{R}\to C^{2,\alpha}_{\Lambda(0),e}( \mathbb{R})\,\,\,\text{and}\,\,\,\Lambda\,:\,\mathbb{R}\to(0,\infty)\] are analytic with respect to \(t\) and the function \(\Psi\) can be found from the problem \[\Delta\Psi+\omega(\Psi)=0\,\,\,\text{in}\,\,\mathcal{D}_{\xi},\] \[\Psi=1\,\,\,\text{on}\,\,\mathcal{S}_{\xi},\] \[\Psi=0\,\,\,\text{for}\,\,Y=0.\] Another equivalent description of the analytical property of functions \(\xi\) and \(\Psi\) is presented in Sect. 5.2. The main assumption is given in terms of the second eigenvalue of the spectral problem (1.4). One can show that the first eigenvalue \(\mu_{0}(t)\) is always negative and the second one \(\mu_{1}(t)\) is zero for \(t=0\). Our main assumption is that \[\mu_{1}(t)>0\,\,\,\text{for small positive}\,\,t. \tag{1.6}\] This property will be discussed in detail in forthcoming paper. ## 2. Spectral problem (1.4), \(\Lambda\)-periodic, even functions Let \[D=D_{\xi}=\{(X,Y)\,:\,0<X<\Lambda/2,\,\,0<X<\xi(X)\}\] and \[S=S_{\xi}=\{(X,Y)\,:\,0<X<\Lambda/2,\,\,Y=\xi(X)\}.\] We introduce the form \[a(u,v)\!\!=\!\!a_{D}(u,v)\!\!=\!\!\int_{D}\Big{(}\nabla u\cdot\nabla\overline{ v}-\omega^{\prime}(\Psi)u\overline{v}\Big{)}dXdY-\int_{0}^{\Lambda/2}\rho(X)u \overline{v}dS,\,\,\,dS=\sqrt{1+\xi^{\prime 2}}dX, \tag{2.1}\] which is defined on functions from \(H^{1}(D)\) vanishing for \(Y=0\). This space will be denoted by \(H^{1}_{0}(D)\). The assumption (1.3) implies \[a_{D}(u,u)>0\,\,\,\text{for nonzero}\,\,u\in H^{1}_{0}(D)\,\,\text{satisfying}\,\,u(X,\xi(X))=0\,\,\text{for}\,\,X\in(0, \Lambda/2). 
\tag{2.2}\] Since \(\partial_{X}w(0,Y)=\partial_{X}w(\Lambda/2,Y)=0\) in (1.4), the spectral problem (1.4) admits the following variational formulation. A real number \(\mu\) and a function \(\phi\in H^{1}_{0}(D)\) satisfy (1.4), after natural extension on \(\mathcal{D}_{\xi}\), if and only if \[a(\phi,v)=\mu(\phi,v)_{L^{2}(D)},\,\,\,\text{for all}\,\,v\in H^{1}_{0}(D). \tag{2.3}\] Here \[(u,v)_{L^{2}(D)}=\int_{D}u\overline{v}dXdY.\] We put also \(||u||^{2}_{L^{2}(D)}=(u,u)_{L^{2}(D)}\). We numerate the eigenvalues accounting their multiplicity as \[\mu_{0}\leq\mu_{1}\leq\cdots,\,\,\,\mu_{j}\to\infty\,\,\,\text{as}\,\,\,j\to\infty,\] and denote by \(\Phi_{j}\), \(j=0,1,\ldots\), corresponding eigenfunctions. They can be chosen to be orthogonal to each other, real-valued and with the norm \[||\Phi_{j}||_{L^{2}(D)}=1.\] Thus \(\{\Phi_{j}\}\) forms an orthogonal basis in \(L^{2}(D)\). In what follows we shall use the function \[u_{*}(X,Y):=\Psi_{X}(X,Y), \tag{2.4}\] which is odd, \(\Lambda\)-periodic and belongs to \(C^{2,\alpha}(\overline{D})\). It satisfies the boundary value problem \[\Delta u_{*}+\omega^{\prime}(\Psi)u_{*}=0\,\,\,\mbox{in}\,\,D_{\xi},\] \[\partial_{\nu}u_{*}-\rho u_{*}=0\,\,\,\mbox{on}\,\,S_{\xi},\] \[u_{*}=0\,\,\,\mbox{for}\,\,X=0,\] \[u_{*}(0,Y)=0\,\,\,\mbox{for}\,\,Y\in(0,\xi(0)),\,\,\,u_{*}( \Lambda/2,Y)=0\,\,\,\mbox{for}\,\,Y\in(0,\xi(\Lambda/2)), \tag{2.5}\] and it has the following properties along branches of Stokes waves (see [10]) \[\mbox{(i)}\quad u_{*}>0\,\,\,\mbox{for}\,\,(X,Y)\in D\cup S;\] \[\mbox{(ii)}\quad(u_{*})_{X}(0,Y)>0\,\,\,\mbox{for}\,\,0<Y\leq\xi( 0),\,\,\,\,(u_{*})_{X}(\Lambda/2,Y)<0\,\,\mbox{for}\,\,0<Y\leq\xi(\Lambda/2);\] \[\mbox{and}\quad(u_{*})_{Y}(X,0)>0\,\,\,\mbox{for}\,\,0<X< \Lambda/2;\] \[\mbox{(iii)}\quad(u_{*})_{XY}(0,0)>0,\,\,\,(u_{*})_{XY}(\Lambda/ 2,0)<0. \tag{2.6}\] The above properties of the function \(\Psi_{x}\)-the vertical component of the velocity vector, are found interesting application in study of branches of Stokes waves, see [5], [7], [27], [26] and [10]. It appears that they are also important in study of subharmonic bifurcations. They are used, in particular, in the proof of the following assertion. **Proposition 2.1**.: _Let \(\xi\) be not identically constant then the eigenvalue \(\mu_{0}\) is simple 1. Moreover \(\mu_{0}<0\) and the corresponding eigenfunction (\(\Phi_{0}\)) does not change sign inside \(D\). If we assume that \(\Phi_{0}\) is positive inside \(D\) then \(\Phi_{0}\) is also positive for \(Y=\xi(X)\) and for \(X=0\), \(Y\in(0,\xi(0))\) and \(X=\Lambda/2\), \(Y\in(0,\xi(0))\), \(Y\in(0,\xi(\Lambda/2))\)._ Footnote 1: This implies in particular that \(\mu_{0}<\mu_{1}\). Proof.: Let \(v_{*}\) be the restriction of the function (2.4) to the domain \(D\). Clearly it is positive inside \(D\), belongs to \(H^{1}_{0}(D)\) and \(a(v_{*},v_{*})=0\). This implies that either \(\mu_{0}<0\) or \(\mu_{0}=0\) and the function \(v_{*}\) is an eigenfunction corresponding to \(\mu_{0}\). The latest alternative is impossible. Indeed, if \(v_{*}\) is an eigenfunction then \((v_{*})_{X}=0\) for \(X=0\) and for \(X=\Lambda/2\). So \(v_{*}\) has homogeneous Cauchy data and satisfies an elliptic equation. Hence \(v_{*}\) is identically zero inside \(D\). Thus \(\mu_{0}<0\). Since \[\mu_{0}=\min_{w\in H^{1}_{0}(D),||w||_{0}=1}a(w,w) \tag{2.7}\] the corresponding eigenfunction cannot change sign inside \(D\). Indeed, if \(\phi_{0}\) changes sign then we introduce two functions \(v_{\pm}(X,Y)=\max(0,\pm v(X,Y))\). 
One can verify that \(a(v_{\pm},v_{\pm})=\mu_{0}||v_{\pm}||_{L^{2}(D)}^{2}\). Therefore both functions \(v_{\pm}\) deliver minimum in (2.7) and hence they are also eigenfunctions corresponding to \(\mu_{0}\) but this is impossible because they are not smooth. So \(\Phi_{0}\) is positive or negative inside \(D\). Assume that \(\Phi_{0}>0\) in \(D\). Then it cannot vanish on the part of the boundary where \(Y>0\). Indeed if \(\Phi\) is zero there then the normal derivative is also zero and this leads to a change of sign inside \(D\). The following Green's formula for the form \(a\) will be useful in the proof of the nest proposition: \[-\int_{D}(\Delta u+\omega^{\prime}(\Psi)u)\overline{v}dXdY+\int_ {0}^{\Lambda/2}(\partial_{\nu}u-\rho(X)u)\overline{v}dS\] \[=a(u,v)+\int_{0}^{\xi(0)}u_{X}\overline{v}|_{X=0}dY-\int_{0}^{ \xi(\Lambda/2)}u_{X}\overline{v}|_{X=\Lambda/2}dY. \tag{2.8}\] Here the function \(u\) belongs to the space \(H^{2}_{0}(D)\) consisting of functions from \(H^{2}(D)\) vanishing for \(Y=0\) and \(v\in H^{1}_{0}(D)\). **Proposition 2.2**.: _Let \(\xi\) be not identically constant. If \(\mu_{1}=0\) then \(\mu_{1}=0\) is simple. Moreover the corresponding eigenfunction has exactly two nodal sets and the nodal line separating these nodal sets has one of its end points on the curve \(S\)._ Proof.: Denote by \(\mathcal{X}\) the space of real eigenfunctions corresponding to the eigenvalue \(\mu_{1}=0\). Since all eigenfunctions from \(\mathcal{X}\) orthogonal to \(\Phi_{0}\), each eigenfunction must change sign inside \(D\). Let us show that \(\dim\mathcal{X}=1\). The proof consists of several steps. (a) First we show that every nonzero eigenfunction \(w\in\mathcal{X}\) has exactly two nodal sets. Assume that there is an eigenfunction \(w\in\mathcal{X}\) which has more then two nodal sets, say \(Y_{j}\), \(j=1,\ldots,N\), \(N>2\). Introduce the functions \(u_{j}(X,Y)=w(X,Y)\) for \((X,Y)\in Y_{j}\) and zero otherwise. Then \(u_{j}\in H^{1}_{0}(D)\) and \[a(u_{j},u_{k})=0\;\;\text{for all}\;k,j=1,\ldots,N. \tag{2.9}\] We choose the constant \(\alpha\) such that \[(u_{1}-\alpha u_{2},\Phi_{0})_{L^{2}(D)}=0\] and observe that \(a(u_{1}-\alpha u_{2},u_{1}-\alpha u_{2})=0\). Since \[\mu_{1}=\min a(w,w)\] where \(\min\) is taken over \(w\in H^{1}_{0}(D)\) satisfying \((w,\Phi_{0})_{L^{2}(D)}=0\) and \(||w||_{L^{2}(D)}=1\), the minimum is attained on the function \((u_{1}-\alpha u_{2})/||u_{1}-\alpha u_{2}||_{L^{2}(D)}\). So we conclude that \(u_{1}-\alpha u_{2}\) is an eigenfunction corresponding to the eigenvalue \(\mu_{1}=0\), which is impossible since this function is zero on \(Y_{j}\) for \(j>2\). (b) Second, let us prove that the nodal line is not closed and one of its end points lies oh \(S\). Consider an eigenfunction \(w\) with two nodal sets and let \(\gamma\) be the nodal line separating these two nodal sets. If this nodal line is closed then introduce the function \(w_{1}\) which coincides with \(w\) inside the closed nodal line and vanishes outside. Then \(a(w_{1},w_{1})=0\) and \(w_{1}=0\) on \(S\) which contradicts to the inequality (2.2). If both end points of \(\gamma\) lie outside \(S\) then introduce \(w_{2}\) which coincides with \(w\) on the nodal set separated from \(S\) and vanishes otherwise. Then \(a(w_{2},w_{2})=0\) and \(w_{2}=0\) on \(S\) which again contradicts to the inequality (2.2). Thus one of end points lies on \(S\). (c) Now we are in position to prove that \(\dim\mathcal{X}=1\). 
If \(\dim\mathcal{X}>1\) then there is an eigenfunction, say \(w_{*}\) which is zero at the point \(z_{1}=(0,\eta(0))\), which must be one of the end points of the nodal line separating two nodal sets. By b) another end-point \(z_{2}\) of the nodal line lies on \(S\). Denote by \(Y_{1}\) the nodal set attached to the part of \(S\) between \(z_{1}\) and \(z_{2}\). Let also \(Y_{2}\) be the remaining nodal domain. We can assume that \(w_{*}<0\) in \(Y_{1}\) and \(w_{*}>0\) in \(Y_{2}\). Let \(v_{*}\) be the function introduced in Proposition 2.1. Using that both functions \(w_{*}\) and \(u_{*}\) satisfy the problem (1.4) but the first one satisfies \(\partial_{X}w_{*}=0\) for \(X=0,\Lambda/2\) and the second is subject to \(v_{*}=0\) for the same values of \(x\), we have, by using (2.8), \[\int_{0}^{\xi(0)}\partial_{X}v_{*}(0,Y)w_{*}(0,Y)dY=\int_{0}^{\xi(\Lambda/2)} \partial_{X}v_{*}(\Lambda/2,Y)w_{*}(\Lambda/2,Y)dY. \tag{2.10}\] Consider the function \[U=w_{*}+\beta v_{*},\;\;\text{where}\;\beta>0.\] Since \(v_{*}\) is a positive function inside \(D\), for small \(\beta\) the function \(U\) has also two nodal sets \(\widetilde{Y}_{1}\) and \(\widetilde{Y}_{2}\), separated by a nodal line \(\widetilde{\gamma}\) with end points \(z_{1}=(0,\eta(0))\) and \(\widetilde{z}_{2}\in S\) which is close to \(z_{2}\). We assume that the nodal set \(\widetilde{Y}_{1}\) is attached to the part of \(S\) between the points \(z_{1}\) and \(\widetilde{z}_{2}\). Introduce the functions \[U_{1}(X,Y)=U(X,Y)\;\;\text{if}\;(X,Y)\in\widetilde{Y}_{1}\;\text {and zero otherwise},\] \[U_{2}(X,Y)=U(X,Y)\;\;\text{if}\;(X,Y)\in\widetilde{Y}_{2}\;\text {and zero otherwise}.\] We choose the constant \(\theta\) such that \[(U_{1}+\theta U_{2},\Phi_{0})_{L^{2}(D)}=0. \tag{2.11}\] It is clear that \(\theta\neq 0\) because of \[a(U_{1},\Phi_{0})=\mu_{0}(U_{1},\Phi_{0})_{L^{2}(D)}\neq 0.\] Here we have used that \(\Phi_{0}>0\) inside \(D\) by Proposition 2.1. Furthermore due to (2.10) and the fact that \(w_{*}\) is an eigenfunction corresponding to \(\mu_{1}=0\), we have \[a(U_{1}+\theta U_{2},U_{1}+\theta U_{2})=0.\] This together with (2.11) implies that \(U_{1}+\theta U_{2}\) is an eigenfunction corresponding to the eigenvalues \(\mu_{1}=0\) of the spectral problem (2.3). But this contradicts to the smoothness properties of the function \(U_{1}+\theta U_{2}\). (Compare with the argument used in a) and b)). ### Auxiliary spectral problems In this section we introduce and study eigenvalues of several eigenvalue problems, which will be used in subsequent sections to estimate eigenvalues of "generalized" eigenvalue problem introduced in the next section. Zeros of generalized eigenvalues will give us the subharmonic bifurcations in what follows. Let us introduce the spaces \[H^{1}_{00*}(D)=\{u\in H^{1}_{0}(D)\,:\,\,u=0\;\text{for}\;X=0\},\] \[H^{1}_{0*0}(D)=\{u\in H^{1}_{0}(D)\,:\,\,u=0\;\text{for}\;X=\Lambda/2\}\] and \[H^{1}_{000}(D)=\{u\in H^{1}_{0}(D)\,:\,\,u=0\;\text{for}\;X=0\;\text{and}\;X= \Lambda/2\}.\] The following spectral problems will play important role in what follows. Find \(\nu^{0*}\) and \(u\in H^{1}_{00*}(D)\) satisfying \[a(u,v)=\nu^{0*}(u,v)_{L^{2}(D)}\;\,\text{for all}\;v\in H^{1}_{00*}(D). \tag{2.12}\] Find \(\nu^{*0}\) and \(u\in H^{1}_{0*0}(D)\) satisfying \[a(u,v)=\nu^{*0}(u,v)_{L^{2}(D)}\;\,\text{for all}\;v\in H^{1}_{0*0}(D). \tag{2.13}\] Find \(\nu^{00}\) and \(u\in H^{1}_{000}(D)\) satisfying \[a(u,v)=\nu^{00}(u,v)_{L^{2}(D)}\;\,\text{for all}\;v\in H^{1}_{000}(D). 
\tag{2.14}\] We denote the eigenvalues of these problems by \[\nu^{0*}_{0}\leq\nu^{0*}_{1}\leq\cdots,\] \[\nu^{*0}_{0}\leq\nu^{*0}_{1}\leq\cdots\] and \[\nu^{00}_{0}\leq\nu^{00}_{1}\leq\cdots,\] where the multiplicity is taken into account. We note that the spectral problem considered in Sect. 2 represents the fourth problem in this list of spectral problems, where no restrictions for \(X=0\) and \(X=\Lambda/2\) are given. **Proposition 2.3**.: _The following properties are valid:_ \[\nu^{0*}_{0}<0,\;\;\nu^{*0}_{0}<0,\;\;\nu^{00}_{0}=0 \tag{2.15}\] _and_ \[\nu^{0*}_{1}>\mu_{1},\;\;\nu^{*0}_{1}>\mu_{1},\;\;\nu^{00}_{1}>0. \tag{2.16}\] Proof.: The function \(v_{*}\) from Proposition 2.1 satisfies the problem (2.14) with \(\nu^{00}=0\) and it is positive inside \(D\). Therefore \(\nu^{00}\) is the lowest eigenvalue of (2.14) and we arrive at the last relation in (2.15). Since \[H^{1}_{000}(D)\subset H^{1}_{00*}(D)\;\,\,\text{and}\;\,H^{1}_{000}(D)\subset H ^{1}_{0*0}(D), \tag{2.17}\] we obtain the first two inequalities in (2.15), compare with the proof of Proposition 2.1. Inclusions \[H^{1}_{00*}(D)\subset H^{1}_{0}(D)\;\,\,\text{and}\;\,H^{1}_{0*0}(D)\subset H ^{1}_{0}(D) \tag{2.18}\] together with (2.17) implies the inequalities (2.16). ## 3. Generalized eigenvalue problem To study subharmonic and more general bifurcations a more general eigenvalue problem, which includes quasi momentum as a parameter, is needed. This section is devoted to this spectral problem. As it is known (see, for example [19], [21]) all bounded solutions to the problem \[\Delta w+\omega^{\prime}(\Psi)w+\mu w=0\;\;\mbox{in}\;{\mathcal{D} }_{\xi},\] \[\partial_{\nu}w-\rho w=0\;\;\mbox{on}\;{\mathcal{S}}_{\xi},\] \[w=0\;\;\mbox{for}\;y=0, \tag{3.1}\] with real \(\mu\), are described by \[w(X,Y)=\sum_{j}a_{j}e^{i\tau_{j}X}w_{j}(X,Y),\] where \(\tau_{j}\in[0,\tau_{*})\) and \(w_{j}\) is a \(\Lambda\)-periodic solution to the problem \[(\partial_{X}+i\tau)^{2}w+\partial_{Y}^{2}w+\omega^{\prime}(\Psi) w+\mu w=0\;\;\mbox{in}\;{\mathcal{D}}_{\xi},\] \[N(\tau)w-\rho w=0\;\;\mbox{on}\;{\mathcal{S}}_{\xi},\] \[w=0\;\;\mbox{for}\;Y=0, \tag{3.2}\] where \[N(\tau)w=e^{-i\tau X}\partial_{\nu}(e^{i\tau X}w).\] The number \(\tau\) is called quasi momentum. It is not assumed in this consideration that \(w\) is even. By [23] the number of such \(\tau_{j}\) is finite for a fixed \(\mu\). In this section we prove important estimates for eigenvalues of the generalized eigenvalue problem. ### Variational formulation The problem (3.2) is a spectral problem for a self-adjoint operator for every real \(\tau\). To give its variational formulation we introduce \[\Omega=\{(X,Y)\,:\,-\Lambda/2<X<\Lambda/2,\;0<Y<\xi(X)\}\] and denote by \(H^{1}_{0,p}(\Omega)\) the subspace of all function \(u\) in \(H^{1}(\Omega)\), which satisfies \(u(-\Lambda/2,Y)=u(\Lambda/2,Y)\) for \(Y\in(0,\xi(\Lambda/2)\) and \(u(X,0)=0\) for \(X\in(-\Lambda/2,\Lambda/2)\). 
We put \[{\bf a}(u,v;\tau)=\int_{\Omega}\Big{(}(\partial_{X}+i\tau)u\overline{( \partial_{X}+i\tau)v}+\partial_{Y}u\partial_{Y}\overline{v}-\omega^{\prime}( \Psi)u\overline{v}\Big{)}dXdY-\int_{-\Lambda/2}^{\Lambda/2}\rho(X)u\overline{ v}dS, \tag{3.3}\] which can also be written as \[{\bf a}(u,v;\tau)={\bf a}(u,v)+i\tau{\bf b}(u,v)+\tau^{2}{\bf c}(u,v),\] where \[{\bf a}(u,v)=a_{\Omega}(u,v),\;\;{\bf b}(u,v)=\int_{\Omega}(u\partial_{X} \overline{v}-\partial_{X}u\overline{v})dXdY,\;\;{\bf c}(u,v)=(u,v)_{L^{2}( \Omega)}.\] Consider the spectral problem: find \(w\in H^{1}_{0,p}(\Omega)\) and \(\widehat{\mu}(\tau)\in{\mathbb{R}}\) satisfying \[{\bf a}(w,V;\tau)=\widehat{\mu}(\tau)(w,V)_{L^{2}(\Omega)}\;\;\mbox{for all}\;V\in H^{1}_{0,p}(\Omega). \tag{3.4}\] This is a variational formulation of the spectral problem (3.2). Note that in this formulation we do not suppose that the functions \(w\) and \(V\) are even or odd; both of them are only periodic. We will consider the form \({\bf a}\) only for real \(\tau\). In this case the form is symmetric and the spectrum consists of isolated real eigenvalues of finite multiplicity. Denote by \(\widehat{\mu}_{j}(\tau)\), \(\tau\in{\mathbb{R}}\), the eigenvalues of the problem (3.2) (equivalently (3.4)) numerated according to the increasing order \[\widehat{\mu}_{0}(\tau)\leq\widehat{\mu}_{1}(\tau)\leq\cdots\] Clearly \[\widehat{\mu}_{0}(0)=\mu_{0},\;\;\widehat{\mu}_{1}(0)=\min(\mu_{1},0),\;\; \widehat{\mu}_{2}(0)=\min(\max(\mu_{1},0),\mu_{2},\nu_{1}^{00})\] and \[\widehat{\mu}_{3}(0)>0\;\;\mbox{if}\;\mu_{2}>0. \tag{3.5}\] Here we use that the first and second eigenvalues of the problem (1.4), considered for \(\Lambda\)-periodic odd functions, coincide with \(\nu_{0}^{00}\) and \(\nu_{1}^{00}\) respectively. There is another variational formulation. We put \(w=e^{-i\tau X}\phi\) in (3.4) and \(V=e^{-i\tau X}v\), \(v\in H^{1}_{0}(\Omega,\tau)\), where \[H^{1}_{0}(\Omega,\tau)=\{v\in H^{1}_{0}(\Omega)\,:\,v(-\Lambda/2,Y)=e^{-i\tau \Lambda}v(\Lambda/2,Y)\}.\] Then the spectral problem (3.4) is equivalent to: find \(\widehat{\mu}(\tau)\) and \(\phi\in H^{1}_{0}(\Omega,\tau)\) satisfying \[a(\phi,v)=\widehat{\mu}(\tau)(\phi,v)_{L^{2}(\Omega)}\;\;\mbox{for all}\;v\in H ^{1}_{0}(\Omega,\tau). \tag{3.6}\] ### Estimates of the eigenvalues \(\widehat{\mu}_{j}\) We will need some estimates for the functions \(\widehat{\mu}_{j}(\tau)\). For this purpose we introduce two auxiliary spectral problems. Let \(H^{1}_{00}(\Omega)\) consist of functions from \(H^{1}(\Omega)\) which vanish for \(Y=0\) and \(X=\pm\Lambda/2\). The first spectral problem is to find \(\mu_{D}\in\mathbb{R}\) and \(\phi\in H^{1}_{00}(\Omega)\) such that \[a(\phi,v)=\mu_{D}(\phi,v)_{L^{2}(\Omega)}\;\;\mbox{for all}\;v\in H^{1}_{00}( \Omega). \tag{3.7}\] The second one is to find \(\mu_{N}\in\mathbb{R}\) and \(\phi\in H^{1}_{0}(\Omega)\) such that \[a(\phi,v)=\mu_{N}(\phi,v)_{L^{2}(\Omega)}\;\;\mbox{for all}\;v\in H^{1}_{0}( \Omega). \tag{3.8}\] We denote by \(\{\mu_{Dj}\}\) and \(\{\mu_{Nj}\}\), \(j=0,1,\ldots\), the eigenvalues of the problems (3.7) and (3.8) respectively, numerated according to \[\mu_{D0}\leq\mu_{D1}\leq\cdots\,,\;\;\mu_{N0}\leq\mu_{N1}\leq\cdots\,,\] where the multiplicity is taken into account. Since \[H^{1}_{00}(\Omega)\subset H^{1}_{0}(\Omega,\tau)\subset H^{1}_{0}(\Omega), \tag{3.9}\] we conclude \[\mu_{Nj}\leq\widehat{\mu}_{j}(\tau)\leq\mu_{Dj},\;\;j=0,1,\ldots \tag{3.10}\] Note that both eigenvalues \(\mu_{Dj}\) and \(\mu_{Nj}\) do not depend on \(\tau\).
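For the reader's convenience we recall the standard argument behind (3.10). For each of the spaces \(V=H^{1}_{00}(\Omega)\), \(H^{1}_{0}(\Omega,\tau)\) and \(H^{1}_{0}(\Omega)\) the \(j\)-th eigenvalue of the form \(a\) admits the Courant-Fischer characterization \[\min_{L\subset V,\;\dim L=j+1}\;\max_{\Phi\in L,\;\Phi\neq 0}\;\frac{a(\Phi,\Phi)}{||\Phi||^{2}_{L^{2}(\Omega)}},\] and this quantity does not increase when the space \(V\) is enlarged; together with the inclusions (3.9) this gives (3.10).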
In the following lemma we show that all inequalities (3.10) are strong. **Lemma 3.1**.: _Let \(\xi\) be not identically constant. Then the following inequalities hold:_ \[\mu_{Nj}<\widehat{\mu}_{j}(\tau)<\mu_{Dj},\;\;j=0,1,\ldots \tag{3.11}\] _for \(\tau\in(0,\tau_{*})\)._ Proof.: First, we assume that \(n\) satisfies \[\mu_{Dn}>\mu_{D(n-1)} \tag{3.12}\] and prove the inequalities on the right-hand side of (3.11). Let \(\Phi_{j}\in H^{1}_{00}(\Omega)\) be the eigenfunction of (3.7) corresponding to \(\mu_{Dj}\), \(j=0,\ldots\), and let \(X_{n}\) be the space of linear combinations of \(\{\Phi_{j}\}_{j=0}^{n-1}\) and \[\mathcal{X}_{n}=\{\Phi\in H^{1}_{00}(\Omega)\,:\,(\Phi,\Phi_{j})_{L^{2}(\Omega )}=0,\,j=0,\ldots,n-1\}.\] Then \[\mu_{Dn}=\min_{\Phi\in\mathcal{X}_{n}}\frac{a(\Phi,\Phi)}{||\Phi||^{2}_{L^{2}(\Omega)}}.\] We assume here and in what follows that \(\Phi\neq 0\) in similar relations. Introduce the subspace \[\mathcal{Y}_{n}=\mathcal{Y}_{n}(\tau)=\{\Phi\in H^{1}_{0}(\Omega,\tau)\,:\,a( \Phi,\Phi_{j})-\mu(\Phi,\Phi_{j})=0,\,j=0,\ldots,n-1\},\] where we use the short notation \(\mu=\mu_{Dn}\). The relation \(\mathcal{X}_{n}\subset\mathcal{Y}_{n}\) follows from \[a(\Phi,\Phi_{j})-\mu(\Phi,\Phi_{j})_{L^{2}(\Omega)}=(\mu_{Dj}-\mu)(\Phi,\Phi_{ j})_{L^{2}(\Omega)}=0\;\;\mbox{for}\;\Phi\in\mathcal{X}_{n},\,j=0,\ldots,n-1.\] To show that the codimension of \(\mathcal{Y}_{n}\) is \(n\), we assume that there exists \[\Phi_{*}=\sum_{j=0}^{n-1}c_{j}\Phi_{j}\] such that \[a(\Phi,\Phi_{*})-\mu(\Phi,\Phi_{*})_{L^{2}(\Omega)}=0\,\,\,\mbox{for all}\,\,\Phi \in H^{1}_{0}(\Omega,\tau).\] Due to (3.12) \[a(\Phi_{*},\Phi_{*})-\mu||\Phi_{*}||^{2}_{L^{2}(\Omega)}<0\] if one of the coefficients \(c_{j}\) is non-zero, which contradicts the previous relation with \(\Phi=\Phi_{*}\); hence the codimension of \(\mathcal{Y}_{n}\) is \(n\). Let us also prove that every \(u\in H^{1}_{0}(\Omega,\tau)\) admits the representation \(u=\Phi+\phi\), where \(\Phi\in\mathcal{Y}_{n}\) and \(\phi\in X_{n}\). Indeed, we choose \(c_{j}\) and \(\phi\) according to \[c_{j}(\mu_{Dj}-\mu)||\Phi_{j}||^{2}_{L^{2}(\Omega)}=a(u,\Phi_{j})-\mu(u,\Phi_{j })_{L^{2}(\Omega)},\,\,\,\phi=\sum_{j=0}^{n-1}c_{j}\Phi_{j}.\] Then \[a(\Phi,\Phi_{j})-\mu(\Phi,\Phi_{j})_{L^{2}(\Omega)}=a(u,\Phi_{j})-\mu(u,\Phi_{ j})_{L^{2}(\Omega)}-a(\phi,\Phi_{j})+\mu(\phi,\Phi_{j})_{L^{2}(\Omega)}=0.\] Therefore \(\Phi\in\mathcal{Y}_{n}\). By the min-max principle for eigenvalues of (3.6): \[\widehat{\mu}_{n}\leq\min_{\Phi\in\mathcal{Y}_{n}}\frac{a(\Phi,\Phi)}{||\Phi ||^{2}_{L^{2}(\Omega)}}. \tag{3.13}\] If this inequality is strong then the right inequality in (3.11) is proved. Assume that \[\widehat{\mu}_{n}=\min_{\Phi\in\mathcal{Y}_{n}}\frac{a(\Phi,\Phi)}{||\Phi||^{ 2}_{L^{2}(\Omega)}}. \tag{3.14}\] Then the minimum is attained at a certain function \(\widehat{\Phi}\in\mathcal{Y}_{n}\) and \[a(\widehat{\Phi},v)=\mu(\widehat{\Phi},v)_{L^{2}(\Omega)}\,\,\,\mbox{for all}\,\,v\in \mathcal{Y}_{n}. \tag{3.15}\] Using the definition of \(\mathcal{Y}_{n}\), we conclude that \[a(\widehat{\Phi},v+g)-\mu(\widehat{\Phi},v+g)_{L^{2}(\Omega)}=0\,\,\,\mbox{ for all}\,\,g\in X_{n}.\] Therefore \(\widehat{\Phi}\in\mathcal{Y}_{n}\) is an eigenfunction of (3.6) and (3.7) with \(\mu=\mu_{Dn}\) considered in the space \(H^{1}_{0}(\Omega,\tau)\) and \(H^{1}_{00}(\Omega)\).
Considering the functions \(\widehat{\Phi}(X,Y)+\widehat{\Phi}(-X,Y)\) and \(\widehat{\Phi}(X,Y)-\widehat{\Phi}(-X,Y)\), which are even and odd respectively and using that \(\tau\in(0,\tau_{*})\) (this implies that \(\partial_{X}\Phi=0\) for \(X=\pm\Lambda/2\)), we conclude that \(\widehat{\Phi}=0\), since there is a homogeneous Cauchy data at \(X=\pm\Lambda/2\) for the function \(\widehat{\Phi}\). Thus the right inequality in (3.11) is strong. Consider the case when (3.12) is not valid. Then we choose \(l\) such that \[\mu_{Dn}=\cdot=\mu_{D(n-l)}>\mu_{D(n-l-1)}.\] By just proved, \(\widehat{\mu}_{n-l}(\tau)<\mu_{D(n-l)}\), which implies \(\widehat{\mu}_{n}(\tau)<\mu_{D(n)}\). Let us turn to the left inequality in (3.11). Now, we assume that \(n\) satisfies \[\widehat{\mu}_{n}>\widehat{\mu}_{n-1}. \tag{3.16}\] We represent \(H^{1}_{0}(\Omega)\) as \[H^{1}_{0}(\Omega)=\mathcal{X}+\mathcal{Y},\] where \(\mathcal{X}\) consists of even functions in \(H^{1}_{0}(\Omega)\) and \(\mathcal{Y}\) consists of odd functions in \(H^{1}_{0}(\Omega)\). Let \(\Phi_{j}\in H^{1}_{0}(\Omega,\tau)\) be the eigenfunction of (3.7) corresponding to \(\mu_{Dj}\), \(j=0,\ldots\), and let now \(X_{n}\) be the space of linear combinations of \(\{\Phi_{j}\}_{j=0}^{n-1}\) and \[\mathcal{X}_{n}(\tau)=\{\Phi\in H^{1}_{0}(\Omega,\tau)\,:\,(\Phi,\Phi_{j})_{L^ {2}(\Omega)},\,j=0,\ldots,n-1\}.\] Then \[\widehat{\mu}_{n}=\min_{\Phi\in X_{n}}\frac{a(\Phi,\Phi)}{||\Phi||^{2}_{L^{2} (\Omega)}}.\] Introduce the subspace \[\mathcal{Y}_{n}=\{\Phi\in H^{1}_{0}(\Omega)\,:\,a(\Phi,\Phi_{j})-\mu(\Phi,\Phi _{j})=0,\,j=0,\ldots,n-1\},\] where we use the short notation \(\mu=\mu_{Dn}\). The relation \(\mathcal{X}_{n}\subset\mathcal{Y}_{n}\), follows from \[a(\Phi,\Phi_{j})-\mu(\Phi,\Phi_{j})_{L^{2}(\Omega)}=(\mu_{Dj}-\mu)(\Phi,\Phi_{j} )_{L^{2}(\Omega)}=0\;\;\text{for}\;\Phi\in\mathcal{X}_{n},\,j=0,\ldots,n-1.\] In the same way as before one can show that the codimension of \(\mathcal{Y}_{n}\) is equal to \(n\) and that every \(u\in H^{2}_{0}(\Omega,\tau)\) admits the representation \(u=\Phi+\phi\), where \(\Phi\in\mathcal{Y}_{n}\) and \(\phi\in X_{n}\). By the min-max principle for eigenvalues of (3.6): \[\widehat{\mu}_{n}\leq\min_{\Phi\in\mathcal{Y}_{n}}\frac{a(\Phi,\Phi)}{||\Phi|| _{L^{2}(\Omega)}^{2}}, \tag{3.17}\] If this inequality is strong then the right inequality in (3.11) is proved. Assume that \[\widehat{\mu}_{n}=\min_{\Phi\in\mathcal{Y}_{n}}\frac{a(\Phi,\Phi)}{||\Phi||_{ L^{2}(\Omega)}^{2}}. \tag{3.18}\] Then the minimum is attained at a certain function \(\widehat{\Phi}\in\mathcal{Y}_{n}\) and \[a(\widehat{\Phi},v)=\mu(\widehat{\Phi},v)_{L^{2}(\Omega)}\;\;\text{for all}\;v \in\mathcal{Y}_{n}.\] Using the definition of \(\mathcal{Y}_{n}\), we conclude that \[a(\widehat{\Phi},v+g)-\mu(\widehat{\Phi},v+g)_{L^{2}(\Omega)}=0\;\;\text{for all}\;g\in X_{n}.\] Therefore \(\widehat{\Phi}\in\mathcal{Y}_{n}\) is an eigenfunction of (3.6) and (3.7) with \(\mu=\widehat{\mu}_{n}\) considered in the space \(H^{1}_{0}(\Omega)\) and \(H^{1}_{0}(\Omega,\tau)\). The same argument as before proves that the right inequality in (3.11) is strong. The next lemma gives important estimates for \(\widehat{\mu}_{j}\), which are essential for the proof our main theorem. **Lemma 3.2**.: _For all \(\tau\in(0,\tau_{*})\)_ (i)__ \[\mu_{0}<\widehat{\mu}_{0}(\tau)<\nu_{0}^{*0}\;\;. 
\tag{3.19}\] (ii) _If \(0<\nu_{1}^{*0}\) and \(\nu_{0}^{0*}<\mu_{1}\) then_ \[\nu_{0}^{0*}<\widehat{\mu}_{1}(\tau)<0, \tag{3.20}\] _and_ \[\mu_{1}<\widehat{\mu}_{2}(\tau)<\nu_{1}^{*0} \tag{3.21}\] (iii) \[\nu_{1}^{0*}<\widehat{\mu}_{3}(\tau).\] (3.22) Proof.: First let us prove the equalities \[\mu_{D0}=\nu_{0}^{*0},\;\;\mu_{D1}=\nu_{0}^{00},\;\;\mu_{D2}=\nu_{1}^{*0} \tag{3.23}\] assuming \(\nu_{0}^{00}<\nu_{1}^{*0}\) (this is needed for the second and third relations above). We represent \(H^{1}_{00}(\Omega)\) as \[H^{1}_{00}(\Omega)=\widehat{X}+\widehat{Y},\] where \(\widehat{X}\) consists of even functions in \(H^{1}_{00}(\Omega)\) and \(\widehat{Y}\) consists of odd functions in \(H^{1}_{00}(\Omega)\). Then the spaces \(\widehat{X}\) and \(\widehat{Y}\) are invariant for this spectral problem. Introduce two spectral problems \[a_{\Omega}(u,v)=\mu(u,v)_{L^{2}(\Omega)}\;\;\text{for all}\;v\in\widehat{X}, \tag{3.24}\] where \(u\in\widehat{X}\), and \[a_{\Omega}(u,v)=\mu(u,v)_{L^{2}(\Omega)}\;\;\text{for all}\;v\in\widehat{Y}, \tag{3.25}\] where \(u\in\widehat{Y}\). Then the eigenvalues and eigenfunctions of the problems (3.24) and (3.25) coincide with the eigenvalues and eigenfunctions of the problem (3.7). Furthermore, the eigenvalues of the problem (3.24) coincides with the eigenvalues of the problem (2.12) and the eigenvalues of the problem (3.25) coincides with the eigenvalues of the problem (2.14). This implies (3.23) and due to Proposition 2.3 and Lemma 3.1 we get the right-hand inequalities in (3.19)-(3.21). Let us turn to the estimates (3.19)-(3.21) and (3.22) from below. We start from proving \[\mu_{N0}=\mu_{0},\ \ \mu_{N1}=\nu_{0}^{0*},\ \ \mu_{N2}=\mu_{1},\ \ \mu_{N3}=\nu_{1}^{0*} \tag{3.26}\] assuming \(\nu_{0}^{0*}<\mu_{1}\) (this is needed only for the second and third relations above). We use the representation \[H_{0}^{1}(\Omega)=\tilde{X}+\tilde{Y},\] where \(\tilde{X}\) consists of even functions with respect to \(X\) in \(H_{0}^{1}(\Omega)\) and \(\tilde{Y}\) consists of odd functions in \(H_{0}^{1}(\Omega)\). Introduce two more spectral problem \[a(u,v)=\mu(u,v)_{L^{2}(\Omega)}\ \ \mbox{for all}\ v\in\tilde{X}, \tag{3.27}\] where \(u\in\tilde{X}\), and \[a(u,v)=\mu(u,v)_{L^{2}(\Omega)}\ \ \mbox{for all}\ v\in\tilde{Y}, \tag{3.28}\] where \(u\in\tilde{Y}\). Then the eigenvalues and eigenfunctions of the problems (3.27) and (3.28) coincides with of the eigenvalues and eigenfunctions of the problem (3.8). Furthermore, the eigenvalues of the problem (3.27) coincides with the eigenvalues of the problem (2.3) and the eigenvalues of the problem (3.28) coincides with the eigenvalues of the problem (2.13). This implies (3.26) and due to Proposition 2.3 and Lemma 3.1 we get the right-hand inequalities in (3.19)-(3.22). **Corollary 3.3**.: _If \(\mu_{1}>0\) then_ \[\widehat{\mu}_{1}(\tau)<0,\ \ \widehat{\mu}_{2}(\tau)>0\ \ \mbox{for}\ \tau\in(0,\tau_{*})\ \mbox{and}\ \ \widehat{\mu}_{2}(0)=\mu_{1}>0,\ \widehat{\mu}_{1}=0.\] _Moreover the eigenvalue \(\mu=0\) is simple with the eigenfunction \(\psi_{x}\)._ ### Asymptotics of eigenvalues \(\widehat{\mu}_{1}(\tau)\) and \(\widehat{\mu}_{2}(\tau)\) In this section we assume that \(\mu_{1}=0\). According to Proposition 2.2 this eigenvalue is simple in the space of periodic even function. Since \(0\) is always simple eigenvalue in the space of odd periodic functions, the multiplicity of the eigenvalue \(\mu_{1}\) is two in the space of periodic functions. 
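In what follows \(\phi_{0}\) denotes the even eigenfunction corresponding to \(\mu_{1}=0\) from Proposition 2.2 and \(\psi_{0}\) denotes the odd one; by the translation invariance of the problem the odd eigenfunction can be taken as \(\psi_{0}=\psi_{x}\) (cf. Corollary 3.3 and Lemma 3.6).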
One can check that if \(\mu\) is an eigenvalue of (3.2) for a certain \(\tau\) then \(\mu\) is also the eigenvalue of the same multiplicity of the same problem for \(-\tau\). Therefore \[\widehat{\mu}_{1}(-\tau)=\widehat{\mu}_{1}(\tau),\ \ \widehat{\mu}_{2}(-\tau)= \widehat{\mu}_{2}(\tau).\] Furthermore due to Lemma 3.2 \[\widehat{\mu}_{1}(\tau)<0\ \ \mbox{and}\ \ \widehat{\mu}_{2}(\tau)>0\ \ \mbox{for}\ \tau\in(0,\tau_{*}). \tag{3.29}\] Since \(\mu(-\tau)\), \(\mu(\tau+\tau_{*})\) are eigenvalues if \(\mu(\tau)\) is an eigenvalue we have \[\widehat{\mu}_{1}(\tau_{*}-\tau)=\widehat{\mu}_{1}(\tau)\ \ \mbox{and}\ \ \widehat{\mu}_{2}(\tau_{*}-\tau)=\widehat{\mu}_{2}(\tau).\] Moreover if \(\phi(X,Y;\tau)\) is an eigenfunction corresponding to \(\mu(\tau)\) then \(\phi(-X.Y;-\tau)\) is an eigenfunction corresponding to the eigenvalue \(\mu(-\tau)\). **Lemma 3.4**.: _Assume that \(\mu_{1}=0\). Then one of the following two options is valid_ (i) _There exists an integer \(n\geq 1\) such that_ \[\widehat{\mu}_{1}(\tau)=-\kappa_{n}\tau^{2n-1}+O(\tau^{2n}),\ \ \widehat{\mu}_{2}(\tau)=\kappa_{n}\tau^{2n-1}+O(\tau^{2n}) \tag{3.30}\] _for small positive \(\tau\), where \(\kappa_{n}>0\)._ (ii) _There exist integers \(n,\,m\geq 1\) such that_ \[\widehat{\mu}_{1}(\tau)=A\tau^{2n}+O(|\tau|^{2n+1}),\ \ \widehat{\mu}_{2}(\tau)=B \tau^{2m}+O(|\tau|^{2m+1}) \tag{3.31}\] _for small positive \(\tau\), where \(A<0\) and \(B>0\)._ Proof.: Since the multiplicity of the eigenvalue \(\mu=0\) of the problem (3.2) has multiplicity two for \(\tau=0\) there are two analytic in \(\tau\) branches of eigenvalues of (3.2), denote them by \(\theta_{1}(\tau)\) and \(\theta_{2}(\tau)\) such that \(\theta_{1}(0)=\theta_{2}(0)=0\). Since \(\theta_{1}(-\tau)\) and \(\theta_{2}(-\tau)\) are also eigenvalues one of the following options is valid: (a) \(\theta_{1}(\tau)=\theta_{2}(\tau)\) and the function \(\theta_{1}\) is even with respect to \(\tau\); (b) \(\theta_{1}(-\tau)=\theta_{2}(\tau)\); (c) \(\theta_{1}(-\tau)=\theta_{1}(\tau)\) and \(\theta_{2}(-\tau)=\theta_{2}(\tau)\). Observing that the functions \(\theta_{j}(\tau)\) coincides with one of functions \(\widehat{\mu}_{j}\), \(j=1,2\), for small positive \(\tau\), we conclude that only the options (b) and (c) may occur. This leads to the options (i) and (ii) in our lemma. The sign of \(\kappa_{n}\), \(A\) and \(B\) follows from (3.29). The asymptotics of the corresponding eigenfunctions is given in the following **Lemma 3.5**.: _Let \(\upsilon_{1}(\tau)\) and \(\upsilon_{2}(\tau)\) be eigenfunctions analytically depending on \(\tau\) corresponding to the eigenvalues \(\theta_{1}\) and \(\theta_{2}\) in the proof of_ Lemma 3.4_. In the case_ (3.30) _the corresponding eigenfunctions satisfy_ \[\upsilon_{2}(0)=\phi_{0}+i\beta\psi_{0},\ \ \upsilon_{1}(0)=\phi_{0}-i\beta^{-1} \psi_{0},\ \ \mbox{with $\beta\neq 0$}. \tag{3.32}\] _In the case_ (3.31) _the corresponding eigenfunctions satisfy_ \[\upsilon_{2}(0)=\phi_{0}\ \ \mbox{and}\ \ \upsilon_{1}(0)=\psi_{0}. \tag{3.33}\] Proof.: Introduce polynomials \[\Phi(\tau)=\phi_{0}+i\tau\phi_{1}+\cdots+(i\tau)^{p}\phi_{p}\ \ \mbox{and}\ \ \Psi(\tau)=\psi_{0}+i\tau\psi_{1}+\cdots+(i\tau)^{p}\psi_{p},\] where \(\phi_{j}\), \(\psi_{k}\) are real-valued functions from \(H^{1}_{p0}(\Omega)\) such that they are even when \(j\) is even and \(k\) is odd, and they are odd when \(j\) is odd and \(k\) is even. 
Let us construct the functions \(\phi_{j}\) and \(\psi_{k}\), \(j,k=1,\ldots,p-1\), satisfying \[{\bf a}(\Phi(\tau),v;\tau)=O(|\tau|^{p}),\ \ {\bf a}(\Psi(\tau),v;\tau)=O(| \tau|^{p})\ \ \mbox{for all}\ v\in H^{1}_{p0}(\Omega).\] Equating terms of the same power of \(\tau\) in the first relation, we obtain \[{\bf a}(\phi_{1},v)+{\bf b}(\phi_{0},v)=0 \tag{3.34}\] and \[{\bf a}(\phi_{k+2},v)+{\bf b}(\phi_{k+1},v)+{\bf c}(\phi_{k},v)=0\ \ \mbox{for}\ k=0,1,\ldots,p-3\ \mbox{and}\ v\in H^{1}_{0}(\Omega). \tag{3.35}\] Similar relations hold with \(\phi\) replaced by \(\psi\). Equations (3.34) and (3.35) are solvable if "the right-hand side is orthogonal to the kernel of the main operator corresponding to the form \({\bf a}\)", i.e. \[{\bf b}(\phi_{0},\psi_{0})=0, \tag{3.36}\] \[{\bf b}(\phi_{k+1},\phi_{0})+{\bf c}(\phi_{k},\phi_{0})=0,\ \ \ {\bf b}(\phi_{k+1},\psi_{0})+{\bf c}(\phi_{k},\psi_{0})=0, \tag{3.37}\] and \[{\bf b}(\psi_{k+1},\phi_{0})+{\bf c}(\psi_{k},\phi_{0})=0,\ \ \ {\bf b}(\psi_{k+1},\psi_{0})+{\bf c}(\psi_{k},\psi_{0})=0. \tag{3.38}\] Here we used that the form \({\bf b}\) is anti-symmetric. In this case solutions are not unique and we choose the solutions orthogonal to the kernel, i.e. \[{\bf c}(\phi_{k},\phi_{0})=0,\ \ {\bf c}(\phi_{k},\psi_{0})=0,\ \ {\bf c}(\psi_{k},\phi_{0})=0,\ \ {\bf c}(\psi_{k},\psi_{0})=0, \tag{3.39}\] for \(k=1,\ldots,p-1\). We note that some terms in relations (3.39) vanish since some of functions are odd and some of them are even. (i) Assume that (3.30) is valid and \(p=2n-1\). In order to write equations similar to (3.34) and (3.35) for the next term we introduce the function \[\Phi(\alpha,\beta,\tau)=\alpha\Phi(\tau)+\beta\Psi(\tau),\] where \(\alpha\) and \(\beta\) are unknown constants. The equation for finding \(\alpha\) and \(\beta\) is \[{\bf a}(\Phi(\alpha,\beta,\tau),v;\tau)=\kappa(\alpha\phi_{0}+\beta\psi_{0},v)+ O(|\tau|^{p+1})\ \ \mbox{for all}\ v\in H^{1}_{p0}(\Omega).\] as the result we obtain \[\mathbf{a}(\alpha\phi_{p}+\beta\psi_{p},v)+\mathbf{b}(\alpha\phi_{p-1}+\beta\psi_ {p-1},v)+\mathbf{c}(\alpha\phi_{p-2}+\beta\psi_{p-2},v)=(-i)^{p}\kappa\mathbf{c} (\alpha\phi_{0}+\beta\psi_{0},v)\] for all \(v\in H^{1}_{p0}(\Omega)\). This problem is solvable if \[\mathbf{b}(\alpha\phi_{p-1}+\beta\psi_{p-1},\phi_{0})+\mathbf{c}(\alpha\phi_{ p-2}+\beta\psi_{p-2},\phi_{0})=(-i)^{p}\kappa\alpha \tag{3.40}\] and \[\mathbf{b}(\alpha\phi_{p-1}+\beta\psi_{p-1},\psi_{0})+\mathbf{c}(\alpha\phi_{ p-2}+\beta\psi_{p-2},\psi_{0})=(-i)^{p}\kappa\beta, \tag{3.41}\] From (3.40) and (3.41) it follows \[\beta\mathbf{b}(\psi_{p-1},\phi_{0})=(-i)^{p}\kappa\alpha \tag{3.42}\] and \[\alpha\mathbf{b}(\phi_{p-1},\psi_{0})=(-i)^{p}\kappa\beta, \tag{3.43}\] which implies \[\kappa^{2}=(-1)^{p}\mathbf{b}(\phi_{p-1},\psi_{0})\mathbf{b}(\psi_{p-1},\phi_ {0}). \tag{3.44}\] This implies that the left-hand side in (3.44) is positive, \(\kappa\) and \(-\kappa\) satisfy and \[\alpha=1\ \ \text{and}\ \ \beta=(-i)^{p}\kappa^{-1}\mathbf{b}(\phi_{p-1},\psi_ {0}).\] This implies that \[(-1)^{p}\mathbf{b}(\phi_{p-1},\psi_{0})\mathbf{b}(\phi_{p-1},\psi_{0})=\alpha ^{2}.\] (ii) Let (3.31) be valid. The same calculation as above shows \[\alpha\mathbf{b}(\phi_{p-1},\phi_{0})=(-1)^{p/2}\kappa\alpha,\] and \[\beta\mathbf{b}(\psi_{p-1},\psi_{0})=(-1)^{p/2}\kappa\beta.\] So one of quantities \(\mathbf{b}(\phi_{p-1},\phi_{0}\) or \(\mathbf{b}(\psi_{p-1},\psi_{0})\) must be different from \(0\) and we arrive at (3.33). 
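To illustrate the two options in Lemma 3.4, note that in the simplest case \(n=1\) of option (i) the relations (3.30) read \[\widehat{\mu}_{1}(\tau)=-\kappa_{1}\tau+O(\tau^{2}),\;\;\widehat{\mu}_{2}(\tau)=\kappa_{1}\tau+O(\tau^{2}),\] so the double eigenvalue \(\mu=0\) splits linearly in the quasi-momentum \(\tau\), whereas in option (ii) the splitting described by (3.31) is of even order in \(\tau\).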
**Lemma 3.6**.: _Let the eigenvalue \(\mu=0\) is simple with the eigenfunction \(\psi_{x}\). Then_ \[\mu(\tau)=c\tau^{2}+O(\tau^{4}),\] _where \(c\) is negative._ Proof.: We are looking for the eigenvalue and eigenfunction in the form \[\mu(\tau)=c_{1}\tau+c_{2}\tau^{2}+\cdots\ \ \text{and}\ \ u(\tau)=u_{0}+i\tau u _{1}+(i\tau)^{2}u_{2}+\cdots\] then \[\mathbf{a}(u_{0},w)=0\ \ \text{for all}\ w\in H^{1}_{0}(\Omega).\] Therefore \(u_{0}=\psi_{x}\). Furthermore, \[i\tau\mathbf{a}(u_{1},w)+i\tau\mathbf{b}(u_{0},w)=c_{1}(u_{0},w)_{L^{2}(\Omega )}\ \ \text{for all}\ w\in H^{1}_{0}(\Omega).\] Choosing \(w=u_{0}\) we see that \(c_{1}=0\). Taking \(w=u_{1}\), we get \[\mathbf{a}(u_{1},u_{1})+\mathbf{b}(u_{0},u_{1})=0\ \ \Rightarrow\ \ \mathbf{b}(u_{1},u_{0})=\mathbf{a}(u_{1},u_{1}). \tag{3.45}\] Equation for \(u_{2}\) is \[\mathbf{a}(u_{2},w)+\mathbf{b}(u_{1},w)+(u_{0},w)_{L^{2}(\Omega)}=-c_{2}(u_{0 },w)_{L^{2}(\Omega)}\ \ \text{for all}\ w\in H^{1}_{0}(\Omega).\] Taking \(w=u_{0}\) we get \[\mathbf{b}(u_{1},u_{0})+(u_{0},u_{0})_{L^{2}(\Omega)}=-c_{2}(u_{0},u_{0})_{L^{2 }(\Omega)}.\] Using (3.45), we arrive at \[c_{2}||u_{0}||^{2}_{L^{2}(\Omega)}=||u_{0}||^{2}_{L^{2}(\Omega)}+\mathbf{a}(u _{1},u_{1}),\] which proves positivity of \(c_{2}\). ## 4. Subharmonic spectral problems Let \(M\) be a positive integer. As is known (see for example [21], [19]), \(M\Lambda\)-periodic, even solutions to the problem (1.4) are linear combinations of functions \[e^{i\tau_{j}X}w_{j}(X,Y)+e^{-i\tau_{j}X}w_{j}(-X,Y), \tag{4.1}\] where only such \(\tau=\tau_{j}=j\tau_{*}/M\) and \(w=w_{j}\), \(j=0,1,\ldots,M-1\), are taken which solves the problem (3.2) with \(\mu=0\). Therefore the knowledge of the characteristic exponents \(\tau_{j}\) helps to describe \(M\)-subharmonic solutions. The spectral problem (1.4) can be considered on the functions \(w\) which are even and has period \(M\Lambda\). We shall call this spectral problem \(M\)-subharmonic spectral problem. To give a variational formulation of this spectral problem we introduce \[D_{M}=\{(X,Y)\in\mathcal{D}\,:\,0<X<M\Lambda/2\},\,\,\,S_{M}=\{(X,Y)\in \mathcal{S}\,:\,0<X<M\Lambda/2\},\] and denote by \(H^{1}_{0}(D_{M})\) the space of functions in \(H^{1}(D_{M})\) which vanish for \(Y=0\). We put \[a_{M}(u,v)=\int_{D_{M}}\Big{(}\nabla u\cdot\nabla\overline{v}-\omega^{\prime} (\Psi)u\overline{v}\Big{)}dXdY-\int_{0}^{M\Lambda/2}\rho(X)u\overline{v}dS,\] Using that \(\partial_{X}w(0,Y)=0\) and \(\partial_{X}w(M\Lambda/2,Y)=0\), we arrive at the following variational formulation for the spectral problem (1.4) on even \(M\Lambda\)-periodic functions: find a function \(\phi\in H^{1}_{0}(D_{M})\) and the number \(\mu=\mu^{(M)}\in\mathbb{R}\) satisfying \[a_{M}(\phi,v)=\mu^{(M)}(\phi,v)_{L^{2}(D_{M})}\,\,\,\mbox{for all}\,\,v\in H^{1 }_{0}(D_{M}). \tag{4.2}\] As it is known the pair \((\phi,\mu^{(M)})\) solves the problem (4.2) if and only if \(\mu^{(M)}\) is an eigenvalue of the problem (1.4), considered on even, \(M\Lambda\)-periodic functions, and \(\phi\), after a natural extension, is a corresponding eigenfunction. We numerate the eigenvalues accounting their multiplicity as \[\mu^{(M)}_{0}\leq\mu^{(M)}_{1}\leq\cdots,\,\,\,\mu^{(M)}_{j}\to\infty\,\,\, \mbox{as}\,\,\,j\to\infty.\] Denote the number of eigenvalues of the problem (4.2) which are less than zero by \(n_{0}(M)\) and by \(n(M)\) if we include zero eigenvalues, i.e. 
\[n_{0}(M)=\mbox{the number of}\,\,j\,\,\mbox{such that}\,\,\,\,\mu^{(M)}_{j}<0,\] \[n(M)=\mbox{the number of}\,\,j\,\,\mbox{such that}\,\,\,\mu^{(M)}_{j}\leq 0.\] **Proposition 4.1**.: _Let \(\mu_{1}=0\). Then for every \(M=1,2,\ldots\)_ \[n_{0}(M)=M\,\,\,\mbox{when}\,\,M\,\,\mbox{is odd and}\,\,\,n_{0}(M)=M+1\,\,\,\mbox{ when}\,\,M\,\,\mbox{is even} \tag{4.3}\] _and_ \[n(M)=M+1\,\,\,\mbox{when}\,\,M\,\,\mbox{is odd and}\,\,\,n(M)=M+2\,\,\,\mbox{ when}\,\,M\,\,\mbox{is even}. \tag{4.4}\] Proof.: Using that \[\widehat{\mu}_{j}(\tau)=\widehat{\mu}_{j}(-\tau)=\widehat{\mu}_{j}(\tau_{*}+ \tau)\,\,\,\mbox{for}\,\,j=0,1,2,\] we conclude that \(\widehat{\mu}_{j}(\tau)=\widehat{\mu}_{j}(\tau_{*}-\tau)\). Due to the description of even, \(M\Lambda\)-periodic solutions given by (4.1), the number \(n(M)\) is equal to the number of indexes \(k\) and \(m\) such that \[\widehat{\mu}_{0}\Big{(}\frac{k\tau_{*}}{M}\Big{)}\leq 0,\,\,\,\widehat{\mu}_{1}\Big{(} \frac{m\tau_{*}}{M}\Big{)}\leq 0, \tag{4.5}\] where \(0\leq k,m\leq M/2\). Therefore \[n(M)=M+1\,\,\,\mbox{when}\,\,M\,\,\mbox{is odd and}\,\,\,n(M)=M+2\,\,\,\mbox{ when}\,\,M\,\,\mbox{is even},\] which proves (4.4). Since the equality in (4.5) is reached only when \(m=0\), we get (4.3). A necessary condition for an \(M\)-subharmonic bifurcation is the presence of a zero eigenvalue of the Frechet derivative corresponding to the \(M\)-subharmonic problem. The next proposition deals with this question. **Proposition 4.2**.: _If \(\mu_{1}>0\) then the numbers \(\mu^{(M)}_{j}\) are different from \(0\)._ Proof.: The proof of this proposition follows directly from Corollary 3.3 and from the description of zero eigenvalues of the Frechet derivative for the \(M\)-subharmonic problem given at the beginning of this section. ## 5. Branches of Stokes waves ### Uniform stream solution, dispersion equation The uniform stream solution \(\Psi=U(y)\) with the constant depth \(\eta=d\) and \(\lambda=1\) satisfies the problem \[U^{{}^{\prime\prime}}+\omega(U)=0\ \ \text{on}\ (0;d),\] \[U(0)=0,\ \ U(d)=1,\] \[\frac{1}{2}U^{\prime}(d)^{2}+d=R. \tag{5.1}\] Let \(s=U^{\prime}(0)\) and \(s>s_{0}:=\sqrt{2\max_{\tau\in[0,1]}\Omega(\tau)}\), where \[\Omega(\tau)=\int_{0}^{\tau}\omega(p)dp.\] Then the problem (5.1) has a solution \((U,d)\) with a strongly monotone function \(U\) if \[\mathcal{R}(s):=\frac{1}{2}s^{2}+d(s)-\Omega(1)=R. \tag{5.2}\] In this case \((U,d)\) is found from the relations \[y=\int_{0}^{U}\frac{d\tau}{\sqrt{s^{2}-2\Omega(\tau)}},\ \ d=d(s)=\int_{0}^{1} \frac{d\tau}{\sqrt{s^{2}-2\Omega(\tau)}}. \tag{5.3}\] The equation (5.2) is solvable if \(R>R_{c}\), \[R_{c}=\min_{s\geq s_{0}}\mathcal{R}(s). \tag{5.4}\] We denote by \(s_{c}\) the point where the minimum in (5.4) is attained. Existence of small amplitude Stokes waves is determined by the dispersion equation (see, for example, [11]). It is defined as follows. The strong monotonicity of \(U\) guarantees that the problem \[\gamma^{{}^{\prime\prime}}+\omega^{\prime}(U)\gamma-\tau^{2}\gamma=0,\ \ \gamma(0,\tau)=0,\ \ \gamma(d,\tau)=1 \tag{5.5}\] has a unique solution \(\gamma=\gamma(y,\tau)\) for each \(\tau\in\mathbb{R}\), which is even with respect to \(\tau\) and depends analytically on \(\tau\). Introduce the function \[\sigma(\tau)=\kappa\gamma^{\prime}(d,\tau)-\kappa^{-1}+\omega(1),\ \ \kappa=\Psi^{\prime}(d). \tag{5.6}\] It also depends analytically on \(\tau\) and is strongly increasing with respect to \(\tau>0\). Moreover it is an even function. The dispersion equation (see, for example [11]) is the following \[\sigma(\tau)=0.
\tag{5.7}\] It has a positive solution if \[\sigma(0)<0. \tag{5.8}\] By [11] this is equivalent to \(s+d^{\prime}(s)<0\) or what is the same \[1<\int_{0}^{d}\frac{dy}{U^{\prime 2}(y)}. \tag{5.9}\] The left-hand side here is equal to \(1/F^{2}\) where \(F\) is the Froude number (see [28]). Therefore (5.9) means that \(F<1\), which is well-known condition for existence of water waves of small amplitude. Another equivalent formulation is given by requirement (see, for example [13]) \[s\in(s_{0},s_{c})\ \ \text{and satisfies (\ref{eq:1})}. \tag{5.10}\] The existence of such \(s\) is guaranteed by \(R\in(R_{c},R_{0})\). Moreover in this case the equation \(\mathcal{R}(s)=R\) has exactly two solutions \(s_{-}\) and \(s_{+}\), \(s_{0}<s_{+}<s_{c}<s_{-}\). The corresponding solutions to (5.1) are given by (5.3) and we denote them by \[(U_{+},d_{+})\;\;\text{and}\;\;(U_{-},d_{-}). \tag{5.11}\] The function \(\sigma\) has the following asymptotic representation \[\sigma(\tau)=\kappa\tau+O(1)\;\;\text{for large}\;\tau\] and equation (5.7) has a unique positive root, which will be denoted by \(\tau_{*}\). It is connected with \(\Lambda_{0}\) by the relation \[\tau_{*}=\frac{2\pi}{\Lambda_{0}}. \tag{5.12}\] The function \(\gamma(y,\tau)\) is positive in \((0,d]\) for \(\tau>\tau_{*}\). Let \[\rho_{0}=\frac{1+\Psi^{\prime}(d)\Psi^{{}^{\prime\prime}}(d)}{\Psi^{\prime}(d) ^{2}}. \tag{5.13}\] We note that \[\frac{1+\Psi^{\prime}(d)\Psi^{{}^{\prime\prime}}(d)}{\Psi^{\prime}(d)^{2}}= \kappa^{-2}-\frac{\omega(1)}{\kappa}\] and hence another form for (5.6) is \[\sigma(\tau)=\kappa\gamma^{\prime}(d,\tau)-\kappa\rho_{0}. \tag{5.14}\] The spectral problem (1.4) takes the form \[\Delta w+\omega^{\prime}(U)w+\mu w=0\;\;\text{for}\;X\in\mathbb{ R}\;\text{and}\;0<Y<d,\] \[\partial_{Y}w-\rho_{0}w=0\;\;\text{for}\;Y=d,\] \[w=0\;\;\text{for}\;Y=0, \tag{5.15}\] where \(w\) is even, \(\Lambda=2\pi/\tau_{*}\)-function. One can verify that \(\mu_{0}<0\) and corresponding eigenfunction does not depend on \(X\), \(\mu_{1}=0\) and corresponding eigenfunction is \(\cos(\tau_{*}X)\gamma(Y;\tau_{*})\) and \(\mu_{2}>0\). For small \(t\), \(\mu_{0}<0\) and \(\mu_{2}>0\). We assume that \[\mu_{1}(t)>0\;\;\text{for small positive}\;t. \tag{5.16}\] The validity of this assumption will be discussed in forthcoming paper. ### Partial hodograph transform In what follows we will study branches of Stokes waves \((\Psi(X,Y;t),\xi(X;t))\) of period \(\Lambda(t)\), \(t\in\mathbb{R}\), started from the uniform stream at \(t=0\). It is convenient to make the following change of variables \[x=\lambda X,\;\;y=Y,\;\;\lambda=\frac{\Lambda(0)}{\Lambda(t)} \tag{5.17}\] in order to deal with the problem with a fixed period. As the result we get \[\Big{(}\lambda^{2}\partial_{x}^{2}+\partial_{y}^{2}\Big{)}\psi+ \omega(\psi)=0\;\;\text{in}\;D_{\eta},\] \[\frac{1}{2}\Big{(}\lambda^{2}\psi_{x}^{2}+\psi_{y}^{2}\Big{)}+\eta =R\;\;\text{on}\;B_{\eta},\] \[\psi=1\;\;\text{on}\;B_{\eta},\] \[\psi=0\;\;\text{for}\;y=0, \tag{5.18}\] where \[\psi(x,y;t)=\Psi(\lambda^{-1}x,y;t)\;\;\text{and}\;\;\eta(x;t)=\xi(\lambda^{ -1}x;t).\] Here all functions have the same period \(\Lambda_{0}:=\Lambda(0)\), \(D_{\eta}\) and \(B_{\eta}\) are the domain and the free surface after the change of variables (5.17). Due to the assumption (1.3) we can introduce the variables \[q=x,\;\;p=\psi.\] Then \[q_{x}=1,\ \ q_{y}=0,\ \ p_{x}=\psi_{x},\ \ p_{y}=\psi_{y},\] and \[\psi_{x}=-\frac{h_{q}}{h_{p}},\ \ \psi_{y}=\frac{1}{h_{p}},\ \ dxdy=h_{p}dqdp. 
\tag{5.19}\] System (5.18) in the new variables takes the form \[\Big{(}\frac{1+\lambda^{2}h_{q}^{2}}{2h_{p}^{2}}+\Omega(p)\Big{)}_ {p}-\lambda^{2}\Big{(}\frac{h_{q}}{h_{p}}\Big{)}_{q}=0\ \ \mbox{in}\ Q,\] \[\frac{1+\lambda^{2}h_{q}^{2}}{2h_{p}^{2}}+h=R\ \ \mbox{for}\ p=1,\] \[h=0\ \ \mbox{for}\ p=0. \tag{5.20}\] Here \[Q=\{(q,p)\,:\,q\in\mathbb{R}\,,\ \ p\in(0,1)\}.\] The uniform stream solution corresponding to the solution \(U\) of (5.1) is \[H(p)=\int_{0}^{p}\frac{d\tau}{\sqrt{s^{2}-2\Omega(\tau)}},\ \ s=U^{\prime}(0)=H_{p}^{-1}(0). \tag{5.21}\] One can check that \[H_{pp}-H_{p}^{3}\omega(p)=0 \tag{5.22}\] or equivalently \[\Big{(}\frac{1}{2H_{p}^{2}}\Big{)}_{p}+\omega(p)=0. \tag{5.23}\] Moreover it satisfies the boundary conditions \[\frac{1}{2H_{p}^{2}(1)}+H(1)=R,\ \ H(0)=0. \tag{5.24}\] The problem (5.20) has a variational formulation (see [6]) and the potential is given by \[f(h;\lambda)=\int_{-\Lambda_{0}/2}^{\Lambda_{0}/2}\int_{0}^{1}\Big{(}\frac{1+ \lambda^{2}h_{q}^{2}}{2h_{p}^{2}}-h+R-(\Omega(p)-\Omega(1))\Big{)}h_{p}dqdp. \tag{5.25}\] Then according to [15] there exists a branch of solutions to (5.20) \[h=h(q,p;t):\mathbb{R}\to C^{2,\alpha}_{pe}(\overline{Q}),\ \ \lambda=\lambda(t):\mathbb{R}\to(0,\infty), \tag{5.26}\] which has a real analytic reparametrization locally around each \(t\). ## 6. Bifurcation analysis Here we present two equations for finding bifurcation points in the \(q,p\) variables. One of them is defined by a boundary value problem in a two-dimensional domain and the other one, involving a Dirichlet-Neumann operator, is defined on a part of the one-dimensional boundary. It is proved in Sect. 6.4 that the Frechet derivatives of the operators in these two formulations have the same number of negative eigenvalues and the same dimension of the kernels. So both of them can be used in the analysis of the Morse index and the corresponding crossing number. The first, two-dimensional bifurcation problem is useful in the analysis of the number of negative eigenvalues and the second one is more convenient for the application of general bifurcation results. Both formulations can be easily extended for the study of subharmonic bifurcations, see Sect. 6.5. In Sect. 6.3 we give an explicit connection between the Frechet derivatives in \((x,y)\) and \((q,p)\) variables. This material is borrowed from [9], where all missing proofs can be found. ### First formulation of the bifurcation equation In order to find bifurcation points and bifurcating solutions we put \(h+w\) instead of \(h\) in (5.20) and introduce the operators \[\mathcal{F}(w;t)=\Big{(}\frac{1+\lambda^{2}(h_{q}+w_{q})^{2}}{2(h_ {p}+w_{p})^{2}}\Big{)}_{p}-\Big{(}\frac{1+\lambda^{2}h_{q}^{2}}{2h_{p}^{2}} \Big{)}_{p}\] \[-\lambda^{2}\Big{(}\frac{h_{q}+w_{q}}{h_{p}+w_{p}}\Big{)}_{q}+ \lambda^{2}\Big{(}\frac{h_{q}}{h_{p}}\Big{)}_{q}\] and \[\mathcal{G}(w;t)=\frac{1+\lambda^{2}(h_{q}+w_{q})^{2}}{2(h_{p}+w_{p})^{2}}- \frac{1+\lambda^{2}h_{q}^{2}}{2h_{p}^{2}}+w\] acting on \(\Lambda_{0}\)-periodic, even functions \(w\) defined in \(Q\). After some cancellations we get \[\mathcal{F}=\Big{(}\frac{\lambda^{2}h_{p}^{2}(2h_{q}+w_{q})w_{q}-(2h_{p}+w_{p })(1+\lambda^{2}h_{q}^{2})w_{p}}{2h_{p}^{2}(h_{p}+w_{p})^{2}}\Big{)}_{p}- \lambda^{2}\Big{(}\frac{h_{p}w_{q}-h_{q}w_{p}}{h_{p}(h_{p}+w_{p})}\Big{)}_{q} \tag{6.1}\] and \[\mathcal{G}=\frac{\lambda^{2}h_{p}^{2}(2h_{q}+w_{q})w_{q}-(2h_{p}+w_{p})(1+ \lambda^{2}h_{q}^{2})w_{p}}{2h_{p}^{2}(h_{p}+w_{p})^{2}}+w. \tag{6.2}\] Both these functions are well defined for small \(w_{p}\).
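As a side check, which is only an illustration and is not used in the argument, the linearization of \(\mathcal{F}\) at \(w=0\) (recorded below as the Frechet derivative \(A(t)w\) in (6.4)) can be verified symbolically. The following minimal SymPy sketch treats \(h\) and \(w\) as generic smooth functions of \((q,p)\) and uses `lam` for \(\lambda\); the printed difference should simplify to zero.

```python
# Illustration only: check that the linearisation of F from Sect. 6.1 at w = 0
# coincides with the Frechet derivative A(t)w written in (6.4).
# h and w are generic smooth functions of (q, p); lam stands for lambda.
import sympy as sp

q, p, lam, eps = sp.symbols('q p lam eps')
h = sp.Function('h')(q, p)
w = sp.Function('w')(q, p)

def F(u):
    # Difference of the equations (5.20) written for h + u and for h
    first = (1 + lam**2*sp.diff(h + u, q)**2)/(2*sp.diff(h + u, p)**2) \
            - (1 + lam**2*sp.diff(h, q)**2)/(2*sp.diff(h, p)**2)
    second = sp.diff(h + u, q)/sp.diff(h + u, p) - sp.diff(h, q)/sp.diff(h, p)
    return sp.diff(first, p) - lam**2*sp.diff(second, q)

# Gateaux derivative of F at 0 in the direction w
frechet = sp.diff(F(eps*w), eps).subs(eps, 0)

# The operator A(t)w from (6.4)
Aw = sp.diff(lam**2*sp.diff(h, q)*sp.diff(w, q)/sp.diff(h, p)**2
             - (1 + lam**2*sp.diff(h, q)**2)*sp.diff(w, p)/sp.diff(h, p)**3, p) \
     - lam**2*sp.diff(sp.diff(w, q)/sp.diff(h, p)
                      - sp.diff(h, q)*sp.diff(w, p)/sp.diff(h, p)**2, q)

print(sp.simplify(frechet - Aw))  # expected output: 0
```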
Then the problem for finding solutions close to \(h\) is the following \[\mathcal{F}(w;t)=0\;\;\text{in}\;Q\] \[\mathcal{G}(w;t)=0\;\;\text{for}\;p=1\] \[w=0\;\;\text{for}\;p=0. \tag{6.3}\] Furthermore, the Frechet derivative (the linear approximation of the functions \(\mathcal{F}\) and \(\mathcal{G}\)) is the following \[Aw=A(t)w=\Big{(}\frac{\lambda^{2}h_{q}w_{q}}{h_{p}^{2}}-\frac{(1+\lambda^{2} h_{q}^{2})w_{p}}{h_{p}^{3}}\Big{)}_{p}-\lambda^{2}\Big{(}\frac{w_{q}}{h_{p}}- \frac{h_{q}w_{p}}{h_{p}^{2}}\Big{)}_{q} \tag{6.4}\] and \[\mathcal{N}w=\mathcal{N}(t)w=(Nw-w)|_{p=1}, \tag{6.5}\] where \[Nw=N(t)w=\Big{(}-\frac{\lambda^{2}h_{q}w_{q}}{h_{p}^{2}}+\frac{(1+\lambda^{2} h_{q}^{2})w_{p}}{h_{p}^{3}}\Big{)}\Big{|}_{p=1}. \tag{6.6}\] The eigenvalue problem for the Frechet derivative, which is important for the analysis of bifurcations of the problem (6.3), is the following \[A(t)w=\mu w\;\;\text{in}\;Q,\] \[\mathcal{N}(t)w=0\;\;\text{for}\;p=1,\] \[w=0\;\;\text{for}\;p=0. \tag{6.7}\] ### Second formulation of the bifurcation equation There is another formulation for the bifurcating solutions, which is used the Dirichlet-Neumann operator. Let us consider the problem \[\mathcal{F}(w;t)=0\;\;\text{in}\;Q,\] \[w=g\;\;\text{for}\;p=1,\] \[w=0\;\;\text{for}\;p=0. \tag{6.8}\] We define the operator \(\mathcal{S}=\mathcal{S}(g;h,t)\) by \[\mathcal{S}(g;t)=\mathcal{G}(w;t)|_{p=1}, \tag{6.9}\] where \(w\) is the solution of the problem (6.8). Then the equation for bifurcating solutions is \[\mathcal{S}(g;t)=0. \tag{6.10}\] Here we note that spectral problem for the the Frechet derivative of the left-hand side in (6.10) is given by \[A(t)w=0\;\;\text{in}\;Q,\] \[N(t)w-w=\theta w\;\;\text{for}\;p=1,\] \[w=0\;\;\text{for}\;p=0. \tag{6.11}\] More exactly if we introduce the problem \[A(t)w=0\;\;\text{in}\;Q,\] \[w=g\;\;\text{for}\;p=1,\] \[w=0\;\;\text{for}\;p=0, \tag{6.12}\] then the operator \[Sg=(Nw-w)|_{p=1} \tag{6.13}\] is the Frechet derivative of the operator (6.10). The corresponding spectral problem is \[Sg=\theta g. \tag{6.14}\] To prove solvability of the problem (6.8) we consider first the Dirichlet problem \[Aw=f,\] \[w=g,\;\;\text{for}\;p=1\] \[w=0\;\;\text{for}\;p=0. \tag{6.15}\] Let \[a_{1}=\min_{\overline{Q}}h_{p},\;\;a_{2}=||h||_{C^{2,\alpha}(Q)}.\] **Proposition 6.1**.: _(Proposition 2.4, [9]) There exists a positive constant \(C\) depending on \(a_{1}\) and \(a_{2}\) such that if \(g\in C^{2,\alpha}_{pe}(\mathbb{R})\) and \(f\in C^{0,\alpha}_{pe}(Q)\), \(\alpha\in(0,\gamma]\), then the problem (6.15) has a unique solution \(w\in C^{2,\alpha}_{pe}(Q)\), which satisfies the estimate_ \[||w||_{C^{2,\alpha}_{pe}(Q)}\leq C(||f||_{C^{0,\alpha}_{pe}(Q)}+||g||_{C^{2, \alpha}_{pe}(\mathbb{R})}).\] Next consider the nonlinear problem \[\mathcal{F}(w;t)=f\;\;\text{in}\;Q,\] \[w=g\;\;\text{for}\;p=1,\] \[w=0\;\;\text{for}\;p=0. \tag{6.16}\] It can be considered as a small perturbation of the problem (6.15). **Proposition 6.2**.: _(Proposition 2.5, [9]) There exist positive number \(\delta_{*}\) and a Constant \(C\) depending on \(a_{1}\) and \(a_{2}\) such that if_ \[||f||_{C^{0,\gamma}_{pe}(Q)}+||g||_{C^{2,\gamma}_{pe}(\mathbb{R})}\leq\delta \leq\delta_{*},\] _then there exists a unique solution \(w\in C^{2,\gamma}_{pe}(Q)\) such that_ \[||w||_{C^{2,\gamma}_{pe}(Q)}\leq C\delta.\] ### Spectral problems (9.4) and (6.11) in \((x,y)\) variables Let \[F(q,p;\lambda)=\Gamma(q,h)h_{p}. \tag{6.17}\] Then \[AF=-\omega^{\prime}(p)\Gamma-\lambda^{2}\Gamma_{xx}-\Gamma_{yy}. 
\tag{6.18}\] and \[-NF+F=\sqrt{\psi_{x}^{2}+\psi_{y}^{2}}\rho\Gamma-\lambda^{2}\psi_{x}\Gamma_{x }-\psi_{y}\Gamma_{y}, \tag{6.19}\] where \[\rho=\rho(x;t)=\frac{(1+\lambda^{2}\psi_{x}\psi_{xy}+\psi_{y}\psi_{yy})}{\psi _{y}(\psi_{x}^{2}+\psi_{y}^{2})^{1/2}}\Big{|}_{y=\eta(x;t)}.\] Then the following assertion is proved in [9]. **Lemma 6.3**.: _Let \(a=a(x,y)\) be a positive, continuous, \(\Lambda_{0}\)-periodic function. If \(\Gamma=\Gamma(x,y)\) satisfies the problem_ \[(\lambda^{2}\partial_{x}^{2}+\partial_{y}^{2})\Gamma+\omega^{\prime}( \psi)\Gamma+\mu a\frac{1}{\psi_{y}}\Gamma=0\;\;\mbox{in}\;D_{\eta},\] \[(\lambda^{2}\nu_{x}\Gamma_{x}+\nu_{y}\Gamma_{y}-\rho\Gamma)_{p=1}=0\;\;\mbox{on }\;B_{\eta},\] \[\Gamma=0\;\;\mbox{for}\;y=0, \tag{6.20}\] _then_ \[AF=\mu aF,\] \[NF-F=0\;\;\mbox{for}\;p=1,\] \[F=0\;\;\mbox{for}\;p=0. \tag{6.21}\] ### Comparison of two spectral problems Here we will compare the negative spectra of the following spectral problems \[A(t)u=0\;\;\mbox{in}\;Q\] \[N(t)u-u=\theta au\;\;\mbox{for}\;p=1\] \[u=0\;\;\mbox{for}\;p=0. \tag{6.22}\] and \[A(t)u=\mu bu\;\;\mbox{in}\;Q\] \[N(t)u-u=0\;\;\mbox{for}\;p=1\] \[u=0\;\;\mbox{for}\;p=0, \tag{6.23}\] where \(a\) and \(b\) are continuous, positive, \(\Lambda\)-periodic functions. **Proposition 6.4**.: _The spectral problems (6.22) and (6.23) have the same number of negative eigenvalues (accounting their multiplicities). Moreover, if we consider these spectral problems on \(M\Lambda\)-periodic, even functions they have the same number of negative eigenvalues also._ Proof.: For \(a=1\) and \(b=1\) this assertion is proved in [9]. In the general case the proof is literally the same. ### Equation for subharmonic bifurcations For a positive integer \(M=1,2,\ldots\), and \(\alpha\in(0,1)\) let us introduce the subspaces \(C^{k,\alpha}_{M,e}(\overline{Q})\) and \(C^{k,\alpha}_{M,e}(\mathbb{R})\) of \(C^{k,\alpha}(\overline{Q})\) and \(C^{k,\alpha}(\mathbb{R})\) respectively consisting of even functions of period \(M\Lambda_{0}\). A similar space for the domain \(\mathcal{D}_{\xi}\) we denote by \(C^{2,\alpha}_{M,e}(\overline{\mathcal{D}_{\xi}})\). Equation (6.10) can be considered also on functions of period \(M\Lambda_{0}\). Having this in mind we write as before \(h+w\) instead of \(h\) but now we assume that \(w\in C^{2,\alpha}_{M,e}(\overline{Q})\). Now the operators \(\mathcal{F}(w;h,t)\), \(\mathcal{G}(w;h,t)\), \(A(t)\) and \(N(t)\) from Sect. 6.1 are considered on functions from \(C^{2,\gamma}_{Me}(\overline{Q})\). To indicate this difference we will use the notations \(\mathcal{F}_{M}(w;h,t)\), \(\mathcal{G}_{M}(w;h,t)\), \(A_{M}(t)\) and \(N_{M}(t)\) for the corresponding operators. Moreover we can define analogs of the operators \(\mathcal{S}\) and \(S\) and denote them by \(\mathcal{S}_{M}\) and \(S_{M}\) respectively. The new operators are acting on functions from \(C^{1,\alpha}_{M,e}(\mathbb{R})\). The equation for subharmonic bifurcations is \[\mathcal{S}_{M}(g;t)=0, \tag{6.24}\] where \[\mathcal{S}_{M}(g;t)\,:\,\mathcal{U}\to C^{1,\alpha}_{M,e}(\mathbb{R}). \tag{6.25}\] Here \(\mathcal{U}\) is a neighborhood of \(0\) in the space \(C^{2,\alpha}_{M,e}(\mathbb{R})\). The operator \(\mathcal{S}_{M}\) is also potential and for the definition of the potential the integration in (5.25) must be taken over the interval \((-M\Lambda_{0}/2,M\Lambda_{0}/2)\).
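In view of (5.25), the potential in question is \[f_{M}(h;\lambda)=\int_{-M\Lambda_{0}/2}^{M\Lambda_{0}/2}\int_{0}^{1}\Big{(}\frac{1+\lambda^{2}h_{q}^{2}}{2h_{p}^{2}}-h+R-(\Omega(p)-\Omega(1))\Big{)}h_{p}\,dqdp,\] i.e. the same integrand as in (5.25) integrated over one \(M\Lambda_{0}\)-period.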
The corresponding eigenvalue problem for the Frechet derivative \(S_{M}(t)\) is \[S_{M}(t)g=-\theta g, \tag{6.26}\] where \(S_{M}\) is defined by the same formulas as \(S\) but now the functions \(g\) and \(w\) are \(M\Lambda_{0}\)-periodic. Clearly, \[S_{M}(t)\,:\,C^{2,\alpha}_{M,e}(\mathbb{R})\to C^{1,\alpha}_{M,e}(\mathbb{R}). \tag{6.27}\] ## 7. Spectral properties of the Frechet derivative, \(t\) dependence. Since the Frechet derivatives now depend on \(t\in[0,\infty)\), we introduce the notations \(\mu_{j}(t)\) and \(\widehat{\mu}_{j}(t,\tau)\) in order to indicate the dependence of the eigenvalues on the variable \(t\). Certainly we assume that \[\mu_{0}(t)<\mu_{1}(t)\leq\mu_{2}(t)\leq\cdots\,\,\,\text{and}\,\,\,\widehat{ \mu}_{0}(t,\tau)<\widehat{\mu}_{1}(t,\tau)\leq\widehat{\mu}_{2}(t,\tau)\leq\cdots\] Clearly \(\widehat{\mu}_{j}(t,0)=\mu_{j}(t)\). Using Proposition 2.1, we conclude \(\mu_{0}(t)<0\) for all \(t\). In what follows we will assume that \[\mu_{1}(t_{0})=0,\,\,\,\mu_{1}(t)>0\,\,\,\text{for}\,\,t\in(0,t_{0})\,\,\, \text{and}\,\,\,\mu_{1}(t)<0\,\,\,\text{for}\,\,t\in(t_{0},t_{0}+\varepsilon),\] where \(\varepsilon\) is a small positive number. **Lemma 7.1**.: _The functions_ \[\widehat{\mu}_{1}(t,\tau)+\widehat{\mu}_{2}(t,\tau)\,\,\,\,\text{ and}\,\,\,\widehat{\mu}_{1}(t,\tau)\widehat{\mu}_{2}(t,\tau) \tag{7.1}\] _are analytic in a neighborhood of the point \(t=t_{0}\), \(\tau=0\)._ Proof.: We introduce the domain \[\mathcal{D}=\{u\in H^{2}_{0,p}(Q)\,:\,N(t)u-u=0\,\,\text{for}\,\,p=1\}\] and consider the operator \(A(t,\tau)\) defined on this domain. If \(t=t_{0}\) and \(\tau=0\) then the spectrum of this operator in a neighborhood of the point \(0\) consists of the point \(0\) and its multiplicity is two. For \((t,\tau)\) close to \((t_{0},0)\) we introduce the spectral projector \(P(t,\tau)\) of the operator \(A(t,\tau)\) corresponding to the eigenvalues located close to the point \(0\). Its range has dimension \(2\) and it depends analytically on \((t,\tau)\) in a neighborhood of \((t_{0},0)\). The operator \(P(t,\tau)A(t,\tau)\) has finite rank and its trace is invariant with respect to the choice of an orthonormal basis. Therefore the functions (7.1) are analytic with respect to \(t\) and \(\tau\) in a neighborhood of \((t_{0},0)\). **Corollary 7.2**.: _The functions \(\widehat{\mu}_{1}(t,\tau)\) and \(\widehat{\mu}_{2}(t,\tau)\) are Lipschitz continuous in a neighborhood of \((t_{0},0)\)._ Proof.: Consider the equation \[\mu^{2}-B(t,\tau)\mu+A(t,\tau)=0,\,\,\,A=\widehat{\mu}_{1}\widehat{\mu}_{2}, \,\,\,B=\widehat{\mu}_{1}+\widehat{\mu}_{2}.\] By Lemma 7.1 the coefficients \(A\) and \(B\) are analytic with respect to \((t,\tau)\) in a neighborhood of \((t_{0},0)\). Since \(\mu=\widehat{\mu}_{1}\) and \(\mu=\widehat{\mu}_{2}\) are roots of this equation, the result follows from [20], for example. ### Generalized spectral problem, \(\mu=0\) Here we study bounded solutions of the problem (3.1) with \(\mu=0\), i.e. \[\Delta w+\omega^{\prime}(\psi)w=0\,\,\,\text{in}\,\,\mathcal{D}_{\eta},\] \[\partial_{\nu}w-\rho w=0\,\,\,\text{on}\,\,\,\mathcal{S}_{\eta},\] \[w=0\,\,\,\text{for}\,\,y=0. \tag{7.2}\] According to Lemma 6.3 this problem in \(q\), \(p\) variables has the form \[A(\tau,t)w=0\;\;\mbox{in}\;Q\] \[N(\tau,t)w-w=0\;\;\mbox{for}\;p=1\] \[w=0\;\;\mbox{for}\;p=0.
\tag{7.3}\] where \[A(\tau,t)w=e^{-i\tau q}A(t)(e^{i\tau q}w),\;\;N(t,\tau)w=e^{-i\tau q}N(t)(e^{i \tau q}w)\] We are interested in such real \(\tau\) for which the problem (7.3) has non-trivial solutions when \(|\tau|\) and \(|t-t_{0}|\) are small. **Proposition 7.3**.: _Let \(t_{0}>0\) be the first value of \(t\) when \(\mu_{1}(t_{0})=0\) and \(\mu_{1}(t)<0\) for \(t\in(t_{0},t_{0}+\varepsilon)\), where \(\varepsilon\) is a small positive number. Then there exist a small positive \(\delta\) such that all \(\tau\) eigenvalues of (7.3) subject to \(|t-t_{0}|<\delta\) and \(|\tau|<\delta\) are described by_ \[\widehat{\tau}_{k}(s)=\sum_{j=0}^{\infty}a_{kj}s^{j/n_{k}},\;\;k=1,\ldots L, \tag{7.4}\] _where \(s=t-t_{0}\). This formula gives all possible small \(\tau\)-eigenvalues. Here \(L\) is the algebraic multiplicity of the \(\tau\) eigenvalue \(\tau=0\) when \(t=t_{0}\)._ Proof.: We can reduce (7.3) to a standard eigenvalue problem by introducing \[v=\frac{(w_{q}+i\tau w)}{h_{p}}-\frac{h_{q}w_{p}}{h_{p}^{2}}.\] Then \[Aw=\Big{(}\frac{\lambda^{2}h_{q}}{h_{p}}v-\frac{w_{p}}{h_{p}^{2}}\Big{)}_{p}- \lambda^{2}(i\tau+\partial_{q})v\] and \[Nw=\frac{\lambda^{2}h_{q}}{h_{p}}v-\frac{w_{p}}{h_{p}^{2}}.\] The problem (7.3) can be written as \[i\tau w=\frac{h_{q}w_{p}}{h_{p}}-h_{p}v-w_{q}\] \[i\tau v=\lambda^{-2}\Big{(}\frac{\lambda^{2}h_{q}}{h_{p}}v-\frac{w_{p}}{h_{p} ^{2}}\Big{)}_{p}-\partial_{q}v \tag{7.5}\] with boundary conditions \[\frac{\lambda^{2}h_{q}}{h_{p}}v-\frac{w_{p}}{h_{p}^{2}}=0\;\; \mbox{for}\;p=1\] \[w=0\;\;\mbox{for}\;p=0. \tag{7.6}\] We introduce the operator \[\mathcal{A}\left(\begin{array}{c}w\\ u\end{array}\right)=\left(\begin{array}{cc}\frac{h_{q}w_{p}}{h_{p}}-w_{q}&- h_{p}v\\ -\lambda^{-2}\Big{(}\frac{w_{p}}{h_{p}^{2}}\Big{)}_{p}-\partial_{q}v&\Big{(} \frac{h_{q}}{h_{p}}v\Big{)}_{p}-\partial_{q}v\end{array}\right)\] Then \[\mathcal{A}=\mathcal{A}(t):C^{2,\gamma}\times C^{1,\gamma}\to C^{1,\gamma} \times C^{0,\gamma}\] We put \[\mathcal{D}=\{(w,v)\in C^{2,\gamma}\times C^{1,\gamma}\,:\,\mbox{(\ref{eq:1}) is satisfied}\}.\] Then the operator \[\mathcal{A}:\mathcal{D}\to C^{1,\gamma}\times C^{0,\gamma}\] is Fredholm with zero index and its spectrum consists of isolated \(\tau\)-eigenvalues of finite algebraic multiplicity. Denote by \(X_{0}\) the kernel of this operator for \(t=t_{0}\) and by \(P(t)\) the spectral projector corresponding to the eigenvalues of \(\mathcal{A}(t)\) located near \(0\). Let also \(N_{0}=\dim X_{0}\). Then \(P(t)\) is the spectral projector of rank \(N_{0}\) depending analytically on \(t\) in a small neighborhood of \(t_{0}\). Since the characteristic equation for the operator \(P(t)\mathcal{A}(t)\) is well defined and is invariant with respect of the choice of the basis, the coefficient of the characteristic equation analytically depends on \(t\) in a neighborhood of \(t_{0}\). Therefore the its coefficients analytically depend on \(t\) also. Therefore we obtain a polynomial equation for \(\tau\)-eigenvalues located in a small neighborhood of \(0\) with coefficients analytically depending on \(t\). According to [25] such equation has roots (7.4) which give all possible small \(\tau\)-eigenvalues. Assume that \(t_{0}>0\) is the first Stokes bifurcation point, which means \[\mu_{1}(t_{0})=0,\ \ \mu_{1}(t)>0\ \ \text{for}\ t\in(0,t_{0})\ \text{and}\ \ \mu_{1}(t)<0\ \ \text{for}\ t\in(t_{0},t_{0}+\epsilon), \tag{7.7}\] where \(\epsilon\) is a small positive number. 
Since \(\widehat{\mu}_{0}\leq\nu_{0}^{*0}\) according to (3.19), we have \[\widehat{\mu}_{0}(t,\tau)<0\ \ \text{for all}\ t\geq 0,\,\tau\in\mathbb{R} \tag{7.8}\] by (2.15). Using (3.22) and (3.5), we get \[\widehat{\mu}_{3}(t,\tau)>0\ \ \text{for all}\ t\geq 0,\,\tau\in\mathbb{R}, \tag{7.9}\] provided \(\mu_{2}(t)>0\) for \(t\in(t_{0},t_{0}+\epsilon)\). Furthermore, by (3.21) and Proposition 2.3 \[\widehat{\mu}_{1}(t,\tau)<0\ \ \text{for}\ t\in(0,t_{0}+\epsilon)\ \text{and}\ \tau\in(0,\tau_{*}), \tag{7.10}\] where \(\epsilon\) is chosen to satisfy \(\nu_{1}^{*0}(t)>0\) for \(t\in(t_{0},t_{0}+\epsilon)\). Since \(\nu_{1}^{*0}(t)>\mu_{1}(t)\), the inequality (7.10) is always valid for \(t\leq t_{0}\) and its validity for \(t\in(t_{0},t_{0}+\epsilon)\) requires a restriction on \(\epsilon\). Therefore all small \(\tau\) eigenvalues of the problem (7.3) in the interval \((0,\tau_{*})\) for \(t\in(t_{0},t_{0}+\epsilon)\) are given by the equation \(\widehat{\mu}_{2}(t,\tau)=0\), where \(t\in(t_{0},t_{0}+\epsilon)\). Here we have used inequality (3.21), which implies \[\widehat{\mu}_{2}(t,\tau)>0\ \ \text{for}\ t\leq t_{0}\ \text{and}\ \tau\in(0,\tau_{*}). \tag{7.11}\] Since \(\tau\) eigenvalues are roots of the equation \(\widehat{\mu}_{2}(t,\tau)=0\), there are roots among (7.4) with real coefficients. Moreover if \(\tau(t)\) is a real root then \(-\tau(t)\) is also a real root. We numerate all positive real roots by \(\widehat{\tau}_{k}(t)\), \(k=1,\dots,m\), \[0<\widehat{\tau}_{1}(t)<\dots<\widehat{\tau}_{m}(t),\ \ \text{for}\ t\in(t_{0},t_{0}+ \epsilon). \tag{7.12}\] We note that since all roots in (7.12) satisfies \(\widehat{\mu}_{2}(t,\tau_{j}(t))=0\), \(j=1,\dots,m\), they are different and we have strong inequalities in (7.12). Since the leading term of functions \(\widehat{\tau}_{j}\) is positive the sign of the derivativ is positive also in a small neigborhood of \(t_{0}\). So we will assume that \(\varepsilon\) is chosen to satisfy \[\partial_{t}\widehat{\tau}_{j}(t)>0\ \ \text{for}\ t\in(t_{0},t_{0}+\varepsilon). \tag{7.13}\] **Lemma 7.4**.: _Let (7.7) hold. For each \(j=1,\dots,m\) there exists an integer \(n>0\) and \(\epsilon_{1}>0\) such that_ \[\partial_{t}^{k}\widehat{\mu}_{2}(t,\tau)=0\ \ \text{for}\ \tau=\widehat{\tau}_{j}(t) \ \text{and for}\ t\in(t_{0},t_{0}+\epsilon_{1}) \tag{7.14}\] _for \(k<n\) and_ \[\partial_{t}^{n}\widehat{\mu}_{2}(t,\tau)\neq 0\ \ \text{for}\ \tau=\widehat{\tau}_{j}(t) \ \text{and for}\ t\in(t_{0},t_{0}+\epsilon_{1}). \tag{7.15}\] Proof.: We note that by (7.8) the function \(\widehat{\mu}_{2}(t,\tau)\) is analytic with respect to \(\tau\) near a root \(\tau=\widehat{\tau}_{j}\). Let \(n=1,2,\dots\). If the equation \[\partial_{\tau}^{n}\widehat{\mu}_{2}(t,\widehat{\tau}_{j}(t))=0 \tag{7.16}\] has infinitely many roots in the interval \((t_{0},t_{0}+\epsilon)\) then the equation (7.16) is valid for all \(t\in(t_{0},t_{0}+\epsilon)\). If (7.16) holds for a certain \(t\in(t_{0},t_{0}+\epsilon)\) and all \(n\) then due to analyticity of \(\widehat{\mu}_{t}(t,\tau)\) with respect to \(\tau\) we get that \(\widehat{\mu}_{2}(t,\tau)=0\) for all small \(\tau\) in a neighborhood of \(\widehat{\tau}_{j}(t)\), but this is not true by [23]. Let us choose the smallest \(n\) such that \(\partial_{t}^{n}\widehat{\mu}_{2}(t,\tau)|_{\tau=\widehat{\tau}_{j}(t)}\neq 0\) for certain \(t\). Then we can choose \(\epsilon_{1}\) for which \[\partial_{\tau}^{n}\widehat{\mu}_{2}(t,\tau)|_{\tau=\widehat{\tau}_{j}(t)}\neq 0 \ \ \text{for}\ t\in(t_{0},t_{0}+\epsilon_{1}). 
\tag{7.17}\] Differentiating \(\widehat{\mu}_{2}(t,\widehat{\tau}_{j}(t))=0\) \(n\) times and using (7.13), we arrive at (7.15). ## 8. Local subharmonic bifurcation theorem In this section we assume that \(\mu_{1}(t_{0})=0\) for certain \(t_{0}>0\) and (7.7) holds. Let \[\widehat{a}_{1}=\widehat{a}_{1}(t_{0},\epsilon)=\min_{\overline{Q}\times[t_{0}- \epsilon,t_{0}+\epsilon]}h_{p}(p,q;t),\ \ \widehat{a}_{2}=\widehat{a}_{2}(t_{0},\epsilon)=\sup_{[t_{0}-\epsilon,t_{0}+ \epsilon]}||h(\cdot,\cdot;t)||_{C^{2,\alpha}(Q)}.\] Denote \[\mathcal{A}_{M}=\{g\in C^{2,\alpha}_{p,M\Lambda_{0}}(\mathbb{R})\,:\,||g||_{C ^{2,\alpha}(\mathbb{R})}\leq\delta=\delta(\widehat{a}_{1},\widehat{a}_{2})\},\] where \(M\) is a positive integer. Consider the operator \(\mathcal{S}_{M}\) defined on the set \(\mathcal{A}_{M}\) for each \(t\in(t_{0}-\epsilon,t_{0}+\epsilon)\), i.e. \[\mathcal{S}_{M}\,:\,\mathcal{A}_{M}\times(t_{0}-\epsilon,t_{0}+\epsilon) \to C^{1,\alpha}_{p,M\Lambda_{0}}(\mathbb{R}).\] Here we consider the equation (6.24) for subharmonic bifurcations. This operator satisfies \[\mathcal{S}_{M}\in C(\mathcal{A}_{M}\times(t_{0}-\epsilon,t_{0}+\epsilon),C^ {1,\alpha}_{p,M\Lambda_{0}}(\mathbb{R}))\] and \[S_{M}\in C(\mathcal{A}_{M}\times(t_{0}-\epsilon,t_{0}+\epsilon),L(C^{2,\alpha }_{p,M\Lambda_{0}}(\mathbb{R}),C^{1,\alpha}_{p,M\Lambda_{0}}(\mathbb{R}))).\] **Theorem 8.1**.: _Let \(t_{0}>0\) satisfy (7.7). Then there exists an integer \(M_{0}\) such that for every \(M>M_{0}\) there exists \(\varepsilon_{M}>0\) such that the branch (5.26) has an \(M\)-subharmonic bifurcation at \(t=t_{0}+\varepsilon_{M}\). Moreover the crossing number at this bifurcation point is \(1\) and \(\varepsilon_{M}\to 0\) as \(M\to\infty\). The closure of the set of nontrivial solutions of_ \[\mathcal{S}_{M}(g;t)=0\] _near \((0,t_{0}+\varepsilon_{M})\) contains a connected component to which \((0,t_{0}+\varepsilon_{M})\) belongs._ Proof.: We use the bifurcation equation (6.24) in a neighborhood of \(t=t_{0}\). First we observe that \(t_{0}\) is a Stokes bifurcation point. We choose an integer \(M\). In order to apply Theorem II.4.4 from [8], it is sufficient to verify the following spectral properties of the Frechet derivative \(S_{M}(t)\): (i) find \(t_{M}>t_{0}\) close to \(t_{0}\) such that the kernel of the operator \(S_{M}(t_{M})\) is non-trivial; (ii) there exists a positive \(\varepsilon\) such that the kernel of \(S_{M}(t)\) is trivial for \(t\in(t_{M}-\varepsilon,t_{M})\cup(t_{M},t_{M}+\varepsilon)\). Furthermore \[n_{M}(t_{2})-n_{M}(t_{1})=1\ \ \text{for certain}\ t_{2}\in(t_{M},t_{M}+ \varepsilon)\ \text{and}\ t_{1}\in(t_{M}-\varepsilon,t_{M}).\] Due to Proposition 6.4 it is sufficient to verify properties (i) and (ii) for the branch of spectral problems (6.23) (or (6.20)) defined on \(M\Lambda_{0}\) (\(M\Lambda(t)\)) periodic functions with eigenvalues \(\mu^{(M)}\). We choose a sufficiently large integer \(M_{0}\) and a small \(\delta_{0}>0\), and suppose that \(M>M_{0}\) and \(t_{0}<t<t_{0}+\delta_{0}\). The size of \(M_{0}\) and \(\delta_{0}\) will be clarified below. We may assume that \(\delta_{0}\leq\epsilon\), which guarantees the validity of (7.8)-(7.10). The inequalities (7.8) and (7.9) show that the only \(\tau\) eigenvalues which may contribute to the kernel of the operator \(S_{M}(t)\) come from the equations \(\widehat{\mu}_{1}(t,\tau)=0\) or \(\widehat{\mu}_{2}(t,\tau)=0\).
By (7.10) the equation \(\widehat{\mu}_{1}(t,\tau)=0\) has only the root \(\tau=0\) on the interval \([0,\tau_{*})\), with the corresponding eigenfunction \(\psi_{x}\). Since the function \(\psi_{x}\) is odd, this equation does not contribute to the kernel either. Let us consider the equation \(\widehat{\mu}_{2}(t,\tau)=0\). We assume that \(\delta_{0}\) is chosen such that the eigenvalue \(\mu=0\) of the problem (3.2) for \(\tau=0\) and \(t\in(t_{0},t_{0}+\delta_{0})\) is simple (we recall that this problem is considered on \(\Lambda\)-periodic solutions and it is not a spectral point for \(t\in(t_{0},t_{0}+\delta_{0})\)). By Lemma 3.6, \[\widehat{\mu}_{2}(t,\tau)<0\ \ \text{for all}\ t\in(t_{0},t_{0}+\delta_{0})\ \text{and small}\ |\tau|. \tag{8.1}\] Applying (3.21) to the eigenvalue \(\widehat{\mu}_{2}(t_{0},\tau)\) we get \(\widehat{\mu}_{2}(t_{0},\tau)>0\) for \(\tau\in(0,\tau_{*})\). Using Lemma 3.4, we may assume that \(\delta_{0}\) is chosen to guarantee \[\widehat{\mu}_{2}(t,\tau)>0\ \ \text{for}\ t\in(t_{0},t_{0}+\delta_{0})\ \text{and}\ \tau\in[\delta_{1},\tau_{*}/2], \tag{8.2}\] where \(\delta_{1}\) is a small positive number. Second, consider the sequence of functions (7.12) \[\widehat{\tau}_{1}(t)<\widehat{\tau}_{2}(t)<\cdots<\widehat{\tau}_{m}(t).\] By Lemma 7.4 some of them satisfy (7.15) with the inequality \(>0\) and some of them with the inequality \(<0\). Let \(n_{*}\) be the largest index among \(1,\ldots,m\) for which the inequality \(<0\) is valid in (7.15); hence for indexes \(j=n_{*}+1,\ldots,m\) the inequality \(>0\) holds in (7.15). Such an index exists, since otherwise \(\widehat{\mu}_{2}(t,\widehat{\tau}_{j})\geq 0\) for \(t\in(t_{0},t_{0}+\epsilon)\), which contradicts (8.1) and (8.2). We choose \(t_{M}=t_{0}+\varepsilon_{M}\) satisfying the relation \[\widehat{\tau}_{n_{*}}(t_{M})=\tau_{*}/M. \tag{8.3}\] Our requirement on \(M_{0}\) is that the equation (8.3) is solvable for all \(M>M_{0}\).

(i) Let us check (i). The kernel of the operator \(A(t)\) consists of functions (4.1) with \(\tau_{j}\) satisfying \[\widehat{\tau}_{k}(t_{M})=\tau_{j}:=j\tau_{*}/M\;\;\mbox{for certain $k=n_{*}+1,\ldots,m$.} \tag{8.4}\] If we take \(t\neq t_{M}\) in a small neighborhood of \(t_{M}\) then the kernel will be trivial, since all functions \(\widehat{\tau}_{j}\) are strictly increasing.

(ii) Since the functions \(\widehat{\tau}_{k}\), \(k=n_{*}+1,\ldots,m\), satisfy (7.15) with the inequality \(>0\), they do not contribute to the change of the Morse index at \(t_{M}\). Since the function \(\widehat{\tau}_{n_{*}}\) satisfies (7.15) with \(<0\) instead of \(\neq 0\), the eigenvalue \(\widehat{\mu}_{2}(t_{M},\tau)\) changes sign at the point \(\tau=\widehat{\tau}_{n_{*}}(t_{M})\). Hence \(\widehat{\tau}_{n_{*}}\) contributes \(+1\) to the change of the Morse index at \(t_{M}\). So the crossing number at \(t_{M}\) is \(1\). This proves properties (i) and (ii) and the theorem.

## 9. Global subharmonic bifurcation theorems

Let \[X=C^{2,\alpha}_{0,M\Lambda,e}(Q),\;\;Y=Y_{1}\times Y_{2}=C^{0,\alpha}_{0,M\Lambda,e}(Q)\times C^{1,\alpha}_{M\Lambda}(\mathbb{R}).\] Introduce a subset in \(\mathbb{R}\times X\): \[\mathcal{O}_{\delta}=\{(t,w)\in\mathbb{R}\times X\,:\,(h+w)_{p}>\delta\;\mbox{in}\;\overline{Q},\delta<\lambda(t)\}.\] Then the functional \[\Gamma(w,t)=\left(\mathcal{F}(w,t),\mathcal{G}(w,t)\right)\,:\,\mathcal{O}_{\delta}\to Y=Y_{1}\times Y_{2},\] where \(\mathcal{F}\) and \(\mathcal{G}\) are defined by (6.24) and (6.26), is well defined and continuous.
The Frechet derivative at the point \(w\) has the form \[A\xi=A(w;t)\xi=\Big{(}\frac{\lambda^{2}(h+w)_{q}\xi_{q}}{(h+w)_{p}^{2}}-\frac{(1+\lambda^{2}(h+w)_{q}^{2})\xi_{p}}{(h+w)_{p}^{3}}\Big{)}_{p}-\lambda^{2}\Big{(}\frac{\xi_{q}}{(h+w)_{p}}-\frac{(h+w)_{q}\xi_{p}}{(h+w)_{p}^{2}}\Big{)}_{q} \tag{9.1}\] and \[\mathcal{N}\xi=\mathcal{N}(w;t)\xi=(N\xi-\xi)|_{p=1}, \tag{9.2}\] where \[N\xi=N(w;t)\xi=\Big{(}-\frac{\lambda^{2}(h+w)_{q}\xi_{q}}{(h+w)_{p}^{2}}+\frac{(1+\lambda^{2}(h+w)_{q}^{2})\xi_{p}}{(h+w)_{p}^{3}}\Big{)}\Big{|}_{p=1}. \tag{9.3}\] The eigenvalue problem for the Frechet derivative, which is important for the analysis of bifurcations of the problem (6.3), is the following \[A(w;t)\xi=\mu\xi\;\;\mbox{in}\;Q,\] \[\mathcal{N}(w;t)\xi=0\;\;\mbox{for}\;p=1,\] \[\xi=0\;\;\mbox{for}\;p=0. \tag{9.4}\] Introduce some sets: \[\mathcal{S}_{\delta}=\mbox{closure in $\mathbb{R}\times X$ of }\;\{(t,w)\in\mathcal{O}_{\delta}\,:\,\Gamma(w,t)=0,\;w\;\mbox{ is not identically zero}\}.\] Let also \(\mathcal{C}^{\prime}_{\delta}\) be the connected component of \(\mathcal{S}_{\delta}\) containing the point \((t_{M},0)\) together with the connected set from Theorem 8.1. The following theorem is an analog of Theorem 4.2 in [5].

**Theorem 9.1**.: _Let \(\delta>0\). Then either_ (i)_\(\mathcal{C}^{\prime}_{\delta}\) is unbounded in \(\mathbb{R}\times X\), or_ (ii)_\(\mathcal{C}^{\prime}_{\delta}\) contains another trivial point \((t_{1},0)\) with \(t_{1}\neq t_{M}\), or_ (iii)_\(\mathcal{C}^{\prime}_{\delta}\) contains a point \((t,w)\in\partial\mathcal{O}_{\delta}\)._

In [15] the estimate \(\Lambda(t)\geq c_{0}>0\) for all \(t\in\mathbb{R}\) is proved, where the constant \(c_{0}\) depends only on \(\omega\) and \(r\). This estimate implies \[\lambda(t)\leq\frac{\Lambda(0)}{c_{0}}.\]

### Proof of Theorem 9.1

The proof of the theorem is actually the same as that of Theorem 4.2 in [5]. We have \[\mathcal{F}=-\frac{1+\lambda^{2}(h_{q}+w_{q})^{2}}{(h_{p}+w_{p})^{3}}(h_{pp}+w_{pp})-\lambda^{2}\frac{h_{qq}+w_{qq}}{h_{p}+w_{p}}+2\lambda^{2}\frac{(h_{q}+w_{q})(h_{qp}+w_{qp})}{(h_{p}+w_{p})^{2}}\] \[=\frac{1}{(h_{p}+w_{p})^{3}}F(w,t),\] where \[F(w,t)=-(1+\lambda^{2}(h_{q}+w_{q})^{2})(h_{pp}+w_{pp})-\lambda^{2}(h_{p}+w_{p})^{2}(h_{qq}+w_{qq})+2\lambda^{2}(h_{q}+w_{q})(h_{p}+w_{p})(h_{qp}+w_{qp})\] and \[\mathcal{G}=\frac{1}{(h_{p}+w_{p})^{2}}G(w,t),\] where \[G(w,t)=1+\lambda^{2}(h_{q}+w_{q})^{2}+2(w+h-R)(h_{p}+w_{p})^{2}.\] Since the functions (5.26) solve (5.20) we have \[F(0,t)=0,\ \ G(0,t)=0\ \ \mbox{for all}\ t.\] Therefore the system \(\Gamma(w,t)=0\) has the same solutions as the system \((F,G)=0\) provided \(h_{p}+w_{p}>0\) inside \(\overline{Q}\). Since the arguments in the proof of Theorem 4.2 in [5] were based only on the periodicity of the functions and the ellipticity of the problem \((F,G)=0\), the proof given in [5] can be used with small changes to prove Theorem 9.1.

### Main Theorem on global subharmonic bifurcations

Let \[\mathcal{C}^{\prime}=\bigcup_{\delta}\mathcal{C}^{\prime}_{\delta}.\] The sets \(\mathcal{C}^{\prime}_{\delta}\) increase as \(\delta\) decreases. Let \[\widehat{\psi}(X,Y;t),\ \widehat{\xi}(X;t),\ \Lambda(t) \tag{9.5}\] be the elements from \(\mathcal{C}^{\prime}\) in \((X,\,Y)\)-coordinates. We will denote the set of such functions also by \(\mathcal{C}^{\prime}\).
Let \[\widehat{\Psi}(X,Y;t)=\Psi(X,Y;t)+\widehat{\psi}(X,Y;t),\ \ \Xi(X;t)=\xi(X;t)+\widehat{\xi}(X;t). \tag{9.6}\] The main result of this section is the following.

**Theorem 9.2**.: _The following alternatives are valid for the set (9.5):_ (i)_\(\sup_{\mathcal{C}^{\prime}}\sup_{X\in\mathbb{R}}|\Xi^{\prime}(X)|=\infty\), or_ (ii)_\(\inf_{\mathcal{C}^{\prime}}\min_{X\in\mathbb{R}}(R-\Xi(X))=0\), or_ (iii)_\(\inf_{\mathcal{C}^{\prime}}\inf_{X\in\mathbb{R}}\widehat{\Psi}_{Y}(X,0)=0\), or_ (iv) _there exists \(\delta>0\) such that_ (ii) from Theorem 9.1 _is valid._

### Proof of Theorem 9.2

The proof will use the following assertion, which is proved in [15], Proposition 3.2.

**Proposition 9.3**.: _Assume that \(\omega\in C^{1,\alpha}([0,1])\). Let \(\delta>0\) be given as well as a ball \(B\) of radius \(\rho>0\), and let \(M=\sup_{(X,\xi(X))\in B}|\xi^{\prime}(X)|\). Then there exist constants \(\widehat{\alpha}\in(0,1)\) and \(C>0\), depending only on \(R\), \(\delta\), \(\rho\) and \(M\), such that any solution \((\Psi,\xi)\in C^{2,\alpha}(\overline{D_{\xi}})\times C^{2,\alpha}(\mathbb{R})\) of (1.2) with \(\inf_{B}\Psi_{Y}\geq\delta\) satisfies \(||\Psi||_{C^{3,\widehat{\alpha}}(D_{\xi}\cap\frac{1}{2}B)}\leq C\), where \(\frac{1}{2}B\) is a ball with the same centre and radius \(\frac{1}{2}\rho\)._

The next lemma contains the main step of the proof.

**Lemma 9.4**.: _Let \(\widehat{\Psi}_{Y}(X,Y;t)\geq 0\) in \(\mathcal{D}_{\Xi}\) and let there exist positive constants \(C_{1}\), \(C_{2}\), \(\delta_{1}\) and \(\widehat{\delta}_{1}\) such that_ \[|\Xi^{\prime}(X;t)|\leq C_{1},\ \ \Lambda(t)\leq C_{2},\ \ R-\Xi(X;t)\geq\delta_{1}\ \ \mbox{and}\ \ \widehat{\Psi}_{Y}(X,0;t)\geq\widehat{\delta}_{1} \tag{9.7}\] _for functions from \(\mathcal{C}^{\prime}\). Then there exists \(\delta>0\) such that_ \[\widehat{\Psi}_{Y}(X,Y;t)\geq\delta\,\,\,\mbox{for}\ (X,Y)\in\mathcal{D}_{\Xi}. \tag{9.8}\]

Proof.: The proof consists of several steps.

1. (The first estimate of \(\widehat{\Psi}_{Y}\).) Differentiating \(\widehat{\Psi}(X,\Xi(X;t);t)=1\) with respect to \(X\), we get \[\widehat{\Psi}_{X}+\widehat{\Psi}_{Y}\Xi^{\prime}=0.\] Hence \[|\widehat{\Psi}_{X}|\leq C_{1}|\widehat{\Psi}_{Y}|\ \,\,\mbox{on}\ \mathcal{S}_{\Xi}.\] From the Bernoulli equation, we get \[\delta_{1}\leq R-\Xi(X)\leq\frac{1}{2}(1+C_{1}^{2})\widehat{\Psi}_{Y}^{2}.\] Here the lower bound is the assumption (9.7), and the upper bound follows since on \(\mathcal{S}_{\Xi}\) the Bernoulli equation gives \(R-\Xi=\frac{1}{2}(\widehat{\Psi}_{X}^{2}+\widehat{\Psi}_{Y}^{2})\), which is estimated by the previous inequality. Therefore \[\widehat{\Psi}_{Y}^{2}\geq\delta_{2}^{2}:=\frac{2\delta_{1}}{(1+C_{1}^{2})}.\] To estimate the function \(\widehat{\Psi}_{Y}\) on the whole domain \(\mathcal{D}_{\Xi}\) we consider the function \[U=\widehat{\Psi}_{Y}-a\sinh(\beta Y),\] where \(a\) and \(\beta\) are positive constants. Then \[-\Delta U-\omega^{\prime}(\widehat{\Psi})U=a(\beta^{2}+\omega^{\prime})\sinh(\beta Y)\geq 0\ \,\,\mbox{if}\ \beta^{2}\geq\max_{p\in[0,1]}|\omega^{\prime}(p)|.\] Furthermore, \(U(X,0)=0\) and \[U(X,\Xi(X))\geq\delta_{2}-a\sinh(\beta d_{-}).\] We choose \(a\) to satisfy \[\delta_{2}-a\sinh(\beta d_{-})=0.\] By the strong maximum principle we get \(U\geq 0\) inside \(\mathcal{D}_{\Xi}\). Thus \[\widehat{\Psi}_{Y}(X,Y)\geq a\sinh(\beta Y)\ \,\,\mbox{in}\ \mathcal{D}_{\Xi}. \tag{9.9}\]

2. (The estimate of \(\widehat{\Psi}\).) Let \(d_{-}\) be defined by (5.11).
Then by Theorem 1.1 from [14], \[\Xi(X)\geq d_{-}\,\,\,\mbox{for all}\ X\in\mathbb{R}\ \mbox{and for all solutions to (1.2) from}\ \mathcal{C}^{\prime}.\] We split the domain \(\mathcal{D}_{\Xi}\) as follows \[\mathcal{D}_{\Xi}=\widehat{\mathcal{D}}_{\Xi}\bigcup\mathcal{Q}_{d_{-}},\] where \[\widehat{\mathcal{D}}_{\Xi}=\{(X,Y)\,:\,X\in\mathbb{R},\,d_{-}\leq Y<\Xi(X)\},\ \,\,\mathcal{Q}_{d_{-}}=\mathbb{R}\times(0,d_{-}).\] Using estimate (9.9) together with Proposition 9.3, we get \[||\widehat{\Psi}||_{C^{3,\widehat{\alpha}}(\overline{\widehat{\mathcal{D}}_{\Xi}})}\leq C. \tag{9.10}\] Since \(\widehat{\Psi}(\cdot,d_{-})\in C^{3,\widehat{\alpha}}(\mathbb{R})\) we have that \(\widehat{\Psi}\in C^{3,\widehat{\alpha}}(\overline{\mathcal{Q}_{d_{-}}})\).

3. (The estimate of \(\widehat{\Psi}_{Y}\) from below on \(\mathcal{D}_{\Xi}\).) Let us show that \(\widehat{\Psi}_{Y}\geq\delta_{3}\) in \(\mathcal{Q}_{d_{-}}\), where \(\delta_{3}\) is a positive constant. Indeed, consider all solutions of \[\Delta U+\omega^{\prime}(\widehat{\Psi})U=0\;\;\mbox{in}\;\;\mathcal{Q}_{d_{-}}\] satisfying (9.10), with \(U\geq 0\) inside \(\mathcal{Q}_{d_{-}}\) and \[U(X,d_{-})\geq\delta_{2}\;\;\mbox{and}\;\;U(X,0)\geq\widehat{\delta}_{1}. \tag{9.11}\] Since the functions from this set cannot vanish inside \(\mathcal{Q}_{d_{-}}\), all of them must be positive in \(\overline{\mathcal{Q}}_{d_{-}}\), and by compactness of this family (which is bounded through (9.10)) they admit a uniform positive lower bound \(\delta_{3}\) in \(\mathcal{Q}_{d_{-}}\). Since the function \(\widehat{\Psi}_{Y}\) also belongs to this set, we get the inequality (9.8) on \(\mathcal{Q}_{d_{-}}\). This together with (9.9) gives the proof of (9.8) on \(\mathcal{D}_{\Xi}\).

**Proof of Theorem 9.2.** Assume that the alternatives (i)-(iii) from Theorem 9.2 are not valid. Then the bifurcating solutions satisfy (9.8) and (9.10). This implies boundedness of solutions and that they belong to \(\mathcal{C}^{\prime}_{\delta}\) for a certain positive \(\delta\). So we can apply Theorem 9.1, and it can be verified that only alternative (ii) in Theorem 9.1 can occur here. This proves the theorem.

**Remark 9.5**.: In [15] it was shown that under the condition \(R<d_{0}\), where \[d_{0}=d(s_{0})=\int_{0}^{1}\frac{d\tau}{\sqrt{s_{0}^{2}-2\Omega(\tau)}},\] the condition (iii) in Theorem 9.2 can be omitted.

**Acknowledgements.** The author was supported by the Swedish Research Council (VR), 2017-03837. The author also expresses his appreciation to an anonymous reviewer for many useful comments and corrections.

**Data availability statement** Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

## 10. Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.
2310.11578
Improving Interface Physics Understanding in High-Frequency Cryogenic Normal Conducting Cavities
As progress towards real implementations of cryogenic high gradient normal conducting accelerating cavities continues, a more mature understanding of the surface physics in this novel environment becomes increasingly necessary. To this end, we here focus on developing a deeper understanding of one cavity figure of merit, the radiofrequency (RF) surface resistivity, $R_s$. A combination of experimental measurements and theory development forms the basis of this work. For many cases, existing theory is sufficient, but there are nuances leading to systematic errors in prediction which we address here. In addition, for certain cases there exist unexpected local minima in $R_s$ found at temperatures above 0K. We compare here several alternative models for RF surface resistivity, including those which incorporate thin-film-like behavior, which we use to predict the location of the local minimum in surface resistivity. Our experimental results focus on C-band frequencies for the benefit of several future cryogenic linear accelerator concepts intended to operate in this regime. To this end we have measured factor of $2.89\pm 0.05$ improvements in quality factor at $77$K and $4.61\pm 0.05$ at 45K. We further describe the test setup and cooling capabilities to address systematic issues associated with the measurements, as well as a comparison of RF cavity preparation and its significant effect on $R_s$. Some implications of our measurements for linear accelerators, combined with the theoretical considerations, are extended to a wider range of frequencies, especially the two additional aforementioned bands. Additional possible implications for condensed matter physics studies are mentioned.
Gerard Lawler, Fabio Bosco, James Rosenzweig
2023-10-17T20:58:14Z
http://arxiv.org/abs/2310.11578v1
# Improving Interface Physics Understanding in High-Frequency Cryogenic Normal Conducting Cavities

###### Abstract

As progress towards real implementations of cryogenic high gradient normal conducting accelerating cavities continues, a more mature understanding of the surface physics in this novel environment becomes increasingly necessary. To this end, we here focus on developing a deeper understanding of one cavity figure of merit, the radiofrequency (RF) surface resistivity, \(R_{s}\). A combination of experimental measurements and theory development forms the basis of this work. For many cases, existing theory is sufficient, but there are nuances leading to systematic errors in prediction which we address here. In addition, for certain cases there exist unexpected local minima in \(R_{s}\) found at temperatures above 0K. We compare here several alternative models for RF surface resistivity, including those which incorporate thin-film-like behavior, which we use to predict the location of the local minimum in surface resistivity. Our experimental results focus on C-band frequencies for the benefit of several future cryogenic linear accelerator concepts intended to operate in this regime. To this end we have measured factor of \(2.89\pm 0.05\) improvements in quality factor at \(77\)K and \(4.61\pm 0.05\) at 45K. We further describe the test setup and cooling capabilities to address systematic issues associated with the measurements, as well as a comparison of RF cavity preparation and its significant effect on \(R_{s}\). Some implications of our measurements for linear accelerators, combined with the theoretical considerations, are extended to a wider range of frequencies, especially the two additional aforementioned bands. Additional possible implications for condensed matter physics studies are mentioned.

Normal conducting RF cavities C-band Future Accelerators

## 1 Introduction

An area of significant interest in particle accelerator research is the development of accelerating cavities which can support very high field gradients. Field gradients are usually limited by breakdown rates (BDR), so significant experimental progress has been made in the advancement of technologies which reduce BDR, thus allowing for higher field gradients [1], [2]. In particular, empirical studies have observed ultra-high peak fields in excess of 500 MV/m at 45K, which correspond to 250 MV/m accelerating fields [3]. Significant consideration has been given to X-band \(\left(8-12\text{ GHz}\right)\) and S-band \(\left(2-4\text{ GHz}\right)\) for a number of reasons [4, 5, 6, 7]. C-band \(\left(4-8\text{ GHz}\right)\), however, presents a number of advantageous features, especially from the standpoint of a practical, achievable linear accelerator based on high gradient normal conducting cryogenic cavities. Several future linear accelerator concepts have been proposed in this frequency range which utilize the high gradients made possible by cryogenic operation [8, 9, 10]. Theoretical progress regarding surface behavior, especially in the context of BDR, has been slow, largely owing to the complicated nature of the phenomena, which involve extreme environments on varying time and length scales [11]. This gap leaves our understanding in several ways insufficient for the needs of a growing field of research into high gradient cavities. Indeed, as progress towards cryogenic operation of normal conducting cavities continues, a more mature understanding of the surface physics is necessary.
This is especially true at cryogenic temperatures in an ultra-high gradient regime. Still, there is a significant body of work to which we can refer for our specific RF oscillating field case, especially work developed in the context of DC breakdown experiments and theory [12, 13]. The theory here is complex since it involves multiple length and time scales. To this end we want to develop improvements which are more modest in scope and simple to compute, but which still notably improve understanding, especially for experimental realization. Specifically, we here focus on one important figure of merit (FoM), the radiofrequency (RF) surface resistivity \(R_{s}\). We attempt to iteratively improve predictions and understanding of this single FoM, from which we can derive more accurate, experimentally relevant, near-term predictions. The relevance of \(R_{s}\) as a proxy for general cryogenic cavity performance is multidimensional, but can be explained primarily with reference to the RF pulse heating which oscillating field cavities experience per cycle. Through the effect on pulse heating, \(R_{s}\) then becomes useful in informing BDR behavior [14]. These numbers have been analytically computed in the past to calculate, for example, the temperature rise in room temperature cavities [15]. Extension to cryogenic temperatures then becomes possible as long as one has an understanding of low temperature \(R_{s}\). In the high temperature limit, computing \(R_{s}\) is as simple as using Maxwell's equations with appropriate boundary conditions to compute the real component of the complex impedance, as shown in Equ. 1. \[R_{s}\left(T\rightarrow\infty\right)=Re\left(Z_{s}\right)=\sqrt{\frac{2\pi f \mu_{0}\rho}{2}} \tag{1}\] From this expression we observe the implicit dependence on temperature via the bulk electrical resistivity \(\rho\) and, to a lesser extent, the eigenmode frequency \(f\). The simplest theory for the temperature dependence of \(\rho\) which we have comes via the work of Bloch and Gruneisen [16, 17] \[\rho\left(T\right)=A\left(\frac{T}{\Theta_{R}}\right)^{n}\int_{0}^{\Theta_{R}/T}\frac{t^{n}}{\left(e^{t}-1\right)\left(1-e^{-t}\right)}dt+C \tag{2}\] where \(A\) is a scaling constant, \(\Theta_{R}\) is the Debye temperature, and \(C\) is a constant left as a free parameter to adjust for the residual resistivity ratio (RRR), defined as the ratio of room temperature resistivity to that at 4K. The remaining parameter \(n\) is dimensionless and established by the dominant scattering mechanism at play for the situation. The value used in the past has been \(n=5\), which is true for ideal metals [17, 18]. We can compute this temperature dependent resistivity and plot it for a number of different RRR values. These are shown in Fig. 1. There are several features here to note, namely that for \(C=0\), the resistivity drops to 0 at 0K such that \(RRR\rightarrow\infty\). In the low temperature limit, one common theory used for the calculation of \(R_{s}\) is derived from the work of Reuter and Sondheimer [19] with further significant contribution from Chambers [20]. The theory was formulated to explain the systematic underestimate of \(R_{s}\) as the surface approaches 0K, a phenomenon termed the anomalous skin effect (ASE). The primary effect here is the differing functional dependence on temperature of the electron mean free path length (MFPL) and the RF skin depth, \(\delta\).
The MFPL grows much faster than \(\delta\) shrinks as the temperature decreases, such that at cryogenic temperatures the MFPL becomes large compared to \(\delta\); the effective conductivity within the skin layer is thereby reduced and \(R_{s}\) no longer follows the classical prediction. It is common to compute the asymptotic value of the surface resistivity at \(0\) K, as shown in Equ. 3. It is worth noting for future discussion that there is no explicit dependence on bulk properties of the metal in this theory. \[R_{s}\left(T\to 0\right)=Z_{0}\left[\frac{\sqrt{3}v_{f}}{16\pi c} \left(\frac{\omega}{\omega_{p}}\right)^{2}\right]^{\frac{1}{3}} \tag{3}\] A patching function is then used to interpolate between the two regimes, with dimensionless parameters \(a\) and \(b\) given by the relative proportion of diffusive to specular scattering of electrons in the surface material, as shown in Equ. 4. \[R_{s}\left(T\right)=R_{\infty}\left(1+a\alpha^{-b}\right)\qquad\text{for }\alpha\geq 3 \tag{4}\] The dimensionless parameters are of order unity, and different values are used by different analyses to reflect, once again, the relevant scattering behavior in the metal [21]. In the case of Reuter and Sondheimer's original formulation these values correspond to \(a=1.004\) and \(b=0.3333\), values which were later supported experimentally by Chambers. Of note then is that such predictions are useful but not sufficient for assessing \(R_{s}\) from first principles in the regime of practical interest for normal conducting cryogenic RF cavities. Intended temperature ranges for our normal conducting cavities are primarily in the range of \(35-77\) K, unfortunately placed directly in the interpolation regime. In addition, more advanced, specially designed _mushroom dome_ RF cavity measurements deviate from the aforementioned model in that there exists a local minimum in \(R_{s}\) around \(30\) K [3]. Furthermore, a subtle bulk material property dependence was observed [11]. Neither of these features is predicted at all within the existing framework of calculation. Observing that the complex nature of the theory of BDR would be difficult to incorporate into multiphysics simulations of the type found in CST or HFSS, we are thus justified in seeking simple improvements of \(R_{s}\) predictions. In the field of RF cavities, \(R_{s}\) is often measured via the unloaded quality factor \(Q_{0}\). \(R_{s}\) for these simple cavities is then derived from \(Q_{0}\) as an inverse quantity scaled by some geometric factor \(G\). More information on the systematics this introduces into our measurements and simulations will be covered in the next section, but here we note that it requires precise consideration of another bulk property in addition to the bulk resistivity, the coefficient of thermal expansion. There is then an additional implicit temperature dependence via the changing eigenmode frequency \(\omega\) at which we measure \(S11\) and the changing cavity coupling via the RF antenna. More on the temperature dependent coefficient of thermal expansion values will be explained in the methodology section.

## 2 Methodology

Our intention is thus to compute, with full transparency, the RSC (Reuter-Sondheimer-Chambers) predictions for a number of different scattering parameters and make adjustments along the way where necessary. We will further introduce more subtle thin film physics effects in RF cavities and propose simple alternative models. An experimental setup and measurement suite was designed for measuring \(R_{s}\) in C-band RF cavities in order to validate the models down to at least application relevant temperatures.
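As a point of reference for the modifications developed below, the baseline RSC-style calculation of Equs. 1-4 can be sketched numerically. The snippet below is a minimal illustration only: the copper constants (\(\Theta_{R}\), the \(\rho\ell\) product, the Fermi velocity and plasma frequency), the 5.712 GHz eigenmode frequency, and the identification \(\alpha=\tfrac{3}{2}(\ell/\delta)^{2}\) are assumptions introduced here for the sketch, not values fitted to the measurements reported in this paper.

```python
# Minimal numerical sketch of the baseline R_s(T) calculation (Equs. 1-4).
# All constants below are illustrative assumptions for copper.
import numpy as np
from scipy.integrate import quad

MU0, Z0, C = 4e-7 * np.pi, 376.73, 2.998e8   # SI units
F = 5.712e9           # assumed C-band eigenmode frequency [Hz]
THETA_R = 343.0       # assumed Debye-like temperature for Cu [K]
RHO_295 = 1.7e-8      # assumed Cu bulk resistivity at 295 K [ohm m]
RHO_ELL = 6.6e-16     # assumed rho * (mean free path) product for Cu [ohm m^2]
VF, WP = 1.57e6, 1.34e16   # assumed Cu Fermi velocity [m/s] and plasma frequency [rad/s]

def rho_bg(T, rrr, n=3):
    """Bloch-Gruneisen bulk resistivity (Equ. 2), scaled to RHO_295 at 295 K
    and to a residual value RHO_295/rrr as T -> 0."""
    def bg(t):
        integrand = lambda x: x**n / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
        return (t / THETA_R)**n * quad(integrand, 0.0, THETA_R / t)[0]
    c_res = RHO_295 / rrr
    return (RHO_295 - c_res) * bg(T) / bg(295.0) + c_res

def rs_rsc(T, rrr):
    """Classical R_s (Equ. 1) patched with the ASE limit (Equs. 3-4)."""
    rho = rho_bg(T, rrr)
    rs_classical = np.sqrt(2 * np.pi * F * MU0 * rho / 2.0)        # Equ. 1
    rs_inf = Z0 * (np.sqrt(3) * VF / (16 * np.pi * C)
                   * (2 * np.pi * F / WP)**2) ** (1.0 / 3.0)       # Equ. 3
    ell = RHO_ELL / rho                                            # mean free path
    delta = np.sqrt(2 * rho / (2 * np.pi * F * MU0))               # skin depth
    alpha = 1.5 * (ell / delta)**2   # assumed definition of the ASE parameter
    if alpha >= 3:
        return rs_inf * (1 + 1.004 * alpha**(-1.0 / 3.0))          # Equ. 4
    return rs_classical   # crude switch; a smooth interpolation is used in practice

for T in (295.0, 77.0, 45.0):
    print(f"T = {T:5.1f} K, R_s ~ {rs_rsc(T, rrr=500):.2e} ohm")
```

The printed values only indicate orders of magnitude for an assumed parameter set; the curves of Fig. 1 are the reference calculation.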
### Theory

Our first modification beyond the introduced theory of \(R_{s}\) comes from a nuance regarding the BG equation itself. We note that the value of \(n=5\) is, strictly speaking, only true in certain special circumstances [18]. For transition metals, like copper, a more accurate value is \(n=3\). The calculations with this improvement are shown in Fig. 2.

Figure 1: **(a)** Bloch-Gruneisen temperature dependent bulk resistivity from Equ. 2 with \(n=5\). **(b)** Surface resistivity for C-band cavity using Reuter-Sondheimer-Chambers theory to compute limits and using the patching formula.

The notable significance here is that the asymptotic high temperature and low temperature values are agnostic to this change, but the intermediary values decidedly are not. For a not unreasonable number of \(RRR=500\), the 77K \(R_{s}\) is increased by a factor of \(\approx 1.3\), and nearly \(2\) at \(45\)K.

#### 2.1.1 Thin Film Physics

To address the existence of the experimental local minimum at intermediate temperatures we want to explore the space of interface physics in more depth, and in doing so we indeed find that there are situations where \(R_{s}\) does not monotonically decrease with temperature. Indeed, in certain circumstances it increases with decreasing temperature within the intermediate range of temperatures with which we are concerned. This is called the Gurzhi effect and is well established in thin film physics, representing a novel regime where electrons in the material take on an effective viscosity [22, 23, 24]. The justification for considering one skin depth as an effective thin film with thickness \(\delta\) has been made before, and we reiterate it here [25]. To this end, two alternative models will be our focus here: the qualitative components of the viscous electron model developed by Gurzhi, and a simpler application of thin film resistivity developed by Fuchs & Sondheimer [26]. Both predict local minima in \(R_{s}\). Based on these improved models for surface resistivity we find a nonzero optimal temperature for normal conducting copper cavities as a function of RRR. As an application of Gurzhi theory, we first must consider some of the length scales involved in the computation of \(R_{s}\) for our C-band cavities. We plot these in Fig. 3. Specifically, we have the same asymptotic values as existing theory, with the exception of an intermediary regime in which the resistance increases as the temperature decreases. This regime corresponds to the situation where the effective mean free path length is approximately equivalent to the following expression. \[\ell_{eff}\sim\frac{\delta^{2}}{\ell_{ee}}\] That is to say, the effective electron mean free path becomes approximately equal to the square of the sample size divided by the electron-electron mean free path length. For our effective mean free path length we can use NIST polylog fit curves published from measured data, which give the temperature dependence of the thermal conductivity. We can then relate this to \(\ell_{eff}\), with which it should scale linearly. We further consider Matthiessen's rule such that we can say the following [27] \[\ell_{eff}=\left(\frac{1}{\ell_{phonon}}+\frac{1}{\ell_{impurities}}+\cdots\right)^{-1}\] So, for the case where \(\ell_{ee}\) is largely a material dependent property and thus independent of temperature, we can compute the regime where this behavior should occur: for the case of \(RRR450\), if \(\ell_{ee}\approx 40\) nm [28] we would have the Gurzhi effect regime kicking in around \(35-40\)K.
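The length-scale comparison behind this estimate can be illustrated with a rough numerical sketch. The resistivity values, the \(\rho\ell\) product and \(\ell_{ee}=40\) nm below are assumed, textbook-style numbers for high-purity copper inserted purely for illustration; the NIST-based curves of Fig. 3 should be used for any quantitative statement about where the inequality actually flips.

```python
# Rough illustration of the length scales in the Gurzhi-regime estimate:
# the regime is expected roughly where l_eff(T) exceeds delta(T)^2 / l_ee.
# The resistivity table and constants are illustrative assumptions only.
import numpy as np

MU0 = 4e-7 * np.pi
F = 5.712e9            # assumed C-band frequency [Hz]
RHO_ELL = 6.6e-16      # assumed rho * (mean free path) product for Cu [ohm m^2]
L_EE = 40e-9           # assumed electron-electron mean free path [m], cf. [28]

# very rough resistivity values for high-purity (RRR ~ 450) copper [ohm m]
rho_of_T = {77: 2.0e-9, 60: 1.0e-9, 45: 4.0e-10, 35: 1.5e-10, 20: 5.0e-11}

print(" T [K]   l_eff [um]   delta [um]   delta^2/l_ee [um]")
for T, rho in sorted(rho_of_T.items(), reverse=True):
    l_eff = RHO_ELL / rho                              # effective mean free path
    delta = np.sqrt(2 * rho / (2 * np.pi * F * MU0))   # RF skin depth
    print(f"{T:5d}   {l_eff*1e6:10.3f}   {delta*1e6:10.3f}   {delta**2/L_EE*1e6:12.3f}")
# The Gurzhi-type regime is expected below the temperature at which the second
# column overtakes the fourth; with these crude inputs that happens in the
# few-tens-of-kelvin range, to be compared with the 35-40 K estimate above.
```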
In addition, for a low purity material like \(RRR050\), we would not expect the regime to be present at all. We can see this in Fig. 3.

Figure 2: Bloch-Gruneisen temperature dependent bulk resistivity with \(n=3\), with the exception of \(C_{BG}=0\) and a RRR500 curve which still uses \(n=5\) for comparison. The feature to note here is the significant deviation at intermediate values.

Our intention thus is to cool to below this value for our existing pillbox structures to determine this negative slope \(dR_{s}/dT\) in experimental data. A fully analytic application of Gurzhi theory is too involved for the time being, pointing to the need for a more computationally tractable theory. For simplicity and the sake of computational feasibility, we therefore also propose a model in which the RF skin depth is treated as a thin film sample with thickness equal to the skin depth, and we calculate directly the resistivity of that thin film. We refer here to the formulation of the Fuchs and Sondheimer theory to compute a thin film resistance [26], with constants of proportionality composed of elementary constants and material properties not explicitly dependent on temperature. So we have the following, where the value \(p\) gives the proportion of specular to diffusive scattering at the boundary: \[\frac{\rho_{film}}{\rho_{bulk}}\approx\left[1.0-\frac{C_{0}}{\rho^{3/2}}\left(1-p\right)\right]^{-1}\] We again make the assumption that the impurity mean free path length dominates at low temperature and is independent of temperature. For this model we assume the case of ideal metals, not transition metals, so we will limit our calculations in these cases to \(n=5\). Also, since \(p\) is a free parameter here, we can make a semi-empirical model where it is scaled to previous data. This can then be said to be a near total specular scattering case, such that \(1-p\) is vanishingly small. We can then use this thin film resistivity in place of the bulk resistivity in our existing formalism for computing \(R_{s}\). The resulting surface resistivity as a function of temperature is shown in Fig. 7.
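A compact numerical sketch of this thin-film toy model is given below. The identification of \(C_{0}\) with the thick-film Fuchs-Sondheimer expansion, the bulk resistivity table, and the value of \(1-p\) are all assumptions chosen only to illustrate how a local minimum in \(R_{s}\) can emerge; the existence and location of the minimum depend entirely on these choices and on the semi-empirical scaling described above, so the output is not a prediction matching Fig. 7.

```python
# Toy Fuchs-Sondheimer-style film correction applied to the skin layer.
# All numbers here are illustrative assumptions; the minimum found below is a
# qualitative demonstration only.
import numpy as np

MU0 = 4e-7 * np.pi
F = 5.712e9                # assumed C-band frequency [Hz]
RHO_ELL = 6.6e-16          # assumed rho * (mean free path) product for Cu [ohm m^2]
ONE_MINUS_P = 5e-3         # assumed (arbitrary) diffuse-scattering fraction

# C0 chosen so that C0*(1-p)/rho^(3/2) ~ (3/8)*(1-p)*l/delta  (an assumption)
C0 = 0.375 * RHO_ELL * np.sqrt(2 * np.pi * F * MU0 / 2.0)

# rough bulk resistivity table for high-purity copper [ohm m] (assumed values)
rho_bulk = {10: 4.0e-11, 20: 5.0e-11, 25: 6.0e-11, 35: 1.5e-10,
            45: 4.0e-10, 60: 1.0e-9, 77: 2.0e-9}

def rs_film(rho):
    x = C0 * ONE_MINUS_P / rho**1.5
    if x >= 1.0:                 # resummed form breaks down; skip the point
        return np.inf
    rho_f = rho / (1.0 - x)      # film resistivity from the expression above
    return np.sqrt(2 * np.pi * F * MU0 * rho_f / 2.0)   # classical R_s with rho_f

results = {T: rs_film(rho) for T, rho in rho_bulk.items()}
for T in sorted(results):
    print(f"T = {T:3d} K, R_s ~ {results[T]:.2e} ohm")
print("local minimum (with these assumed inputs) near", min(results, key=results.get), "K")
```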
### Experimental Methods

As mentioned in the introduction, experimentally the easiest method to measure \(R_{s}\) in RF cavities is via the unloaded quality factor, \(Q_{0}\). In order to do this we insert microwave antennas into the cavities and measure the reflection coefficient called \(S11\) produced from a low level input signal. \(Q_{0}\) can then be computed from the shorted position as the central eigenmode frequency divided by the \(f3db\) bandwidth, both of which are measured on a vector network analyzer (VNA). We are then left with the need to calculate the geometric factor \(G\), which relates \(Q_{0}\) to \(R_{s}\) in the following manner. \[R_{s}=\frac{G}{Q_{0}} \tag{5}\] The resonant cavities that we use are as close to an ideal cylindrical resonant cavity as possible, a geometry commonly called a pillbox. Such a cavity is ideal since it has analytic solutions for the eigenmodes in terms of cylindrical Bessel functions, from which we can quickly compute analytic curves based on direct material properties. These offer useful checks of measured data during the experiment, in addition to the full multiphysics simulations which we later perform in CST. The idealized Bessel function fields are depicted in Fig. 4.

Figure 3: Length scales involved in theoretical computations of \(R_{s}\) relevant to our C-band cavity dimensions.

The RF geometry of the cavities is machined as close to the idealization as possible. There are of course perturbations from the placement of four features in particular: an RF antenna, a vacuum port, a fillet for machining considerations, and the possibility of excess braze material in the edge where the cavity cap is added. These features are replicated in the CST multiphysics model for simulation. This consideration is especially notable due to our attempt to remove as many systematics as possible from the measurements, since high precision is needed. There is an additional subtlety associated with \(G\): since it has an implicit dependence on temperature via the coefficient of thermal expansion, we can consider it separately in simulation. We can analytically solve for \(G\) for the case of the ideal pillbox cavity to obtain the following expression relating \(Q_{0}\) and \(R_{s}\) \[R_{s}=\frac{\chi_{01}\eta}{2}\frac{1}{1+a/h}\frac{1}{Q_{0}} \tag{6}\] where \(a\) and \(h\) are the radius and height of the cylinder. Thankfully, for an isotropic material the effect of contraction cancels out and \(G\) remains temperature independent. For completeness we simulate the situation with the real geometry, with the expectation that the additional effect of antenna cooling and contraction should be of vanishing significance, if measurable at all. With respect to the cryogenics, we have commissioned at UCLA multiple new cryocoolers for novel RF testing, including a single stage, low temperature, high power cryocooler used for a new Cryogenic Brightness Optimized Radiofrequency Photogun (CYBORG) [29]. In parallel to the development of the photogun, we have the versatility to configure our cryostat for pillbox cavity testing by direct coupling to the cooler heat exchanger. The CYBORG cryostat opened for access is shown in Figure 5 along with the cavity under testing. The cooler relevant to these measurements is a Sumitomo Heavy Industries CH-110LT. Because of the high cooling capacity of this device, our setup simply consists of the pillbox cavities mounted to the cold head via a simple polished C101 copper coupling plate. The composite system of cooler, coupler, and cavity is then wrapped in 14 layers of multi-layer insulation (MLI) with small penetrations for temperature sensors and the RF antenna cabling. For cooling, the device under test is then placed within the CYBORG vacuum cryostat and pumped to around \(10^{-5}\) torr.

## 3 Results

We first address the results of our examination of improved simple models. For the Fuchs and Sondheimer case we note that, because of the additional dependence on the bulk resistivity, there is now an additional dependence on RRR, a bulk material property.

Figure 4: E-field profile in C-band cavity with antenna port.

The theory then also predicts a local minimum around 25K for the high purity case. For a full space of solutions over many values of RRR we calculate a sweep for the contour plot shown in Fig. 7. Contained within Fig. 6 are some of the results of our measurements of the \(Q_{0}\) enhancement factor as a function of temperature. The values of note from the plot are the 45K and 77K values, which are most relevant for the UCXFEL and C\({}^{3}\) use cases. These and select other values are shown in Table 1. We also compute the surface resistivity values from the \(Q_{0}\) enhancement.
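That conversion is straightforward to reproduce; the sketch below evaluates the geometric factor implied by Equ. 6 for an ideal pillbox and turns quality factors into surface resistances via Equ. 5. The cavity length and the room-temperature \(Q_{0}\) used here are round illustrative numbers, not the dimensions or measured values of the COMEB cavity, so the printed resistances only indicate the expected order of magnitude.

```python
# Sketch of the Q0 -> R_s conversion for an ideal TM010 pillbox (Equs. 5-6).
# Cavity length and room-temperature Q0 are illustrative assumptions.
import numpy as np
from scipy.special import jn_zeros

ETA = 376.73                      # free-space impedance [ohm]
C_LIGHT = 2.998e8                 # speed of light [m/s]
CHI01 = jn_zeros(0, 1)[0]         # first zero of J0 (~2.405)

f_tm010 = 5.712e9                             # assumed TM010 frequency [Hz]
a = CHI01 * C_LIGHT / (2 * np.pi * f_tm010)   # pillbox radius giving that frequency
h = 0.02                                      # assumed cavity length [m]

G = ETA * CHI01 / (2 * (1 + a / h))           # geometric factor from Equ. 6
print(f"radius a ~ {a*1e3:.1f} mm, geometric factor G ~ {G:.0f} ohm")

q0_295 = 1.0e4                                # assumed room-temperature Q0
for T, enhancement in ((295, 1.0), (77, 2.89), (45, 4.61)):   # measured factors
    rs = G / (q0_295 * enhancement)           # Equ. 5
    print(f"T = {T:3d} K, Q0 ~ {q0_295*enhancement:.2e}, R_s ~ {rs:.2e} ohm")
```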
In addition to measuring the temperature dependence of the cavity quality factor, we also measure the detuning of the cavity itself by tracking the minimum of the cavity S11 value. The detuning, when paired with NIST log fits for the cryogenic copper coefficient of thermal expansion, can then be used as a secondary verification measurement of the temperature of the cavity. To address some of the systematics, the cavity data was recorded on warming of the cavity in order to better ensure thermalization at each temperature. As a note on the temperature dependence of the geometric factor, our CST simulations show a relatively minute impact on the low level RF figures of merit that were measured and reported here. There is a more significant impact on the coupling coefficient, which we here note is necessary for data analysis consideration but is omitted simply for brevity.

Figure 5: **(a)** CYBORG beamline cryostat which can be configured for LLRF tests, including the measurements presented here. **(b)** COMEB manufactured C-band cavity measured within the CYBORG cryostat to produce the data presented here.

Figure 6: Data from measured COMEB pillbox cavity during cryogenic testing. Curves represent warming of the cavities after cool down to a minimum value of 38K.

## 4 Discussion

First of all, the LLRF measurements have immediate implications for the UCXFEL [8] and more long term implications for the Cool Copper Collider [30]. As shown in Table 1, the two considered operation points are \(45\) and \(77\)K, corresponding to the cryocooled UCXFEL photoinjector and the liquid nitrogen cooled linac sections to be used in both concepts. Quality factor enhancements of approximately \(4.61\pm 0.05\) and \(2.89\pm 0.05\) respectively are necessary numbers for RF design in the two cases. In the C-band regime, to our knowledge these are reported here for the first time for a cavity manufactured to accelerating cavity specifications. In addition to the Q factor enhancement, the most significant observable implication is the impact on the RF pulse heating. Our model predicts that there is an optimum working temperature for a cryogenic normal conducting cavity, determined by minimizing pulse heating. During the initial stages of the commissioning of CYBORG, tests of RF pulse heating as a function of temperature are currently underway and will provide an additional measurement in addition to the S11 reflection coefficient [31]. Now, with respect to the bulk material properties, we should clarify the physical significance of what RRR means. In our consideration we use this as a free parameter derived ex post facto from experiment and incorporated into the Bloch-Gruneisen model. Strictly speaking this FoM is only a proxy for the quantity of interest, which is the material purity in terms of defects, grain boundaries, etc. We can then refer to high RRR as high purity metal and lower RRR as less pure. Less pure here need not refer to a failure of metallurgy; rather, it can also refer to intentional alloying. The main implication of the models presented here is that local surface resistivity minima may exist, depending on purity. More specifically, if one is willing to make the voyage to single digit kelvin, an alloy with purity corresponding to below RRR250 is preferable. If higher temperatures are the only option, then a more pure metal is required, with RRR450 matching the performance at 25K. Hardening the material by alloying, as with CuAg alloys, then presents an appealing option to reduce the thermal loads required. In addition, arbitrary purity is known to lead to unmanageably soft copper from the machining perspective [32], so hardening at room temperature before cooling may further be a necessity.
We have begun to consider this case with alloys, with some of the measurements reported in [33]. We reiterate this here for the sake of saying that future testing of the pillbox cavities described here is intended not only to iteratively reduce the temperature to below 30K, but also to repeat the \(R_{s}\) measurements for varying alloys, especially CuAg, using this pillbox template.

\begin{table} \begin{tabular}{||c c c c||} \hline 5.718 GHz cavity & 295K & 77K & 45K \\ \hline \hline \(\Delta f\) [MHz] & 0 & 17.8 & 18.7 \\ \(Q_{0}\) enhancement & 1 & \(2.89\pm 0.05\) & \(4.61\pm 0.05\) \\ \(R_{s}\) [\(\Omega\)] & \(3\times 10^{-2}\) & \(1.05\times 10^{-2}\) & \(6.5\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Measured properties for the two relevant use cases: the 45K operation point for UCXFEL and the 77K operation point for \(C^{3}\)

Figure 7: Results of simplified Fuchs-Sondheimer style toy model used to compute \(R_{s}\)(T) in our relevant regime of C-band frequency.

Finally, we can further the discussion of the implications beyond the scope of cryogenic cavity development by considering the new thin film model as a possible novel physical environment in which to study thin film physics. We consider the possibility of going beyond the scope of linear accelerator development and instead using cryogenic RF cavities as a new test environment for the study of viscous electron flow. This is an area of intense study in condensed matter physics, and new examples of viscous electron flow are of certain interest [23, 24].

## 5 Conclusion

We are here convinced of the validity of our surface physics understanding for a C-band resonant cavity down to at least 38K. With a Q factor enhancement of \(4.61\pm 0.05\) at 45K and \(2.89\pm 0.05\) at 77K, this has immediate impacts on the development of cryogenic copper linear accelerators, especially UCXFEL and \(C^{3}\). Preparation and machining become the important knobs for precision study, but insofar as these are established processes in accelerator physics, we can envision low level RF cavities as test beds for material physics properties relevant to high gradient cavities, in parallel to the CYBORG beamline at UCLA. Future studies will include, in order of chronology, a more in depth analysis of \(R_{s}\) in the CYBORG photogun from the standpoint of RF heating, cooling of the cavity to the limits of our cryocooler (\(<30\)K), and the manufacture of equivalent cavities with hard CuAg alloys. We also begin to consider cryogenic RF cavities as possible novel test beds for studying thin film and surface physics in a novel environment.

## 6 Acknowledgements

This work was supported by the Center for Bright Beams, National Science Foundation Grant No. PHY-1549132 and DOE HEP Grant DE-SC0009914